Title: Evidence for the Convergence Model: The Emergence of Highly Pathogenic Avian Influenza (H5N1) in Viet Nam
Authors: Saksena, Sumeet; Fox, Jefferson; Epprecht, Michael; Tran, Chinh C.; Nong, Duong H.; Spencer, James H.; Nguyen, Lam; Finucane, Melissa L.; Tran, Vien D.; Wilcox, Bruce A.
Journal: PLoS ONE

Building on a series of groundbreaking reviews that first defined and drew attention to emerging infectious diseases (EID), the 'convergence model' was proposed to explain the multifactorial causality of disease emergence. The model broadly hypothesizes that disease emergence is driven by the coincidence of genetic, physical environmental, ecological, and social factors. We developed and tested a model of the emergence of highly pathogenic avian influenza (HPAI) H5N1 based on suspected convergence factors that are mainly associated with land-use change. Building on previous geospatial statistical studies that identified natural and human risk factors associated with urbanization, we added new factors to test whether causal mechanisms and pathogenic landscapes could be more specifically identified. Our findings suggest that urbanization spatially combines risk factors to produce particular types of peri-urban landscapes with significantly higher HPAI H5N1 emergence risk. The work highlights that peri-urban areas of Viet Nam have higher levels of chicken density, duck and geese flock-size diversity, and fraction of land under rice or aquaculture than rural and urban areas. We also found that land-use diversity, a surrogate measure for potential mixing of host populations and other factors that likely influence viral transmission, significantly improves the model's predictability. Similarly, landscapes where intensive and extensive forms of poultry production overlap were found to be at greater risk. These results support the convergence hypothesis in general and demonstrate the potential to improve EID prevention and control by combining geospatial monitoring of these factors with pathogen surveillance programs.

Two decades after the Institute of Medicine's seminal report [ ] recognized novel and reemerging diseases as a new category of microbial threats, the perpetual and unexpected nature of the emergence of infectious diseases remains a challenge despite significant clinical and biomedical research advances [ ]. Highly pathogenic avian influenza (HPAI, subtype H5N1) is the most significant newly emerging pandemic disease since HIV/AIDS. Its eruption in Southeast Asia in 2003-2004 and its subsequent spread globally to more than 60 countries fits the complex-systems definition of "surprise" [ ]. In the same year that the IOM published its final report on microbial threats, which had highlighted the successful containment of H5N1 in Hong Kong in 1997 [ ], massive outbreaks occurred in Southeast Asia, where the disease remains endemic, as it does in Egypt's Nile Delta. Since 2003, HPAI H5N1 has killed millions of poultry in countries throughout Asia, Europe, and Africa, and humans have died from it in sixteen countries according to WHO data. The threat of a pandemic resulting in millions of human cases worldwide remains a possibility [ ]. Lederberg et al [ ] first pointed to the multiplicity of factors driving disease emergence, which were later elaborated and described in terms of 'the convergence model' [ ].
The model proposes that emergence events are precipitated by the intensifying of biological, environmental, ecological, and socioeconomic drivers. Microbial "adaptation and change," along with "changing ecosystems" and "economic development and land use," form major themes. Joshua Lederberg, the major intellectual force behind the studies, summed up: "Ecological instabilities arise from the ways we alter the physical and biological environment, the microbial and animal tenants (humans included) of these environments, and our interactions (including hygienic and therapeutic interventions) with the parasites" [ ]. Combining such disparate factors and associated concepts from biomedicine, ecology, and social sciences in a single framework remains elusive. One suggested approach has been to employ social-ecological systems theory, which attempts to capture the behavior of so-called 'coupled natural-human systems', including the inevitable unexpected appearance of new diseases, themselves one of the "emerging properties" of complex adaptive systems (CAS) [ , ]. The convergence model can be so adapted by incorporating the dynamics of urban, agricultural, and natural ecosystem transformations proposed within this framework. These multifaceted interactions, including feedbacks that affect ecological communities, hosts, and pathogen populations, are the proximate drivers of disease emergence. The initial HPAI H5N1 outbreaks in Vietnam represent an ideal opportunity to adapt and test a CAS-convergence model. Emergence risk should be highest in the most rapidly transforming urban areas, that is, peri-urban zones where mixes of urban-rural and modern-traditional land uses and poultry husbandry coincide most intensely. Specifically, we hypothesized a positive association between the presence of HPAI outbreaks in poultry at the commune level and: 1) peri-urban areas, as defined by Saksena et al [ ]; 2) land-use diversity; and 3) co-location of intensive and extensive systems of poultry production. We used the presence or absence of HPAI H5N1 outbreaks in poultry at the commune level as the dependent variable. Vietnam experienced its first HPAI H5N1 outbreak in late 2003; since then, there have been five waves and sporadic outbreaks recorded over the years [ , ]. We chose to study the first wave (Wave 1), which ended in February 2004, and the second wave (Wave 2), which occurred between December 2004 and April 2005. We used data from the 2006 Viet Nam Agricultural Census to develop an urbanicity classification that used data collected at a single point in time (2006) but across space (10,820 communes) to infer processes of change (urbanization, land-use diversification, and poultry intensification) [ ]. The provinces in Vietnam (not counting the urban provinces that are governed centrally) are divided into rural districts, provincial towns, and provincial cities. Rural districts are further divided into communes (rural areas) and towns, and provincial towns and cities are divided into wards (urban subdistricts) and communes. A commune in Viet Nam is thus the third-level administrative subdivision, consisting of villages/hamlets. For simplicity we will henceforth use the term "commune" to refer to the smallest administrative unit, whether it is a commune, town, or ward. We included risk factors documented in previous work. We also aimed to understand the differences, if any, in risk dynamics at different scales, comparing risks at the national scale to those in two sub-national agro-ecological zones.
For this purpose we chose to study the Red River and Mekong River deltas, well-known hot spots of the disease. Hence we conducted two sets of analyses (Waves 1 and 2) for three places (nation, Red River Delta, and Mekong River Delta), producing a total of six wave-place analyses. Data on outbreaks were obtained from the publicly available database of Viet Nam's Department of Animal Health. Given the highly complex dynamics of the epidemics, and in keeping with recent methodological trends, we used multiple modeling approaches, parametric and non-parametric, with a focus on spatial analysis. We used both 'place'-oriented models, which can take into account variations in factors such as policies and administration, and 'space'-oriented models, which recognize the importance of physical proximity in natural phenomena [ ]. Very few empirical studies have attempted to determine whether urbanization is related to EID outbreaks or whether urbanization is associated primarily with other factors related to EID outbreaks. One immediate problem researchers face is defining what is rural, urban, and transitional (i.e., peri-urban). Some studies have used official administrative definitions of urban and rural areas, but this approach is limited in its bluntness [ ]. Other studies prioritized human population density as a satisfactory surrogate [ ], but this approach ignores the important fact that density is not a risk factor if it is accompanied by sufficient infrastructure to handle the population. Spencer [ ] examined urbanization as a non-linear characteristic, using household-level variables such as water and sanitation services. He found evidence that increased diversity in water supply sources and sanitation infrastructure was associated with higher incidences of HPAI. These studies employed a limited definition of urbanization that lacked a well-defined characterization of peri-urbanization. Still other studies have mapped the relative urban nature of a place, a broad concept often referred to as 'urbanicity' [ ]. While these studies show differences in the rural/urban nature of communities across space and time, they have been limited to small- to medium-scale observational studies, and they have failed to distinguish between different levels of "ruralness". Perhaps the best-known model of peri-urbanization is McGee's concept of desakota (Indonesian for "village-town") [ ]. McGee identified six characteristics of desakota regions: 1) a large population of smallholder cultivators; 2) an increase in non-agricultural activities; 3) extreme fluidity and mobility of population; 4) a mixture of land uses, agriculture, cottage industries, and suburban development; 5) increased participation of the female labor force; and 6) "grey zones", where informal and illegal activities group [ ]. Saksena et al [ ] built on McGee's desakota concepts and data from the Viet Nam Agricultural Census to establish an urbanicity classification. That study identified and mapped the 10,820 communes, the smallest administrative unit for which data are collected, as being rural, peri-urban, urban, or urban core. This project used the Saksena classification to assess associations between urbanicity classes, other risk factors, and HPAI outbreaks. Researchers have estimated that almost …% of zoonotic diseases are associated with land-cover and land-use changes (LCLUC) [ , ].
LCLUC such as peri-urbanization and agricultural diversification frequently result in more diverse and fragmented landscapes (more land covers or land uses per unit of land). The importance of landscape pattern, including diversity and associated processes, which equate to host species' habitat size and distribution and thus to pathogen transmission dynamics, is axiomatic, though the specific mechanisms depend on the disease [ , ]. Landscape fragmentation produces ecotones, defined as abrupt edges or transition zones between different ecological systems, which are thought to facilitate disease emergence by increasing the intensity and frequency of contact between host species [ ]. Furthermore, fragmentation of natural habitat tends to interrupt and degrade natural processes, including the interspecies interactions that regulate densities of otherwise opportunistic species that may serve as competent hosts [ ], although it is not clear whether reduced species diversity necessarily increases pathogen transmission [ ]. Rarely has research connected land-use diversification to final health endpoints in humans or livestock; this study attempts to link land-use diversity with HPAI H5N1 outbreaks. Human populations in the rapidly urbanizing cities of the developing world require access to vegetables, fruits, meat, and other foods typically produced elsewhere. As theorized by von Thünen in 1826 [ ], much of this demand is met by farms near cities [ ], many in areas undergoing processes of peri-urbanization [ ]. Due to the globalization of the poultry trade, large-scale chicken farms raising thousands of birds have expanded rapidly in Southeast Asia and compete with existing small backyard farmers [ ]. Large, enterprise-scale (…-… birds) operations are still rare in Viet Nam (only … communes have such a facility). On the other hand, domestic and multinational companies frequently contract farmers to raise between … and … birds. Recent studies have examined the relative roles of extensive (backyard) systems and intensive systems [ ]. In much of Asia there is often a mix of commercial and backyard farming at any one location [ ]. Experts have suggested that, from a biosecurity perspective, the co-location of extensive and intensive systems is a potential risk factor [ ]. Intensive systems allow for virus evolution (e.g., from low pathogenic avian influenza to HPAI) and transformation, while extensive systems allow for environmental persistence and circulation [ ]. Previous studies of chicken populations as a risk factor have distinguished between production systems: densities of native and backyard chickens, flock density, densities of commercial chickens, broilers, and layers, and so on [ ]. In isolation, however, none of these number- and/or density-based poultry metrics adequately measures the extent of co-location of intensive and extensive systems in any given place. Intensive and extensive systems in Viet Nam have their own fairly well-defined flock sizes. A diversity index of the relative numbers of intensive and extensive poultry-raising systems can better estimate the effect of such co-location; this study attempts to link a livestock diversity index with the presence or absence of HPAI H5N1 outbreaks at the commune level. This study investigated, for the 10,820 communes of Viet Nam, a wide suite of socio-economic, agricultural, climatic, and ecological variables relevant to poultry management and to the transmission and persistence of the HPAI virus.
Many of these variables were identified based on earlier studies of HPAI (as reviewed in Gilbert and Pfeiffer [ ]). Three novel variables were included based on hypotheses generated by this project. All variables were measured at, or aggregated to, the commune level. The novel variables were:

• Degree of urbanization: We used the urbanicity classification developed by Saksena et al [ ] to define the urban character of each commune. The classification framework is based on four characteristics: 1) percentage of households whose main income is from agriculture, aquaculture, and forestry; 2) percentage of households with modern forms of toilets; 3) percentage of land under agriculture, aquaculture, and forestry; and 4) the normalized difference vegetation index (NDVI). The three-way classification enabled testing for non-linear and non-monotonic responses.

• Land-use diversity: We measured land-use diversity using the Gini-Simpson diversity index [ ], given by 1 − λ, where λ equals the probability that two entities taken at random from the dataset of interest represent the same type. In situations with only one class (complete homogeneity), the Gini-Simpson index has a value equal to zero. Such diversity indices have been used to measure land-use diversity [ ]. We used the following five land-use classes, for which data were collected in the agricultural census: annual crops, perennial crops, forests, aquaculture, and built-up land (including miscellaneous uses). The area under the last class was calculated as the difference between the total area and the sum of the first four classes. (A computational sketch of this index is given after this list.)

• Poultry flock-size diversity: Analogous to land-use diversity, we used a diversity index over flock-size classes to measure the extent to which intensive and extensive systems of chicken and of duck and geese production co-exist within a commune (see the co-location rationale above).

The following variables are listed according to their role in disease introduction, transmission, and persistence, though some of these factors may have multiple roles.

• Human population-related transmission: human population density [ ].

• Poultry trade and market: Towns and cities were assumed to be active trading places [ ], so the distance to the nearest town/city was used as an indicator of poultry trade. Trade is facilitated by access to transportation infrastructure [ ], so the distances to the nearest a) national highway and b) provincial highway were used as indicators of transportation infrastructure.

• Disease introduction and amplification: Chicken densities were calculated based on commune area [ ].

• Intermediate hosts: Duck and geese densities were calculated using total commune area [ ]. As previous studies have shown a link between outbreaks and ducks scavenging in rice fields, we also calculated duck density using only the area under rice.

• Agro-ecological and environmental risk factors: Previous studies have shown that the extent of rice cultivation is a risk factor, mainly due to its association with free-ranging ducks acting as scavengers [ ]; we used the percentage of land under rice cultivation as a measure of extent. Rice cropping intensity is also a known risk factor [ ]; we used the mean number of rice crops per year as a measure of intensity. The extent of aquaculture is a known risk factor [ ], possibly because water bodies offer routes for transmission and persistence of the virus; the percentage of land under aquaculture was used as a metric. Proximity to water bodies increases the risk of outbreaks [ ], possibly by increasing the chance of contact between wild water birds and domestic poultry; we measured the distance between the commune and the nearest a) lake and b) river.
Climatic variables (annual mean temperature and annual precipitation) have been associated with significant changes in risk [ , ]. Elevation, which is associated with types of land cover and agriculture, has been shown to be a significant risk factor in Vietnam [ ]. The compound topographical index (CTI, also known as the topographical wetness index) is a measure of the tendency for water to pool. Studies in Thailand and elsewhere [ ] have shown that the extent of surface water is a strong risk factor, possibly due to the role of water in long-range transmission and persistence of the virus. In the absence of reliable and inexpensive data on the extent of surface water, we used CTI as a proxy. CTI has been used in ecological niche models (ENM) of HPAI H5N1 [ , ]; however, given the nature of ENM studies, the effect of CTI as a risk factor has been unknown so far. CTI has been used as a risk factor in studies of other infectious and non-infectious diseases [ ]. Some studies have shown that at local scales the slope of the terrain (a component of CTI) was significantly correlated with reservoir species dominance [ ]. CTI is a function of both the slope and the upstream contributing area per unit width orthogonal to the flow direction. CTI is computed as CTI = ln(A_s / tan(β)), where A_s is the contributing-area value, calculated as (flow accumulation + 1) × (pixel area in m²), and β is the slope expressed in radians [ ]. (A computational sketch is given below.) Though previous studies have indicated that the normalized difference vegetation index (NDVI) is a risk factor [ ], we did not include it explicitly in our models, as the urban classification index we used already incorporates NDVI [ ].
We obtained commune-level data on HPAI H5N1 outbreaks from the publicly available database of the Department of Animal Health [ ]. Viet Nam experienced its first major epidemic wave between December 2003 and February 2004 [ ]. We chose to study the first wave (Wave 1), which ended in February 2004, and the second wave (Wave 2), which occurred between December 2004 and April 2005. In Wave 1, …% of the communes experienced outbreaks; in Wave 2, …% did. We used data from the population census of Viet Nam to estimate the human population per commune. We relied on data from two agricultural censuses of Viet Nam. This survey is conducted every five years, covering all rural households and those peri-urban households that own farms; thus about three-fourths of all of the country's households are included. The contents of the survey include the number of households in major production activities; population and labor classified by sex, age, qualification, employment, and major income source; agricultural, forestry, and aquaculture land used by households classified by source, type, and cultivation area by crop type; and farming equipment by purpose. Commune-level surveys include information on rural infrastructure, namely electricity, transportation, medical stations, schools, fresh water sources, communication, markets, etc. Detailed economic data are collected for large farms. We used the 2006 agricultural census for most variables because the first three epidemic waves occurred between the agricultural censuses of 2001 and 2006 but were closer in time to the 2006 census [ ]. However, for data on poultry numbers we used the 2001 agricultural census data set, because between 2001 and 2003 the poultry population grew at an average rate of …% annually, whereas in 2004, after the first wave of the H5N1 epidemic, the poultry population fell …%, and only by mid-2006 did it return close to pre-epidemic levels. Thus, we considered the poultry population data from the 2001 census to be more representative. We aggregated census household data to the commune level. A three-way classification of the rural-to-urban transition was based on a related study [ ]. Raster data on annual mean temperature and precipitation were obtained from the WorldClim database and converted to commune-level data. The bioclimatic variables were compiled from monthly temperature and precipitation values and interpolated to surfaces at approximately 1 km spatial resolution [ ]; this public database provides data on the average climatic conditions of the period 1950-2000. Elevation was generated from SRTM 90 m digital elevation models (DEM) acquired from the Consortium for Spatial Information (CGIAR-CSI). Compound topographical index (CTI) data were generated using the Geomorphometry and Gradient Metrics toolbox for ArcGIS. Prior to risk factor analysis we cleaned the data by identifying illogical values for all variables and then either assigning them a missing value or adjusting them. Illogical values occurred mainly (in less than …% of cases) for land-related variables such as the percentage of commune land under a particular type of land use. Next we tested each variable for normality using the BestFit software (Palisade Corporation). Most of the variables were found to follow a log-normal distribution, and a log-transform was applied to them. We then examined the bivariate correlations between all the risk factors (or their log-transforms, as the case may be). Correlations were analyzed separately for each place. Certain risk factors were then eliminated from consideration when |r| ≥ … (r is the Pearson correlation coefficient). When two risk factors were highly correlated, we chose to include the one that had not been adequately studied explicitly in previously published risk models. Notably, we excluded a) elevation (correlated with human population density, chicken density, duck density, percentage of land under paddy, annual temperature, and CTI); b) human population density (correlated with elevation and CTI); c) chicken density (only at the national level; correlated with CTI); d) duck and goose density (correlated with elevation, chicken density, percentage of land under paddy, land-use diversity index, and CTI); e) annual temperature (correlated with elevation and CTI); and f) cropping intensity (correlated with percentage of land under paddy). Considering the importance of spatial autocorrelation in such epidemics, we used two modeling approaches: 1) a multi-level generalized linear mixed model (GLMM) and 2) boosted regression trees (BRT) [ , ] with an autoregressive term [ ]. GLMM is a 'place'-oriented approach well suited to analyzing the effect of administrative groupings, while BRT is a 'space'-oriented approach that accounts for the effects of physical proximity. We began by deriving an autoregressive term by averaging the presence/absence of outbreaks among a set of neighbors defined by the limit of autocorrelation, weighted by the inverse of the Euclidean distance [ ]. The limit of autocorrelation of the response variable was obtained from the range of the spatial correlogram ρ(h) [ ]. To determine which predictor variables to include in the two models, we conducted logistic regression modeling for each predictor separately, one by one, but included the autoregressive term each time. We finally included only those variables whose coefficient had a significance value below the screening p-value cut-off
(in at least one wave-place combination), and we noted the sign of the coefficient. This choice of p value for screening risk factors is common in similar studies [ ]. We used a two-level GLMM (communes nested under districts) to take account of random effects for an area influenced by its neighbors, and thereby studied the effect of spatial autocorrelation. We used robust standard errors for tests of fixed effects. Boosted regression trees, also known as stochastic gradient boosting, was performed to predict the probability of HPAI H5N1 occurrence and to determine the relative influence of each risk factor on HPAI H5N1 occurrence. This method was developed recently and has been applied widely for distribution prediction in various fields of ecology [ , ]. It is widely used for species distribution modeling where only the sites of occurrence of the species are known [ ], and it has been applied in numerous studies for predicting the distribution of HPAI H5N1 [ ]. BRT utilizes regression trees and boosting algorithms to fit several models and combines them to improve prediction, iterating in a loop over the model [ , ]. An advantage of BRT is that it applies stochastic processes, including probabilistic components, to improve predictive performance. We used regression trees to select relevant predictor variables and boosting to improve accuracy in a single tree. This sequential process allows trees to be fitted iteratively through a forward stage-wise procedure in the boosting model. Two important parameters specified in the BRT model, which determine the number of trees for optimal prediction, are the learning rate (lr) and tree complexity (tc) [ , ]. In our model we used … sets of training and test points for cross-validation, a tree complexity of …, a learning rate of …, and a bag fraction of …. Other advantages of BRT include its insensitivity to collinearity and its ability to handle non-linear responses. However, for the sake of consistency with the GLMM method, we chose to eliminate predictors that were highly correlated with other predictors and to apply log-transforms where needed. In the GLMM models we used p < 0.05 to identify significant risk factors. The predictive performances of the models were assessed by the area under the curve (AUC) of the receiver operating characteristic (ROC) curve. AUC is a measure of the overall fit of the model that varies from 0.5 (chance event) to 1.0 (perfect fit) [ ]. A comparison of AUC with other accuracy metrics concluded that it is the most robust measure of model performance because it remains constant over a wide range of prevalence rates [ ]. We used the corrected Akaike information criterion (AICc) to compare each GLMM model with and without its respective suite of fixed predictors. We used SPSS (IBM Corp., New York) for the GLMM and R (The R Foundation for Statistical Computing) for the BRT; for calculating the spatial correlogram we used the spdep package of R.
The fourteen predictor variables we modeled (see tables) were all found to be significantly associated with HPAI H5N1 outbreaks (p < 0.05) in at least one wave-place combination based on univariate analysis (including the autoregressive term) (Table …). Land-use diversity, chicken density, poultry flock-size diversity, and distance to the national highway were found to have significant associations across five of the six wave-place combinations. The power of the GLMM models, as measured by the AUC, was very good, with AUC values ranging from … to … (Tables …). The predictive power of the national models was higher than that of the delta models. The predictive power of the BRT models was good, with AUCs ranging from … to …. The BRT models also had better predictive power at the national level than at the delta level. These values are higher than those reported for Wave 1 (AUC = …) and Wave 2 (AUC = …) by Gilbert et al [ ]. Both Gilbert et al [ ] and this study found that at the national level the predictive performance for one wave was higher than that for the other. Wave 2 mainly affected the Mekong River Delta. Previous studies indicated that duck density was an important predictor [ ]; our results, however, indicated that the diversity of duck flock size was a more important predictor than duck density. Both the GLMM and BRT models found annual precipitation to be a significant factor. The GLMM model indicated a negative association, similar to what was found by studies in China [ ] and in the Red River Delta [ ]. A global study of human cases also found occurrence to be higher under drier conditions [ ]. Generally, the role of precipitation was found to be far more significant in the deltas than for the country as a whole. The unadjusted relative risk (RR) of peri-urban areas in comparison with non-peri-urban areas was … and … for Waves 1 and 2, respectively. In terms of urbanicity, we found chicken density, percentage of land under rice, percentage of land under aquaculture, flock-size diversity for ducks and geese, and the compound topographical index (CTI) to be highest in peri-urban areas (Fig A-E). We also found that land-use diversity was higher in rural areas, but peri-urban areas had diversity levels only marginally lower (Fig F). The urbanicity variable alone, however, was not found to be significantly associated with HPAI H5N1 in any place according to the GLMM model, except for the urban level in the Red River Delta for one wave and in the Mekong River Delta for the other. The BRT model ranked urbanicity as one of the least influential variables. Land-use diversity was found to be significantly associated with HPAI H5N1 in both waves for Viet Nam according to the GLMM model, but at the delta level the association was significant only for Wave 2 in the Mekong River Delta. The BRT model indicated that land-use diversity highly influenced HPAI H5N1 at the national level in one wave; for the remaining wave-place combinations, land-use diversity had a middle to below-middle rank of influence. Both the GLMM and BRT models indicated that the diversity of chicken flock size had a strong association with HPAI H5N1 for both waves at the national level; this was generally found to be true at the delta levels as well, with some exceptions. The diversity of duck and goose flock size was also significantly associated with HPAI H5N1 in all places, but the associations were much stronger in one wave than in the other. The GLMM model indicated that the CTI had a very strong association with HPAI H5N1 at the national level in both waves, although this was not true in the two deltas. The CTI is a steady-state wetness index commonly used to quantify topographic control on hydrological processes. Accumulation numbers in flat areas, like deltas, are very large; hence the CTI was not a relevant variable in the GLMM model in these areas. The BRT model, however, indicated that CTI had middle to low influence in all waves and places.
We found very high spatial clustering effects, as indicated by the fact that in all waves and places the BRT model found the spatial autocorrelation term to have the highest rank of influence. As expected, the relative influence of the autocorrelation term at the national level (…-…%) was higher than at the delta levels (…-…%). In the GLMM models we found the Akaike information criterion (AIC) using the entire set of variables to be much lower than the AIC of a GLMM model without fixed effects. This indicated that, though clustering effects were significant, our theory-driven predictor variables improved model performance. A limitation of using surveillance methods for the dependent variable (poultry outbreaks) is that the data may have reporting/detection biases [ ]; under-reporting or under-detection in rural areas as compared to peri-urban areas is possible. We believe that the urbanicity and shortest-distance-to-nearest-town risk factors serve as rough proxies for reporting/detection efficiency. Previous studies have tended to use human population density as a proxy for this purpose. In our study we found a strong association between human population density and urbanicity, but we acknowledge that a categorical variable such as urbanicity may provide less sensitivity than a continuous variable such as human population density in this specific context. This study explored the validity of a general model for disease emergence that combined the IOM 'convergence model' [ ] and the social-ecological systems model [ , ] by investigating the specific case of HPAI in Vietnam. We sought to test the hypotheses that measures of urbanization, land-use diversification, and poultry intensification are correlated with outbreaks in poultry. Our results generally support the hypothesis that social-ecological system transformations are associated with H5N1 outbreaks in poultry. The results presented here highlight three main findings: 1) when relevant risk factors are taken into account, urbanization is generally not a significant independent risk factor, but in peri-urban landscapes emergence factors converge, including higher levels of chicken density, duck and geese flock-size diversity, and fraction of land under rice or aquaculture; 2) landscapes with high land-use diversity, a variable not previously considered in spatial studies of HPAI H5N1, are at significantly greater risk for HPAI H5N1 outbreaks; as are 3) landscapes where intensive and extensive forms of poultry production are co-located. Only one other study has explicitly examined urbanicity in the context of HPAI H5N1: Loth et al [ ] found that peri-urban areas in Indonesia were significantly associated with HPAI H5N1 cases, even in multivariate models. Our study, however, attempted both to associate HPAI H5N1 with degree of urbanicity and to determine the features of peri-urban areas that place them at risk. When those features (i.e., chicken densities, duck and geese flock-size diversities, and the fraction of land under rice or aquaculture) are included in multivariate models, the role of the urbanization variable per se diminishes. We found that in the main river deltas of Viet Nam (Red River and Mekong), urbanization had no significant association with HPAI H5N1. This may be because the deltas are more homogeneous, in terms of urbanization, than the country as a whole. This is the first study to examine land-use diversity as a risk factor for HPAI H5N1.
Measured by the Gini-Simpson diversity index of the five land-use classes for which data were collected in the Viet Nam Agricultural Census, against the presence or absence of HPAI outbreaks at the commune level, our results indicate a strong association between land-use diversity and HPAI H5N1 at the national level and in the Mekong River Delta. This metric captures both the variety of habitats and the complexity of the geospatial patterning likely associated with transmission intensity. Our results are similar to what has been observed in studies of other EIDs using fragmentation metrics (e.g., [ ]). This is one of the few studies, however, to link landscape fragmentation to an EID in poultry and not just to the vector and/or hosts of the EID. Previous studies have focused on poultry production factors such as type of species, size of flocks, and extent of commercialization (e.g., [ ]). This study expands on those findings by providing evidence that when intensive and extensive systems of chicken and/or duck and geese production co-exist in the same commune, the commune experiences a higher risk of disease outbreak. Future studies need to examine the biological causal mechanisms in this context. We suggest that national census data (particularly agricultural censuses) compiled at local levels of administration provide valuable information that is either not available from remotely sensed data (such as poultry densities) or would require a large amount of labor to map at national to larger scales (land-use diversity). Mapping land-use classes at the national scale for local administrative units (i.e., the 10,820 communes in Viet Nam) is not an insignificant task. Future studies, however, could examine the correlation between a census-based metric and metrics derived from remote sensing used to measure the proportional abundance of each land-cover type within a landscape [ ]. Vietnam is relatively advanced in making digital national population and agricultural census data available in a format that can be linked to administrative boundaries. While other nations are beginning to develop similar capacities, in the short term the application of this method to other countries may be limited. Ultimately, both census and remotely sensed data can be used independently to map the urban transition and the diversity of land use; these tools, however, may provide their greatest insights when used together. Another important contribution of this study is the discovery of the importance of CTI. So far, CTI had been used only in ecological niche modeling studies of HPAI H5N1, and its specific role and direction of influence had remained unknown. Our study, the first to use CTI as a risk factor, found that it had a large positive influence on HPAI H5N1 risk at the national level. Previous studies have highlighted the role of surface water extent in the persistence and transmission of the HPAI H5N1 virus. These studies measured surface water extent as the area covered by water, the magnitude of seasonal flooding, the distance to the nearest body of water, or other variables that are often difficult to map using remotely sensed data, especially for large-area studies. CTI, on the other hand, has the potential to serve as an excellent surrogate that can easily be measured in a GIS database. The national and regional (delta) models differed quite considerably, both in performance and in significant risk factors.
In the deltas we commonly found only chicken density, duck flock-size diversity, and annual precipitation to be significant. This suggests that the dynamics of risk at the commune level are strongly dependent on the spatial range of analysis, consistent with another study in the Mekong Delta [ ]. Though that study's model initially included three dozen commonly known risk factors, the significant risk factors were limited to poultry flock density, proportion of households with electricity, re-scaled NDVI median for May-October, buffalo density, and sweet potato yield. Another study in the Red River Delta [ ] found that, in addition to the typical poultry density metrics, only the presence of poultry traders was significant. We speculate that for smaller regions, especially for known hot spots, the relevant risk factors are those that reflect short-range, short-term driving forces such as poultry trading and the presence of live-bird markets and wet markets. Improving model performance for smaller regions would require highly refined and nuanced metrics for poultry trading, road infrastructure, water bodies, and the like; such data are typically not available through census surveys. The differences between the national and regional models suggest that our results can inform planners making decisions at different hierarchical levels of jurisdiction: national, regional, and local. Our study has the potential to inform the design of future research related to the epidemiology of other EIDs in Viet Nam and elsewhere. For example, we speculate that in Southeast Asia, Japanese encephalitis, the transmission of which is associated with rice cultivation and flood irrigation [ ], may also show a strong association with peri-urbanization. In some areas of Asia these ecological conditions occur near, or occasionally within, urban centers. Likewise, Hantaan virus, the cause of Korean hemorrhagic fever, is associated with the field mouse Apodemus agrarius and with rice harvesting in fields where the rodents are present [ ]. Our work has demonstrated that the percentage of land under rice in peri-urban areas and in rural areas is similar. Hence diseases associated with rice production are likely to peak in peri-urban areas given other risk factors such as land-use diversity, CTI, and distance to infrastructure. Our poultry flock-size diversity findings may also be relevant to understanding the dynamics of other poultry-related infections such as Newcastle disease. Finally, these results suggest the validity of a general model of zoonotic disease emergence that integrates the IOM's convergence model with the subsequently proposed social-ecological systems and EID framework; convergence thus represents the coalescence in time and space of processes associated with land-cover and land-use changes. The project results question whether the urban/rural land-use dichotomy is useful when large areas and parts of the population are caught between the two. Planners need better tools for mapping the rural-urban transition, and for understanding how the specific nature of peri-urban environments creates elevated health risks that require adaptation of existing planning, land-use, and development practices.

References:
Committee on Emerging Microbial Threats to Health in the 21st Century. Emerging Infections: Microbial Threats to Health in the United States.
Emerging infectious diseases in 2012: 20 years after the Institute of Medicine report.
Navigating Social-Ecological Systems: Building Resilience for Complexity and Change.
Committee on Emerging Microbial Threats to Health in the 21st Century. Microbial Threats to Health: The Threat of Pandemic Influenza.
Avian influenza virus (H5N1): a threat to human health.
Committee on Emerging Microbial Threats to Health in the 21st Century. Microbial Threats to Health: Emergence, Detection, and Response.
Emerging and reemerging infectious diseases: biocomplexity as an interdisciplinary paradigm.
Disease ecology and the global emergence of zoonotic pathogens.
Classifying and mapping the urban transition in Vietnam.
An analysis of the spatial and temporal patterns of highly pathogenic avian influenza occurrence in Vietnam using national surveillance data.
Mapping H5N1 highly pathogenic avian influenza risk in Southeast Asia.
Area variations in health: a spatial multilevel modeling approach. Health Place.
World Development Report 2009: Reshaping Economic Geography.
Risk factors of poultry outbreaks and human cases of H5N1 avian influenza virus infection in West Java Province, Indonesia.
Ecologic risk factor investigation of clusters of avian influenza A (H5N1) virus infection in Thailand.
Spatial distribution and risk factors of highly pathogenic avian influenza (HPAI) H5N1 in China.
Identifying risk factors of highly pathogenic avian influenza (H5N1 subtype) in Indonesia.
Risk factors and clusters of highly pathogenic avian influenza H5N1 outbreaks in Bangladesh.
Free-grazing ducks and highly pathogenic avian influenza.
Modelling the ecology and distribution of highly pathogenic avian influenza (H5N1) in the Indian subcontinent.
The urban health transition hypothesis: empirical evidence of an avian influenza Kuznets curve in Vietnam?
Urbanization and the spread of diseases of affluence in China.
Defining the "urban" in urbanization and health: a factor analysis approach.
Understanding community context and adult health changes in China: development of an urbanicity scale.
Quantifying the urban environment: a scale measure of urbanicity outperforms the urban-rural dichotomy.
The emergence of desakota in Asia: expanding a hypothesis.
Risk factors for human disease emergence.
Global trends in emerging infectious diseases.
Pathogenic landscapes: interactions between land, people, disease vectors, and their animal hosts.
Unhealthy landscapes: policy recommendations on land use change and infectious disease emergence.
The role of ecotones in emerging infectious diseases.
Ecological consequences of habitat fragmentation: implications for landscape architecture and planning.
Does biodiversity protect humans against infectious disease?
Wartenberg CM, translator. Von Thünen's Isolated State.
Health and peri-urban natural resource production.
Livestock production: recent trends, future prospects.
Anthropogenic factors and the risk of highly pathogenic avian influenza H5N1: prospects from a spatial-based model.
Prospects for emerging infections in East and Southeast Asia 10 years after severe acute respiratory syndrome.
Zoonosis emergence linked to agricultural intensification and environmental change.
Risk factor modelling of the spatio-temporal patterns of highly pathogenic avian influenza (HPAIV) H5N1: a review.
Diversity and evenness: a unifying notation and its consequences.
Land Mosaics: The Ecology of Landscapes and Regions.
Characterization of poultry production systems in Vietnam.
Spatio-temporal epidemiology of highly pathogenic avian influenza (subtype H5N1) in poultry in eastern India.
Agro-environmental determinants of avian influenza circulation: a multisite study in Thailand, Vietnam and Madagascar.
Risk factors for highly pathogenic avian influenza (HPAI) H5N1 infection in backyard chicken farms.
Risk analysis for the highly pathogenic avian influenza in mainland China using meta-modeling.
Environmental factors contributing to the spread of H5N1 avian influenza in mainland China.
Flying over an infected landscape: distribution of highly pathogenic avian influenza H5N1 risk in South Asia and satellite tracking of wild waterfowl.
Environmental and anthropogenic risk factors for highly pathogenic avian influenza subtype H5N1 outbreaks in Romania.
Mapping spread and risk of avian influenza A (H7N9) in China.
Risk for infection with highly pathogenic avian influenza virus (H5N1) in backyard chickens.
Spatio-temporal occurrence modeling of highly pathogenic avian influenza subtype H5N1: a case study in the Red River Delta.
Rivers and flooded areas identified by medium-resolution remote sensing improve risk prediction of the highly pathogenic avian influenza H5N1 in Thailand.
Ecology and geography of avian influenza (HPAI H5N1) transmission in the Middle East and northeastern Africa.
Predictable ecology and geography of avian influenza (H5N1) transmission in Nigeria and West Africa.
Chagas disease risk in Texas.
The effect of habitat fragmentation and species diversity loss on hantavirus prevalence in Panama.
Soil-landscape modeling and spatial prediction of soil attributes.
Spatio-temporal dynamics of global H5N1 outbreaks match bird migration patterns.
Risk factors and characteristics of H5N1 highly pathogenic avian influenza (HPAI) post-vaccination outbreaks.
Very high resolution interpolated climate surfaces for global land areas.
A working guide to boosted regression trees.
Novel methods improve prediction of species' distributions from occurrence data.
An autologistic model for the spatial distribution of wildlife.
Multivariable geostatistics in S: the gstat package.
Ecological determinants of highly pathogenic avian influenza (H5N1) outbreaks in Bangladesh.
Species distribution models: ecological explanation and prediction across space and time.
Improving risk models for avian influenza: the role of intensive poultry farming and flooded land during the 2004 Thailand epidemic.
Modeling habitat suitability for occurrence of highly pathogenic avian influenza virus H5N1 in domestic poultry in Asia: a spatial multicriteria decision analysis approach.
Predicting the risk of avian influenza A H7N9 infection in live-poultry markets across Asia.
Principles and practical application of the receiver-operating characteristic analysis for diagnostic tests.
The effects of species' range sizes on the accuracy of distribution models: ecological phenomenon or statistical artefact?
Seasonal patterns in human A (H5N1) virus infection: analysis of global cases.
Integrated mapping of establishment risk for emerging vector-borne infections: a case study of canine leishmaniasis in southwest France.
Fragmentation analysis for prediction of suitable habitat for vectors: example of riverine tsetse flies in Burkina Faso.
The impact of habitat fragmentation on tsetse abundance on the plateau of eastern Zambia.
Spatial pattern analysis program for quantifying landscape structure.
Risk factors of highly pathogenic avian influenza H5N1 occurrence at the village and farm levels in the Red River Delta region in Vietnam.
Factors in the emergence of infectious diseases.

Acknowledgments: We thank Nargis Sultana, University of Hawaii, Manoa, for assistance with compiling a GIS database. We also thank those who gave us advice and suggestions on the statistical model.

Title: Health-Seeking Behavior and Transmission Dynamics in the Control of Influenza Infection Among Different Age Groups
Authors: You, Shu-Han; Chen, Szu-Chieh; Liao, Chung-Min
Journal: Infection and Drug Resistance

Background: It has been found that health-seeking behavior has a certain impact on influenza infection. However, behaviors with/without risk perception in the control of influenza transmission among age groups have not been well quantified. Objectives: The purpose of this study was to assess to what extent, under scenarios of with/without control and preventive/protective behaviors, age-specific network-driven risk perception influences influenza infection. Materials and methods: A behavior-influenza (BI) model was used to estimate the spread rate of age-specific risk perception in response to an influenza outbreak. A network-based information model was used to assess the effect of network-driven risk perception information transmission on influenza infection. A probabilistic risk model was used to assess the infection risk effect of risk perception with a health behavior change. Results: The age-specific overlapping percentage was estimated to be …-…%, …-…%, and …-…% for the child, teenage and adult, and elderly age groups, respectively. Individuals perceive preventive behavior as improving risk perception information transmission in the teenage and adult and the elderly age groups, but not in the child age group. A population with perceived health behaviors could not effectively decrease the percentage of infection risk in the child age group, whereas for the elderly age group the decrease in infection risk was more significant, with a 97.5th percentile estimate of …%. Conclusion: The present integrated behavior-infection model can help health authorities in communicating health messages for an intertwined belief network in which health-seeking behavior plays a key role in controlling influenza infection.

It has been found that health-seeking behavior has a certain impact on influenza infection. Therefore, to facilitate public health decisions about intervention and management in controlling the spread of infectious diseases, it is crucial to assess to what extent, under scenarios of with/without control and preventive/protective behaviors, age-specific network-driven risk perception influences influenza infection.
To control respiratory infectious diseases, the development of vaccination, contact tracing, isolation, and the promotion of protective behaviors are the important measures. Indeed, the effectiveness of control measures depends fundamentally on human beliefs, public infection awareness, and the risk perceptions that drive changes in self-protective behavior. Risk perception can be defined as an awareness or belief about a potential hazard and/or harm, and it plays an important role in shaping health-related behaviors that reduce susceptibility and infectivity. Generally, risk perception is affected by social factors such as media releases by health authorities, observation of or interaction with relation-specific groups, past experiences of similar hazards, habits, and culture. These factors result in variation in risk perception among individuals. Epidemiological studies have found that variances in risk perception can be observed by examining behavioral responses among different age groups. SteelFisher et al indicated that …% of the adult population said that they did not intend to acquire the H1N1 vaccine for themselves; perception of vaccine safety and of personal vulnerability were the major reasons for vaccine acceptance. Allison et al indicated that children could perform protective behavior accurately, for example, using hand gel to prevent influenza. On the other hand, childhood vaccination is more likely to depend on parental decision making. Moreover, researchers have suggested assessing risk perception and behavior across different age groups. A social network can be an important social structure in which people exchange information about risk-related events that spur health behavior change. Scherer and Cho suggested that individual perceptions can be affected by self-perception within the social network. Researchers have explored the interactions between epidemic spreading and risk perception in networks. However, the influence of risk perception on the risk of infectious disease is controversial, because the perceptual capacity of individuals may both create and reduce disease risks. Therefore, behavior-disease dynamics in the social network structure may result in amplification or attenuation of a disease outbreak. Most epidemic modeling techniques have used a simple epidemic model, such as the susceptible-infected-recovered (SIR) model, to describe a homogeneous disease network. Moreover, the effects of networks coupled with human responses to disease spreading have been studied extensively and have attracted substantial attention. Funk et al used an SIR-based perceptual-influenza model to examine the effects of risk perception on behavioral change and susceptibility reduction; they also indicated that effects within a disease network can induce health behavioral changes in the population. In turn, the influence of risk perception can produce a feedback signal that alters the progress of the disease dynamics. Recently, information-theoretic approaches have been applied to infer relations in disease or social networks. Zhao et al developed a model to quantify the effects of a dynamic network, indicating that behavioral responses correspond to the entropy derived from the different information content of the dynamic social network. Greenbaum et al proposed an information-theoretic model to assess pandemic risk, indicating that mutual information is a key determinant in minimizing the risk of pandemic threats. We have previously incorporated an information-theoretic framework into a behavior-influenza (BI) transmission dynamic system to understand the effect of individual behavioral change on influenza epidemics. Here we assess whether, how, and to what extent, under different scenarios of with/without control and preventive/protective behaviors, age-specific network-driven risk perception influences influenza infection.

In this study, we analyzed the emergency admission rates from the weekly influenza-like illness (ILI) visits, which were obtained from the Taiwan Centers for Disease Control (TCDC) as reported by sentinel primary care physicians. The ILI cases were detected through the Real-Time Outbreak and Disease Surveillance system and the Taiwan National Infectious Disease Statistics System. The definition of an ILI case must meet three criteria: 1) fever (ear temperature ≥ 38 °C) and respiratory tract symptoms (including rhinorrhea, nasal congestion, sneezing, sore throat, cough, and dyspnea); 2) at least one of the symptoms of muscle ache, headache, and extreme fatigue; and 3) exclusion of simple running nose, tonsillitis, and bronchitis. Data on emergency admission rates for six influenza seasons, from week … of … to week … of …, were adopted to test how health-seeking behavior influences influenza infection dynamics. An influenza season in Taiwan was defined as running from July (week …) to June (week …) of the following year. The ILI-related emergency admission rates were identified using the ICD-9-CM codes for influenza and pneumonia (480-487). We also estimated the age-specific admission infection fraction (IF) for each age group, child, teenage and adult, and elderly, under different human behaviors or influenza risk perceptions. We multiplied the annual mid-year population estimates by the age-specific incidence rate
they indicated that mutual information was a key determinant in minimizing risk of the pandemic threats. we have previously incorporated the information-theoretic framework into a behavior-influenza (bi) transmission dynamic system to understand the effect of individual behavioral change on influenza epidemics. , here we assess that if, how, and to what extent, under different scenarios of with/without control and preventive/protective behaviors, the age-specific network-driven risk perception influences influenza infection. in this study, we analyzed the emergency admission rates from the weekly ili visits, which were obtained from taiwan centers for disease control (tcdc) by sentinel primary care physicians. the ili cases were detected through the real-time outbreak and taiwan national infectious disease statistics system. the definition of an ili case must meet three criteria: ) fever (ear temperature ≥ . °c) and respiratory tract symptoms (including rhinorrhea, nasal congestion, sneezing, sore throat, cough, and dyspnea), ) one of the symptoms of muscle ache, headache, and extreme fatigue; and ) exclusion of simple running nose, tonsillitis, and bronchitis. data on emergency admission rates for six influenza seasons in the period from week of to week of were adopted to test how health-seeking behavior influences the influenza infection dynamics. influenza season was defined from july (week ) to june (week ) of the following year in taiwan. the ili-related emergency admission rates were detected by using the icd- -cm codes for influenza and pneumonia ( - ). we also estimated the age-specific admission infection fraction (if) for each age group, including child ( - years), teenage and adult ( - years) , and elderly ( + years), for different human behaviors or influenza risk perceptions. we multiplied the annual mid-year population estimates health-seeking behavior in the control of influenza infection , and then divided the result by the number of ili visits to estimate if, which is given as mid-year population incidence rate where i is the different age groups (child, teenage and adult, and elderly) and j is the yearly based time period in the period of - . the concept of the bi model developed in our previous studies , mainly incorporated the sir-based perception model into an information-theoretic framework, which was used to simulate the information flow of risk perception in response to an influenza outbreak. briefly, the bi model uses six compartments to represent the disease states of susceptible, infected, and recovered by dividing the population into a with/without perception structure. the description of input parameters for the bi model is given in table . basic reproduction number (r ) can be used to quantify disease infection severity, defined as the average number of secondary cases produced successfully by an infected individual in a totally susceptible population. therefore, based on the bi model, we can also estimate r with the perception state (r a ) and without perception state (r d ). it can be described as input source information with perception s a = r a = a / λ , where a is the rate of perception spread and l is the rate of perception loss. on the other hand, input source information can be described without perception is the infection rate describing contact between infected and susceptible populations and g is the recovery rate from infected to recovered populations. 
we assumed that r can be treated as the basic reproduction number resulting from individuals with risk perception information (r ,rpi ) for each age group in the period of - ; thus, r ,rpi can be estimated from the age-specific ili-related admission if. furthermore, to better characterize the perception spread rate for different age groups during each year (a ij ) in the bi transmission dynamics, we adopted a ij from an epidemic equilibrium structure, in which the equilibrium information flow of risk perception is expressed through r a e , the basic reproduction number at equilibrium with information flow of risk perception from the population without perception, s i , the reduced infectivity factor from infected with perception to susceptible without perception, w, the rate of infected becoming with perception, a, the perception spread rate, and g w/o , the recovery rate of infected without perception. moreover, we assumed that people may make the decision to change behavior based on r ,rpi in the previous year. based on equation , a can then be rewritten in terms of the basic reproduction numbers with risk perception information for each age group in year j and in the following year, in the period of - . to assess the effect of network-driven risk perception information transmission on influenza infection, we applied an information theoretic model referred to as the multiple access channel (mac), which is used to capture a signal r transmitting through multiple channels to the responses i , i , …, i n . we considered the network-driven risk perception information model (nm) with an information bottleneck (ib). the maximum mutual risk perception information (mi max ) resulting from the nm can be estimated as

mi max = (1/2) log2 [1 + n e · s r / (s ib→i + s r→ib )],

where n e is the effective information from contact numbers of individuals, s r is the variance of the r signal distribution, s ib→i is the variance introduced in each access channel through the ib to response i, and s r→ib is the variance introduced to the ib; the ratio s r / (s ib→i + s r→ib ) is the signal-to-noise ratio. on the other hand, the nm model with a negative feedback was considered to explore the effect of perceived different health behaviors on reducing susceptibility. here, we used the correlation coefficient (r) and the overlapping percentage (i o ) to associate r and i from the published data (table s ) to calculate the signal-to-noise ratio in equation . we estimated r based on the relationship between viral titer-based i and viral titer-based r corresponding to with/without perceived different health behaviors. briefly, we selected published papers (table s ) in which health behaviors involving vaccinations and antiviral drugs for different subtypes of influenza were included. two protective behaviors (i.e., vaccine uptake and antiviral taking in response to perceiving the risk of carrying the disease) were adopted as indicating a state of greater alert. the value of r can be used to associate the amount of observed variability that is attributable to the overall biological variability and experimental noise. on the other hand, i o describes the age-specific overlapping percentage between the infected populations with/without perception, adjusted by the fraction of the initial infected population with perceptual state over that without perceptual state. here, we used three perceptual scenarios to assess our model, with initial infected population ratios i+/i- < 1, = 1, and > 1.
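the mi max expression above is reconstructed from the variance terms named in the text using the standard gaussian-channel capacity; the exact placement of n e in the signal-to-noise ratio is an assumption. a small sketch of how mi max would then vary with the effective contact number:

```python
# Hedged sketch of a Gaussian multiple-access-channel capacity with an
# information bottleneck, using the variance terms named in the text.
# Assumptions: the 0.5*log2(1 + SNR) form and the scaling by n_e are a
# standard-capacity reconstruction, and all variances are illustrative.
import numpy as np

def mi_max(n_e, var_r0, var_ib_to_i, var_r0_to_ib):
    snr = (n_e * var_r0) / (var_ib_to_i + var_r0_to_ib)  # signal-to-noise ratio
    return 0.5 * np.log2(1.0 + snr)                      # bits

for n_e in (1, 5, 10, 20):   # illustrative effective contact numbers
    print(n_e, round(mi_max(n_e, var_r0=0.2, var_ib_to_i=1.0, var_r0_to_ib=0.5), 3))
```

in this sketch mi max crosses 1 bit only once the effective contact number pushes the signal above the combined channel noise, mirroring the cooperativity interpretation used in the results below.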
the i o can then be estimated by algebraic manipulation of the probability density functions (pdfs) of s a and s d , taking the overlapping area of the two pdfs. thus, following the information-theoretic theorem with known values of r, i o , and s r , the noise variances can be computed, and the nm model with a negative feedback in equation can be rewritten accordingly, where i represents the individuals perceived with/without health behaviors. we further incorporated the estimated probability distributions of the model parameters with age-specific initial population sizes in the period of - (table ) and a into the bi model to estimate the age-specific overlapping percentages. to parameterize the reduced susceptibility factor with regard to adopting preventive behaviors, including using masks, avoiding visiting crowded places, and hand washing (s s,pre ), and the protective behavior of vaccination (s s,pro ), we applied a standard logistic regression-based equation for mathematically expressing the components of the health-behavior model (hbm). the hbm-based health behavior with the standard logistic regression-based equation has been applied to estimate with/without preventive/protective health behaviors in respiratory infectious diseases such as severe acute respiratory syndrome (sars) and influenza, and in other diseases. the estimates are equivalent to the decisions of rational individuals with influenza knowledge. here, s s can be expressed in terms of odds ratios (ors) depending on the health behaviors perceived to be associated with each hbm variable:

s s = or · ∏ or_k^(x_k) / (1 + or · ∏ or_k^(x_k)),

where s s is the probability of the hbm-based health behaviors (such as preventive behavior, s s,pre , and protective behavior, s s,pro ) and x_k is a binary variable with a value of 1 indicating a "high" state and a value of 0 indicating a "low" state. or is a calibration factor applying when all hbm variables are in a "low" state. s s represents the probability that an individual engages in a particular behavior and can be calculated from equation ; an s s at or above the threshold value indicates that an individual engages in a specific health behavior. for r -perception-based probabilistic risk assessment, a probabilistic risk model requires a dose-response model describing the relationship between the transmission potential quantified by the signal r and the total proportion of the infected population (i). in a previous study, we successfully employed the joint probability distribution to assess the risk profile. it can be expressed mathematically as

r(i) = ∫ p(i | r ) p(r ) dr ,

where r(i) is the cumulative distribution function describing the probabilistic infection risk in a susceptible population at a specific r signal, p(r ) is the probability distribution of the r signal (the prior probability), and p(i | r ) is the conditional response distribution describing the dose-response relationship between i and r . the exceedance risk profile can be obtained as 1 - r(i). in view of equation , we can relate p(i, r ) to r(i) in equation ; thus, the mutual information in the interdependences between belief of risk perception and infection risk can be expressed as a mechanism of interpersonal influence as described in equation .
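the hbm odds-ratio form above follows the durham and casman formulation cited in the references; a minimal sketch (with invented or values, not the paper's fitted estimates) is:

```python
# Sketch of the HBM-based behavior probability in odds-ratio form:
# odds = OR0 * prod(OR_k ** x_k) over binary HBM variables x_k
# (1 = "high", 0 = "low"); S_s = odds / (1 + odds).
# Assumptions: all OR values below are illustrative placeholders.
from math import prod

def hbm_probability(or0, odds_ratios, states):
    """or0: calibration odds when all variables are 'low';
    odds_ratios/states: per-variable OR and binary high/low indicator."""
    odds = or0 * prod(orr ** x for orr, x in zip(odds_ratios, states))
    return odds / (1.0 + odds)

# perceived susceptibility, severity, benefits, barriers (barriers OR < 1)
ors = [2.5, 1.8, 3.0, 0.5]
s_s = hbm_probability(or0=0.2, odds_ratios=ors, states=[1, 1, 1, 0])
print(f"probability of adopting the behavior: {s_s:.2f}")
```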
in table , the monthly numbers of ili visits (mean ± standard deviation [sd]) are given for the child, teenage and adult, and elderly age groups, respectively. figure shows the ili-related emergency admission rates and if among the three age groups: child ( - years), teenage and adult ( - years), and elderly ( + years). the ili-related emergency admission rates (mean ± sd, per , population) were estimated for each age group; overall, the ili-related emergency admission rate was the highest in the child age group, whereas the lowest was observed in the teenage and adult age group (figure a). on the other hand, the highest ili-related emergency admission if was in the child age group, followed by the teenage and adult and the elderly age groups (figure b). to parameterize the bi model, the age-specific perception spread rate (a) has to be determined (equation ). we first calculated the age-specific r ,rpi based on the age-specific ili-related admission if. our results indicated that the lognormal (ln) distribution with a geometric mean (gm) and a geometric standard deviation (gsd), ln (gm, gsd), was the most suitable fitted model for the r ,rpi distributions of the child, teenage and adult, and elderly age groups, respectively (table ). figure demonstrates the age-specific overlapping percentage (i o ) between the infected populations with/without perception, adjusted by the fraction of the initial infected population with perceptual state over that without perceptual state. we used three different scenarios of the initial infected population fraction: i+/i- < 1 (figure a-c), i+/i- = 1 (figure d-f), and i+/i- > 1 (figure g-i). we showed that i+/i- > 1 results in the lowest estimates of i o in the child and elderly age groups (figure g and i), whereas for the teenage and adult age group the estimate was the highest in the case of i+/i- = 1 (figure e). the ranges of the i o estimates for the child, teenage and adult, and elderly age groups are shown in figure . thus, we used i o based on the justified initial infected population fraction to further examine mi max among each age group.
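fitting the ln(gm, gsd) form used above for the r ,rpi distributions reduces to taking geometric moments of the log-transformed estimates; a minimal sketch with invented seasonal values:

```python
# Sketch of fitting the lognormal LN(GM, GSD) form:
# GM = exp(mean(ln x)), GSD = exp(std(ln x)).
# Assumptions: the six seasonal R0,RPI values are invented for illustration.
import numpy as np

r0_rpi = np.array([1.15, 1.32, 1.08, 1.24, 1.41, 1.19])  # one age group, six seasons
log_x = np.log(r0_rpi)
gm = np.exp(log_x.mean())
gsd = np.exp(log_x.std(ddof=1))
print(f"LN(GM, GSD) = LN({gm:.2f}, {gsd:.2f})")
```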
generally, an individual with risk perceptual status is more likely to hold a communicable belief among the population. for a given effective information from contact numbers of individuals (n e ; figure a), when i+/i- < 1 and i+/i- = 1, mi max was < 1 bit. on the other hand, when i+/i- > 1, as the perceptual information increased in the population, mi max was > 1 bit, indicating that the network-based information reflected cooperativity. our results showed that the mi max -n e profile featured a smooth shape (figure b). in the elderly group, as the strength of n e increased, the mi max -n e profile followed a nearly smooth curve. on the other hand, in the child and elderly groups, once n e exceeded a threshold, mi max was > 1 bit. the resulting ranges of mi max for the child, teenage and adult, and elderly age groups across the examined range of n e are shown in figure b. to explore the impact of n e -varying perceived health behavior information on mi max , we estimated the correlation coefficient (r) based on the relationship between viral titer-based i and viral titer-based r corresponding to with and without perceived different health behaviors; the resulting estimates of r with and without perceived health behaviors are shown in figure s . we further used equation to calculate mi max based on the overlapping percentage (i o ) and r as affected by n e . the results indicated distinct ranges of mi max for the without-control, preventive-behavior, and protective-behavior scenarios in the child age group (figure a), and likewise for the teenage and adult age group (figure b). our results showed that perceiving the health behaviors increased mi max in the child and the teenage and adult age groups (figure a and b, respectively). our results also revealed that perceived preventive behavior improved risk perception in the teenage and adult (figure b) and elderly (figure c) age groups, but not in the child age group. our results indicated the probabilities of exceeding given infected fractions of the population (i) for the child (figure a), teenage and adult (figure b), and elderly (figure c) age groups in the condition without perceived health behaviors; however, there was a comparable probability of reducing the infected fraction of the population within given ranges for the preventive and protective behaviors (figure ). the age-specific ∆mi with respect to with/without health behaviors was estimated based on equations and . we found, for instance, that without any control information released, the median percentage decreases in infection risk at the examined ∆mi values were largest for the elderly, whereas the child age group had the lowest estimates (figure a). on the other hand, ∆mi estimates at incremental mi changes with perceived health behaviors spanned distinct ranges for the child, teenage and adult, and elderly age groups (figure b and c). the population with perceived health behaviors could not effectively decrease the percentage of infection risk in the child group, whereas for the elderly age group the percentage of infection risk decreased more significantly (figure b and c). our results indicated that it may be preferable for children to adopt protective behaviors. allison et al indicated that the use of hand gel for hygiene was a feasible strategy in elementary schools to prevent influenza spread. our results imply that children could use accurate knowledge about protective behavior to prevent influenza infection effectively. our results found that perceived protective behaviors enhanced mi max in adults, whereas perceived vaccination behavior might not. a meta-analysis of eligible studies also confirmed that raising risk perception from low to high would have a potential effect on the vaccination behavior of adults. we suggest that future studies should examine the differences among health behaviors in adults. schneeberg et al indicated that the vaccination rate for seasonal influenza was consistently low among the elderly population in canada. walter et al indicated that the elderly population failed to obtain information about vaccine perception from the internet directly. it was also found that face mask wearing was easily performed by older adults in hong kong. elderly people also appeared to be more active in conducting preventive measures. in this article, we incorporated the probability-based hbm with regard to specific health behaviors into an sir-based epidemiological model.
the hbm was used to examine individuals' perceptual dimensions such as perceived susceptibility, severity, benefits, and barriers. however, the hbm has raised somewhat controversial issues in explorations of health behaviors such as vaccination programs. the hbm presents a rational point of view that assumes the perceiver to be uninfluenced by emotion when describing the human response to an epidemic. our study, however, establishes a more robust mechanistic framework for modeling the influence of network-driven risk perception on influenza infection. to our knowledge, we have conducted the first step in exploring the effects of risk perception in a population on the spread of epidemics. we believe that our present methodology provides an innovative approach that integrates an epidemiological model with information theory. we examined three scenarios describing different age-specific populations to overcome susceptibility risk due to less accurate knowledge of influenza. we found that the noise effect, which is reflected in the overlapping percentages describing the uncertainty of accurate knowledge of influenza, can reduce risk perception information transfer on the network during epidemic transmission. for example, previous studies found that participants had misconceptions between the seasonal vaccine and the pandemic strain. the effect of overlapping responses may have resulted from public health campaigns; for example, people were recommended to acquire the seasonal vaccine during pandemics. this may lead to a feedback mechanism between behavior change and disease dynamics. future work should carefully consider the effects of this noise on specific age groups. moreover, each intervention should be investigated as carefully as possible during an epidemic. this study has several limitations. the estimation of the information flow of risk perception in age groups depended on the ili data. indeed, the human response to influenza varies with time and hence is difficult to detect in real-time situations. moreover, perceptual states in specific age groups may be affected by the severity levels of disease, the amount of accurate information about influenza, and other health-related leaflets. therefore, we suggest that health authorities could reinforce health monitoring by using information technology and then linking it to real-time epidemiological surveillance systems. a further limitation of our study is that we did not consider the factors influencing risk perception in an epidemic model. hence, future research should explicitly consider a number of additional influential factors on risk perception within epidemic modeling, including disease prevalence, network effects, and government and media health messages. the findings of our study have implications for public health. risk communication might be more effective if health authorities focus on a variety of information communication channels for conveying health behavior messages. moreover, our findings concerning the perception of different health behaviors show substantial differences among age groups. we found that perceived protective behaviors (e.g., covering the mouth when coughing, hand washing) could reduce the infection risk for all age groups. this suggests that such crucial information for control measures would allow targeting resources to the design and implementation of education plans concerning the health behaviors that are least perceived.
we developed an integrated mathematical model by incorporating the epidemiological transmission dynamics, the information flow of human responses, and an information theoretic model to assess the effects of network-driven risk perception on influenza infection risk. the simulated human responses with perceived health behaviors could decrease the risk of infection among different age groups. we demonstrated that risk perception among populations changed with the effective information varying with the contact numbers of individuals. we conclude that the present integrated bi model can help public health authorities communicate health messages in an intertwined belief network in which health-seeking behavior plays a key role in controlling influenza infection.
references:
dynamic modeling of vaccinating behavior as a function of individual beliefs
assessing vaccination sentiments with online social media: implications for infectious disease dynamics and control
risk perceptions related to sars and avian influenza: theoretical foundations of current empirical research
the perception of risk. london: routledge
factors associated with increased risk perception of pandemic influenza in australia
the public's response to the h n influenza pandemic
feasibility of elementary school children's use of hand gel and facemasks during influenza season. influenza other respir viruses
public knowledge, attitude and behavioural changes in an indian population during the influenza a (h n ) outbreak
perceived risk, anxiety, and behavioural responses of the general public during the early phase of the influenza a (h n ) pandemic in the netherlands: results of three consecutive online surveys
social contagion of risk perceptions in environmental management networks
a social network contagion theory of risk perception
endemic disease, awareness, and local behavioral response
epidemic spreading and risk perception in multiplex networks: a self-organized percolation method
social influence and the collective dynamics of opinion formation
entropy of dynamical social networks
measuring large-scale social networks with high resolution
viral reassortment as an information exchange between viral segments
assessing risk perception and behavioral responses to influenza epidemics: linking information theory to probabilistic risk modeling
network information analysis reveals risk perception transmission in a behaviour-influenza dynamics system
department of statistics of ministry of the interior in taiwan: statistical yearbook of interior
taiwan national infectious disease statistics system
infectious diseases of humans: dynamics and control
elements of information theory
sars related preventive and risk behaviours practised by hong kong-mainland china cross border travellers during the outbreak of the sars epidemic in hong kong
psychosocial factors influencing the practice of preventive behaviors against the severe acute respiratory syndrome among older chinese in hong kong
incorporating individual health-protective decisions into disease transmission models: a mathematical framework
predictors of cardiac rehabilitation initiation
perceptions about hiv and condoms and consistent condom use among male clients of commercial sex workers in the philippines
perceptions about preventing hepatocellular carcinoma among patients with chronic hepatitis in taiwan
meta-analysis of the relationship between risk perception and health behavior: the example of vaccination
knowledge, attitudes, beliefs and behaviours of older adults about pneumococcal immunization, a public health agency of canada/canadian institutes of health research influenza research network (pcirn) investigation
risk perception and information-seeking behaviour during the / influenza a (h n )pdm pandemic in germany
monitoring of risk perceptions and correlates of precautionary behaviour related to human avian influenza during - in the netherlands: results of seven consecutive surveys
factors affecting intention to receive and self-reported receipt of pandemic (h n ) vaccine in hong kong: a longitudinal study
vaccine perception among acceptors and non-acceptors in sokoto state
public views of the uk media and government reaction to the swine flu pandemic
a cross-sectional study of pandemic influenza health literacy and the effect of a public health campaign
comparison of live, attenuated h n and h n cold-adapted and avian-human influenza a reassortant viruses and inactivated virus vaccine in adults
use of the selective oral neuraminidase inhibitor oseltamivir to prevent influenza
selection of influenza virus mutants in experimentally infected volunteers treated with oseltamivir
efficacy and tolerability of the oral neuraminidase inhibitor peramivir in experimental human influenza: randomized, controlled trials for prophylaxis and treatment
double-blind evaluation of oral ribavirin (virazole) in experimental influenza a virus infection in volunteers
dose response of a/alaska/ / (h n ) cold-adapted reassortant vaccine virus in adult volunteers: role of local antibody in resistance to infection with vaccine virus
efficacy and safety of low dosage amantadine hydrochloride as prophylaxis for influenza a
cold recombinant influenza b/texas/ / vaccine virus (crb ): attenuation, immunogenicity, and efficacy against homotypic challenge
evaluation of the infectivity, immunogenicity, and efficacy of live cold-adapted influenza b/ann arbor/ / reassortant virus vaccine in adult volunteers
effects of the neuraminidase inhibitor zanamivir on otologic manifestations of experimental human influenza
oral oseltamivir in human experimental influenza b infection
the authors acknowledge the financial support of the ministry of science and technology, republic of china, under grant most - -e- - -my . all authors contributed toward data analysis, drafting, and critically revising the paper and agree to be accountable for all aspects of the work. the authors report no conflicts of interest in this work.
key: cord- - i jert authors: ashbolt, nicholas j.; amézquita, alejandro; backhaus, thomas; borriello, peter; brandt, kristian k.; collignon, peter; coors, anja; finley, rita; gaze, william h.; heberer, thomas; lawrence, john r.; larsson, d.g. joakim; mcewen, scott a.; ryan, james j.; schönfeld, jens; silley, peter; snape, jason r.; van den eede, christel; topp, edward
title: human health risk assessment (hhra) for environmental development and transfer of antibiotic resistance date: - - journal: environ health perspect doi: . /ehp. sha: doc_id: cord_uid: i jert background: only recently has the environment been clearly implicated in the risk of antibiotic resistance to clinical outcome, but to date there have been few documented approaches to formally assess these risks. objective: we examined possible approaches and sought to identify research needs to enable human health risk assessments (hhra) that focus on the role of the environment in the failure of antibiotic treatment caused by antibiotic-resistant pathogens. methods: the authors participated in a workshop held – march in québec, canada, to define the scope and objectives of an environmental assessment of antibiotic-resistance risks to human health. we focused on key elements of environmental-resistance-development "hot spots," exposure assessment (unrelated to food), and dose response to characterize risks that may improve antibiotic-resistance management options. discussion: various novel aspects to traditional risk assessments were identified to enable an assessment of environmental antibiotic resistance. these include a) accounting for an added selective pressure on the environmental resistome that, over time, allows for development of antibiotic-resistant bacteria (arb); b) identifying and describing rates of horizontal gene transfer (hgt) in the relevant environmental "hot spot" compartments; and c) modifying traditional dose–response approaches to address doses of arb for various health outcomes and pathways. conclusions: we propose that environmental aspects of antibiotic-resistance development be included in the processes of any hhra addressing arb. because of limited available data, a multicriteria decision analysis approach would be a useful way to undertake an hhra of environmental antibiotic resistance that informs risk managers.
a workshop (antimicrobial resistance in the environment: assessing and managing effects of anthropogenic activities), held in march in québec, canada, focused on antibiotic resistance in the environment and approaches to assessing and managing the effects of anthropogenic activities. the human health concern was identified as environmentally derived antibiotic-resistant bacteria (arb) that may adversely affect human health (e.g., reduced efficacy in clinical antibiotic use, more serious or prolonged infection) either by direct exposure of patients to antibiotic-resistant pathogen(s) or by exposure of patients to resistance determinants and subsequent horizontal gene transfer (hgt) to bacterial pathogen(s) on or within a human host, as conceptualized in figure .
arb hazards develop in the environment as a result of direct uptake of antibiotic-resistance genes (arg) via various mechanisms (e.g., mobile genetic elements such as plasmids, integrons, gene cassettes, or transposons) and/or proliferate under environmental selection caused by antibiotics and co-selecting agents such as biocides, toxic metals, and nanomaterial stressors (qiu et al. ; taylor et al. ), or by gene mutations (gillings and stokes ). depending on the presence of recipient bacteria, these processes generate either environmental antibiotic-resistant bacteria (earb) or pathogens with antibiotic resistance (parb) (figure ). human health risk assessment (hhra) is the process used to estimate the nature and probability of adverse health effects in humans who may be exposed to hazards in contaminated environmental media, now or in the future [u.s. environmental protection agency (epa) ]. in this review we focus on how to apply hhra to the risk of infections with pathogenic arb because they are an increasing cause of morbidity and mortality, particularly in developing regions. an antimicrobial-resistant microorganism has the ability to multiply or persist in the presence of an increased level of an antimicrobial agent compared with a susceptible counterpart of the same species. for this review, we limited the resistant group of microorganisms to bacteria and therefore to antibiotic resistance, an area in which the term "antibiotic" is used synonymously with "antibacterial."
it is important to understand the contribution that the environment makes to the development of resistance in both human and animal pathogens, because therapeutically resistant infections may lead to longer hospitalization, longer treatment time, failure of treatment therapy, and the need for treatment with more toxic or costly antibiotics, as well as an increased likelihood of death. a vast amount of work has been undertaken to understand the contribution and roles played by hospital and community settings in the dissemination and maintenance of arb infections in humans. a particular area of focus in terms of exposure in a community setting has been antibiotic use in livestock production and the presence of earb and parb in food of animal origin. the codex alimentarius commission [established by the food and agriculture organization of the united nations (fao) and the world health organization (who) to harmonize international food standards, guidelines, and codes of practice to protect the health of consumers and ensure fair practices in the food trade] released guidelines on processes and methodologies for applying risk analysis methods to foodborne antimicrobial resistance related to the use of antimicrobials in veterinary medicine and agriculture (codex alimentarius commission ). other sources of antibiotics and other antimicrobials in the environment are human sewage (dolejska et al. ), intensive animal husbandry, and waste from the manufacture of pharmaceuticals (larsson et al. ). the environmental consequences of the use and release of antibiotics from various sources (kümmerer a, b) and of the hgt of antibiotic-resistance genes (arg) between indigenous environmental and pathogenic bacteria and their resistance determinants (börjesson et al. ; chagas et al. ; chen et al. ; cummings et al. ; forsberg et al. ; gao et al. ; qiu et al. ) have yet to be quantified, but are of global concern (finley et al. ; who a). the genetic elements encoding the ability of microorganisms to withstand the effects of an antimicrobial agent are located either chromosomally or extrachromosomally and may be associated with mobile genetic elements such as plasmids, integrons, gene cassettes, or transposons, thereby enabling horizontal and vertical transmission from resistant to previously susceptible strains. from an hhra point of view, the emergence of arb in source and drinking water (de boeck et al. ; isozumi et al. ; shi et al. ) further highlights the need to place these emerging environmental risks in perspective. yet assessing the range of environmental contributions to antibiotic resistance may be complicated not only by the lack of quantitative data but also by the need to coordinate efforts across different agencies that may have jurisdiction over environmental risks versus human and animal health. a key consideration for arb development in the environment is that resistance genes can be present due to natural occurrence (d'costa et al. ). further, the use of antimicrobials in crops, animals, and humans provides a continued entry of antibiotics into the environment, along with possible novel genes and arb.
a summary of the fate, transport, and persistence of antibiotics and resistance genes after land application of waste from food animals that received antibiotics, or following outflow to surface water from sewage treatment, has emphasized the need to better understand the environmental mechanisms of genetic selection and gene acquisition, as well as the dynamics of resistance genes (the resistome) and their bacterial hosts (chee-sanford et al. ; cytryn ). for example, the presence of antibiotic residues in water from pharmaceutical manufacturers in certain parts of the world (fick et al. ), in ponds receiving intensive animal wastes (barkovskii et al. ), in aquaculture waters (shah et al. ), and in sewage outfalls (dolejska et al. ) are important sources, among others, leading to the presence of arg in surface waters. in particular, the comparatively high concentrations of antibiotics found in the effluent of pharmaceutical production plants have been associated with an increased presence of arg in surface waters (kristiansson et al. ; li et al. , ). most recently, the sequence identity observed between arg from a diverse set of clinical pathogens and common soil bacteria (forsberg et al. ) has highlighted the potential for environmental hgt between earb and parb. despite these concerns, few risk assessments have evaluated the combined impacts of antibiotics, arg, and arb in the environment on human and animal health (keen and montforts ). recent epidemiological studies have included evaluation of arb in drinking water and the susceptibility of commensal escherichia coli in household members. for example, coleman et al. ( ) reported that water, along with other factors not directly related to the local environment, accounted for the presence of resistant e. coli in humans. in many studies, native bacteria in drinking water systems have been shown to accumulate arg (vaz-moreira et al. ). in addition to addressing environmental risks arising from the development of antibiotic resistance, we should also consider the low probability but high impact "one-time event" type of risk involving the development and enrichment of parb. this exceedingly rare event, resulting in the transfer of a novel (to clinically important bacteria) resistance gene from a harmless environmental bacterium to a pathogen, need happen only once if a human is the recipient of the novel parb. unlike the emergence of sars (severe acute respiratory syndrome) and similar viruses, where, in hindsight, the risk factors are now well understood (swift et al. ), the conditions for a "one-time event" could occur in a range of "normal" habitats. once developed, the resistant bacterium/gene has the possibility to spread between humans around the world [as seen with the spread of ndm (new delhi metallo-beta-lactamase) resistance (wilson and chen )], promoted by our use of antibiotics. although it seems very difficult to quantify the probability of such a rare event (including assessing the probability of where and when it will happen), there is considerable value in trying to identify the risk factors (such as pointing out critical environments for hgt to occur, or identifying pharmaceutical exposure levels that could cause selection pressures and hence increase the abundance of a given gene). after such a critical hgt event, we may then move into a more quantitative kind of hhra.
the overall goal of the workshop (antimicrobial resistance in the environment: assessing and managing effects of anthropogenic activities) was to identify the significance of arb within the environment and to map out some of the complexities involved in order to identify research gaps and provide statements on the level of scientific understanding of various arb issues. a broad range of international delegates, including academics, government regulators, industry members, and clinicians, discussed the various issues. the focus of this review arose from discussions of improving our understanding of human health risks, in addition to epidemiological studies, by developing hhras to explore potential risks and inform risk management. because the end goal of an assessment depends on the context (e.g., research, regulation), we provide a generic approach to undertaking an hhra of environmental arb that can be adapted to the user's interest (conceptualized in figure ). given the many uncertainties, we also highlight identified research gaps. understanding other ongoing relevant international activities and the types of antibiotics used provides a good starting point to aid in framing a risk assessment of arb. the codex alimentarius commission ( ) described eight principles that are specific to risk analysis for foodborne antimicrobial resistance, several of which are generally applicable to an hhra of environmental arb. examples include the recommendations of the joint fao/who/oie expert meeting on critically important antimicrobials (food and agriculture organization of the united nations/world health organization/world organisation for animal health ) and the who advisory group on integrated surveillance of antimicrobial resistance (who b), which provided information for setting the priority antibiotics for a human risk assessment. it should be noted that there are significant national and regional differences in antibiotic use, resistance patterns, and human exposure pathways. in general, risk assessments are framed by identifying risks and management goals, so that the assessment informs the need for possible management options and enables evaluation of management success. the consensus of the workshop participants was that management could best be applied at the points of antibiotic manufacturing and use, agricultural operations including aquaculture, and wastewater treatment plants (pruden et al. ). assessing the relative impact of managing any particular part of a system is hampered by the lack of knowledge of the relative importance of each part of the system for the overall risk. that is, as recently stated by the who ( ), "amr is a complex problem driven by many interconnected factors so single, isolated interventions have little impact and coordinated actions are required." hence, a starting point for an assessment of environmental antibiotic-resistance risks intended to aid risk management is a theoretical risk assessment pathway based on a) local surveillance data on the occurrence and types of antibiotics used in human medicine, crop production, animal husbandry, and companion animals; b) information on arg and arb in the various environmental compartments (in particular, soil and aquatic systems, including drinking water); and c) related disease information.
this assessment should be amended through discussion with the relevant stakeholders, which requires extensive risk communication and could form part of the multicriteria decision analysis (mcda) approach discussed in detail below. as a result of the workshop, pruden et al. ( ) also advocated coupling environmental management and mitigation plans with targeted surveillance and monitoring efforts in order to judge the relative impact and success of the interventions. to undertake a useful human health risk assessment, some details require quantitative measures. thus, the key issue is how experimental and modeling approaches can be used to derive estimates. furthermore, hazard concentration-, time-, and environmental compartment-dependent aspects should also be taken into account. first, the current understanding is that for non-mutation-derived antibiotic resistance to develop in environmental bacteria (including pathogens that may actively grow outside of hosts), generating earb/parb (figure , processes and ), a selective pressure (i.e., the presence of antibiotics or antibiotic-resistance determinants) must be maintained over time in the presence of arg; for existing parb released into the environment, survival in environmental media is the critical factor. however, the exact mechanisms and quantitative relationships between selective pressures and arb development have yet to be elucidated, and they may differ depending on the antibiotic, bacterial species, and resistance mechanisms involved. in cases where the selective pressure is removed, the abundance of arb may be reduced, but not to extinction (andersson and hughes , ; cottell et al. ). even a small number of arb at the community level represents a reservoir of arg for horizontal transfer once pressure is reapplied. because it seems inevitable that arb will eventually develop against any antibiotic (levy and marshall ), the key management aim seems to be to delay and confine such development as much as possible. second, a robust quantitative risk assessment will require rates of hgt and/or gene mutations in the relevant compartments (figure , processes - ) to be described for different combinations of donating earb strains and receiving parb strains. the lack of quantitative estimates for mutation/hgt of arg is a major data gap. third, traditional microbial risk assessment dose-response approaches (figure , processes and ) could be used to address the likelihood of infection [codex alimentarius commission ; u.s. epa and u.s. department of agriculture/food safety and inspection service (usda/fsis) ], but the novel aspect required here, in addition to hgt and arb selection, would be to address quantitative dose-response relationships for earb (in the presence of a sensitive pathogen in or on a human) (figure , processes and ). importantly, the key difference from traditional hhra as undertaken in some jurisdictions is that it is essential to include environmental processes to fully assess the human risks associated with antibiotic resistance.
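as one hedged illustration of how the missing hgt rates noted above could be parameterized (a standard mass-action conjugation model, not a method proposed in the reviewed work):

```python
# Hedged sketch of plasmid conjugation as simple mass action:
# dT/dt = gamma_c * D * R, with D donors (eARB), R recipients (sensitive
# pathogens), T transconjugants (new pARB), gamma_c a bulk transfer rate.
# Assumptions: all rate constants and initial densities are illustrative.
import numpy as np
from scipy.integrate import odeint

def conjugation(y, t, gamma_c, mu):
    d, r, tr = y                      # donors, recipients, transconjugants (cells/mL)
    transfer = gamma_c * d * r        # mass-action transfer events per mL per hour
    return [mu * d, mu * r - transfer, mu * tr + transfer]

y0 = [1e4, 1e6, 0.0]
t = np.linspace(0, 48, 481)           # hours
sol = odeint(conjugation, y0, t, args=(1e-9, 0.05))  # gamma_c mL/cell/h, growth mu /h
print(f"transconjugants after 48 h: {sol[-1, 2]:.1f} cells/mL")
```

fitting gamma_c per environmental compartment (e.g., rhizosphere versus bulk water) is exactly the kind of quantitative estimate the text identifies as a major data gap.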
therefore, the types of information that should be documented for a human health-oriented risk assessment of environmental arb include the following [adapted from codex alimentarius commission ( )]:
• clinical and environmental surveillance programs for antibiotics, arb, and their determinants, with a focus on regional data reporting the types and use of antibiotics in human medicine, crops, and commercial and companion animals, as well as globally where crops and food animals are produced
• epidemiological investigations of outbreaks and sporadic cases associated with arb, including clinical studies on the occurrence, frequency, and severity of arb infections
• identification of the selection pressures (time and dose of selecting/co-selecting agents) required to select for resistance in different environments, and subsequent hgt to human-relevant bacteria, both based on reports describing the frequency of hgt and the uptake of arg into environmental bacteria, including environmental pathogens, in previously identified hot spots
• human, laboratory, and/or field animal/crop trials addressing the link between antibiotic use and resistance (particularly regional data)
• investigations of the characteristics of arb and their determinants (ex situ and in situ)
• studies on the link between resistance, virulence, and/or ecological fitness (e.g., survivability or adaptability) of arb
• studies on the environmental fate of antibiotic residues in water and soil and their bioavailability associated with the selection of arb in any given environmental compartment, animal, or human host resulting in parb
• existing risk assessments of arb and related pathogens.
in summary, many sources of data are required to undertake a human health risk assessment for environmental arb, and much of the data may be severely limited (particularly for a quantitative assessment). thus, the final risk assessment report should emphasize the importance of the evidence trail and the weight of evidence for each finding. furthermore, when models are constructed, previously unused data sets should be considered for model verification where possible. human health risk assessment of antibiotics in the environment builds on traditional chemical risk assessments (national research council ), starting, for example, with an acceptable daily intake (adi) based on resistance data (vich steering committee ). a corresponding metric for environmental antibiotic concentration could be developed based on the concept of the minimum selective concentration (msc) (gullberg et al. ), defined as the minimum concentration of an antibiotic agent that selects for resistance. unlike the traditional chemical risk assessment approach, with the msc assay it would be necessary to address the human health effects arising from arg and the resistance determinants that give rise to arb, including resistance associated with mutations (figure , processes and ). in the absence of specific data, an msc assay could inform a risk assessor of the selective concentration of a pharmaceutical or a complex mixture of compounds in a matrix of choice, allowing the description of thresholds for significant arb development.
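a hedged sketch of the msc idea of gullberg et al.: the msc can be located where the net growth rate of the susceptible strain falls to that of the resistant strain carrying a fitness cost; the dose-response shape and all parameters below are invented for illustration.

```python
# Hedged sketch: locate the MSC as the antibiotic concentration at which the
# susceptible strain's growth rate equals the resistant strain's (cost-paying)
# growth rate. Assumptions: the inhibition curve shape, MIC scaling, the 5%
# fitness cost, and the flat resistant response are illustrative placeholders.
from scipy.optimize import brentq

def growth_susceptible(c, mu=1.0, mic=1.0, kappa=2.0):
    return mu * (1.0 - (c / mic) ** kappa)    # pharmacodynamic inhibition

def growth_resistant(c, mu=1.0, cost=0.05):
    return mu * (1.0 - cost)                   # assumed flat over the sub-MIC range

msc = brentq(lambda c: growth_susceptible(c) - growth_resistant(c), 1e-6, 1.0)
print(f"MSC ~ {msc:.3f} x MIC of the susceptible strain")
```

the design point is that selection can occur well below the mic, which is why an msc-based environmental threshold would be more protective than an mic-based one.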
pathogen risks may be evaluated through microbial risk assessment (mra), a structured, systematic, science-based approach that builds on the chemical risk assessment paradigm; the mra involves a) problem formulation (describing the hazards, risk setting, and pathways), b) exposure assessment of the hazard (arb, arg), c) dose-response assessment that quantifies the relationship between hazard dose and parb infection in humans (figure , processes and ), and d) combination of these procedures to characterize the risk for the various pathways of exposure to the pathogens identified for assessment. an mra is used qualitatively or quantitatively to evaluate the level of exposure and the subsequent risk to human health from microbiological hazards. in the context of antibiotic-resistant microorganisms, environmental mra is in its infancy but is needed to address resistant bacteria and/or their determinants. the mra was originally developed for fecal pathogen hazards in food and water [ilsi (international life sciences institute) ], with more recent modifications to include biofilm-associated environmental pathogens such as legionella pneumophila (schoen and ashbolt ). some human pathogens can grow in the environment (and may become parb; figure , processes and ), and many will infect only compromised individuals (generally termed opportunistic pathogens). over the past years, the mra has largely evolved through input from the international food safety community, and it is now a well-recognized and accepted approach for food safety risk analysis. the codex alimentarius adopted the principles and guidelines for the conduct of microbiological risk assessment (cac/gl ) (codex alimentarius commission ). the most recent codex alimentarius guidelines for risk analysis of foodborne antimicrobial resistance include eight principles (codex alimentarius commission ), and in the united states, mra guidelines for food and water (u.s. epa and usda/fsis ) continue to use the four-step framework originally described for chemical risk assessment. several arb risk assessments have been published and reviewed in recent years (geenen et al. ; mcewen ; snary et al. ). however, nearly all of these studies focus on foodborne transmission; human health risk assessments dealing with arb transmission via various environmental routes or direct contact with arg are sparse. for example, geenen et al. ( ) studied extended-spectrum beta-lactamase (esbl)-producing bacteria and identified the following risk factors: previous admission to health-care facilities, use of antimicrobial drugs, travel to high-endemic countries, and having esbl-positive family members. the authors concluded that an environmental risk assessment would be helpful in addressing the problem of esbl-producing bacteria but that none had been performed. hazard identification and hazard characterization. unfortunately, we are unaware of data that quantitatively link arg uptake and human health effects (figure , processes and ). what data do exist, and are rapidly improving in quality, concern the presence of arg within various environmental compartments (allen et al. ; cummings et al. ; ham et al. ), specifically clinically relevant resistance genes within soils (forsberg et al. ) (figure , process ). precursors that lead to the development of arb hazards include arg and the mechanisms that mobilize these genes, antibiotics, and co-selecting agents (qiu et al. ; taylor et al. ), along with gene mutations (gillings and stokes ).
depending on the presence of recipient bacteria, these processes generate either earb or parb (figure , processes and ). with regard to the numerous parameters relevant to individual environmental compartments, we are not aware of the availability of comprehensive data on a) antibiotic-resistance development by antibiotics and other co-selecting agents; b) the flow of arg (the resistome) and acquisition elements (e.g., integrons) in native environmental-compartment bacteria; or c) the likely range of rates of horizontal and vertical gene transfer within environmental compartments. nonetheless, factors that are considered important include the range of potential pathways involving the release of antibiotics, arg, and arb into, and their amplification within, environmental compartments such as the rhizosphere, bulk soil, compost, biofilms, wastewater lagoons, rivers, sediments, aquaculture, plants, birds, and wildlife. with respect to antibiotics in general, the following information is required to aid hazard characterization: a) a list of the local antibiotic classes of concern, b) what is known of their environmental fate, and c) where they may accumulate in particular environmental compartments (e.g., the rhizosphere, general soil, compost, biofilms, wastewater lagoons, rivers, sediments, aquaculture, plants, birds, wildlife, farm animals, or companion animals). selection for arb (figure , process ) will depend on the type and in situ bioavailability of selecting/co-selecting agents, the abundance of bacterial hosts, and the abundance of ar determinants. selection for arb is further modulated by the nutritional status of members of the relevant bacterial community, because high metabolic activity and high cell density promote bacterial community succession and hgt (brandt et al. ; sørensen et al. ). in contrast, hgt is relatively independent of antibiotics (although antibiotics and arb may be co-transported; chen et al. ), and increases in hgt rates are thought to occur in stressed bacteria. for example, integrase expression can be upregulated (increased) by the bacterial sos response (a process for dna repair) in the presence of certain antibiotics (guerin et al. ). although quantitative data describing the development of parb in the environment are lacking, ample evidence exists for co-uptake by an antibiotic-sensitive pathogen, in the presence of an antibiotic, of arg (such as on a plasmid with metal resistance) and/or carbon utilization genes (knapp et al. ; laverde gomez et al. ), or as demonstrated in vitro for a disinfectant/nanomaterial (qiu et al. ; soumet et al. ). the spatial distribution of organisms (the opportunity for close proximity) may also affect gene transfer, which results from inherent patchiness, soil structure, presence of substrates, and so forth. in considering gene transfer rates, there may be hot spots of activity; for example, there is evidence for hgt of clinically relevant resistance genes between bacteria in manure-impacted soils (forsberg et al. ) and in association with the rhizosphere because of its organic-rich conditions (pontiroli et al. ). in addition, selection pressures for the subsequent proliferation of earb may be higher in these hot spot environments (brandt et al. ; li et al. ). therefore, it is important to recognize likely zones of high activity during the problem formulation and hazard characterization stages of a risk assessment, and when using sampling to identify in situ exchange rates.
as an example marker of anthropogenic impact with the potential to predict impacts on the mobile resistome, class 1 integrons could be used because of their ability to integrate gene cassettes that confer a wide range of antibiotic and biocide resistance (gaze et al. ). in semi-pristine soils, their prevalence may be two or three orders of magnitude lower than in impacted soils and sediments (gaze et al. ; zhu et al. ). in addition to the huge diversity of earb hazards, there are several pathogens that could be evaluated in microbial risk assessments: a) foodborne and waterborne fecal pathogens, represented by campylobacter jejuni, salmonella enterica, or various pathogenic e. coli; and b) environmental pathogens, such as respiratory, skin, or wound pathogens, represented by legionella pneumophila, staphylococcus aureus, and pseudomonas aeruginosa. each of these fecal and environmental pathogens is well known to be able to acquire arg; thus, given further data on environmental hgt rates, they could be used as reference pathogens in microbial risk assessments. however, what is much more problematic for risk assessment, and a current limiting factor, is the rate at which indigenous bacteria transfer resistance to these pathogens within each environmental compartment and within the human/animal host (figure , processes - ). methods to model and experimentally derive relevant information on these environmental issues are discussed below in "environmental exposure assessment." data on hgt within the human gastrointestinal tract have been summarized by hunter et al. ( ). dose-response relationships. to properly characterize human risks, it is typical to select hazards for which there are dose-response health data described either deterministically or stochastically, as available for the reference enteric pathogens (e.g., campylobacter jejuni, salmonella enterica, e. coli) (schoen and ashbolt ); such dose-response health data have yet to be quantified for the skin/wound reference pathogens (mena and gerba ; rose and haas ). however, as noted above for processes - (figure ), an important difference for arb is the need to account for the phenomena associated with selective environmental pressures for the development of arb, which ultimately form the human infective dose of either earb or parb. the exact mechanisms and dose-response relationships have yet to be elucidated and may differ depending on the bacterial species and resistance mechanisms involved. nevertheless, it seems reasonable for the noncompromised human exposed to a parb to fit the published dose-response/infection relationship (e.g., derived from "feeding" trials with healthy adults or from information collected during outbreak investigations) for strains of the same pathogen without antibiotic resistance. what appears more limiting are dose-response models that describe the probability of illness based on the conditional probability of infection, including for people who are already compromised, such as those undergoing antibiotic therapy. although there are definitive data on parb being more pathogenic or causing more severe illness than their antimicrobial-susceptible equivalents (barza ; helms et al. , ; travers and barza ), that may not always be the case (evans et al. ; wassenaar et al. ). clear examples of excess mortality include associated bloodstream infections with methicillin-resistant staphylococcus aureus (mrsa) and with third-generation cephalosporin-resistant e. coli (g3crec). in participating european countries, cases of mrsa bloodstream infection were associated with substantial excess deaths and excess hospital days, and episodes of g3crec bloodstream infection were likewise responsible for excess deaths and extra hospital days (de kraker et al. ). the authors predicted that the combined burden of mrsa and g3crec resistance will likely lead to a growing incidence of associated deaths per inhabitants. yet for many regions of the world, such predictions are less well understood.
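for the reference enteric pathogens mentioned under "dose-response relationships," qmra practice typically uses exponential or approximate beta-poisson models; a minimal sketch with illustrative (not pathogen-validated) parameters:

```python
# Sketch of the two dose-response forms typically used in QMRA.
# Assumptions: r, alpha, and N50 below are illustrative placeholders, not
# the fitted parameters for any specific reference pathogen.
import numpy as np

def p_inf_exponential(dose, r):
    """Exponential model: each organism has independent probability r of infecting."""
    return 1.0 - np.exp(-r * dose)

def p_inf_beta_poisson(dose, alpha, n50):
    """Approximate beta-Poisson model parameterized by alpha and the median
    infectious dose N50."""
    return 1.0 - (1.0 + dose * (2.0 ** (1.0 / alpha) - 1.0) / n50) ** (-alpha)

doses = np.logspace(0, 4, 5)                    # 1 to 10,000 organisms
print(p_inf_exponential(doses, r=1e-3))
print(p_inf_beta_poisson(doses, alpha=0.145, n50=896.0))
```

the arb-specific extension flagged in the text would sit upstream of these curves: the ingested dose itself would have to be partitioned into susceptible and resistant fractions shaped by environmental selection and hgt.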
clear examples of excess mortality include associated bloodstream infections for methicillin-resistant staphylococcus aureus (mrsa) and for third-generation cephalosporin-resistant e. coli (g crec). in , in participating european countries, , cases of mrsa were associated with , excess deaths and , excess hospital days, and , episodes of g crec bloodstream infections were responsible for , excess deaths and , extra hospital days (de kraker et al. ). the authors predicted that the combined burden of resistance of mrsa and g crec will likely lead to a predicted incidence of . associated deaths per , inhabitants in . yet for many regions of the world, such predictions are less well understood. the final step of mra is risk characterization, which integrates the outputs from the hazard identification, the hazard characterization, dose-response, and the exposure assessment with the intent to generate an overall estimate of the risk. this estimate may be expressed in various measures of risk, for example, in terms of individual or population risk, or as an estimate of annual risk based on exposure to specific hazard(s). depending on the purpose of the risk assessment, the risk characterization can also include the key scientific assumptions used in the risk assessment, sources of variability and uncertainty, and a scientific evaluation of risk management options. based on our conceptualization of the processes important to undertake hhra of arb (figure ), most elements related to arb development in environmental media (processes , , and ) have been addressed above in "hazard identification and hazard characterization." here we focus on describing important environmental compartments for, and human exposure to, arb (figure , processes and ). concentrations of environmental factors (such as antibiotics) and arb, along with their fate and transport to points of human uptake, are critical to exposure assessment. for a particular human health risk assessment of arb, it would be important to select/expand on individual pathway scenarios (identifying critical environmental compartments to human contact) relevant to the antibiotic/resistance determinants identified in the problem formulation and hazard characterization stages. compartments of potential concern include soil environments receiving animal manure or biosolids, compost, and lagoons, rivers, and their sediments receiving wastewaters (chen et al. ). more traditional routes of human exposure to contaminants that could include earb and parb are drinking water, recreational and irrigation waters impacted by sewage and/or antibiotic production wastewaters, food, and air affected by farm buildings and exposure to farm animal manures, as discussed by pruden et al. ( ). what is emerging as an important research gap is the in situ development of arb within biofilms (boehm et al. ) and their associated free-living protozoa that may protect and transport arb (abraham ) to and within drinking water systems (schwartz et al. ; silva et al. ). this latter route could be particularly problematic for hospital drinking water systems, where an already vulnerable population is exposed. in addition, with the increasing use of and exposure to domestically collected rainwater, atmospheric fallout of arb may "seed" household systems (kaushik et al. ).
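risk characterization often reports an annual individual risk built up from per-event infection probabilities. a minimal monte carlo sketch, assuming independent exposure events and a placeholder uncertainty distribution for the per-event risk:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: per-event infection probabilities sampled from an
# uncertainty distribution, and an assumed number of exposure events per year.
p_event = rng.beta(2, 200, size=10_000)   # placeholder distribution
events_per_year = 20                      # placeholder exposure frequency

# Annual individual risk, assuming independent exposure events.
p_annual = 1.0 - (1.0 - p_event) ** events_per_year

print(f"median annual risk: {np.median(p_annual):.3e}")
print(f"95% interval: {np.percentile(p_annual, [2.5, 97.5])}")
```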
after identifying antibiotic concentrations and pathogen densities in the environment, as well as possible levels and rates of arb generation in each environmental compartment, a range of fate and transport models are available to estimate the amounts of antibiotics, pathogens, arb, and arg at points of human contact (figure , processes and ). such models are largely based on hydrodynamics, with pathogen-specific parameters to account for likely inactivation/predation in soil and aquatic environments, such as sunlight inactivation (bradford et al. ; cho et al. ; ferguson et al. ). a key aspect of the fate and transport models is the ability to account for the inherent variability of any system component. in addition, our uncertainties in assessing model parameter values should be factored into fate and transport models, such as by using bayesian synthesis methods (albert et al. ; williams et al. ). to better account for parameter uncertainties, more recent models include bayesian learning algorithms that help to integrate information using meteorologic, hydrologic, and microbial explanatory variables (dotto et al. ; motamarri and boccelli ). overall, these models also help to identify management opportunities to mitigate exposures to arb and antibiotics and are an important aspect in describing the pathways of hazards to points of human exposure in any risk assessment. considering the complexity of exposure pathways associated with environmental arb risks and the large uncertainty in the input data for some nodes along the various exposure pathways, outputs would inevitably be difficult for decision makers to interpret and could in fact be counterproductive. thus, there is merit in considering decision analysis approaches to prioritize risks, guide resource allocation and data collection activities, and facilitate decision making. although there is a range of ranking options, uses of weightings, and selection criteria (cooper et al. ; pires and hald ), as well as failure mode and effects analysis (pillay and wang ), in the overall area of microbial risk assessment there is a consolidation toward mcda approaches that may include bayesian algorithms (lienert et al. ; ludwig et al. ; ruzante et al. ). approaches such as mcda are designed to provide a structured framework for making choices where multiple factors need to be considered in the decision-making process. mcda is a well-established tool that can be used for evaluating and documenting the importance assigned to different factors in ranking risks (lienert et al. ), albeit heavily dependent on expert opinion. in the context of mra, mcda has been used to rank foodborne microbial risks based on multiple factors, including public health, market impacts, consumer perception and acceptance, and social sensitivity (ruzante et al. ), as well as to prioritize and select interventions to reduce pathogen exposures (fazil et al. ). examples of mcda applications in structuring decisions for managing ecotoxicological risks have also been reported (linkov et al. ; semenzin et al. ) and provide useful mcda approaches. mcda could be used, for example, to evaluate and rank the relative risks between habitats highly polluted with antibiotics, arg, and arg determinants, as described above for possible hot spots for hgt and development of arb.
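as a toy illustration of how an mcda weighted-sum score could rank candidate hot-spot habitats of the kind just described, the sketch below uses hypothetical criteria scores and placeholder expert weights; none of these numbers come from the workshop.

```python
import numpy as np

# Hypothetical expert-elicited criteria scores (0-10) for three candidate
# hot-spot habitats; criteria follow those suggested in the text.
criteria = ["mobility of resistance determinants", "transfer rate",
            "antibiotic accumulation", "transport to exposure points"]
scores = np.array([
    [8, 7, 9, 6],   # pharmaceutical-effluent-impacted river
    [6, 8, 5, 7],   # manure-amended soil
    [4, 3, 6, 9],   # drinking-water biofilm
])
weights = np.array([0.3, 0.3, 0.2, 0.2])   # placeholder consensus weights

# Weighted-sum aggregation, the simplest MCDA scoring rule.
ranking = scores @ weights
for habitat, value in zip(["effluent river", "manured soil", "dw biofilm"],
                          ranking):
    print(f"{habitat}: {value:.2f}")
```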
mcda could be applied to evaluate the relative contribution of coselecting agents (e.g., detergents, biocides, metals, nanomaterials) from various sources to the overall risk of arb in the environment. moreover, for a range of antibiotics considered to be of environmental concern, mcda approaches could be used for risk ranking according to criteria based on relevant contributing factors (e.g., mobility of resistance determinants in genetic elements, antibiotic resistance transfer rates in different environmental compartments, accumulation levels of antibiotics in environmental compartments, environmental fate and transport to exposure points). in the mcda process, it is also important to identify low-probability but high-impact "one-time-event" types of risk. because mcda techniques rely on expert opinion (which is often regarded as a limitation of such approaches), well-structured and recognized elicitation practices should be used in order to avoid the introduction of biases and errors by subjective scoring. in contrast, one of the main advantages of mcda techniques is that they capture a consensus opinion among an expert group about the most relevant criteria and their relative weight on the decision. there are several research gaps that need to be addressed. in particular, specific attention should be paid to contaminated habitats (hot spots) in which antibiotics, coselecting agents, bacteria carrying resistance determinants on mobile genetic elements, and favorable conditions for bacterial growth and activity (all conditions expected to favor hgt) prevail at the same time. however, because these data are currently very limited, workshop participants evaluated alternative ways and possible experimental methods to address these data gaps for hhra as they relate to the processes identified in figure . assays to determine msc (processes , , and ). assays could be developed to measure msc (gullberg et al. ) for a range of antibiotics and environmental conditions. for example, assays could be developed and validated in sandy and clay soils, different sediments, and water types, with isogenic pairs of the model organism inoculated into the matrix of choice and subjected to a titration of the selective agent to sufficiently high dilution. selection at subinhibitory concentrations and assay development for environmental matrices are key areas of research that need to be addressed. however, overall care is needed when interpreting ex situ studies and extrapolating to in situ environmental conditions, as well as in dealing with ill-defined hazard mixtures in the environment. identifying hot spots (processes , , and ). hot spots, locations where a high level of hgt and antibiotic resistance development occurs, may, for instance, include aquatic environments affected by pharmaceutical industry effluents, aquaculture, or sewage discharges, as well as terrestrial environments affected by the deposition of biosolids or animal manures. the degree of persistence of antibiotic resistance (i.e., the rate at which resistance disappears without an environmental selection pressure for resistance) must also be considered for risk assessment and will depend on the fitness cost of resistance. however, the fitness costs within complex and variable environments are difficult to assess. furthermore, standard methods have not been developed for evaluating environmental selection pressures in complex microbial communities, but several experimental approaches are possible and have been described elsewhere (berg et al.
; brandt et al. ). the approaches identified by berg et al. ( ) and brandt et al. ( ) could be laboratory based (to assess the potency of known compounds/mixtures) or applied in the field to assess whether the environment in question (with, for example, its unknown mixture of chemicals) is a hot spot. defining "critical exposure levels" is therefore an important hhra output to aid management activities, and these levels will likely vary between and within environmental compartments, depending on the location. screening for novel resistance determinants (to reduce process ). screening procedures could be introduced early in the development cycle of novel antibiotics to ensure that existing resistance determinants are not prevalent in environmental compartments. marked recipient strains could be inoculated into environmental matrices [e.g., soil, biosolids, or fecal slurry (with sterilized matrix equivalents as negative controls)], incubated, and then seeded onto media containing the study compound along with a selective antibiotic to recover marked recipient strains demonstrating resistance. plasmids, or the entire genome of the recipient, could then be cloned into small-insert expression vectors, transformed into e. coli or other hosts, and seeded back onto media containing the study compound. in this way, novel resistance determinants would be identified. alternatively, functional metagenomics could be used to identify novel resistance determinants in metagenomic dna (allen et al. ). in brief, dna would be extracted from an environmental sample, cloned into an expression vector, and transformed into a bacterial host such as e. coli. transformants could then be screened on the study compound, and resistance genes identified using transposon mutagenesis followed by sequencing and bioinformatic analyses. this would allow detection of novel resistance determinants that may not be plasmid borne but may transfer to other pathogens. dose-response data needs (processes , , and ). we were unaware of dose-response data representing a combined arg and recipient (previously susceptible) pathogen dose and human or animal disease (figure , processes and ). in contrast, various examples illustrate increased morbidity and mortality when humans are exposed to parb, as described above in "dose-response relationships." hence, existing published dose-response models for non-resistant pathogens may not be appropriate to use beyond the end point of infection, and further dose-response models that address people at various life stages need to be described and summarized to facilitate parb risk assessments. there is also a need to develop dose-response information for secondary illness end points (sequelae) resulting from parb infections. regarding the antibiotic concentration and time of exposure giving rise to selection of parb within a human (for co-uptake of earb and a sensitive pathogen), safety could be based on the effective concentration for the specific antibiotic under consideration. in other words, screening values to determine whether further action is warranted could be derived from the acute or mean daily antibiotic intake, with uncertainty factors applied as appropriate, until future research is undertaken on pathogen antibiotic-response changes in the presence of specific antibiotic treatment. alternatively, epidemiological data from existing clones of antibiotic-resistant strains (e.g., ndm ) could provide useful data for dose-response and exposure assessments.
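as one way to operationalize the msc assays discussed above, a pairwise competition between an isogenic resistant/susceptible pair can be summarized by a per-generation selection coefficient; the msc is then the agent concentration at which the coefficient crosses zero. the data below are placeholders, assuming a gullberg-style design rather than reproducing it.

```python
import numpy as np

def selection_coefficient(ratio_start, ratio_end, generations):
    """Per-generation selection coefficient of the resistant strain,
    s = ln(R_end / R_start) / generations, where R is the
    resistant:susceptible ratio. The MSC is the concentration at which
    s crosses zero."""
    return np.log(ratio_end / ratio_start) / generations

# Placeholder competition data across an antibiotic titration in one matrix.
concentrations = [0.0, 0.01, 0.1, 1.0]   # arbitrary units
end_ratios = [0.8, 0.95, 1.3, 4.0]       # resistant:susceptible after growth
for c, r_end in zip(concentrations, end_ratios):
    s = selection_coefficient(1.0, r_end, generations=10)
    print(f"conc {c:5.2f}: s = {s:+.3f}")
```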
options for ranking risks (overall hhra). in the absence of fully quantitative data to undertake an hhra, risk-ranking approaches based on exposure assessment modeling could be adopted and developed to inform the allocation of resources for data generation as part of an hhra of arb. evers et al. ( ) presented one such approach in the context of food safety for estimating the relative contribution of campylobacter spp. sources and transmission routes to exposure per person-day in the netherlands. their study included transmission routes related to direct contact with animals and ingestion of food and water, and resulted in a ranking of the most significant sources of exposure. although their study focused on foodborne transmission routes and did not deal with antibiotic-resistant campylobacter strains, a similar approach could be applied to estimate human exposure to arb hazards using the environmental exposure pathways described by evers et al. ( ). this would require data on the prevalence of arg and arb, as well as the levels of antibiotics present in all exposure routes, to be considered in the risk assessment. although such an approach is probably not currently feasible, improved environmental data for a select number of pathogen-gene combinations could be developed in the future. an alternative approach to capturing the knowledge of experts and other stakeholders could be to develop a bayesian network based on expert knowledge and add to it as data become available, as described for campylobacters in foods by albert et al. ( ). because we are addressing an international problem and because the precautionary approach is used in many jurisdictions, there are many risk management approaches that can be implemented now, before antibiotic resistance issues worsen, as noted in the related risk management paper resulting from the workshop (pruden et al. ). furthermore, many current risk management schemes start the process from a management perspective and delve into quantitative assessments as needed in order to improve risk management actions, such as in the who water safety plans (who ). we propose that environmental aspects of antibiotic-resistance development be included in the processes of any hhra addressing arb. in general terms, an mra appears suitable to address environmental human health risks posed by the environmental release of antibiotics, arb, and arg; however, at present, there are still too many data gaps to realize that goal. further development of this type of approach requires data mining from previous epidemiological studies to aid in model development, parameterization, and validation, as well as the collection of new information, particularly that related to conditions and rates of arb development in various hot spot environments, and for the various human health dose-response unknowns identified in this review. in the near term, options likely to provide a first-pass assessment of risks seem likely to be based on mcda approaches, which could be facilitated by bayesian network models. once these mra models gain more acceptance, they may facilitate scenario testing to determine which control points may be most effective in reducing risks and which risk-driving attributes should be specifically considered and minimized during the development of novel antibiotics.
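a toy version of the evers-style exposure ranking referenced above might aggregate, per route, the prevalence of arb and the intake given presence; all route names and numbers below are placeholders, not data from that study.

```python
# Hypothetical relative-contribution calculation: for each exposure route,
# exposure per person-day = prevalence of ARB in the route x mean intake
# given presence. All values are placeholders.
routes = {
    "drinking water":     (0.05, 1.2),
    "recreational water": (0.20, 15.0),
    "raw produce":        (0.10, 8.0),
    "animal contact":     (0.02, 50.0),
}

exposure = {r: prev * intake for r, (prev, intake) in routes.items()}
total = sum(exposure.values())
for route, e in sorted(exposure.items(), key=lambda kv: -kv[1]):
    print(f"{route:18s} {e:6.2f} organisms/person-day ({100 * e / total:4.1f}%)")
```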
megacities as sources for pathogenic bacteria in rivers and their fate downstream
quantitative risk assessment from farm to fork and beyond: a global bayesian approach concerning food-borne diseases
functional metagenomics reveals diverse betalactamases in a remote alaskan soil
antibiotic resistance and its cost: is it possible to reverse resistance?
persistence of antibiotic resistance in bacterial populations
positive and negative selection towards tetracycline resistance genes in manure treatment lagoons
potential mechanisms of increased disease in humans from antimicrobial resistance in food animals
cu exposure under field conditions coselects for antibiotic resistance as determined by a novel cultivation-independent bacterial community tolerance assay
second messenger signalling governs escherichia coli biofilm induction upon ribosomal stress
quantification of genes encoding resistance to aminoglycosides, beta-lactams and tetracyclines in wastewater environments by real-time pcr
transport and fate of microbial pathogens in agricultural settings
community tolerance to sulfadiazine in soil hotspots amended with artificial root exudates
multiresistance, beta-lactamase-encoding genes and bacterial diversity in hospital wastewater in rio de janeiro, brazil
fate and transport of antibiotic residues and antibiotic resistance genes following land application of manure waste
differentiating anthropogenic impacts on args in the pearl river estuary by using suitable gene indicators
class integrons, selected virulence genes, and antibiotic resistance in escherichia coli isolates from the minjiang river
the modified swat model for predicting fecal coliforms in the wachusett reservoir watershed, usa
principles and guidelines for the conduct of microbiological risk assessment. cac/gl-
guidelines for risk analysis of foodborne antimicrobial resistance
the role of drinking water in the transmission of antimicrobial-resistant e. coli
preliminary risk assessment database and risk ranking of pharmaceuticals in the environment
persistence of transferable extended-spectrum-β-lactamase resistance in the absence of antibiotic pressure
the soil resistome: the anthropogenic, the native, and the unknown
broad dissemination of plasmid-mediated quinolone resistance genes in sediments of two urban coastal wetlands
antibiotic resistance is ancient
esbl-positive enterobacteria isolates in drinking water
mortality and hospital stay associated with resistant staphylococcus aureus and escherichia coli bacteremia: estimating the burden of antibiotic resistance in europe
ctx-m- -producing escherichia coli clone b -o b-st and klebsiella spp. isolates in municipal wastewater treatment plant effluents
comparison of different uncertainty techniques in urban stormwater quantity and quality modelling
short-term and medium-term clinical outcomes of quinolone-resistant campylobacter infection
campylobacter source attribution by exposure management
choices, choices: the application of multi-criteria decision analysis to a food safety decision-making problem
modeling of variations in watershed pathogen concentrations for risk management and load estimations
contamination of surface, ground, and drinking water from pharmaceutical production
the scourge of antibiotic resistance: the important role of the environment
united nations/world health organization/world organisation for animal health
the shared antibiotic resistome of soil bacteria and human pathogens
correlation of tetracycline and sulfonamide antibiotics with corresponding resistance genes and resistant bacteria in a conventional municipal wastewater treatment plant
impacts of anthropogenic activity on the ecology of class integrons and integron-associated genes in the environment
risk profile on antimicrobial resistance transmissible from food animals to humans. rivm rapport. bilthoven: national institute for public health and the environment (rivm)
are humans increasing bacterial evolvability?
a framework for global surveillance of antibiotic resistance
the sos response controls integron recombination
selection of resistant bacteria at very low antibiotic concentrations
quantitative microbial risk assessment
distribution of antibiotic resistance in urban watershed in japan
quinolone resistance is associated with increased risk of invasive illness or death during infection with salmonella serotype typhimurium
adverse health events associated with antimicrobial drug resistance in campylobacter species: a registry-based cohort study
meta-analysis of experimental data concerning antimicrobial resistance gene transfer rates during conjugation
a conceptual framework to assess the risks of human disease following exposure to pathogens
bla ndm- -positive klebsiella pneumoniae from environment
influence of air quality on the composition of microbial pathogens in fresh rainwater
antimicrobial resistance in the environment
antibiotic resistance gene abundances correlate with metal and geochemical conditions in archived scottish soils
pyrosequencing of antibiotic-contaminated river sediments reveals high levels of resistance and gene transfer elements
antibiotics in the aquatic environment - a review - part i
antibiotics in the aquatic environment - a review - part ii
effluent from drug manufactures contains extremely high levels of pharmaceuticals
a multiresistance megaplasmid plg bearing a hyl efm genomic island in hospital enterococcus faecium isolates
antibacterial resistance worldwide: causes, challenges and responses
antibiotic-resistance profile in environmental bacteria isolated from penicillin production wastewater treatment plant and the receiving river
antibiotic resistance characteristics of environmental bacteria from an oxytetracycline production wastewater treatment plant and the receiving river
occurrence of chloramphenicol-resistance genes as environmental pollutants from swine feedlots
multiple-criteria decision analysis reveals high stakeholder preference to remove pharmaceuticals from hospital wastewater
from comparative risk assessment to multi-criteria decision analysis and adaptive management: recent developments and applications
identifying associations in escherichia coli antimicrobial resistance patterns using additive bayesian networks
quantitative human health risk assessments of antimicrobial use in animals and selection of resistance: a review of publicly available reports
risk assessment of pseudomonas aeruginosa in water
development of a neural-based forecasting tool to classify recreational water quality using fecal indicator organisms
modified failure mode and effects analysis using approximate reasoning
assessing the differences in public health impact of salmonella subtypes using a bayesian microbial subtyping approach for source attribution
visual evidence of horizontal gene transfer between plants and bacteria in the phytosphere of transplastomic tobacco
management options for reducing the release of antibiotics and antibiotic resistance genes to the environment
nanoalumina promotes the horizontal transfer of multiresistance genes mediated by plasmids across genera
a risk assessment framework for the evaluation of skin infections and the potential impact of antibacterial soap washing
a multifactorial risk prioritization framework for foodborne pathogens
assessing pathogen risk to swimmers at non-sewage impacted recreational beaches
an in-premise model for legionella exposure during showering events
detection of antibiotic-resistant bacteria and their resistance genes in wastewater, surface water, and drinking water biofilms
integration of bioavailability, ecology and ecotoxicology by three lines of evidence into ecological risk indexes for contaminated soil assessment
prevalence of antibiotic resistance genes in the bacterial flora of integrated fish farming environments of pakistan and tanzania
metagenomic insights into chlorination effects on microbial antibiotic resistance in drinking water
characterisation of potential virulence markers in pseudomonas aeruginosa isolated from drinking water
antimicrobial resistance: a microbial risk assessment perspective
studying plasmid horizontal transfer in situ: a critical review
resistance to phenicol compounds following adaptation to quaternary ammonium compounds in escherichia coli
wildlife trade and the emergence of infectious diseases
aquatic systems: maintaining, mixing and mobilising antimicrobial resistance?
morbidity of infections caused by antimicrobial-resistant bacteria
environmental protection agency. human health risk assessment microbial risk assessment guideline: pathogenic microorganisms with focus on food and water. epa/ /j- /
diversity and antibiotic resistance patterns of sphingomonadaceae isolates from drinking water
studies to evaluate the safety of residues of veterinary drugs in human food: general approach to establish a microbiological adi. vich gl (r)
re-analysis of the risks attributed to ciprofloxacin-resistant campylobacter jejuni infections
water safety plan manual: step-by-step risk management for drinking-water suppliers. geneva: world health organization
report of the rd meeting of the who advisory group on integrated surveillance of antimicrobial resistance
the evolving threat of antimicrobial resistance: options for action. geneva: world health organization
antimicrobial resistance. fact sheet no.
framework for microbial food-safety risk assessments amenable to bayesian modeling
ndm- and the role of travel in its dissemination

key: cord- - g p vtm authors: wang, ting-ting; zhou, ming; hu, xue-feng; liu, jiang-qin title: perinatal risk factors for pulmonary hemorrhage in extremely low-birth-weight infants date: - - journal: world j pediatr doi: . /s - - - sha: doc_id: cord_uid: g p vtm

background: pulmonary hemorrhage (ph) is a life-threatening respiratory complication of extremely low-birth-weight infants (elbwis). however, the risk factors for ph are controversial. therefore, the purpose of this study was to analyze the perinatal risk factors and short-term outcomes of ph in elbwis. methods: this was a retrospective cohort study of live born infants who had birth weights that were less than g, lived for at least hours, and did not have major congenital anomalies. a logistic regression model was established to analyze the risk factors associated with ph. results: there were elbwis born during this period. a total of infants were included, and infants were diagnosed with ph. risk factors including gestational age, small for gestational age, intubation in the delivery room, surfactant in the delivery room, repeated use of surfactant, higher fio( ) during the first day, invasive ventilation during the first day and early onset sepsis (eos) were associated with the occurrence of ph by univariate analysis. in the logistic regression model, eos was found to be an independent risk factor for ph. the mortality and intraventricular hemorrhage rate of the group of elbwis with ph were significantly higher than those of the group of elbwis without ph. the rates of periventricular leukomalacia, moderate-to-severe bronchopulmonary dysplasia and severe retinopathy of prematurity, and the duration of the hospital stay, were not significantly different between the ph and no-ph groups. conclusions: although ph did not extend the hospital stay or increase the risk of bronchopulmonary dysplasia, it increased the mortality and intraventricular hemorrhage rate in elbwis. eos was an independent risk factor for ph in elbwis.

pulmonary hemorrhage (ph) is a life-threatening respiratory complication of newborns [ ], especially in extremely low-birth-weight infants (elbwis), who are vulnerable to conditions that require invasive ventilation and intensive care after birth. the incidence of clinical ph is estimated to be - per live births [ ], whereas the rate of ph in very-low-birth-weight infants (vlbwis) varies from - % [ ] [ ] [ ] [ ]. the variation in its incidence is mainly due to the unclear etiology and diagnostic criteria of ph. the pathophysiology of ph in newborns is hemorrhagic edema [ , ]. the severity may vary from a mild, self-limited disorder to a massive, deteriorating and end-stage syndrome. it is associated with significant morbidity and high mortality. usually, infants with ph need aggressive positive pressure ventilation, high oxygen supplementation, critical circulatory support and blood transfusions. asphyxia, prematurity, intrauterine growth restriction, infection, hypoxia and coagulopathy are considered perinatal risk factors for ph in many studies [ , , ]. a few case reports have indicated that ph in healthy term infants is associated with inborn errors of metabolism.
furthermore, risk factors associated with the care of preterm infants, including surfactant replacement, the management of patent ductus arteriosus (pda) and fluid intake, might be prominent in elbwis with ph [ ] [ ] [ ]. however, the risk factors for ph in elbwis are controversial, and more studies are needed to further enhance the understanding of the pathophysiology of ph in these extremely premature infants. therefore, the purpose of this study was to analyze the perinatal risk factors and short-term outcomes of ph in elbwis. this is a retrospective cohort study. infants were eligible for the analysis if they had a birth weight less than g, lived for at least hours, had no major congenital anomalies, and were born at a hospital between january st, and december st, . elbwis were excluded from the study if their parents decided to withdraw treatment of their newborns within the first hours of life due to extreme prematurity. infants transferred to other children's hospitals due to cardiac, gastrointestinal or other abnormalities within the first week of life were also excluded. this study was approved by the ethics committee of the hospital. all medical records/information were anonymized and deidentified prior to analysis. all elbwis were resuscitated by a pediatric team led by an attending pediatric physician according to the management guidelines for elbwis. briefly, the elbwis were wrapped with plastic bags under a radiant warmer and given respiratory support by a t-piece resuscitator. a peep of cm h o and/or pip of cm h o was provided through a face mask immediately after birth. intubation and/or prophylactic surfactant replacement was provided at the discretion of the attending physician in the delivery room. oxygen supplementation was given and adjusted according to the target saturation on a pulse oximeter [ ]. when the infants were transferred into the neonatal intensive care unit (nicu) and put on a ventilator or nasal continuous positive airway pressure (ncpap), a physician on duty at the nicu evaluated the respiratory severity and decided whether to extubate the infant to ncpap after giving surfactant if required. an umbilical venous catheter was inserted, and total parenteral nutrition (tpn) infusion was given. ph was defined as bright red blood secretion from the endotracheal tube that was associated with clinical deterioration, including increased ventilator support with a fraction of inspired oxygen (fio ) increase of > . from the baseline [ ] or an acute drop in hematocrit (> %) [ ], in addition to multilobular infiltrates on chest radiography. the record of ventilation of every infant was reviewed by two attending neonatologists independently to confirm the diagnosis of ph. when a clinical diagnosis of ph was made, the infant was intubated and ventilated with high-frequency oscillatory ventilation (hfov). the ventilation parameters were adjusted appropriately according to the oxygen saturation, the results of arterial blood gas assessment and the chest x-ray. surfactant replacement was considered if necessary. the perinatal data of all infants and their mothers were collected by retrospective chart review, including sex, gestational age (ga), birth weight (bw), small for gestational age (sga), apgar score at and minutes, delivery method, maternal age, prenatal infection, pregnancy hypertension, gestational diabetes (gdm), prenatal antibiotics and corticosteroids, cause of premature birth, cervical cerclage, surgery during pregnancy, and placental abruption.
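the case definition above combines three criteria; a trivial helper makes that logic explicit. the numeric thresholds are passed in as parameters because the specific cut-offs are elided in this copy of the text.

```python
def meets_ph_definition(bloody_secretion, fio2_increase, hct_drop_pct,
                        multilobular_infiltrates,
                        fio2_threshold, hct_threshold):
    """True when the clinical picture meets the study's PH definition:
    bright red endotracheal secretions, plus deterioration (FiO2 increase
    above baseline OR an acute hematocrit drop), plus multilobular
    infiltrates on chest radiography."""
    deterioration = (fio2_increase > fio2_threshold
                     or hct_drop_pct > hct_threshold)
    return bloody_secretion and deterioration and multilobular_infiltrates

# Example call with placeholder thresholds.
print(meets_ph_definition(True, 0.35, 12.0, True,
                          fio2_threshold=0.3, hct_threshold=10.0))
```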
the short-term outcomes of the infants were also recorded. neonatal respiratory distress syndrome (nrds) and its severity were diagnosed by the neonatologists of the nicu based on the clinical profile and chest radiograph. early onset sepsis (eos) was defined as infectious disease within hours after birth, as confirmed by blood culture. brain injury, including grade iii-iv intraventricular hemorrhage (ivh) and periventricular leukomalacia (pvl), was identified by serial head ultrasounds. bronchopulmonary dysplasia (bpd) was defined as the requirement for supplemental oxygen at weeks postmenstrual age among infants who survived to nicu discharge. retinopathy of prematurity (rop) was screened for by an ophthalmologist. echocardiography was performed between days and by a cardiologist and repeated as appropriate. hemodynamically significant pda was managed by neonatologists, and ibuprofen was given to close the patent ductus. the treatment was withheld upon identification of gastrointestinal bleeding or oliguria (urine output of less than ml/kg/hour), according to the protocol for pda management in our nicu. the decision was made by the neonatologists to transfer the elbwi for surgical ligation if more than two courses of oral ibuprofen were given and the pda was still significant [ ]. the data were analyzed with spss version . . descriptive statistics were used to describe the characteristics of mothers and infants. normally distributed results are reported as the mean and standard deviation (sd); the remaining results are reported as the median, interquartile range (iqr) or percentage. the chi-squared test, student's t-test and a logistic regression model were used for statistical analysis. a total of elbwis were born in this hospital and admitted to the nicu between january st, , and december st, . six infants were transferred to other hospitals for surgical diseases, and two infants died (they were identical twins who were born at weeks and days of ga; their birth weights were g and g, respectively). their parents withdrew care within hours of life due to concerns about adverse long-term outcomes. among the infants included in this study, infants were diagnosed with ph (ph group), giving an incidence of ph in these elbwis of . %. the median age at ph occurrence was (iqr - . ) days. one elbwi each had ph that occurred within hours, on day , day and day after birth, respectively; had ph that occurred on days - after birth, three on day and two on day . the perinatal risk factors for ph are listed in tables and . the ga of the infants with ph was significantly lower than that of the non-ph infants. there were fewer sga infants in the ph group than in the no-ph group. because most cases of ph occurred within days of life and the majority occurred in the first week of life, the average fluid intake within the first and days of life was also compared between the ph and no-ph groups. unsurprisingly, the infants with ph were more likely to be intubated and treated with surfactant and oxygen supplementation. a multivariate analysis (including ga, sga, intubation in the delivery room, surfactant in the delivery room, repeated use of surfactant, higher fio during the first day, invasive ventilation during the first day, and eos) was performed using the logistic regression model, which found that eos was an independent risk factor for ph (table ).
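a minimal sketch of the kind of logistic regression analysis described above, fit with statsmodels on synthetic data; the covariate names mirror the text, but the data and coefficients are simulated, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "ga_weeks": rng.normal(26.5, 1.5, n),
    "eos": rng.binomial(1, 0.15, n),
    "intubated_dr": rng.binomial(1, 0.5, n),
})

# Simulate an outcome with an assumed EOS effect so the fit has structure.
logit = -2.0 - 0.2 * (df["ga_weeks"] - 26.5) + 1.2 * df["eos"]
df["ph"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

X = sm.add_constant(df[["ga_weeks", "eos", "intubated_dr"]])
fit = sm.Logit(df["ph"], X).fit(disp=False)
print(np.exp(fit.params))       # adjusted odds ratios
print(np.exp(fit.conf_int()))   # 95% CIs for the odds ratios
```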
the rate of major ivh was higher in the ph group than that in the no-ph group. however, the rates of pvl, moderate-to-severe bpd, and severe rop were not significantly different between the ph and no-ph group (table ) . among the patients who were discharged home ( in the ph group and in the no-ph group), there were no significant differences of the duration of assisted ventilation, invasive mechanical ventilation, oxygen supplementation, hospital stay, or moderate-to-severe bpd between the two groups (table ). in this study, we found that elbwis with ph were likely to be intubated and require surfactant therapy, invasive ventilation and oxygen supplementation, whereas the mortality and major ivh rates were also increased. logistic regression analysis showed that eos could increase the risk of the incidence of ph is significantly higher in elbwis than that in other neonatal populations, and the precise etiology remains unclear. a -year retrospective study has shown that the rate of ph in vlbwis is % [ ] . another study reported that the rate of ph was approximately % in vlbwis but was - . % in elbwis [ , ] . in our cohort, the rate of ph in elbwis was . %. it has been shown that sga, eos, low birth weight (lbw), lower apgar scores at and minutes, severe rds and surfactant replacement are risk factors for ph [ ] . usually, smaller gestational age and lower birth weight increase the odds of eos in preterm infants. ph may occur as a result of unstable hemodynamics and coagulopathy in elbwis with eos. it has been proven that delayed cord clamping reduces the risk of ph [ ] . circulatory stabilization is the fundamental management strategy for elbwis and reduces the risk not only for pulmonary disease but also of mortality and ivh. many studies have shown that pda is associated with the occurrence of ph [ , , ] . as a result of decreased pulmonary vascular resistance, left-to-right shunting through pda increases blood flow and the pressure state of the pulmonary vessels, which may compromise cardiac function with an increased risk of ph [ ] . in our cohort, the rates of pda and requirement of treatment were higher in infants with ph than in those without, but the differences were not statistically significant. interestingly, the time of ph occurrence in our cohort was earlier than that of the development of hemodynamically significant pda [ ] . the other reason might be the active management of pda in elbwis [ ] . in our study, . % of the infants with pda in the ph group and . % in the no-ph group required oral ibuprofen or ligation. in addition, there was no significant gastrointestinal bleeding or oliguria observed in either the ph or no-ph group when ibuprofen was given, while the side effects of ibuprofen were fewer than those of indomethacin [ ] . in addition, the overload of fluid intake within the first week was associated with pda and ph [ , ] . polglase et al. [ ] demonstrated that immediately after an intravenous volume overload, lambs had increases in pulmonary blood flow and the left ventricular ejection volume; % of them developed ph. the elevation in pulmonary capillary pressure can lead to alveolar capillary wall injury, causing pulmonary edema due to increased permeability with the passage of proteins [ ] . in our study, the fluid intake of these elbwis was restricted to an average of - ml/kg/day to reduce the risk of bpd and hemodynamically significant pda [ ] and showed no difference between infants with ph and those without ph. 
surfactant replacement is a standard treatment for rds. it has been shown that surfactant replacement increases the risk of ph [ ]. in contrast, some studies have reported that the rates of ph are not different before and after surfactant replacement therapy [ ]. it is reasonable to postulate that infants who need surfactant are sicker and more likely to have ph than those who do not. although an in vitro study showed that the presence of surfactant impaired coagulation function [ ], this finding has not been proven clinically. on the other hand, infants with ph can be treated with surfactant because blood inhibits surfactant function. a few retrospective and observational reports have demonstrated the benefits of surfactant for ph. however, the effect of this therapy remains to be established [ ]. it seems that the chemical composition of different surfactant types affects the risk of ph [ ]. infants given poractant alfa have a significantly higher rate of ph ( %) than infants treated with surfactant-ta ( %) [ ]. however, the clinical risk index for babies scores were higher in infants treated with poractant alfa than in infants treated with surfactant-ta. in our cohort, the infants with ph were similar to the infants without ph in terms of surfactant administration in the delivery room or nicu. however, the infants with ph needed multiple doses of surfactant. infants who were given surfactant prophylactically in the delivery room did not have an increased risk of ph. ph is a life-threatening condition of hemorrhagic pulmonary edema with high mortality. in our study, the mortality of elbwis with ph was % (vs. % in the no-ph group), similar to previous reports [ ]. the rate of major intraventricular hemorrhage was significantly higher in the ph infants than in the non-ph infants ( % and %, respectively, p < . ). both ph and intraventricular hemorrhage are related to perinatal hemodynamic instability [ ]. the effective management of ph includes positive pressure ventilation [ ], blood transfusion and circulatory support. however, there were no significant differences in mechanical ventilation, oxygen supplementation, or hospital stay between surviving infants in the ph and no-ph groups, mainly because these factors, in addition to ph, are independently related to prematurity. this is a retrospective study at a single center in shanghai, which may not be able to highlight all the risk factors for ph in elbwis due to the limited data and small sample size. however, analyzing the risk factors for ph will help physicians to better understand why ph occurs and how to prevent it. in summary, ph is an adverse pathophysiological event in elbwis that occurs mostly within the first hours of life. ph increases the risk of mortality and major intraventricular hemorrhage, and early onset sepsis is an independent risk factor for ph. funding: no funding was received. ethical approval: this study was approved by the ethics committee of the shanghai first maternity and infant hospital, tongji university school of medicine. no financial or nonfinancial benefits have been received or will be received from any party related directly or indirectly to the subject of this article. open access: this article is distributed under the terms of the creative commons attribution . international license (http://creativecommons.org/licenses/by/ . /), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the creative commons license, and indicate if changes were made.
pulmonary hemorrhage: clinical course and outcomes among very low-birth-weight infants
prevalence, risk factors and outcomes associated with pulmonary hemorrhage in newborns
pulmonary hemorrhage in very low-birthweight infants: risk factors and management
short-term outcome of pulmonary hemorrhage in very-low-birthweight preterm infants
improvement in mortality of very low birthweight infants and the changing pattern of neonatal mortality: the -year experience of one perinatal centre
risk factors of pulmonary hemorrhage in very low birth weight infants: a two-year retrospective study
early versus delayed neonatal administration of a synthetic surfactant: the judgment of osiris
ductal shunting, high pulmonary blood flow, and pulmonary hemorrhage
high-risk factors and clinical characteristics of massive pulmonary hemorrhage in infants with extremely low birth weight
defining the reference range for oxygen saturation for infants after birth
pda: to treat or not to treat
pulmonary hemorrhage (ph) in extremely low birth weight (elbw) infants: successful treatment with surfactant
circulatory management focusing on preventing intraventricular hemorrhage and pulmonary hemorrhage in preterm infants
prevention and -month outcomes of serious pulmonary hemorrhage in extremely low birth weight infants: results from the trial of indomethacin prophylaxis in preterms
intravenous paracetamol treatment in the management of patent ductus arteriosus in extremely low birth weight infants
failure of a repeat course of cyclooxygenase inhibitor to close a pda is a risk factor for developing chronic lung disease in elbw infants
ibuprofen for the prevention of patent ductus arteriosus in preterm and/or low birth weight infants
fluid regimens in the first week of life may increase risk of patent ductus arteriosus in extremely low birth weight infants
risk factor profile of massive pulmonary haemorrhage in neonates: the impact on survival studied in a tertiary care centre
cardiopulmonary haemodynamics in lambs during induced capillary leakage immediately after preterm birth
stress failure of pulmonary capillaries: role in lung and heart disease
neonatal research network. association between fluid intake and weight loss during the first ten days of life and risk of bronchopulmonary dysplasia in extremely low birth weight infants
comparison of two natural surfactants for pulmonary hemorrhage in very low-birthweight infants: a randomized controlled trial
surfactant impairs coagulation in-vitro: a risk factor for pulmonary hemorrhage?
surfactant for pulmonary haemorrhage in neonates
efficacy of surfactant-ta, calfactant and poractant alfa for preterm infants with respiratory distress syndrome: a retrospective study

acknowledgements: we thank dr. po-yin cheung for his professional guidance in the preparation of this paper. author contributions: ttw collected and analyzed the data and drafted the manuscript. mz and xfh collected the data. jql designed the study. all authors approved the final version of the manuscript.

key: cord- -nxw k y authors: zhang, yewu; wang, xiaofeng; li, yanfei; ma, jiaqi title: spatiotemporal analysis of influenza in china, – date: - - journal: sci rep doi: . /s - - - sha: doc_id: cord_uid: nxw k y
influenza is a major cause of morbidity and mortality worldwide, as well as in china. knowledge of the spatial and temporal characteristics of influenza is important in evaluating and developing disease control programs. this study aims to describe an accurate spatiotemporal pattern of influenza at the prefecture level and explore the risk factors associated with influenza incidence risk in mainland china from to . the incidence data of influenza were obtained from the chinese notifiable infectious disease reporting system (cnidrs). the besag york mollié (bym) model was extended to include temporal and space-time interaction terms. the parameters for this extended bayesian spatiotemporal model were estimated through integrated nested laplace approximations (inla) using the package r-inla in r. a total of , influenza cases were reported in mainland china through cnidrs from - . the yearly reported incidence rate of influenza increased . times over the study period, from . in to . in per , population. the temporal term in the spatiotemporal model showed that much of the increase occurred during the last years of the study period. the risk factor analysis showed that the decreased number of influenza vaccines for sale, the new update of the influenza surveillance protocol, the increase in the rate of influenza a (h n )pdm among all processed specimens from influenza-like illness (ili) patients, and increases in the latitude and longitude of the geographic location were associated with an increase in the influenza incidence risk. after adjusting for fixed covariate effects and time random effects, the map of the spatially structured term shows that high-risk areas clustered in the central part of china and the lowest-risk areas in the east and west. large space-time variations in influenza have been found since . in conclusion, an increasing trend of influenza was observed from to . insufficient flu vaccine supplies, the newly emerging influenza a (h n )pdm and the expansion of influenza surveillance efforts might be the major causes of the dramatic changes in outbreak and spatiotemporal epidemic patterns. clusters of prefectures with high relative risks of influenza were identified in the central part of china. future research with more risk factors at both the national and local levels is necessary to explain the changing spatiotemporal patterns of influenza in china. influenza is associated with notable mortality and morbidity worldwide, as well as in china - . the behaviours of major epidemics and pandemics of influenza are complicated due to dramatic genetic changes, subtype circulation, wave patterning and virus replacement . influenza vaccination is the most effective means to prevent infection, severe disease and mortality . the world health assembly recommends vaccinating % of key risk groups against influenza . although seasonal influenza vaccination was introduced in , influenza vaccination is not yet included in the national immunization program (nip) in china . the average national vaccination coverage was reported to be just . - . % between and , . the overall number of flu vaccines approved for sale by china's national institute for food and drug control (nifdc) has decreased in recent years , . the low coverage rate and the reduction in flu vaccine supplies have raised much concern about an increased risk of influenza incidence in china.
although new emerging influenza virus types and subtypes, such as avian influenza a h n [ ] [ ] [ ] [ ], influenza a (h n )pdm [ ] [ ] [ ], and influenza a h n , , have been reported continuously in china, the disease burden of influenza has been dominated by a(h n ) and a(h n )pdm influenza viruses, pre-pandemic a(h n ) or influenza b in recent years, which account for the majority of cases . the influenza a(h n )pdm virus was first introduced to mainland china on may , , and has been one of the dominant viruses in the seasonal influenza epidemics since then . the effect of the newly emerging influenza a(h n )pdm viruses on the geographic patterns and temporal trends of influenza across the whole country is still unknown. covariates associated with the reported incidence cases of influenza are shown in table . the crude odds ratios (ors) and adjusted ors in both the univariate poisson models and the multivariate adjusted poisson model are statistically significant. after adjusting for the other covariates, a spatially unstructured random effect term (ν_i), a spatially structured conditional autoregression term (υ_i), a first-order random walk-correlated time variable (γ_j), and an interaction term for time and place (δ_ij) in the multivariate adjusted spatiotemporal model, the flu vaccines (per million doses), flu surveillance protocols, rate of influenza a (h n )pdm , latitude and longitude all remained statistically significant. holding all other covariates at zero and adjusting for spatiotemporal variation, every one million dose increase in the number of influenza vaccines for sale approved by the china food and drug administration was associated with a . % decrease in the influenza incidence risk ( % ci = . - . ). similarly, the new update of the influenza surveillance protocol in was related to a . % increase in the influenza incidence risk ( % ci = . - . ) compared to the protocol used from to . for every % increase in the rate of influenza a (h n )pdm among all processed specimens from ili patients, there was a . % increase in the influenza incidence risk ( % ci = . - . ). every one degree increase in latitude and longitude was associated with a . % ( % ci = . ~ . ) and . % ( % ci = . ~ . ) increase in the influenza incidence risk, respectively. the spatial and temporal effects in spatiotemporal models with covariates. the spatial effects. the map of the spatially structured conditional autoregression term demonstrated areas of spatial patterning and similarity among prefectures. the spatially structured relative risks and the posterior probabilities of the spatially structured relative risk being greater than . are presented in figs. and , respectively. table . deviance information criterion (dic) for five spatiotemporal models. abbreviations: d, posterior mean of the deviance; pd, the number of effective parameters; dic, the deviance information criterion, as a measure of the trade-off between model fit and complexity. note: model terms used in the models include an intercept (α); a spatially unstructured random effect term (ν_i); a spatially structured conditional autoregression term (υ_i); uncorrelated time (γ_j); a first-order random walk-correlated time variable (γ_j); and an interaction term for time and place (δ_ij). θ_ij represents the relative risk of area i at time j. * model , convolution + uncorrelated time (time iid), e.g., $\log(\theta_{ij}) = \alpha + \nu_i + \upsilon_i + \gamma_j$.
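the equations for the richer models did not survive extraction; a plausible rendering of the full adjusted model with covariates, reconstructed from the terms listed in the notes above, is sketched below. the poisson likelihood with expected counts $E_{ij}$ is an assumption based on standard bym practice, not a formula quoted from the original.

```latex
\begin{align*}
y_{ij} &\sim \mathrm{Poisson}(E_{ij}\,\theta_{ij}) \\
\log(\theta_{ij}) &= \alpha + \sum_{k}\beta_{k}\,x_{k,ij}
    + \nu_{i} + \upsilon_{i} + \gamma_{j} + \delta_{ij}
\end{align*}
```

here $\nu_i$ is the iid unstructured spatial term, $\upsilon_i$ the spatially structured (conditional autoregression) term, $\gamma_j$ the first-order random walk in time, and $\delta_{ij}$ the space-time interaction.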
table . risk analysis of covariates associated with reported cases of influenza. abbreviations: or, odds ratio; ci, confidence interval. * univariate poisson analysis models. ** multivariate adjusted poisson analysis model, which included all variables in the univariate analysis models. † multivariate adjusted spatiotemporal models, which included all variables in the univariate analysis models; an intercept (α); a spatially unstructured random effect term (ν_i); a spatially structured conditional autoregression term (υ_i); a first-order random walk-correlated time variable (γ_j); and an interaction term for time and place (δ_ij). ‡ total number of flu vaccines approved for sale by china's national institute for food and drug control (nifdc), rescaled to one million doses as one unit; data were collected from the nifdc. # the convolutional spatial risk term, which includes both the spatially structured conditional autoregression term (υ_i) and the spatially unstructured random effect term (ν_i) at the prefecture level, identified areas at increased risk of influenza throughout the -year study period (fig. ). posterior probabilities for an area's spatial risk estimate exceeding . are presented in fig. . the proportion of the total spatial heterogeneity explained by the spatially structured conditional autoregression term was . %. after adjusting for fixed covariate effects and time random effects, both the map of the spatially structured term and the map of the convolutional spatial term show that high-risk areas clustered in the central part of china and the lowest-risk areas in the east, northwest and southwest. the higher-risk prefectures were mostly distributed in guangdong, guangxi, guizhou, hunan, jiangxi, zhejiang, hubei, anhui, henan, hebei, beijing, tianjin, gansu, ningxia, and inner mongolia. the lower-risk areas in the east included some prefectures on the shandong peninsula and the prefectures of heilongjiang, liaoning, and jilin provinces in the northeast. the northwest areas are composed of prefectures in tibet, qinghai and xinjiang, while the southwest areas include chongqing and prefectures in sichuan and yunnan provinces. figure . map of the spatially structured relative risk ($e^{\upsilon_i}$), spatiotemporal model of influenza incidence risk with covariates, china prefectures, - (caption; the linear predictor of this model was $\log(\theta_{ij}) = \alpha + \sum_k \beta_k x_{k,ij} + \nu_i + \upsilon_i + \gamma_j + \delta_{ij}$, adjusted by the convolutional spatial term, the space-time interaction term, and the covariates). the temporal trend. the relative risks over the -year study period, holding the covariates and spatial risk constant, were calculated by exponentiating the marginal first-order random walk-correlated time term (γ_j) in the spatiotemporal models of influenza risk with and without covariates. for the spatiotemporal model without covariates, an overall increasing trend was found in the temporal trend term over the -year study period. the risk of influenza remained low between and . a steep increase was observed in . it dropped slightly back to a low level and remained stable in and . a rapid increase was obvious in the last years (table ) (fig. ).
for the temporal trend term in the spatiotemporal model with covariates, the relative risks in the years from to were not significantly different from those in the spatiotemporal model without covariates. the relative risks in the model with covariates in and were significantly lower than those in the model without covariates. the lower boundary of the % confidence intervals in the model with covariates showed some levelling off in recent years. the differences between the spatiotemporal models with and without covariates indicate that the recent increases in influenza incidence risk could be partially explained by the fixed covariate effects.

space-time interactions. the exceedance probabilities for the yearly space-time interactions are presented for the study period (fig. ). these identify areas with residual spatial risk greater than . compared with the prefecture-wide risk after the fixed effects and the unstructured, spatially structured, and time random effects are held constant. changing patterns and large variations among the yearly specific spatial distributions are shown in fig. . it is interesting that most of the higher-risk areas were in western china before , whereas most of the higher-risk areas were in eastern or northern china after .

based on the incidence data of influenza obtained from the chinese notifiable infectious disease reporting system, we used a bayesian spatiotemporal model in this study to assess the space-time patterns of the influenza epidemic at the prefecture level in mainland china from to and explored several factors that may be associated with the changing spatial and temporal patterns in the influenza incidence risk. several potential factors may be associated with the rapidly increasing trend of influenza in china. first, an insufficient flu vaccine supply and a low uptake rate might be associated with an increase in influenza incidence. the results of the final spatiotemporal model showed that every one million increase in the number of influenza vaccines approved for sale by the china food and drug administration was associated with a . % decrease in the influenza incidence risk ( % ci = . - . ). the rapidly increasing crude rates of influenza from to coincided with a large reduction in the number of vaccines approved for sale over the same period. the reductions in vaccine supply were mostly due to vaccine scandals related to improper vaccine storage and production in and , respectively. previous studies reported that uptake of the influenza vaccine averaged . % nationally and . % among urban elderly aged years and above in cities of china during the - and - influenza seasons, respectively.
it is expected that uptake may now be even lower, as people lost faith in the safety of domestically produced vaccines after the vaccine scandals in china. our results are consistent with a study in italy, which reported an association between a decline in vaccination coverage and a rise in influenza incidence among the italian elderly.

second, the influenza strains currently circulating in humans include influenza a(h1n1)pdm09, influenza a(h3n2) and influenza b viruses (b/victoria and b/yamagata). influenza a(h1n1)pdm09 has been reported to be the predominant subtype in recent years according to ili surveillance and is more likely to be the major cause of regional and widespread outbreaks. our study showed that for every % increase in the rate of influenza a(h1n1)pdm09 among all processed specimens from ili patients, there was a . % increase in the influenza incidence risk ( % ci = . - . ). shu et al. reported that the predominant subtypes, seasonal influenza a(h3n2) and b/yamagata, could circulate from the south to the north of china from to . our study also found that every one-degree increase in latitude and longitude was associated with a . % ( % ci = . - . ) and . % ( % ci = . - . ) increase in the influenza incidence risk, respectively. this result is consistent with the role of climatic factors in influenza transmission dynamics.

third, greater effort in influenza surveillance and the use of new technologies may account for part of the rise in reported influenza incidence. in recent years, especially after the pandemic season, influenza surveillance has been expanded worldwide, as recommended by the world health organization (who). as cnidrs includes all sentinel hospitals, these hospitals are likely to report more cases of influenza to cnidrs. in addition, more hospitals have adopted electronic health information systems, which may improve both the quantity and quality of data collection and exchange from hospitals to cnidrs.

fourth, reporting on influenza a(h1n1)pdm09, avian influenza a(h7n9), highly pathogenic avian influenza (hpai) h5n1 and avian h9 influenza has increased in recent years. media and public health campaigns against the newly emerging viruses have made both the government and the public more concerned about influenza. the improved public perception of influenza may change people's health-seeking behaviours, especially in epidemic seasons. furthermore, the enlarged coverage of health care insurance in both urban and rural areas of china in recent years may also induce people to use more health services. a rapid increase in the number of airline and high-speed railway transports in china has been reported in recent years.
these factors would make it easier for the influenza virus to be transmitted on a larger scale and in a shorter time across the country.

the spatial pattern. the bym model includes both a spatial conditional autoregression component and a heterogeneous random effect component. this structure allows us to determine how much of the residual disease risk is due to spatially structured variation and how much is unstructured overdispersion. the spatially structured conditional autoregression term demonstrated areas of spatial patterning and similarity among prefectures. the results for the spatially structured variation show a distinctive spatial pattern of influenza risk across prefectures in china. the highest-risk areas clustered in the middle part of china, while the lowest-risk areas were distributed in the east, northwest and southwest. different patterns of influenza between the north and the south of china have been well documented. in china, the line following the qinling mountain range in the west and the huaihe river in the east is often used to split the mainland into the north and the south. in this study, we observed clustering in both the north and the south in the middle part of china. the unique structured spatial patterns may be attributed to risk factors shared among neighbouring areas. this may be associated with similarities in climatic zone, the predominant subtype of the virus at the time of epidemics, socioeconomic background or lifestyles. the last factor should not be ignored: some studies have reported that clustering of diseases may be a consequence of spatial heterogeneity in surveillance efforts.

the space-time interaction. the space-time interaction is a random effect term, interpreted as the residual effect after the unstructured, spatially structured and time effects are modelled; it represents sporadic short-term outbreaks or clusters. the changes and circulation of virus subtypes may determine the characteristics of the space-time interaction terms. the year was the critical point according to the results of the spatiotemporal analysis. there are four types of ili activity in flunet (www.who.int/flunet), the global influenza surveillance and response system (gisrs): sporadic, local outbreak, regional outbreak and widespread outbreak. since the first case of influenza a(h1n1)pdm09 was reported on may , , in mainland china, the a(h1n1)pdm09 virus has been detected in all ili activities according to the data from flunet. the yearly ili activities may be partially associated with the changes and similarities in the patterns of the space-time interactions from to . from the flunet data mentioned above, we found that sporadic ili activities were dominant in , , and . correspondingly, we found more areas with high relative risk in these years in the space-time term. this implies that the more sporadic the activities are, the larger the variations in the spatiotemporal distribution of influenza risk. in contrast, large outbreaks accounted for most ili activities in , , and . few prefectures were observed to have a relative risk greater than or during that period. large outbreaks, especially large regional and widespread outbreaks, may reduce the differences in the incidence risk of influenza among areas and times on a large scale.

strengths. this work adds to the existing research on influenza epidemiology in the following ways.
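the structured/unstructured decomposition described here can be summarized quantitatively. one common estimator of the proportion of residual heterogeneity that is spatially structured, offered as an assumption about the calculation since the text does not spell it out, is the ratio of empirical variances of the posterior means across areas:

\[ \mathrm{frac}_{\text{spatial}} = \frac{\widehat{\operatorname{var}}(\upsilon_1,\ldots,\upsilon_n)}{\widehat{\operatorname{var}}(\upsilon_1,\ldots,\upsilon_n) + \widehat{\operatorname{var}}(\nu_1,\ldots,\nu_n)}, \]

which is the form usually behind statements such as "the proportion of the total spatial heterogeneity explained by the spatially structured term."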
first, this study presents, for the first time, spatiotemporal distributions with higher-resolution spatial data than has previously been reported for china over the last years, which allows more opportunity for focused investigations and interventions. next, we used exceedance probabilities instead of the observed risk estimates to identify areas for which the increased risk was highly unlikely to be due to chance. this study also provides a baseline model that can be extended to include social, economic, ecological, and environmental factors, as well as intervention measures, to explore their associations with influenza. finally, the methods in this study offer practical tools for the spatial analysis of other notifiable infectious diseases in cnidrs.

there are some limitations to this study. cnidrs is a passive surveillance system, and accessibility to health facilities and patient visit behaviour may affect the number of cases reported. we collected both clinically diagnosed and laboratory-confirmed cases in cnidrs, so misdiagnosis and misreporting are unavoidable because it is difficult to distinguish influenza from other respiratory viruses without laboratory testing, especially in non-epidemic seasons.

this paper outlined the application of a bayesian spatiotemporal model to assess the relative disease risk of influenza at the prefecture level in mainland china. we observed an increasing incidence trend of influenza from to that was fairly steady in the first years and increased rapidly in the last years. clusters of prefectures with high relative risk values for influenza incidence were identified in the central part of china. the identification of high-risk areas is a particular priority in china because the limited resources available for disease control need to be focused on the places most in need. we hypothesize that the insufficient flu vaccine supply, low vaccine uptake, the newly emerging influenza a(h1n1)pdm09 and the expansion of influenza surveillance efforts might be the major causes of the dramatic changes in outbreak and spatiotemporal epidemic patterns. future research with more risk factors at the national and local levels is necessary to explain the changing spatiotemporal patterns of influenza in china.

model specifications for spatiotemporal analysis. the besag york mollié (bym) convolution model was used as the baseline model. using the notation of banerjee et al., the bym model is as follows:

\[ y_i \sim \mathrm{poisson}(e_i \theta_i), \qquad \log(\theta_i) = \alpha + \nu_i + \upsilon_i , \]

where:
• n is the number of areas. the counts y_i of influenza cases in area i are independently poisson distributed, θ_i is the risk for area i, and e_i is the number of expected cases of influenza in area i, which acts as an offset.
• α quantifies the average incidence risk of influenza across all prefectures.
• ν_i is a spatially unstructured random effects component that is i.i.d. normally distributed with mean zero.
• υ_i is a spatially structured component with an intrinsic conditional autoregressive (icar) structure.
the random effect for each area, ζ_i, is thus the sum of a spatially structured component υ_i and an unstructured component ν_i; this is termed a convolution prior. the bym model was extended to include a linear covariate term, a space-time interaction and a nonparametric spatiotemporal time trend,

\[ \log(\theta_{ij}) = \alpha + \textstyle\sum_k \beta_k x_{k} + \nu_i + \upsilon_i + \gamma_j + \delta_{ij} . \]

possible random effects specifications for the temporal term include a linear time trend (β_j), an exchangeable random time effect (γ_j), a first-order random walk (γ_j), a second-order autoregression (γ_j), etc.
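the methods state that the models were fitted with the r-inla package. below is a minimal sketch of how a bym model with a first-order random walk time trend and an iid space-time interaction can be specified in r-inla; the data frame and column names (df, cases, expected, prefecture, year, the covariate names, and the adjacency graph file "map.graph") are hypothetical placeholders, not the authors' code.

library(INLA)  # r-inla package, http://www.r-inla.org

# hypothetical data frame: one row per prefecture-year cell
#   cases      observed influenza cases (y_ij)
#   expected   expected cases, used as the poisson offset (e_ij)
#   prefecture numeric area index i; year numeric time index j
df$prefecture2 <- df$prefecture      # second copy of the area index for the iid term
df$spacetime   <- seq_len(nrow(df))  # one interaction level per area-time cell

formula <- cases ~ 1 +                                   # intercept (alpha)
  vaccines + protocol + pdm09_rate + lat + lon +         # fixed covariates (names assumed)
  f(prefecture,  model = "besag", graph = "map.graph") + # structured icar term (upsilon_i)
  f(prefecture2, model = "iid") +                        # unstructured term (nu_i)
  f(year,        model = "rw1") +                        # first-order random walk (gamma_j)
  f(spacetime,   model = "iid")                          # space-time interaction (delta_ij)

fit <- inla(formula, family = "poisson", E = expected, data = df,
            control.compute = list(dic = TRUE))          # dic for model comparison
summary(fit)

the besag-plus-iid specification mirrors the paper's explicit decomposition into υ_i and ν_i; r-inla's single "bym" model would give the same convolution in one term.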
four types of space-time interaction are proposed by knorr-held; see knorr-held for a detailed description. in this study, we assume no spatial or temporal structure on the interaction, and therefore δ_ij ~ normal(0, τ_δ). four candidate models were tested and compared; in model , the space-time interaction is a random effect term interpreted as the residual effect after the unstructured, spatially structured and time effects are modelled, representing sporadic short-term outbreaks or clusters. model selection was based on the deviance information criterion (dic), which takes into consideration the posterior mean deviance d̄, a bayesian measure of model fit, and the complexity of the model, measured by the effective number of parameters p_d (dic = d̄ + p_d). a smaller dic indicates a better-fitting model. the final linear model consisted of an intercept (α); a vector of national-level explanatory variables, Σ_k β_k x_k, comprising the yearly total number of lot releases of influenza vaccines by the china food and drug administration, the positive rate of influenza a(h1n1)pdm09 among the number of ili specimens processed, the percentage of influenza a(h1n1)pdm09 among all positive influenza specimens, and protocol changes; a spatially unstructured random effect term (ν_i); a spatially structured conditional autoregression term (υ_i); a first-order random walk-correlated time variable (γ_j); and an interaction term for time and place (δ_ij). the prefecture-specific structured and unstructured spatial risks of influenza, relative to the overall spatial risk of all prefectures, are obtained by applying an exponential transformation to the components υ_i and ν_i, respectively. the relative risk of the space-time interaction is computed by exponentiating the term δ_ij. the exceedance probabilities of the spatial risk and of the space-time interaction risk were also calculated. the exceedance probability represents the posterior probability that an area's spatial risk estimate exceeds some pre-set value and has been proposed as a bayesian approach to hotspot identification. all spatial models were computed using integrated nested laplace approximations (inla), which have been developed as a computationally efficient alternative to mcmc. all spatial analyses were conducted within microsoft r open version . using the r-inla package (version . . ).

ethics approval. the authors assert that all of the procedures contributing to this work comply with the ethical standards of the relevant national and institutional committees on human experimentation and with the helsinki declaration of , as revised in . this article does not contain any studies of human or animal subjects performed by any of the authors. since this analysis was based on anonymous aggregated statistical data, patient informed consent and ethics committee approval were not required in china.

disclaimer. the views expressed are those of the authors and do not necessarily represent the official policy of the chinese center for disease control and prevention.

the burden of influenza: a complex problem
the substantial hospitalization burden of influenza in central china: surveillance for severe, acute respiratory infection, and influenza viruses
estimates of global seasonal influenza-associated respiratory mortality: a modelling study
pandemic influenza: certain uncertainties
temporal patterns of influenza a and b in tropical and temperate countries: what are the lessons for influenza vaccination?
seasonal influenza vaccine supply and target vaccinated population in china
seasonal influenza vaccination in china: landscape of diverse regional reimbursement policy, and budget impact analysis
chinese vaccine scandal unlikely to dent childhood immunization rates
china pharma crackdown leads to flu vaccine shortage
the first confirmed human case of avian influenza a (h n ) in mainland china
h n and h n avian influenza suitability models for china: accounting for new poultry and live-poultry markets distribution data. stochastic environmental research and risk assessment
comparative epidemiology of human infections with avian influenza a h n and h n viruses in china: a population-based study of laboratory-confirmed cases
probable limited person-to-person transmission of highly pathogenic avian influenza a (h n ) virus in china
geographic distribution and risk factors of the initial adult hospitalized cases of pandemic influenza a (h n ) virus infection in mainland china
distribution and risk factors of pandemic influenza a (h n ) in mainland china
transmission of pandemic influenza a (h n ) virus in a train in china
epidemiology of human infections with avian influenza a (h n ) virus in china
human infection with a novel avian-origin influenza a (h n ) virus
characterization of regional influenza seasonality patterns in china and implications for vaccination strategies: spatiotemporal modeling of surveillance data
clinical features of the initial cases of pandemic influenza a (h n ) virus infection in china
bayesian image restoration, with two applications in spatial statistics
bayesian analysis of space-time variation in disease risk
geographical and environmental epidemiology: methods for small-area studies
a primer on disease mapping and ecological regression using inla
bayesian estimates of disease maps: how important are priors?
diffusion and prediction of leishmaniasis in a large metropolitan area in brazil with a bayesian space-time model
bayesian modelling of inseparable space-time variation in disease risk
bayesian extrapolation of space-time trends in cancer registry data
epidemiology of avian influenza a h n virus in human beings across five epidemics in mainland china, - : an epidemiological study of laboratory-confirmed case series
global epidemiology of avian influenza a h n virus infection in humans, - : a systematic review of individual case data
emergence and control of infectious diseases in china
comparing the similarity and difference of three influenza surveillance systems in china
dual seasonal patterns for influenza
clinical and epidemiologic characteristics of early cases of influenza a pandemic (h n ) virus infection, people's republic of china
risk factors for severe illness with pandemic influenza a (h n ) virus infection in china
vaccine scandal and confidence crisis in china
the effect of vaccine literacy on parental trust and intention to vaccinate after a major vaccine scandal
association between vaccination coverage decline and influenza incidence rise among italian elderly
the re-emergence of highly pathogenic avian influenza h n viruses in humans in mainland china
variation in influenza b virus epidemiology by lineage
environmental predictors of seasonal influenza epidemics across temperate and tropical climates
strategy to enhance influenza surveillance worldwide
influenza epidemiology and influenza vaccine effectiveness during the - season: annual report from the global influenza hospital surveillance network
distribution of influenza virus types by age using case-based global surveillance data from twenty-nine countries
the primary health-care system in china
using electronic health records data to evaluate the impact of information technology on improving health equity: evidence from china
enabling health reform through regional health information exchange: a model study from china
electronic recording and reporting system for tuberculosis in china: experience and opportunities
estimated global mortality associated with the first months of pandemic influenza a h n virus circulation: a modelling study
continued reassortment of avian h influenza viruses from southern china
knowledge, attitudes and practices (kap) related to the pandemic (h n ) among the chinese general population: a telephone survey
knowledge, attitudes and practices (kap) relating to avian influenza in urban and rural areas of china
perceived challenges to achieving universal health coverage: a cross-sectional survey of social health insurance managers/administrators in china
consolidating the social health insurance schemes in china: towards an equitable and efficient health system
impacts of road traffic network and socioeconomic factors on the diffusion of pandemic influenza a (h n ) in mainland china
the roles of transportation and transportation hubs in the propagation of influenza and coronaviruses: a systematic review
human mobility and the spatial transmission of influenza in the united states
spatiotemporal distributions and dynamics of human infections with the a h n avian influenza virus
spatial distribution of bluetongue surveillance and cases in switzerland
the evaluation of bias in scrapie surveillance: a review
flunet as a tool for global monitoring of influenza on the web
global influenza seasonality to inform country-level vaccine programs: an analysis of who flunet influenza surveillance data between
epidemiological features of and changes in incidence of infectious diseases in china in the first decade after the sars outbreak: an observational trend study. the lancet infectious diseases
hierarchical modeling and analysis for spatial data
bayesian mapping of disease. markov chain monte carlo in practice
bayesian measures of model complexity and fit
cluster detection diagnostics for small area health data: with reference to evaluation of local likelihood models
space-time bayesian small area disease risk models: development and evaluation with a focus on cluster detection
approximate bayesian inference for latent gaussian models by using integrated nested laplace approximations

acknowledgements. this study was supported by grants from the key joint project for data center of the national natural science foundation of china.

author contributions. j.q. ma conceived, designed, and supervised the study. y.w. zhang, x.f. wang and y.f. li collected and cleaned the data. y.w. zhang analysed the data and wrote the drafts of the manuscript. j.q. ma and y.w. zhang interpreted the findings. all authors read and approved the final manuscript. the authors declare no competing interests. correspondence and requests for materials should be addressed to j.m.

key: cord- - rrhcm authors: luce, judith a. title: use of blood components in the intensive care unit date: - - journal: critical care medicine doi: . /b - - . - sha: doc_id: cord_uid: rrhcm

most patients admitted to an intensive care unit (icu) require the administration of one or more blood components during their stay. such patients exhibit great diversity in the conditions necessitating care in the icu, age, underlying medical problems, and integrity of physiologic compensatory mechanisms. all these patients, however, share the need for optimized oxygen-carrying capacity and tissue perfusion. ongoing blood loss resulting from injuries, surgical wounds, invasive monitoring equipment, and blood sampling requirements, coupled with inadequate marrow function and, in some, red cell destruction, makes red cell transfusion a necessity for many icu patients. additionally, many patients are susceptible to the development of hemostatic disorders requiring the administration of such blood components as plasma, cryoprecipitate, or platelet concentrates. blood components should be considered drugs because they exert potent therapeutic responses yet are also capable of causing significant adverse effects. the food and drug administration (fda) regulates blood component preparation, testing, and administration.
unlike pharmaceutical agents, however, blood components have fewer objective indications for use and no therapeutic index relating dose to safety. it is not as simple to monitor the efficacy and continuing need for a blood component as it is to determine the blood level of a drug. in addition, the risks associated with transfusion cannot be known in advance and may be lethal; such risks include medical errors as well as infectious and immunologic hazards. unlike pharmaceutical agents, these prescribed products require documentation of patient consent and indication for use. although the american blood supply is now safer than ever before, zero-risk transfusion is not achievable, even if blood components could be sterilized. the process of donor selection and screening has become increasingly stringent, an evolution that began in response to the well-defined risks of transfusion-transmitted hepatitis and human immunodeficiency virus (hiv) infection. although the value of maximizing recipient safety is unarguable, increasing donor selectivity has its price. as more tests are added and more conditions placed on the donor, the number of usable donations has declined. this trend has led to occasional regional and seasonal blood shortages and, rarely, outright inability to provide certain blood components. clinicians who prescribe blood components must be aware of these uncertainties in availability and contribute by using blood products appropriately while the national blood banking system seeks strategies to ensure an adequate, safe blood supply.

donor screening strategies to ensure recipient safety take several forms. american blood donors are voluntary donors; cash payment was eliminated in the s after studies linked professional donors with transmission of hepatitis. confidential questionnaires were initiated to limit transmission of hiv and hepatitis and to allow voluntary self-exclusion and involuntary exclusion of donors who pose an increased risk of transmitting infectious agents. multiple specific serologic and biochemical tests are performed to detect the potential for transmission of hiv and other retroviruses, hepatitis, and syphilis. any donor who indicates high-risk behavior or who tests repeatedly positive is placed on a permanent deferral list. some patients may insist on blood obtained from relatives or friends, a practice termed directed or designated donation. these selected donors must undergo the same rigorous questioning and testing as volunteer donors. some studies have found an increased frequency of hepatitis markers in the blood of directed donors when compared with blood drawn from unselected volunteers, but others suggest that designated donors may be no different from new volunteers. there continues to be no consensus about whether directed donors are, as a group, as safe as volunteer donors, and institutional policies about the acceptability and processing of directed donations vary widely. in any case, supporting icu patients who require large-volume transfusion with directed donations is unlikely to be advantageous or practical.

the basic principle of blood component therapy is prescription of the specific blood product needed to meet the patient's requirement. a single whole blood (wb) donation can be separated into its composite parts, or components, which can be distributed to several recipients with differing physiologic needs.
component therapy thus meets the clinical requirements of increased safety, efficacy, and conservation of limited resources. as the variety of blood product components increases, however, the complexity of transfusion medicine also increases. a wb donation is typically separated into red blood cells (rbcs), a platelet concentrate, and fresh frozen plasma (ffp) within hours of its collection. the plasma may be further processed into cryoprecipitate and supernatant (cryo-poor) plasma. one unit of wb measures approximately ml, including ml of citrate anticoagulant/preservative solution. each unit of wb supplies about ml of rbcs and ml of plasma for volume replacement. wb is refrigerated for to days, depending on the preservative used. after less than hours of refrigerated storage in this preservative and bag system, platelet and granulocyte function is lost. with further storage, levels of the "labile" coagulation factors v and viii decrease. some blood centers offer modified wb, which is produced by removal of the platelet or cryoprecipitate fraction and return of the supernatant plasma to the red cells. this permits provision of the more labile components to patients with specific needs, with the remainder forming a product having a composition essentially the same as cold-stored wb. however, the growing need for specialized blood components has resulted in processing the majority of blood donations into components, thus limiting the availability of wb and modified wb.

rbcs, or in common usage "packed" red cells (prbcs), are the blood component most commonly transfused to increase red cell mass. prbcs are derived from the centrifugation or sedimentation of wb and removal of most of the plasma/anticoagulant solution. if collected into citrate-phosphate-dextrose-adenine solution, the volume is approximately ml, the hematocrit (hct) is % to %, and the storage life is days. extended additive solutions permit storage up to days but increase the volume to ml and decrease the hct to %. these extended-storage units are commonly used and easier to transfuse because of lower viscosity, but they may pose a problem because of their larger volume. the transfusion of leukocyte-reduced rbcs may benefit certain patients. transfusion of blood components containing leukocytes may lead to febrile reactions, a greater propensity for alloimmunization, platelet alloimmunization, and transmission of pathogens carried by leukocytes, such as cytomegalovirus (cmv). leukocyte reduction, as defined by the fda, requires filtration of the blood component by a special filter. filtration may be performed either at the time of blood donation and processing or later at the time of transfusion ("bedside filtration"). filtration before storage conveys the benefit of removing white blood cells (wbcs) before they can deteriorate and elaborate cytokines and other unwanted substances during storage. because of proven and theoretical benefits of leukocyte reduction of blood components (discussed later in the section covering the adverse effects of transfusion), many european countries and canada require that all transfusions be leukocyte reduced, a process called universal leukoreduction (ulr). some institutions in the united states have also made that decision, but either method of leukocyte reduction adds significantly to the cost of each transfusion ($ to $ ), and the benefits of this measure when applied globally have yet to be quantified.
washing prbcs involves recentrifuging to remove the plasma/preservative solution from the unit. however, washing may take an hour or more, limits subsequent storage time, and causes some loss of rbcs. washing is also not an effective method of leukoreduction. there are very few indications for the use of washed rbcs, although some recipients with plasma reactions may benefit. prbcs can be frozen in cryoprotective solution and stored for extended periods. frozen rbcs are generally limited to units of special value, such as those with a rare rbc antigen profile or autologous blood donations that need to be stored for future use. a rare-donor registry of frozen prbcs exists to assist in providing blood to patients with complex or multiple alloantibodies to red cell antigens. significant advance planning is necessary to acquire and thaw frozen prbcs for transfusion, thus limiting their use in acute situations.

wb and prbcs suffer some cell loss during storage. the current technology of bag and preservative solutions attempts to optimize cell quality and quantity by using strict criteria to determine the length of allowable storage time. nonetheless, as red cell metabolism decreases progressively, a "storage lesion" results, with accumulation of a variety of undesirable substances and loss of cellular function. over time in storage, a slow rise in the concentration of potassium, lactate, aspartate aminotransferase, lactate dehydrogenase, ammonia, phosphate, and free hemoglobin and a slow decrease in ph and bicarbonate concentration occur. cytokines and inflammatory mediators such as interleukin- , interleukin- and tumor necrosis factor also accumulate. the ph of freshly stored blood in citrate solution is . , which declines to approximately . at the end of the unit's shelf life. as potassium leaks from red cells during storage, levels as high as meq/l may result. however, each unit transfused supplies at most meq of potassium, which is well tolerated under most circumstances. during the storage period there is also a progressive decrease in rbc-associated 2,3-diphosphoglycerate (2,3-dpg) and adenosine triphosphate (atp). a decrease in 2,3-dpg increases the affinity of hemoglobin for oxygen, which shifts the oxygen dissociation curve to the left and decreases oxygen delivery to tissues. there is little evidence, however, that this transient increase in oxygen affinity has clinical importance. after infusion, 2,3-dpg gradually increases as the transfused red cells circulate, with % recovery in hours and full replacement by hours. decreased atp during storage diminishes the viability of red cells after transfusion and is one of the chief factors limiting storage time. there is no currently available storage or rejuvenation solution that optimizes these cellular constituents.

the majority of blood transfusions are in the form of prbcs, the component indicated for normovolemic patients or those for whom intravascular volume constraints are necessary. the use of wb may be desirable for patients who require both increased oxygen-carrying capacity and volume resuscitation because of a large and ongoing hemorrhage; however, the availability of wb is generally limited. resuscitation is effectively achieved with the use of prbcs and crystalloid solutions. each unit of prbcs or wb is expected to raise the hemoglobin level by g/dl and the hct by % in stable, nonbleeding, average-sized adults.
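the potassium load can be put in perspective with a simple worked estimate. the figures below are illustrative assumptions (a supernatant potassium concentration near the high end of storage, about 50 meq/l, and roughly 70 ml of supernatant fluid per unit), not measurements from this chapter:

\[ k^{+}_{\text{load}} \approx 50\ \tfrac{\text{meq}}{\text{l}} \times 0.07\ \text{l} \approx 3.5\ \text{meq per unit}, \]

a small fraction of normal daily potassium intake, consistent with the statement that the load is well tolerated under most circumstances; rapid large-volume transfusion or renal failure are the settings in which it matters.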
although some studies have demonstrated a slight superiority of fresh wb over components when used during cardiac surgery in selected patients, the benefits of fresh blood remain controversial, and current testing and processing requirements limit general availability. despite a long tradition of transfusing rbcs in critically ill patients, the precise indications for transfusion remain a source of controversy, and specific transfusion practices may vary widely among clinicians. before the major randomized studies of rbc transfusion policies, one survey of transfusion practice showed that about half of icu patients were receiving red cell transfusions, and another showed that if the icu stay was longer than a week, the rate of transfusion was %. the total number of transfusions was high, and icu practice was characterized by high rates of transfusion. the reasons for the controversies are clear: rbcs should be transfused only to enhance tissue oxygen delivery, but the underlying physiology of anemia, the complex adaptations to anemia, and the potential advantages and disadvantages to particular groups of patients are not as well understood.

compensatory mechanisms for acute and chronic anemia are diverse and complex. all work in concert to maintain oxygenation within the microcirculation. cardiovascular adjustments leading to increased cardiac output include decreased afterload and increased preload resulting from changes in vascular tone, increased myocardial contractility, and elevated heart rate. lowered blood viscosity permits improved flow of erythrocytes within capillaries. blood flow is redistributed to favor critical organs with higher oxygen extraction. pulmonary mechanisms, though contributing relatively little to short-term oxygenation demands, exert potent effects on related metabolic variables. finally, the hemoglobin molecule can undergo biochemical and conformational changes to enhance unloading of oxygen at the capillary level. all these mechanisms contribute to an "oxygen reserve" capacity that exceeds baseline requirements by approximately fourfold. no experimental model exists that encompasses the diversity of physiologic compensations for hypoxia. experiments carried out in animals and case reports in patients refusing transfusion indicate that an extremely low hct is tolerated if tissue perfusion is adequate. [ ] [ ] [ ]

certain objective, though indirect, measurements of tissue oxygenation exist and are available to clinicians caring for patients monitored invasively in the icu. mixed venous oxygen content (pvo2) and cardiac output can be measured in patients undergoing pulmonary artery catheterization; arterial oxygen content can also be measured directly. the oxygen extraction ratio (er) can be calculated directly, and in the presence of normal or high cardiac output it is a measure of tissue oxygen extraction and, indirectly, of the adequacy of tissue oxygen delivery. the total body er at baseline is about %. a falling pvo2 and an er increasing to greater than % have been proposed as indicators of the need for red cell transfusion. there have been only randomized trials of transfusion policy in the icu, and only of them was large enough to draw specific, statistically significant conclusions. the canadian critical care trials group compared a liberal (target hemoglobin, to g/dl) with a restrictive (target hemoglobin, to g/dl) red cell transfusion policy in patients stratified for disease severity.
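the quantities discussed here follow from the standard oxygen transport relations (textbook formulas, not equations reproduced from this chapter):

\[ \mathrm{cao_2} = 1.34 \cdot \mathrm{hb} \cdot \mathrm{sao_2} + 0.003 \cdot \mathrm{pao_2}, \qquad \mathrm{do_2} = \mathrm{co} \times \mathrm{cao_2} \times 10, \qquad \mathrm{er} = \frac{\mathrm{cao_2} - \mathrm{cvo_2}}{\mathrm{cao_2}}, \]

with hb in g/dl, sao2 as a fraction, pao2 in mm hg, co in l/min, and do2 in ml/min. as a worked example with assumed values: hb = 10 g/dl, sao2 = 0.98 and pao2 = 90 mm hg give cao2 ≈ 13.4 ml/dl; with co = 5 l/min, do2 ≈ 670 ml/min; and a measured cvo2 of 10 ml/dl yields er ≈ 0.25, the often-quoted baseline extraction of about one fourth of delivered oxygen.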
at days from randomization, the restrictive strategy was at least as good as, if not better than (p = . ), the liberal strategy, and overall hospital mortality was significantly lower in the restrictive-strategy group (p = . ). for patients younger than years and for patients with lower (< ) apache (acute physiology, age, and chronic health evaluation) ii scores, the restrictive strategy was clearly superior. in addition, liberal transfusion was not associated with shorter icu stays, less organ failure, or shorter hospital stays; longer mechanical ventilation times and cardiac events were more frequent in the liberal-strategy group. a later subgroup analysis of patients with cardiovascular disease, though too small to exclude statistical doubt, suggested that a more liberal transfusion strategy is probably appropriate for patients with severe ischemic coronary disease. this observation has some support in experimental studies of the effects of anemia in laboratory animals with coronary occlusion. the canadian study has highlighted the many and complex issues involved in transfusion decision making in the icu.

since publication of the canadian study, several large reports have examined the use of red cell transfusions in critical care units. vincent and colleagues surveyed european icus and found that the transfusion rate in patients was % during the icu stay and . % after the stay; the mean pretransfusion hemoglobin level was . g/dl. corwin and colleagues studied icus in the united states a year later and found great similarity: nearly % of patients received transfusions, and the mean threshold hemoglobin level was . g/dl. a single large scottish teaching hospital reported a more parsimonious practice: the rate of transfusion was still % in its icu patients, but the total volume of blood used was slightly smaller and the mean pretransfusion hemoglobin level was only . g/dl. all these authors have concluded that icu practice has not fully embraced the guidelines of the canadian clinical trial. in contrast, hospitals in australia and new zealand have reported on transfusion in consecutive icu admissions, and although the authors found a median pretransfusion hemoglobin concentration of . g/dl, the rate of transfusion was lower, at only . % of patients, % of whom were bleeding. the "inappropriate" transfusion rate was %. the authors speculate that the practitioners may have been influenced by publication of the canadian study and their own regional survey of transfusion practices. nonetheless, they agree that full implementation of the canadian guidelines in their clinical setting might be controversial.

the literature on rbc transfusion in the setting of surgery, particularly surgery with the use of blood products, is growing. a mounting body of data illustrates the human tolerance of a low hct during and after surgery. a recent randomized trial of rbc transfusion strategy in orthopedic surgery demonstrated no significant differences in outcome between a restrictive ( g/dl) and a liberal ( g/dl) transfusion threshold and included monitoring for silent myocardial ischemia preoperatively and postoperatively. provided that adequate perfusion of the microcirculation is maintained, purposeful maintenance of a low hct during surgery, a technique called normovolemic hemodilution, can be a powerful tool in minimizing blood loss and the attendant need for red cell transfusion.
table - summarizes guidelines proposed by the national institutes of health, the american society of anesthesiologists, and the american college of physicians relative to the transfusion of rbcs. these guidelines have been provided with the intent of establishing parameters, not with the intent of substituting for the individual clinician's judgment. the art of medical decision making in transfusion, as in other areas of medicine, lies in determination of the appropriate treatment for the individual patient.

a platelet concentrate (random-donor platelets) is obtained by centrifugation from a unit of donated wb. each unit contains a minimum of × platelets suspended in about ml of plasma. platelets are stored at room temperature to avoid loss of function from refrigeration and are constantly agitated to maximize gas exchange. the length of storage varies with the container used, but most systems permit -day storage. because of this limited storage time and the increasing demand for this component, platelets are often subject to supply shortages. some loss of viability and platelet numbers occurs during storage, but -day-old platelets still effect hemostasis. once the bags are entered for pooling before transfusion, the platelets must be administered within hours. each unit of platelets is expected to increase the platelet count by × /l in a typical -kg adult. the usual dose is units, or u/kg of body weight. a -hour post-transfusion platelet count should be obtained to determine the adequacy of response. the following equation, the corrected count increment (cci), relates platelet number and body size to the post-transfusion increment and can be used to assess the effectiveness of the transfusion:

\[ \mathrm{cci} = \frac{(\text{post-transfusion count} - \text{pretransfusion count, per } \mu\text{l}) \times \text{body surface area (m}^2\text{)}}{\text{number of platelets transfused } (\times 10^{11})} . \]

abo-compatible platelets are desirable but not essential. when abo-mismatched platelets are given, removal of some of the incompatible plasma can be carried out at the time of pooling for transfusion. likewise, volume reduction may be necessary for patients at risk for fluid overload from the to ml of plasma present in to units of platelets. nonetheless, the remaining plasma is a good source of stable coagulation factors and contains diminished but still potentially beneficial amounts of factors v and viii. there is no contraindication to the use of rh-positive platelets in rh-negative patients; if given to women with future childbearing potential, rh immune globulin (rhig) may be used prophylactically against the small risk of rh alloimmunization from red cells that may be contained in the platelet concentrate.

plateletpheresis (common terms: single-donor platelets, apheresis platelets) involves separating and removing platelets from one donor by cytapheresis during a -to- hour procedure on an automated device and then retransfusing the remainder of the blood back into the donor. each collection contains the equivalent of to units of platelet concentrates. single-donor platelets are suspended in about ml of plasma, so the same abo and volume considerations discussed earlier pertain. single-donor platelets offer the clear benefit of reducing the recipient's exposure to multiple donors. single-donor platelets may also be the only available alternative for recipients who have been alloimmunized by previous platelet transfusions because they may be human leukocyte antigen (hla) or platelet antigen matched to the recipient. the use of apheresis platelets now exceeds the use of pooled random-donor platelets; however, use of this product in emergency situations is limited by the availability of volunteer donors.
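a worked example of the cci, using entirely hypothetical numbers: a pretransfusion count of 8,000/µl, a 1-hour post-transfusion count of 35,000/µl, a body surface area of 1.8 m², and 4 × 10¹¹ platelets transfused give

\[ \mathrm{cci} = \frac{(35{,}000 - 8{,}000) \times 1.8}{4} \approx 12{,}000, \]

comfortably above the commonly cited 1-hour adequacy threshold of roughly 7,500; values persistently below that level despite fresh abo-compatible platelets raise concern for refractoriness, discussed below.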
platelet transfusions are indicated for patients bleeding because of thrombocytopenia or functional platelet defects. guidelines for transfusion continue to evolve, and the current guidelines merely provide a desirable range for platelet counts, assuming normal platelet function (table - ). there is ample evidence that bleeding medical or surgical patients with platelet counts of × /l or above will not benefit from transfusion if thrombocytopenia is the only abnormality. for critical invasive procedures in which even a small amount of bleeding could lead to loss of vital organ function or death, maintaining the platelet count at × /l or greater is typically preferred. the presence of other factors that diminish platelet function, such as certain drugs, foreign intravascular devices (e.g., intra-aortic balloon pump or membrane oxygenator), infection, or uremia, may alter this requirement upward. patients at risk for small but strategically important hemorrhage, such as neurosurgical patients, may need to be maintained at counts of to × /l. patients without hemorrhage who have platelet counts of × /l or lower appear to be at increased risk for significant hemorrhage. indications for transfusion in patients with counts above × /l are less well established; thus, the majority of guidelines propose prophylactic platelet transfusion to prevent hemorrhage at a threshold of × /l. the bleeding time is not a useful procedure in this situation because it is usually prolonged at counts below × /l, may be insufficiently reproducible, and correlates poorly with the risk for bleeding.

patients undergoing cardiac bypass surgery experience a drop in platelet count and often acquire a transient platelet functional defect from damage associated with the bypass apparatus. most patients do not experience platelet-associated bleeding, however, so prophylactic transfusion in the absence of bleeding is not warranted. in a patient who continues to bleed postoperatively, more likely causes are a localized, surgically correctable lesion or failure to reverse heparinization. if these conditions are excluded, empiric transfusion of platelets may be justified. patients thrombocytopenic by virtue of immunologic destructive processes such as idiopathic thrombocytopenic purpura (itp) receive little benefit from platelet transfusions because the transfused platelets are rapidly removed from the circulation. in the event of life-threatening hemorrhage or an extensive surgical procedure, transfusion may prove beneficial for its short-term effect, and may be accomplished effectively by pretreatment with high-dose immunoglobulin or high-dose anti-d antiserum (rhig). platelet transfusion has been reported to be deleterious in thrombotic thrombocytopenic purpura (ttp), in the related hemolytic-uremic syndrome, and in heparin-induced thrombocytopenia. cautious administration, in cases of life-threatening thrombocytopenic bleeding only, is prudent.

prophylactic platelet transfusion for thrombocytopenia secondary to underproduction remains controversial. the common practice of transfusion to maintain the platelet count above × /l derives from data published in , which demonstrated an increase in spontaneous bleeding in leukemic patients at that level. however, critical evaluation of the data reveals that serious hemorrhage was not greatly increased until counts fell to × /l or lower and that these patients received aspirin for fever, which might have compromised platelet function and enhanced the bleeding.
a somewhat more recent study quantitating stool blood loss in patients with aplastic anemia defined a bleeding threshold at platelet counts of to × /l. a prospective study of a more conservative transfusion protocol found that major bleeding episodes occurred on . % of days with counts of less than × /l and on only . % of days with counts of to × /l. the trigger for prophylactic platelet transfusion in the to × /l range, however, applies primarily to stable thrombocytopenic patients. factors such as fever, use of anticoagulant or antiplatelet drugs, and invasive procedures must be considered when generating a treatment plan for individual patients. patients experiencing rapid drops in platelet count may be at greater risk than those at steady state and thus may benefit from transfusion at higher counts. the benefits of more judicious use of platelet transfusion include decreased donor exposure, which lessens the risk of transfusion-transmitted disease; fewer febrile and allergic reactions that may complicate the hospital course; and the potential delay or prevention of alloimmunization to hla and platelet antigens.

the development of refractoriness to platelet transfusions is a serious event heralded by a falling cci. poor response to platelet transfusions can be seen in patients with other reasons for platelet consumption, including splenomegaly, fever, trauma and crush injury, burns, disseminated intravascular coagulation (dic), concomitant drugs, or transfusion of platelets of substandard quality. these factors should be sought and corrected if possible. alloimmunization is characterized by the development of anti-hla or platelet-specific antibodies, with resultant immune platelet destruction. as many as % of patients receiving multiple red cell or platelet transfusions become immunized. leukocyte depletion of transfused components can prevent or delay this phenomenon, but it is important to use leukoreduced components early in the course of transfusion therapy. when patients fail to achieve expected increments after platelet transfusion, provision of abo-specific platelet concentrates that are less than hours old may improve the response. if no improvement is seen and the aforementioned medical conditions are excluded, the patient should be screened for hla antibodies or be hla typed and provided with hla-compatible single-donor platelets. alternatively, platelet crossmatching with the patient's serum can be carried out. there is no advantage to unmatched single-donor platelets in this situation.

standard ffp is prepared by centrifugation of wb and is frozen within hours of blood donation. ffp may be stored frozen for year. the usual volume is about ml, depending on the donor's hct. the most common method of thawing before transfusion is soaking in a °c water bath, which requires about to minutes. once thawed, ffp can be stored refrigerated for a maximum of hours. when prepared and stored in this manner, ffp supplies all the constituents in the amounts normally present in circulating plasma, including stable and labile coagulation factors, complement, albumin, and globulins. by convention, the coagulation factors are present in concentrations of u/ml. crossmatching to the recipient is not performed, but ffp must be abo compatible. standard ffp is as likely to transmit hepatitis, hiv, and most other transfusion-related infections as cellular components are.
new ffp products have recently been introduced in response to concern about the transmission of infectious diseases. one such product is solvent-detergent-treated ffp. solvent-detergent treatment is a means of viral inactivation that removes the infectivity of lipid-enveloped viruses, such as hepatitis b and c and hiv. because the product is derived from pooled plasma, with as many as donors in each lot, it has the potential to actually increase recipient exposure to pathogens not inactivated by the solvent-detergent method, such as hepatitis a and parvovirus b19, and to be more vulnerable to any newly emerging non-lipid-enveloped agent. a variety of other techniques for reducing pathogen exposure in ffp have been developed, including exposure to low ph or vapor heating and treatment with ultraviolet irradiation, gamma irradiation, or psoralens and light to inactivate pathogens by inducing dna damage. because none of the ffp products is entirely free from the risk of disease transmission or other adverse effects, and because infection-reducing modifications add significantly to the cost of the components, ffp should be used judiciously. it should be administered only to provide coagulation factors or plasma proteins that cannot be obtained from safer sources.

ffp is commonly used to treat bleeding patients with acquired deficiency of multiple coagulation factors, as in liver disease, dic, or dilutional coagulopathy, or to treat patients with congenital deficiency of a coagulation factor or other protein for which concentrates or safer sources do not exist. ffp may be indicated for emergency reversal of the coagulopathy induced by warfarin anticoagulants when more concentrated products are not available, or for the provision of protein c or s in patients who are deficient and suffering acute thrombosis. ffp should be administered as boluses as rapidly as feasible so that the resulting factor levels allow hemostasis; the use of ffp infusions without adequate bolus administration is not helpful. ffp should not be used for volume expansion or wound healing or as a nutritional source of protein. ffp does not reverse anticoagulation induced by heparin and in theory might exacerbate bleeding by supplying more antithrombin, heparin's cofactor. prophylactic administration of ffp does not improve patient outcome in the setting of massive transfusion or cardiac surgery unless there is bleeding with an associated documented coagulation abnormality. patients do not usually bleed as a result of coagulation factor insufficiency when the international normalized ratio (inr) is less than about . , and even then the results are not always predictable. the partial thromboplastin time (ptt) is not useful in predicting procedural bleeding risk. ffp is often requested prophylactically before an invasive procedure when the patient exhibits mild prolongation in coagulation studies; most of these procedures may be carried out safely without transfusing ffp.

ffp is probably the most misused blood component, as illustrated by retrospective surveys. coagulation factors are normally present in the blood far in excess of the minimum levels required for hemostasis. as little as % of the normal plasma concentration of several factors will effect hemostasis. conversely, ffp treatment of acquired multiple deficiencies, as in hepatic failure, is often ineffective because many patients cannot tolerate the infusion volumes required to achieve hemostatic levels of coagulation factors, even transiently. the plasma half-life of transfused factor vii is only to hours, and it may be impossible to administer sufficient ffp every few hours without encountering intravascular volume overload. finally, in some instances, transfusion of seemingly adequate volumes may still fail to correct the coagulopathy. careful documentation of both the need for ffp and the adequacy and outcomes of therapy is essential.
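the volume problem described here can be made concrete with a rough, commonly taught estimate; the numbers below are illustrative assumptions, not this chapter's own figures. since ffp contains about 1 u/ml of each factor and plasma volume is roughly 40 ml/kg, a bolus of d ml/kg raises factor levels by approximately d/40 of normal:

\[ \Delta(\text{factor level}) \approx \frac{d\ (\text{ml/kg}) \times 1\ \text{u/ml}}{40\ \text{ml/kg}} \times 100\%, \qquad d = 15\ \text{ml/kg} \Rightarrow \approx 35\text{--}40\% . \]

actual increments are usually lower because of extravascular distribution and recovery losses, and for a 70-kg patient that dose is already about a liter of plasma every few half-lives, which illustrates why hemostatic correction in hepatic failure so often fails on volume grounds.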
the plasma half-life of transfused factor vii is only to hours. it may be impossible to administer sufficient ffp every few hours without encountering intravascular volume overload. finally, in some instances, transfusion of seemingly adequate volumes may still fail to correct the coagulopathy. careful documentation of both the need for ffp and the adequacy and outcomes of therapy is essential. cryoprecipitate is manufactured by thawing ffp at refrigerator temperature, centrifuging it, and resuspending the precipitated proteins in about ml of supernatant plasma. each bag is a concentrated source of factor viii ( to units), von willebrand factor (vwf) ( % of original plasma content), fibrinogen ( mg), factor xiii ( % of original plasma content), and fibronectin. cryoprecipitate offers the advantage of transfusing more specific protein and less total volume than the equivalent dose of ffp does. it has been used to treat patients with inherited coagulopathies, such as hemophilia a, von willebrand disease, or factor xiii deficiency. in the critical care setting, it is more commonly used to replenish fibrinogen, especially in bleeding patients with hypofibrinogenemia caused by dilutional or consumptive coagulopathy. cryoprecipitate also reportedly improves hemostasis in uremic patients, presumably by reversing the functional platelet defect, but desmopressin acetate (ddavp) or conjugated estrogens exert similar effects and should be used preferentially to avoid potential transfusion-transmitted disease. the usual dose of cryoprecipitate to treat hypofibrinogenemia is bags/units to start, then to bags/units every hours or as necessary to keep the fibrinogen level above mg/dl. each bag/unit of cryoprecipitate carries a risk of disease transmission equivalent to that of 1 unit of blood. for this reason, commercial factor viii concentrates, recombinant or treated to inactivate viruses, are preferred over cryoprecipitate for treating hemophilia a patients. immune serum globulin (ig), rhig, and hyperimmune globulins for diseases such as hepatitis b and varicella zoster are obtained by fractionation of pooled plasma, followed by chromatography, delipidation, and other steps to remove aggregates and infectious agents. intravenous ig (ivig) is available in solution or lyophilized form, with protein content varying by mode of preparation. the available products vary slightly in the amounts of iga and igm they contain, which are mostly present in only trace quantities. ig preparations can be used to provide passive antibody prophylaxis or to supply ig in certain immunodeficiency states. hyperimmune globulins may be used to treat active infections in immunosuppressed hosts. recent applications have exploited ig's immunomodulatory effects in treating a wide variety of disorders with an immune basis. the specific mechanism of action of ivig in such conditions has not yet been identified, but possibilities include interference with macrophage fc receptor function, neutralization of anti-idiotypic antibodies, and interference with the incorporation of activated complement fragments into immune complexes. a recent review discusses more completely the effects of ivig on the immune system and its potential uses. rhig is prepared from pools of plasma obtained from donors sensitized to the red cell antigen d of the rh group. the standard-dose vial contains primarily igg anti-d, with a protein content of µg in ml. this dose will protect against ml of d-positive red cells or ml of wb.
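the fibrinogen response to the cryoprecipitate dosing described above can be estimated with the same unit arithmetic. the ~200 mg of fibrinogen per bag and the plasma volume formula below are typical figures assumed for illustration, not this chapter's own values.

```python
def fibrinogen_rise_mg_dl(bags, weight_kg, hct=0.40, mg_per_bag=200.0):
    """approximate fibrinogen increment (mg/dl) from cryoprecipitate,
    assuming ~200 mg fibrinogen per bag (an assumed typical figure)
    and a plasma volume of 70 ml/kg x (1 - hct), in deciliters."""
    plasma_volume_dl = weight_kg * 70 * (1 - hct) / 100
    return bags * mg_per_bag / plasma_volume_dl

print(f"10 bags in a 70-kg adult: ~{fibrinogen_rise_mg_dl(10, 70):.0f} mg/dl")
```

rhig dosing against a measured red cell exposure works the same way. the 15 ml of d-positive red cells covered per standard vial is the conventional coverage figure, assumed here; many services add one extra vial as a margin of safety.

```python
import math

def rhig_vials(rbc_exposure_ml, ml_covered_per_vial=15.0):
    """number of standard-dose rhig vials needed to cover a given volume
    of d-positive red cells (conventional coverage figure assumed)."""
    return math.ceil(rbc_exposure_ml / ml_covered_per_vial)

print(rhig_vials(4))   # small rbc exposure from platelet concentrates -> 1
print(rhig_vials(45))  # a 45-ml red cell exposure -> 3 vials
```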
rhig carries no risk of virus transmission. although rhig is used primarily in obstetrics, it may also be indicated to prevent alloimmunization in rh-negative patients receiving small amounts of rh-positive red cells, as in platelet concentrates. routine prophylaxis against large numbers of red cells, as in a unit of rh-positive wb or prbcs given by accident to an rh-negative recipient, is not reliable and usually requires the administration of large amounts, but instances of its effective use in these circumstances have been reported. higher doses of intravenous rhig have been used in the treatment of itp. plasma-derived colloids include human serum albumin (hsa), available in 5% and 25% solutions, and plasma protein fraction (ppf), available as a 5% solution. both are derived from pooled donor plasma but are essentially pathogen-free. hsa is composed of at least % albumin, whereas ppf is subjected to fewer purification steps and contains at least % albumin, with correspondingly more globulins. the 5% solutions are iso-oncotic, whereas the 25% solution of hsa is hyperoncotic and requires infusion with crystalloid solutions. potential clinical indications for colloid solutions include hypovolemic shock; hypotension associated with hypoproteinemia in patients with liver failure or protein-losing conditions; use as a replacement solution in plasma exchange or exchange transfusion; and facilitation of diuresis in fluid-overloaded hypoproteinemic patients. albumin solutions are not indicated as a nutritional source to raise serum albumin. their use for some indications, particularly resuscitation, has become controversial, and pulmonary edema has been reported in association with their infusion. although albumin solutions are reasonably safe products to administer, expense and limited availability restrict their use. anaphylactic reactions have been reported in less than . % of recipients. the use of ppf has been associated with severe hypotensive episodes, with hageman factor fragments or prekallikrein activator being implicated, making ppf a less desirable resuscitation fluid and contraindicated in cardiac surgery. granulocyte concentrates for transfusion are obtained from a single donor by cytapheresis methods, which generally involve the administration of hydroxyethyl starch and corticosteroids to the donor to improve granulocyte yield. granulocyte colony-stimulating factor (g-csf) has been added to some collection regimens and increases both cell counts and granulocyte survival substantially. each collection should contain at least 1 × 10¹⁰ granulocytes and is suspended in approximately ml of plasma. a significant number of red cells are present, so crossmatching for the recipient is required. because of the potential risk for graft-versus-host disease (gvhd), granulocytes are usually collected from hla-matched donors. granulocytes are stored at room temperature and must be transfused within hours of collection, although sooner is better because of rapid deterioration of the cells. patients who may benefit from granulocyte transfusions include those who are neutropenic (absolute neutrophil count less than 0.5 × 10⁹/l), are unresponsive to appropriate antibiotic treatment, and in whom bone marrow recovery is expected to occur. a course of therapy generally involves daily infusion for to days.
granulocytes have been used for progressive fungal infections in immunosuppressed granulocytopenic patients, in patients with defective leukocytes (e.g., chronic granulomatous disease), and in the neonatal icu for neonatal sepsis. randomized trials had suggested that granulocyte transfusions under these circumstances can reduce mortality, but such trials have not been conducted for decades. effective antibiotic regimens and the significant adverse effects associated with the use of granulocyte concentrates, including pulmonary insufficiency related to alloimmunization and cmv infection, have limited their use in recent years. the decision to transfuse blood components, like any therapeutic maneuver, must be made with full awareness of the potential risk to the recipient as well as the expected benefits. public expectations of a zero-risk blood supply help raise the acuity of physicians' decisions. for some patients, the benefit from transfusion is so obvious that the associated risks pale in comparison to the consequences of withholding transfusion. however, the clinician's knowledge of the incidence and management of adverse reactions to transfusion is vital, not only to ensure the best patient care but also to provide appropriate patient education and true informed consent. almost every patient who receives an allogeneic blood transfusion will experience some adverse reaction if such universal effects as immunomodulation and bone marrow suppression are considered. measurable reactions to transfusion occur in about % of patients; more serious adverse responses may be expected in only % to % of transfusions. the nature of these adverse reactions ranges from those that are common but clinically unimportant to those that may cause significant morbidity or death (table - ). transfusion in the icu is a common and often lightly regarded event. however, because the signs and symptoms of severe, life-threatening reactions are frequently indistinguishable from those of troublesome but less significant reactions, every transfused patient who experiences a significant change in condition, such as an elevation in temperature, change in pulse or blood pressure, dyspnea, or pain, must be promptly and fully evaluated to identify the cause of the reaction and to institute treatment when necessary. the basic approach to all acute reactions should be to maintain a high index of suspicion for acute hemolytic reactions: stop the transfusion immediately, maintain venous access with intravenous fluids, and inform the blood bank laboratory immediately so that the appropriate transfusion reaction protocol can be instituted and post-transfusion specimens obtained. early recognition of severe transfusion reactions may be lifesaving. the most feared reaction to blood transfusion is intravascular hemolysis, caused by the recipient's complement-fixing antibodies attaching to donor rbcs with resultant rbc lysis. abo incompatibility is most often implicated in these incidents. intravascular hemolysis remains the single most common acute cause of fatalities associated with the transfusion episode. in addition to hemolysis, complement activation stimulates the release of inflammatory mediators and cytokines and thereby leads to hypotension and vascular collapse. activation of the coagulation system may result in dic. acute renal failure may also occur, presumably on the basis of immune complex interactions.
morbidity and mortality are directly related to the quantity of incompatible blood transfused, which is why prompt recognition and cessation of transfusion cannot be overemphasized. misidentification of the patient, or "clerical error," at any point from specimen acquisition through release of the unit and initiation of infusion, is the major cause of acute intravascular hemolysis. this reaction is more likely to occur in critical care settings, such as the icu, operating room, and emergency department, than anywhere else in the hospital. it is far preferable to transfuse uncrossmatched group o red cells than to chance abo incompatibility caused by improper patient and specimen identification procedures. the most common clinical sign of hemolysis is fever, with or without chills. other common signs and symptoms include back or flank pain, anxiety, nausea, lightheadedness, dyspnea, and hemodynamic instability. in a comatose or anesthetized patient, many of these symptoms will not be evident; therefore, signs such as hypotension, hemoglobinuria, and diffuse oozing from puncture sites or incisions may be the only notable features. immediate management of hemolytic transfusion reactions must include cessation of the transfusion; the remainder of care is supportive. rapid verification of patient and unit identification must be made, not only to confirm the suspected reaction but also to prevent a second patient from receiving a reciprocally incompatible unit if a clerical error has been made. desired end points of supportive care include maintenance of blood pressure and high urine output and management of coagulopathy and further blood loss. steroids, heparin, and other specific pharmacologic interventions have no role in treatment. anaphylactic reactions to blood transfusions are fortunately rare but may be life-threatening. the usual cause is recipient antibody to a component of plasma that the patient lacks, most commonly antibody to iga in iga-deficient individuals. signs and symptoms include severe malaise and anxiety, flushing, dizziness, dyspnea, bronchospasm, abdominal pain, vomiting, diarrhea, hypotension, and eventually shock. fever and hemolysis do not occur. management includes immediate cessation of transfusion and standard therapy for anaphylaxis. if anti-iga antibodies are determined to be the cause of this reaction, the patient must receive blood components donated by iga-deficient individuals or, if unavailable, specially prepared washed rbcs and platelet concentrates. plasma-derived preparations, such as albumin and ig, contain varying amounts of iga and pose a substantial risk in these patients. febrile nonhemolytic reactions (fnhrs) are the most commonly occurring immediate transfusion reaction. these reactions are annoying to the clinician, patient, and transfusion service alike in that they can cause significant discomfort and, because they share certain manifestations with acute hemolytic reactions, must be investigated in every instance. fnhrs occur in approximately . % to . % of transfusion episodes. the etiologic factors are probably complex and multiple, but many reactions are caused by the release of cytokines and pyrogens, either within the transfused unit of blood or as a result of recipient antibodies to donor leukocytes. clinical signs include fever, with or without chills, usually beginning to hours after the start of the transfusion but occasionally delayed up to to hours.
multiparous women and patients who are multiply transfused are particularly prone to fnhrs. the transfusion must be stopped and the appropriate transfusion reaction evaluation instituted. antipyretics such as acetaminophen may be administered. though commonly used, antihistamines such as diphenhydramine are neither preventive nor therapeutic. once acute hemolysis is excluded, transfusion of a new unit may be instituted. most patients will not experience a second such reaction. if repeated reactions become problematic, leukocyte-depleted blood components may be supplied. the implementation of universal leukoreduction (ulr) reduces the frequency of all fevers seen after transfusion by only about %. hives and pruritus are relatively common adverse effects of transfusion. they represent a hypersensitivity reaction localized to the skin; the cause is unknown but may involve both donor and recipient characteristics. these reactions consist of localized or generalized urticaria beginning shortly after the start of transfusion without other signs or symptoms of anaphylaxis or hemolysis. the transfusion should be temporarily stopped, and antihistamines may be administered. if the hives resolve in a short time, the same unit of blood may be cautiously restarted. if repeated urticarial reactions occur, premedication with antihistamines may be effective, or blood components washed to remove plasma may be required. intravascular volume overexpansion is particularly likely to occur in critical care patients with limited cardiac reserve. aside from the inherent volume of the blood components, the intravenous normal saline concurrently administered adds to the volume load. unfortunately, normal saline solution is the only intravenous fluid that may be administered with blood components. with careful attention to transfusion requirements and the use of volume reduction maneuvers available to the transfusion service, volume overload can be minimized in most instances. the frequency of this complication of transfusion is not reported. delayed hemolysis is an uncommon but probably underrecognized reaction to transfusion that results from the stimulation of a primary or secondary (anamnestic) recipient antibody response to foreign rbc antigens. these antibodies are undetected at the time of transfusion but increase after transfusion in a manner analogous to the vaccination "booster" effect. these reactions typically occur to days after transfusion but go unrecognized because of the lack of a clear temporal association with transfusion. fever, chills, and an unexplained decline in hct are the usual signs. transient elevation in bilirubin and lactate dehydrogenase may also occur. the diagnosis is established by a positive direct antiglobulin (coombs) test resulting from recipient antibody coating donor rbcs. the antibody may be identified by eluting it from the rbcs or by demonstrating it in the recipient's serum. the specificity of the antibody is often against rbc antigens of the rh family or the kidd, duffy, or kell systems. hemolysis may not occur, but if it does, it is likely to be extravascular and only rarely causes renal failure or dic. prevention of these reactions is difficult. alloimmunization to foreign rbc antigens occurs in approximately % of transfusions. detection of delayed antibodies is the reason a new blood bank specimen is required every hours if the patient has recently been transfused.
permanent transfusion records should note the occurrence of delayed antibodies, even though they may not be apparent at a later crossmatch. access to transfusion databases is critical for the care of patients with a past history of transfusion. transfusion-related acute lung injury (trali) is an uncommon ( . %) but serious adverse effect of transfusion that has only recently been gaining recognition. similar reactions have been called pulmonary leukoagglutinin reaction or noncardiogenic pulmonary edema. these reactions consist of acute respiratory distress syndrome (ards) developing to hours after transfusion. signs and symptoms include bilateral pulmonary infiltrates, hypoxemia, fever, and occasionally hypotension. monitored patients are found to have normal or low pulmonary wedge pressure and central venous pressure, in contrast to patients experiencing volume overload. if adequate respiratory support and oxygenation are established promptly, spontaneous resolution generally occurs within to days. deaths have nonetheless occurred, particularly with a delay in diagnosis. episodes of trali appear to have several possible causative mechanisms. some cases may be caused by donor antibodies reacting with recipient neutrophil or hla antigens. plasma factors related to blood storage have also been implicated, such as lipid substances from deterioration of donor cell membranes that prime recipient neutrophils, which then damage the pulmonary vasculature and lead to increased capillary permeability and an ards-like syndrome. other clinical factors, such as cardiac bypass surgery or other procedures, may contribute to increased risk. in the antibody model at least, the implicated antibody is unique to the donor, and the afflicted recipient will probably not experience another such reaction, provided that the recipient is not exposed to the same donor. trali is undoubtedly under-recognized in the critical care setting and may frequently be confused with fluid overload or cardiogenic pulmonary edema. transfusion-associated gvhd (ta-gvhd) is a well-documented, but probably under-recognized, highly lethal immunologic complication of blood transfusion. immunocompromised patients infused with blood components containing viable donor lymphocytes are at risk for engraftment of the allogeneic lymphocytes and ensuing rejection of recipient (host) tissues. transfusion recipients at highest risk include neonates, especially the very premature; bone marrow and organ transplant recipients; and leukemia and lymphoma patients. ta-gvhd has also been reported in patients after cardiac surgery who received designated donor blood from relatives; presumably, the hla antigenic differences between donor and recipient were insufficient to stimulate a recipient immune response but sufficient to elicit a donor immune response. the onset of ta-gvhd is usually within to days after transfusion, and it is manifested as fever and rash, followed by diarrhea and evidence of liver and bone marrow injury. ta-gvhd differs from the gvhd seen in bone marrow transplantation (bmt) by its involvement of the marrow and by far greater mortality. treatment is largely ineffective, and mortality exceeds %. irradiation of blood components at 25 gy prevents ta-gvhd by eliminating the donor lymphocyte mitogenic response. all cellular blood components should be irradiated before transfusion to high-risk patients.
the functions of the cellular components of blood are unaffected, although damage to rbc membranes limits postirradiation storage of prbcs. blood donated by a relative for any patient should be irradiated, as should hla-matched or crossmatched platelet products. allogeneic blood transfusion has been shown to modulate and suppress the recipient's immune response, an effect first noted with kidney transplantation. immunosuppression in a critical care setting is generally undesirable, but whether transfusion has a significant impact is debated. ongoing clinical issues center on two areas of controversy: the putative association between blood transfusion and increased numbers of postoperative infections, and between transfusion and increased and more rapid rates of tumor recurrence in surgical oncology patients with certain malignancies. there has been no resolution of either issue despite a few prospective trials having been performed. the largest prospective trial of colorectal cancer resection, for example, is negative, but a meta-analysis of the extant data suggests that an adverse effect on recurrence does exist. similarly, most of the randomized trials of postoperative or critical care unit infections are too small to demonstrate an effect of transfusion, but all point in the direction of an adverse effect. controversy will continue until larger randomized trials are conducted. the precise mechanism of the immunosuppression induced by allogeneic transfusion has not yet been delineated, and several mechanisms may be involved. alterations identified in laboratory and clinical transfusion recipients have included depression of the t-helper/t-suppressor lymphocyte ratio, decreased natural killer cell activity, diminished interleukin-2 generation, formation of anti-idiotype antibodies, impairment of phagocytic cell function, and chronic persistence of donor lymphocytes (microchimerism), suggestive of low-level gvhd. difficulties in the analysis of human data arise because patients requiring blood transfusions have conditions that themselves induce immune changes. there is some evidence, bolstered by the results of two large clinical trials, that leukocyte reduction of blood components reduces or eliminates this immunosuppressive effect. proponents of this viewpoint argue that, for this reason, ulr would benefit most patients receiving blood transfusions and lead to fewer infections, tumor recurrences, and other related putative risks of transfusion, all potentially saving lives and cost. prospective trials will be extremely important. public awareness of transfusion-associated acquired immunodeficiency syndrome (aids) has done more to revolutionize transfusion practice than any other transfusion risk, resulting in more conservative blood use, more stringent donor selection criteria, and improved screening tests. the result is that viral transmission rates are now difficult to measure, and the risk of transfusion-related infectious disease is lower than ever. the current best estimate is that to units per , will transmit some kind of infection if agents such as cmv or epstein-barr virus are included. bacterial infection has become the most common infectious risk as increasingly sensitive donor screening tests, including nucleic acid testing (nat) to detect viral dna or rna, have shortened the seronegative window and reduced the risk for post-transfusion hepatitis (pth) and other viral infections.
several fatalities are reported yearly from the transfusion of blood components contaminated with viable, proliferating bacteria, with or without the accumulation of endotoxin. platelet concentrates, because they must be stored at room temperature, are particularly prone to bacterial growth, with a reported incidence of in , transfusions. organisms isolated from platelets and implicated in fatal transfusion reactions include staphylococcus and streptococcus species and gram-negative bacilli. fatalities resulting from bacterial contamination of refrigerated rbcs have occurred as well and more often involve cold-growing (cryophilic) bacteria. rbc transfusions contaminated by yersinia enterocolitica have been consistently reported for a decade. transfusion reactions caused by bacterial or endotoxin contamination are fortunately quite rare, but mortality exceeds %. signs and symptoms of reactions caused by microbial contamination overlap those of hemolytic transfusion reactions and consist primarily of fever and hypotension, along with other signs of endotoxic shock. if contamination is suspected promptly, a gram stain of the implicated unit can be prepared immediately and, if positive, appropriate antibiotic and supportive therapy instituted. autologous blood components may also be contaminated at the time of collection; therefore, reactions occurring in patients who are receiving their own blood should not be dismissed but should be evaluated as fully as though the patients had received allogeneic blood. the success of viral screening measures is most clearly illustrated by the fall in the risk for pth over the past decades. although pth continues to be a significant cause of morbidity and mortality, its nature has changed through the years with the stepwise institution of various donor screening measures. the elimination of paid donors and the successive introduction of immunologic tests for hepatitis b have resulted in a steady reduction in the rates of pth caused by hepatitis b virus (hbv), to approximately per million units of transfused blood products. although about % to % of hbv transmissions result in acute hepatitis, chronic hbv infection develops in less than % of such patients. in contrast, the risk for chronic hepatitis c virus (hcv) infection after transfusion is higher, nearly %, and the long-term risk for cirrhosis- or hepatocellular carcinoma-related mortality is about % over more than years after pth secondary to hcv. the clinical course of hepatitis a is generally milder, and the lack of a chronic carrier state means that with donor screening for symptoms of acute illness, the risk of transmission is much lower, estimated at less than one in a million units. the prevalence of hepatitis b surface antigenemia among first-time blood donors is . %, and the prevalence of hepatitis c antibodies in donors is approximately . % to . %. at this time, given the sensitivity of current screening assays, including the latest generation of enzyme immunoassays (eias) and nat, the current risk of pth resulting from hcv is believed to be about in , or less. although hbv is still implicated in pth (attributable to the seronegative "window" period in newly infected donors), the risk of transfusion-associated hepatitis b is about in , units. retroviruses, rna-based viruses characterized by their reverse transcriptase and integration into the host genome, and lentiviruses, a subset of retroviruses, are ubiquitous in animals and were first identified in humans in the early 1980s.
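before cataloguing the individual agents, two pieces of risk arithmetic are worth making explicit. first, per-unit risks of this kind compound over a course of transfusion: under the simplifying assumption of independent units, the probability of at least one transmission over n units is 1 − (1 − p)^n. the per-unit risks below are order-of-magnitude placeholders, not this chapter's figures.

```python
def cumulative_risk(per_unit_risk, units):
    """p(at least one transmission over a series of independently
    risky units) = 1 - (1 - p)^n."""
    return 1 - (1 - per_unit_risk) ** units

# illustrative, assumed per-unit risks only
for label, p in [("hbv", 1 / 200_000), ("hcv", 1 / 1_000_000)]:
    print(f"{label}: 20 units -> {cumulative_risk(p, 20):.1e}")
```

second, the residual risk attributable to the seronegative window period is commonly estimated with an incidence/window-period model: risk per unit is roughly the donor incidence rate multiplied by the length of the infectious window. the inputs below are assumed, illustrative values.

```python
def residual_risk_per_unit(incidence_per_100k_py, window_days):
    """incidence/window-period model: per-unit residual risk ~=
    donor incidence (per person-day) x infectious window (days)."""
    per_person_day = incidence_per_100k_py / 100_000 / 365
    return per_person_day * window_days

# e.g., an assumed incidence of 2 per 100,000 donor person-years
# and an assumed 11-day seronegative window
risk = residual_risk_per_unit(2, 11)
print(f"~1 in {1 / risk:,.0f} units")  # on the order of 1 in 1.7 million
```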
the retroviruses known to be capable of transmission by transfusion are hiv-1, hiv-2, and human t-cell leukemia/lymphoma virus (htlv) types i and ii. transfusion-associated aids was first reported in late 1982. the first report of an associated viral agent appeared the following year, and in march 1985 the screening enzyme-linked immunosorbent assay (elisa) to detect antibody to hiv-1 was licensed and immediately incorporated into the blood-screening process. improved confidential donor screening appeared to decrease the risk of infectious units appearing in the donor pool. the discovery that heat treatment reduced transmission resulted in a reduction in transmission by plasma products, especially to persons with hemophilia. clinical aids developed in more than % of recipients of infected blood products, and the vast majority succumbed to the disease. removal of donor units with seropositivity by elisa was insufficient to prevent transmission of hiv-1; several hundred cases were reported annually after introduction of the elisa test. subsequent development of an assay for the p24 antigen and then nat has lowered the risk of transfusion-associated hiv-1 infection to less than one in a million (see table - ). despite donor screening and sensitive assays, including eia, nat, and p24 antigen, an extremely small but finite risk of hiv-1 transmission by screened blood transfusions remains. this risk is largely due to the seronegative "window" period experienced by newly infected donors, estimated to average days. a second retrovirus, hiv-2, first described in residents of countries in west africa and subsequently detected in migrants to western europe, causes an immunodeficiency syndrome similar to that caused by hiv-1. although very few cases of hiv-2 have been reported in the united states, and there have been no reported transfusion-transmitted cases, experience with other retroviruses suggests that screening may prevent the majority of potential transmission; therefore, donated blood is now screened by an assay for the presence of antibody to hiv-2. the retrovirus htlv-i is the causative agent of adult t-cell leukemia (atl) and is strongly implicated in the chronic, progressive neurologic disorder termed tropical spastic paraparesis or htlv-i-associated myelopathy (tsp/ham). htlv-ii has been linked to hairy cell leukemia, but no transfusion-transmitted cases have been reported. the virus exhibits strong serologic cross-reactivity with htlv-i, such that screening assays fail to distinguish between antibodies to the two viruses. transfusion-transmitted htlv-i has been demonstrated. tsp/ham has developed in a small percentage of infected transfusion recipients, but no transfusion-associated cases of atl have been seen. approximately . % of donors in the united states are seropositive for htlv-i and htlv-ii; further testing reveals the majority to be htlv-ii. donated blood is currently screened for antibodies to htlv-i and htlv-ii. the estimated risk of htlv transmission by screened negative blood is believed to be in , to million. cmv is a human herpesvirus that establishes latent infection in the host's tissues, particularly leukocytes, and is transmitted by all cellular blood components. seropositivity, or the presence of antibody, denotes previous exposure to the virus but does not confer protective immunity. secondary reinfection or reactivation of latent infection can occur. antibodies to cmv persist for life and serve as a marker indicating the potential for transmission of live virus.
immunocompetent recipients of transfused cmv-positive blood experience minimal morbidity and mortality. the majority are asymptomatic, whereas a heterophile-negative mononucleosis syndrome may develop in a few. immunocompromised patients, however, may suffer life-threatening manifestations such as severe interstitial pneumonitis, gastroenteritis, hepatitis, or disseminated disease. several groups of patients are at particular risk (box - : seronegative pregnant women, seronegative premature infants weighing less than g, seronegative allogeneic or autologous bone marrow transplant recipients, and seronegative transplant recipients of seronegative organs), and these patients should receive blood incapable of transmitting the virus. other patients may benefit from cmv-negative blood as well, such as seronegative solid organ transplant recipients or autologous bmt patients. screening of donated blood for cmv is not routinely done but can be performed quickly if necessary. because the prevalence of donor seropositivity is quite high in some regions ( % to %), cmv-seronegative blood may not be readily available. blood that is leukocyte depleted ("cmv safe") may be as effective as seronegative blood in the prevention of cmv transmission, although a recent meta-analysis of clinical trials comparing the two methods suggests that cmv-negative blood products might have a slight advantage over leukocyte-depleted products. many blood-borne parasites may be transmitted by transfusion, although this is a rare occurrence in the united states because of donor screening questions and the low endemicity of the implicated agents. changing immigration patterns and worldwide travel, however, make transfusion-transmitted parasites an increasing concern. on a worldwide basis, malaria is the most important transfusion-transmitted infective organism, although only about three cases occur in the united states each year. such infections are manifested by delayed fever, chills, diaphoresis, and hemolysis, often masked by underlying medical conditions. fatalities have occurred. babesiosis, a tick-borne disease, is endemic in regions of the united states, especially the northeast, with a seroprevalence of about %. transfusion-transmitted cases have been reported, with asplenic or immunocompromised patients being particularly susceptible. with increases in the number of latin american immigrants to the united states, american trypanosomiasis (chagas' disease), which is endemic in latin american countries, has emerged as a potential pathogen. other parasitic diseases that have been transmitted by transfusion include toxoplasmosis, leishmaniasis, and lyme disease. parvovirus b19 is now recognized as a pathogen capable of transmission by transfusion, with typical clinical findings and the potential for severe hematologic complications. cases of epstein-barr virus infection with a typical mononucleosis-like illness have been reported after transfusion. west nile virus has also been transmitted by transfusion. h5n1 influenza, severe acute respiratory syndrome (sars), and other new viral infections should be capable of transmission by transfusion, although cases have not been reported and the prevalence of asymptomatic disease is unknown. a rising area of concern is the transmission of prion disease, either creutzfeldt-jakob disease or the variant form associated with bovine spongiform encephalopathy (bse). donor deferral criteria were implemented for these diseases, and transfusion transmission of the variant form has been reported in the united kingdom.
massive transfusion is defined as the administration of blood components in excess of one blood volume within a 24-hour period. in an average adult (70 kg), this represents approximately units of wb or the equivalent in prbcs, crystalloid solution, and other components. massive transfusion, especially in the range of or more units of blood products, causes complications not generally seen in usual transfusion practice: accumulation of undesirable substances present in banked blood and dilutional depletion of normal blood constituents that are lacking in stored units. trauma victims, surgical patients undergoing extensive procedures, and patients with vascular or coagulation disorders may be massively transfused in the critical care setting. survival of the massive transfusion episode is determined more by the nature and degree of the patient's injuries or medical conditions than by the transfusions themselves, but the adverse effects of massive transfusion can complicate patients' courses in the icu. transfusion of large quantities of stored blood deficient in functional platelets often results in hemostatic defects or outright thrombocytopenia. circulating platelets consistently decrease in inverse proportion to the amount of blood administered, with the hemostatically significant level of × 10⁹/l reached after units. functional defects have also been noted, and the bleeding time is prolonged. despite these laboratory changes, severe diffuse bleeding develops in less than % of massively transfused patients, and no laboratory studies predict those in whom it will. prophylactic platelet transfusion has not been shown to be of benefit. platelet counts may return to hemostatically effective levels quickly in patients with normal marrow function. currently, resuscitation of massively bleeding patients is most often accomplished with prbcs in combination with crystalloid solution. this should result in hemodilution to about % of normal plasma factor levels after the transfusion of about units; this factor level can effect normal hemostasis. in reality, however, crystalloids may be given in excess of prbcs, so less plasma protein may remain after units are transfused. bleeding is unlikely until prothrombin time (pt)/inr and ptt prolongations exceed . to . times the midpoint of the normal range, the equivalent of an inr approaching . . as with platelets, prophylactic administration of ffp has not proved effective in preventing diffuse bleeding; the decision to transfuse should thus be made on an individual basis, as determined by the presence of bleeding or unacceptable risk in patients with documented abnormalities in coagulation. one new area of controversy in the treatment of patients with massive hemorrhage is the use of recombinant activated factor vii. this agent was created for the treatment of hemophilia patients with high titers of antibodies to factor viii, which make them unable to benefit from transfusion of recombinant factor viii. activated factor vii bypasses that problem by binding to tissue factor and directly driving thrombin generation and hence fibrin formation. it is extremely expensive, has a short half-life, and carries a risk of inducing pathologic thrombosis, with potentially grave consequences. nevertheless, in numerous case reports, this agent appears to be potentially beneficial if used early in the resuscitation of massively injured patients.
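the dilutional fall in platelet count during massive transfusion behaves roughly like a continuous wash-out. the exponential model below is an idealization assumed for illustration (real counts usually fall more slowly because the spleen and marrow release platelets), and the starting count is an arbitrary example.

```python
import math

def diluted_count(initial_count, blood_volumes_transfused):
    """idealized continuous-exchange model: the fraction of the original
    platelets remaining after transfusing n blood volumes of
    platelet-free product is e^(-n)."""
    return initial_count * math.exp(-blood_volumes_transfused)

for n in (0.5, 1.0, 1.5, 2.0):
    print(f"{n} blood volumes: ~{diluted_count(250, n):.0f} x10^9/l")
```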
unfortunately, unsupervised use of activated factor vii has also resulted in thrombotic complications and a relative lack of success, both of which suggest that carefully controlled clinical trials are appropriate. blood preservative solutions contain excess citrate, which anticoagulates stored blood by binding ionized calcium. wb contains approximately . g of citrate/citric acid per unit in the plasma fraction. patients with normal liver function can metabolize the citrate load in 1 unit of wb in minutes, but hepatic impairment may extend removal to minutes or longer. toxicity may result when citrate is administered in excess of the metabolic rate, thereby causing a decrease in ionized calcium levels. although paresthesias, cramps, and myoclonus accompany citrate excess, the chief danger of hypocalcemia is depression of myocardial contractility and potential prolongation of the qt interval. because the effects of citrate are transient and the use of prbcs containing little residual citrated plasma is far more common than massive transfusion with wb, routine administration of calcium is not indicated; clinically significant rebound hypercalcemia may result. calcium infusion should be limited to hypoperfused patients with hepatic or cardiac failure who manifest citrate toxicity, and careful monitoring is essential. as potassium leaks from rbcs during storage, up to meq of extracellular potassium may accumulate in each unit. however, dangerous levels of potassium rarely develop in adults from stored blood; the potassium level is more likely to be determined by the patient's acid-base status. studies of massively transfused patients have demonstrated a wide range of potassium levels, with hypokalemia seen as frequently as hyperkalemia. because of the many physiologic mechanisms altered during resuscitation, including those of the respiratory, renal, cardiac, and hepatic systems, it is impossible to predict the net effect of massive transfusion on serum potassium levels. the ph of banked blood drops during storage, from . at the time of collection to as low as . after several weeks of storage. administration of large quantities of acidic blood, together with the metabolic acidosis common in these patients before resuscitation, would lead one to expect worsening acidosis as the outcome of massive transfusion. however, patients are more likely to exhibit metabolic alkalosis at the end of the transfusion episode, partly because of improved tissue perfusion and the metabolism of citrate and lactate to bicarbonate. patients in renal failure may be unable to handle the bicarbonate load and may require dialysis. acidosis persisting after transfusion suggests inadequate tissue perfusion. empiric administration of bicarbonate to counter the acid load is not warranted and may contribute to the deleterious effects of hypercapnia in patients with impaired ventilation. as discussed previously, the level of rbc-associated 2,3-dpg in banked blood declines during storage, which increases the affinity of hemoglobin for oxygen and thereby results in decreased oxygen off-loading to the tissues. even in massively transfused patients, it has been difficult to document a clinical impact of this shift, and no reliable method for restoring red cell 2,3-dpg has been developed. wb and prbcs are stored at approximately 4° c and require to minutes to warm to room temperature.
elective transfusions at standard flow rates are tolerated without the need to warm the blood; however, core body temperature, measured by esophageal probe, can fall to ° c or lower with the administration of large volumes of cold blood over a period of to hours. adverse effects of hypothermia include a decreased heart rate and myocardial contractility, cardiac arrhythmias, increased affinity of hemoglobin for oxygen resulting in decreased tissue oxygen delivery, dic, and impaired ability to metabolize the citrate load of stored blood. both blood warmers and patient warming may be instituted during massive transfusion, and patient core temperature should be monitored during such resuscitative efforts. whether massive transfusion in and of itself is a cause of ards is another source of controversy. there are certainly theoretical reasons why massive transfusion might precipitate ards: all cellular transfusions contain damaged or activated wbcs, cell membranes, aggregated platelets, and microthrombi, all of which are capable of lodging in and damaging pulmonary capillaries. despite this possibility, neither microfiltration of transfusions nor routine leukocyte depletion has shown a significant impact on the incidence of ards in massively transfused patients. certainly, other causes of ards exist in patients who undergo massive transfusion, and the possibility of volume overload and trali should be considered in the evaluation of patients with hypoxia and diffuse pulmonary infiltrates after massive transfusion. management of such patients is supportive, consistent with the overall management of massive transfusion. patients with autoimmune hemolytic anemia (aiha) have an autoantibody, usually of broad specificity, that fixes to their rbcs and triggers extravascular immune-mediated destruction. patients with aiha have a positive direct antiglobulin test (dat, commonly known as the coombs test) and varying degrees of hemolysis, and their autoantibodies cause agglutination of rbcs from all donors during crossmatching. if the hemolysis is brisk, patients may require red cell transfusion to support oxygen needs before medical management of the aiha becomes effective. transfusion is therefore difficult, both because agglutination during crossmatching interferes with proper identification of compatible units of rbcs and because the transfused rbcs are themselves subject to the same immune hemolysis as the host rbcs. many blood banks have methods for depletion of autoantibodies from the recipient's plasma and elution of antibodies from rbcs to arrive at a proper crossmatch. although such crossmatches are time consuming and not generally available on an emergency basis, they can be lifesaving. criteria for transfusion should remain the same as for other recipients. rbcs are crossmatched for red cell antigens in the abo and rh (d) groups and for other red cell antigens when antibodies are present. however, there are several hundred other red cell antigens in the human family, and with repeated transfusion recipients may become alloimmunized to other antigens. generally, alloimmunization occurs in approximately % of transfusions, but the prevalence of alloantibodies is higher in chronically transfused, relatively immunocompetent patients, especially african americans, whose distribution of red cell antigens varies significantly from that of the white population.
alloimmunization rates of % or higher may be found in chronically transfused patients with hemoglobinopathies who have not received rbcs matched for potent minor antigens such as kell, duffy, and lewis. alloimmunization may present difficulties in crossmatching of blood, to the point that compatible blood must be obtained from rare-donor registries, if at all. other patients present unresolved serologic problems in that the alloantibody is never precisely identified, yet the majority of blood available for transfusion is incompatible. the delay engendered by working with multiple or unidentified antibodies may be unacceptable in some critical care situations in which the need for oxygen-carrying capacity leaves no choice but to transfuse incompatible blood. the behavior of these antibodies in the laboratory may assist in predicting the clinical outcome of the incompatible transfusion. special procedures such as clearance studies, flow cytometry, and in vivo crossmatching (cautious administration of a small aliquot of blood, with subsequent observation of serum and urine for evidence of hemolysis) are useful if time permits. emergency transfusion of type o, rh-negative uncrossmatched blood is generally reserved for the resuscitation of trauma patients, for whom the delay required for crossmatching may be life-threatening. the risks of alloimmunization are generally accepted as low. even rh-positive type o rbcs may be used because rates of alloimmunization to rh (d) are low under the circumstances of emergency transfusion. dic can present the clinician with difficult therapeutic choices. this common disorder in critically ill patients may be manifested as severe hemorrhage or thrombosis. therapy is primarily directed at alleviating the cause and supporting the patient. supportive therapy includes the transfusion of components needed to correct the bleeding diathesis caused by the consumption of platelets and fibrinogen, in addition to prbcs to restore oxygen-carrying capacity. platelets and fibrinogen (as cryoprecipitate) are the most useful components for repairing the coagulopathy, but their use risks merely "fueling the fire" and increasing the microthrombosis of dic. heparin anticoagulation is controversial and may increase the risk of bleeding, especially if depleted factors are not replenished. no definitive clinical trials have endorsed the routine use of heparin, and randomized trials of other components and coagulation inhibitors have uniformly been negative. in general, the use of heparin and antifibrinolytic agents has been confined to the most severe and protracted cases of dic. cirrhotic patients and those with fulminant hepatic failure have a variety of hemostatic disorders that complicate transfusion management of a bleeding patient. hepatic synthesis of coagulation factors may be markedly diminished, necessitating replacement with ffp or cryoprecipitate. patterns of factor diminution may vary between acute hepatic necrosis and chronic cirrhosis. associated hemodynamic alterations may make it impossible to administer the volumes required for effective hemostasis, however, and any effect is transient. the use of factor concentrates or antifibrinolytic agents may precipitate thrombosis. activation of fibrinolysis and decreased clearance of activated factors may produce or mimic chronic dic, further exacerbating the factor deficiencies and impairing coagulation.
abnormal platelet function and thrombocytopenia may contribute to the coagulopathy of liver disease, with concomitant splenomegaly reducing the effectiveness of platelet transfusions. bleeding in uremic patients is exacerbated by an acquired platelet defect, in part secondary to dialyzable circulating molecules soluble in platelet membranes. platelet-associated vwf and plasma high-molecular-weight vwf multimers have also been shown to be decreased, which may explain the benefit shown by ddavp and cryoprecipitate in shortening the bleeding time and improving hemostasis in some uremic patients. raising the hct by red cell transfusion in anemic patients has also been shown to shorten the bleeding time, presumably as a result of vessel wall-laminar blood flow interaction. transfusion of platelets in the absence of thrombocytopenia is unlikely to be of benefit because the transfused platelets rapidly become dysfunctional. more aggressive hemodialysis is the most widely accepted method of reducing platelet dysfunction. bmt patients are vulnerable to the severe infectious and toxic side effects of ablative treatment and hence may be cared for in critical care units. these patients may have intensive red cell and platelet transfusion requirements and need specialized products such as cmv-negative and irradiated blood components. a blood bank problem uniquely encountered in bmt is the need to switch the patient's abo group because of an abo-mismatched transplant, necessitating an exchange transfusion of red cells and plasma-containing products (i.e., platelet concentrates) of differing abo type to avoid hemolysis of donor and recipient cells. bmt patients may also manifest an increased rate of delayed hemolytic reactions as donor "passenger" lymphocytes recognize recipient or transfused red cell antigens. patients should be monitored particularly closely between days and after a minor-mismatched allogeneic transplant, and aggressive transfusion should be undertaken if the hemoglobin level falls and the dat result becomes positive. the safest transfusion is the one that is not given; therefore, alternatives to blood component therapy continue to be sought and are valuable adjuncts in some instances. it is possible to limit homologous blood exposure by the appropriate use of pharmacologic agents that promote hemostasis and by the administration of recombinant hematopoietic growth factors or biologic growth modifiers to stimulate marrow hematopoiesis. only one substitute for rbc transfusions has been approved in the united states, a polyfluorocarbon oxygen carrier with significant limitations as a blood substitute. other preparations that have been explored in clinical trials are cell-free hemoglobin solutions cross-linked or polymerized by chemical manipulation to prevent rapid clearance from the circulation. they are intended to provide short-term oxygen-carrying capacity for acutely ill patients and have the advantage of not requiring crossmatching or infection control. although these proposed products may have a longer shelf-life and are easier to transport, their drawbacks are many. most have a circulatory half-life of only about hours. the oxygen dissociation curve of these substitutes is also frequently unfavorable: either a high fio2 is required to "load" these molecules or they are less likely to deliver oxygen efficiently at lower po2 levels.
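the clinical weight of hemoglobin affinity and oxygen off-loading is easiest to see in the standard arterial oxygen content equation, in which dissolved oxygen contributes almost nothing next to hemoglobin-bound oxygen. a minimal sketch, using the standard constants:

```python
def arterial_o2_content(hb_g_dl, sao2, pao2_mm_hg):
    """cao2 (ml o2/dl) = 1.34 ml/g x hb x sao2 + 0.003 x pao2."""
    return 1.34 * hb_g_dl * sao2 + 0.003 * pao2_mm_hg

# raising hb from 7 to 9 g/dl at sao2 0.98 and pao2 95 mm hg:
print(f"{arterial_o2_content(7, 0.98, 95):.1f} ml/dl")  # ~9.5
print(f"{arterial_o2_content(9, 0.98, 95):.1f} ml/dl")  # ~12.1
```

the same equation underlies the key point at the end of this chapter: red cells, and any substitute for them, are given to increase oxygen-carrying capacity, so a carrier that loads or unloads oxygen poorly gains little.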
because the hemoglobin source is reclaimed bovine or human red cells, it is unlikely that patients who do not accept blood components because of their religious beliefs (jehovah's witnesses) will accept these types of hemoglobin solutions. one product in development uses recombinant technology to generate hemoglobin, and it is hoped that this solution may be acceptable to these patients. the licensed perfluorocarbon solutions have failed to demonstrate any utility as intravascular oxygen carriers because of their unfavorable p50 (oxygen half-saturation pressure) and oxygen off-loading characteristics. they are finding limited application in regional oxygenation during angioplasty or stent placement procedures and a more novel use in "liquid ventilation," the ventilation of intubated patients experiencing severe pulmonary compromise with superoxygenated perfluorocarbon solutions in place of oxygen-enriched air. the synthetic vasopressin analogue ddavp increases plasma factor viii:c and promotes the release of vwf from endothelial stores. ddavp has provided effective hemostasis in bleeding patients with mild hemophilia a and type i von willebrand disease and has been used as prophylaxis for patients undergoing surgery. ddavp reportedly improves platelet function in some patients with qualitative platelet disorders associated with uremia, cirrhosis, and aspirin ingestion. studies of its efficacy in cardiopulmonary bypass procedures are conflicting, but a subset of these patients may benefit. the chief drawback to its use is tachyphylaxis, which develops in essentially all cases after short-term repeated administration. the lysine analogues ε-aminocaproic acid and tranexamic acid inhibit fibrinolysis by blocking the binding of plasminogen and plasmin to fibrin. these antifibrinolytic agents may decrease bleeding, and thus the need for homologous blood components, in patients with hemophilia, thrombocytopenia, and systemic fibrinolysis. a novel and effective use of tranexamic acid involves administration as a mouthwash in preparation for oral surgery in patients with hemophilia or those receiving oral anticoagulant therapy. the most serious side effect of these agents when systemically administered is thrombosis; thus, it is important to use them appropriately and to monitor the patient carefully during their use. aprotinin is a naturally occurring bovine serine protease inhibitor that acts on plasma serine proteases such as plasmin, kallikrein, trypsin, and some coagulation proteins. aprotinin has been shown to reduce blood loss in patients undergoing cardiopulmonary bypass surgery by inhibiting fibrinolysis and preventing platelet damage. however, more recent reports of renal injury and long-term mortality may mean an end to its use. aprotinin has been used extensively in liver transplantation, which involves high blood loss. repeated administration poses a risk of anaphylaxis and renal dysfunction. when time permits, vitamin k is the preferred agent for reversing the coagulopathy induced by oral anticoagulants. normalization of the pt can be seen in as few as to hours. additionally, selected cirrhotic patients may exhibit improvement in the pt when treated with therapeutic doses of vitamin k. many patients in critical care units exhibit a prolonged pt, especially if dietary intake is limited and broad-spectrum antibiotic therapy is given. vitamin k is a safe and effective agent for reversing this effect.
recombinant erythropoietin (epo) has dramatically reduced the red cell transfusion requirements of patients in chronic renal failure. epo also has applications in the adjunctive treatment of the anemia of premature infants and the anemia of chronic disease, especially rheumatoid arthritis, cancer, and aids. studies of its efficacy in reducing perioperative red cell transfusion requirements, by increasing the yield of predeposited autologous blood or by stimulating bone marrow synthesis after surgery, have shown benefit in reducing blood transfusion, although preoperative planning and autologous deposits are required. in contrast, and probably because the impact of epo is not immediate, the efficacy of epo in the icu is unproven and awaits the results of large clinical trials. recombinant growth factors such as granulocyte-macrophage colony-stimulating factor (gm-csf) and g-csf stimulate marrow production of leukocytes by enhancing several different granulocyte and macrophage functions. these agents are finding application in reducing the neutropenic period in bmt and cancer chemotherapy by increasing the leukocyte count in hypoproliferative marrow conditions. these myeloid growth factors are replacing granulocyte transfusions for their few remaining indications. cell salvage equipment has been in clinical use for several decades, and although cell salvage is clearly capable of rescuing otherwise "lost" red cells, its full impact on transfusion requirements has been poorly documented. cell salvage generally consists of collection of shed blood from a clean, uncontaminated operating field, followed by removal of the cellular elements and retransfusion into the patient. cell salvage has been used both intraoperatively and postoperatively, especially in cardiac surgery. although the clinical studies of cell salvage have many flaws, the overall success of this therapy in reducing transfusion has resulted in its wide application. risks include bacterial contamination, febrile reactions, triggering of dic, and coagulopathy as a result of dilution. when combined with acute intraoperative hemodilution, this technology is also potentially cost saving. the word apheresis is derived from the greek aphairein, "to take away"; thus, therapeutic hemapheresis is performed to remove unwanted plasma constituents (plasmapheresis) or blood cells (cytapheresis). automated cell separators use centrifugation or membrane filtration to remove and concentrate the selected blood element. many of the same devices used to prepare apheresis blood components for transfusion are used to perform patient procedures, so therapeutic apheresis is often administered under the auspices of the transfusion medicine service. rapid removal of plasma or cells may find several applications in intensive care practice (box - : symptomatic hyperviscosity; thrombotic thrombocytopenic purpura; neurologic diseases such as myasthenia gravis and guillain-barré syndrome; uncontrolled systemic vasculitis with critical end-organ injury; symptomatic leukocytosis; symptomatic thrombocythemia; and sickle cell anemia crisis with pulmonary or central nervous system manifestations). the goal of plasmapheresis, or plasma exchange (pe), is to remove or reduce the levels of an undesirable plasma constituent or, alternatively, by means of plasma replacement, to supply a missing substance. the agent to be removed by pe is thought to be an autoantibody in some of the neurologic, renal, or hematologic conditions treated in this manner. immunomodulation by pe is another explanation for its effect, a theory indirectly supported by the equivalent efficacy of ivig therapy for several of these disorders. pe for the amelioration of hyperviscosity from either excess igm in waldenström's macroglobulinemia or excess ig in multiple myeloma is an effective temporizing measure in the treatment of these conditions.
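the exchange kinetics quantified in the next paragraph follow a simple first-order wash-out: with continuous exchange, a constant blood volume, and no synthesis or redistribution, the fraction of a purely intravascular constituent removed after n plasma volumes is 1 − e^(−n). a minimal sketch of that model:

```python
import math

def fraction_removed(plasma_volumes_exchanged):
    """first-order plasma exchange model: fraction removed = 1 - e^(-n),
    assuming constant blood volume and no synthesis or mobilization."""
    return 1 - math.exp(-plasma_volumes_exchanged)

for n in (1, 2, 3):
    print(f"{n} plasma volume(s): {fraction_removed(n):.0%} removed")
# ~63%, ~86%, ~95% under these idealized assumptions
```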
plasmapheresis with pe is the standard therapy for ttp. unfortunately, few controlled trials of pe exist, although anecdotal reports abound. pe is seldom the definitive treatment of most of these conditions and is used most appropriately as a short-term adjunct to other medical modalities.

the kinetics of pe predict that a one-volume exchange removes % of a given plasma constituent if the blood volume does not change and no additional synthesis or mobilization of the substance occurs. two or three volume exchanges remove % and %, respectively. highly protein-bound, intravascularly concentrated substances are most efficiently removed, whereas substances with a large volume of distribution, active synthesis, or large extravascular stores, such as igg, are removed at less than predicted rates. the usual short-term intense course of pe schedules five one-volume exchanges (approximately l in normal-sized adults) over a -day period.

the appropriate replacement fluid in most conditions is an albumin-saline mixture, which provides oncotic support without the risk of disease transmission borne by ffp. pe in patients with ttp uses replacement with ffp to supply the plasma protease that is consumed during the disease. side effects of pe are relatively common ( % to % of procedures) but generally minor, and are related to vascular access, temporary discomfort, or vasomotor symptoms. patient death is rarely due to the procedure itself and is largely of cardiopulmonary causes. plasma proteins such as coagulation factors, immunoglobulins, and complement will be removed by pe, and laboratory test results of coagulation and electrolytes may be deranged in the hours after pe. clinical bleeding is rarely observed. most coagulation factors do not fall below hemostatic levels and recover within hours, with the exception of fibrinogen, which may require several days for complete replenishment.

leukapheresis may be required to urgently reduce the wbc count in patients with acute myeloid or lymphoblastic leukemia or chronic myelogenous leukemia with peripheral counts of × /l or greater. each procedure is expected to drop the count by a third, but the effect is short lived. leukapheresis should be reserved for use only as an adjunct to chemotherapy in patients with pulmonary or cerebral leukostasis or for cytoreduction before chemotherapy in patients at risk for severe tumor lysis syndrome. plateletpheresis may be beneficial as short-term therapy in patients with symptomatic thrombocythemia manifested as cerebral or myocardial ischemia, pulmonary emboli, or gastrointestinal bleeding. each procedure should effect a % reduction in the platelet count. cytotoxic therapy should be started concomitantly as the definitive treatment.

box - . applications of therapeutic apheresis in intensive care practice: symptomatic hyperviscosity; thrombotic thrombocytopenic purpura; neurologic diseases (myasthenia gravis, guillain-barré syndrome); uncontrolled systemic vasculitis with critical end-organ injury; symptomatic leukocytosis; symptomatic thrombocythemia; sickle cell anemia crisis (pulmonary or central nervous system manifestations).

litigation related to blood transfusion has become prominent, particularly after the epidemic of transfusion-associated aids. most states regulate blood banking and medical practice, but blood products are regarded as a service, not as a commodity, so standard product liability does not pertain to blood components.
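the exchange kinetics summarized above follow a simple single-compartment model that is easy to make explicit. the short python sketch below is illustrative only; because the removal percentages are elided in this text, the figures it prints follow from the standard exponential model (fraction removed after n plasma volumes = 1 − e^−n) rather than from the source.

```python
import math

def fraction_removed(plasma_volumes: float) -> float:
    """Fraction of an intravascular constituent removed by continuous-flow
    plasma exchange, assuming one well-mixed compartment, constant blood
    volume, and no ongoing synthesis or mobilization of the substance."""
    return 1.0 - math.exp(-plasma_volumes)

for n in (1, 2, 3):
    print(f"{n}-volume exchange removes ~{fraction_removed(n):.0%}")
# prints ~63%, ~86%, ~95%; substances with large extravascular stores,
# active synthesis, or a large volume of distribution (e.g. IgG) are
# removed more slowly than this idealized model predicts.
```

the same diminishing-returns logic explains why repeated cytapheresis procedures (leukapheresis, plateletpheresis) give only transient reductions when the removed cell compartment is rapidly replenished.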
however, negligence in the course of preparing, testing, transferring, crossmatching, or administering blood products is still a potential cause for legal action. every clinician who orders transfusions must be aware that blood components, like drugs, are approved for specific uses and that the indications should be clearly documented in the medical record.

the informed consent of the patient is an important area of potential liability. the joint commission on accreditation of healthcare organizations (jcaho) has required written patient consent for blood transfusions since . what constitutes adequate informed consent, and who is responsible for advising the patient, are still in contention. elements of informed consent include an understanding of the need for transfusion, its risks and benefits, and the alternatives, including the risk of not undergoing transfusion, as well as the opportunity to ask questions. whether the clinician documents informed consent with an individual progress note in the patient record or with a standardized form is generally established as institutional policy. similarly, institutions vary with respect to policies for consenting adults who are temporarily incompetent, such as sedated patients in the icu.

a competent adult patient may refuse blood transfusion, and jehovah's witnesses commonly do so for religious reasons. case law is clear in upholding this right of the patient, which extends to care given at such time as the patient may become incompetent (i.e., comatose), provided such refusal was expressed before the patient became incompetent. courts will usually order a lifesaving transfusion for minors. exceptions have been made in the case of some "emancipated minors" who are at the age of reason. most states have evoked a "special interest" in the welfare of a fetus in ordering transfusions to pregnant women.

the advent of sentinel event reviews and other quality management procedures for patient safety has had an impact on transfusion practice as well. procedures for patient identification before surgical procedures, including devices such as bar code readers, have also been applied to transfusion practice. however, annual sentinel event reviews reporting transfusion errors have remained constant according to jcaho records.

■ blood components should be prescribed like drugs. appropriate blood component therapy requires that the specific blood product needed for a clear indication be prescribed, with avoidance of a formulaic approach.
■ red blood cells should be transfused only to increase oxygen-carrying capacity. transfusion decisions should be based on individual patient physiology. the majority of patients with hemoglobin levels greater than or g/l will not require transfusion unless they have limited cardiopulmonary reserve or active bleeding.
■ platelet transfusions are indicated for patients who are bleeding because of thrombocytopenia or functional platelet defects. guidelines for platelet transfusion are also conservative. prophylactic platelet transfusion remains controversial and is not warranted in many situations.
■ fresh frozen plasma is indicated for the repletion of coagulation factors in bleeding patients deficient in those factors or to provide specific plasma proteins that cannot be obtained from safer sources.
■ cryoprecipitate is a concentrated source of fibrinogen and selected coagulation factors. cryoprecipitate may be more helpful than fresh frozen plasma in correcting the hypofibrinogenemia of dilutional or consumptive coagulopathy.
■ adverse reactions to blood components occur in % to % of transfusion episodes. adherence to routine protocols for the evaluation of transfusion reactions may save lives.
■ acute hemolytic reactions are the leading cause of immediate transfusion fatalities. prevention of these reactions requires strict adherence to transfusion and patient identification procedures.
■ transmission of infectious agents by transfusion has been markedly reduced, and bacterial infection is now the most common infectious complication of transfusion.
■ adverse effects unique to massive transfusion are likely to occur in the icu and complicate the management of critically ill or severely injured patients. component therapy for such patients should remain conservative. the emerging role of activated factor vii in the treatment of these patients requires further evaluation.
■ informed consent for blood transfusion is a standard of practice. a competent adult has the legal right to refuse blood transfusion. consent in critically ill patients remains subject to individual institution policies.
key: cord- -e kuwf authors: nan title: opinion of the scientific panel on animal health and welfare (ahaw) on a request from the commission related with the risks of poor welfare in intensive calf farming systems date: - - journal: efsa j doi: . /j.efsa. . sha: doc_id: cord_uid: e kuwf

summary

efsa has been requested by the european commission to issue a scientific opinion on animal health and welfare aspects of intensive calf farming systems and their ability to comply with the requirements of the well-being of calves from the pathological, zootechnical, physiological and behavioural points of view. in particular, the commission asked efsa to update the findings of the scientific veterinary committee (animal welfare section) report on the welfare of calves of november , in the light of more recent data on this issue. where relevant, the possible food safety implications of different farming systems should also be considered. in this report a risk assessment was made, and the relevant conclusions and recommendations form the scientific opinion by the ahaw panel.

the svc ( ) report contains information on measurements of welfare, needs of calves, descriptions of current housing systems, chapters on types of feed and feeding systems, weaning of calves, housing and pen design, climate, man-animal relationships, dehorning and castration.
further chapters covered economic considerations of systems and for improving welfare. in the report, conclusions were made on general management, housing, food and water, and economics. the present report, "the risks of poor welfare in intensive calf farming systems", is an update of the previous svc report, with the exception of economic aspects, which are outside the mandate for this report.

the various factors potentially affecting calves' health and welfare, already extensively listed in the report of the scientific veterinary committee animal welfare section (svc, ), are updated, and it is subsequently systematically determined whether they constitute a potential hazard or risk. to the latter end, their severity and likelihood of occurrence in animal (sub)populations were evaluated and associated risks to calf welfare estimated, hence providing the basis for risk managers to decide which measures could be contemplated to reduce or eliminate such risks. in line with the terms of reference, the working group restricted itself to (in essence) a qualitative risk assessment.

although it is agreed that welfare and health of calves can be substantially affected in the course of and as a result of transport and slaughter, this report does not consider animal health and welfare aspects of calves during transport and slaughter; such information can be found in a recently issued comprehensive report of the scientific committee on animal health and animal welfare (scahaw) on "the welfare of animals during transport (details for horses, pigs, sheep and cattle)", which was adopted on march (dg sanco, ), and in the efsa report "welfare aspects of animal stunning and killing methods" (ahaw / ).

in relation to the food safety aspects, the main foodborne hazards associated with calf farming are salmonella spp., human pathogenic verotoxigenic escherichia coli (hp-vtec), thermophilic campylobacter spp., mycobacterium bovis, taenia saginata cysticercus and cryptosporidium parvum/giardia duodenalis. present knowledge and published data are insufficient to produce a universal risk assessment enabling quantitative food safety categorization/ranking of different types of calf farming systems. nevertheless, the main risk factors contributing to increased prevalence/levels of the above foodborne pathogens, as well as generic principles for risk reduction, are known. the latter are based on the implementation of effective farm management (e.g. qa, husbandry, herd health plans, biosecurity) and hygiene measures (e.g. gfp-ghp).

in general, the conclusions made in the previous svc report remain; however, recent research has provided for some additional conclusions. the risk analysis is presented in the tables of annex . the graphics in these tables are not intended to represent numerical relationships but rather qualitative relations. in some instances the exposure could not be estimated due to lack of data, in which cases the risks were labelled "exposure data not available". the following major and minor risks for poor animal health and welfare have been identified for one or several of the various husbandry systems considered: the hazards of iron deficiency and insufficient floor space are considered to be very serious, the hazard of inadequate health monitoring is considered to be serious, and the hazards of exposure to inadequate haemoglobin monitoring, allergenic proteins and too rich a diet are considered to be moderately serious.
for these hazards there is no consensus on the exposure of calves, mainly due to lack of data, which is why it is recommended that further studies be made to provide evidence for an exposure assessment. regarding castration and dehorning (and disbudding) without anaesthetic drugs, there is variation in national legislation, which is why the risk of poor welfare in relation to castration and dehorning has a wide range between countries.

council directive / /eec laying down minimum standards for the protection of calves, as amended by council directive / /ec, requires the commission to submit to the council a report, based on a scientific opinion, on intensive calf farming systems which comply with the requirements of the well-being of calves from the pathological, zootechnical, physiological and behavioural points of view. the commission's report will be drawn up also taking into account the socio-economic implications of different calf farming systems. it should be noted that the scientific veterinary committee (animal welfare section) adopted a report on the welfare of calves on november , which should serve as background to the commission's request and the preparation of the new efsa scientific opinion. in particular, the commission requires efsa to consider the need to update the findings of the scientific veterinary committee's opinion in light of the availability of more recent data on this issue. where relevant, the possible food safety implications of different farming systems should also be considered.

efsa has been requested by the european commission to issue a scientific opinion on animal health and welfare aspects of intensive calf farming systems and their ability to comply with the requirements of the well-being of calves from the pathological, zootechnical, physiological and behavioural points of view. in particular, the commission requires efsa to update the findings of the scientific veterinary committee (animal welfare section) report on the welfare of calves of november in light of more recent data on this issue. where relevant, the possible food safety implications of different farming systems should also be considered.

the mandate outlined above was accepted by the panel on animal health and welfare (ahaw) at the plenary meeting on / march . it was decided to establish a working group of ahaw experts (wg) chaired by one panel member; the plenary therefore entrusted a scientific report and risk assessment to a working group under the chairmanship of prof. bo algers. the members of the working group are listed at the end of this report. the scientific report is considered for the discussion to establish a risk assessment and the relevant conclusions and recommendations forming the scientific opinion by the ahaw panel. according to the mandate of efsa, ethical, socio-economic, cultural and religious aspects are outside the scope of this scientific opinion.

the working group set out to produce a document in which the various factors potentially affecting calves' health and welfare, already extensively listed in the report of the scientific veterinary committee animal welfare section (svc, ), are updated, and subsequently to systematically determine whether these factors constitute a potential hazard or risk.
to the latter end, their severity and likelihood of occurrence in animal (sub)populations were evaluated and associated risks to calf welfare estimated, hence providing the basis for risk managers to decide which measures could be contemplated to reduce or eliminate such risks. it should be noted, however, that this does not imply that a hazard that has a serious effect on just a few animals should not be dealt with by managers at farm level, as the suffering imposed on some animals constitutes a major welfare problem for those individuals.

in line with the terms of reference, the working group restricted itself to (in essence qualitative) risk assessment, i.e. only one of the three elements essential to risk analysis. a risk assessment approach was followed similar to the one generally adopted when assessing microbiological risks, i.e. along the lines suggested at the nd session of the codex alimentarius commission (cac, ). incidentally, these guidelines have been characterized by the cac as 'interim' because they are subject to modification in the light of developments in the science of risk analysis and as a result of efforts to harmonize definitions across various disciplines. cac's guidelines are in essence exclusively formulated for the purpose of assessing risks related to microbiological, chemical or physical agents of serious concern to public health. consequently, considering their disciplinary focus, the working group had to adapt the cac definitions to serve its purpose. these adapted definitions have, in alphabetical order, been included in chapter (see risk analysis terminology).

the objectives of this report are to review and report recent scientific literature on the welfare, including the health, of intensively reared calves; to report on recent findings as an update to the scientific veterinary committee's previous report; and to make a qualitative risk assessment concerning the welfare of intensively kept calves. where relevant, food safety implications of different farming systems are also considered.

the report is structured in five major parts. the first three follow the scientific veterinary committee's previous report "on the welfare of calves", with introductory chapters - on background, measurements and needs in relation to calf welfare, chapter describing housing, diet and management, and chapter describing comparison of systems and factors. in chapter , common diseases and the use of antibiotics are described. the other two parts involve aspects of meat quality and food safety (chapter ) and the risk assessment (chapter ). conclusions and recommendations from the previous svc document, together with updated conclusions derived from recent research findings, are presented in chapter .

effect of transport and slaughter on calves' health and welfare

although it is agreed that welfare and health of calves can be substantially affected in the course of and as a result of transport, this report does not consider animal health and welfare aspects of calves during transport, because there is already a comprehensive recent report of the scientific committee on animal health and animal welfare (scahaw) on "the welfare of animals during transport (details for horses, pigs, sheep and cattle)", which was adopted on march (dg sanco, ). that report takes into account all aspects related to transport that could affect the health and welfare of cattle and calves, including the direct effects of transport on the animals and the effects of transport on disease transmission.
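as an aside to the severity-and-likelihood methodology described at the start of this section, the following minimal python sketch shows one way ordinal severity and exposure-likelihood scores could be combined into qualitative risk categories. the scales, labels and cut-offs are illustrative assumptions made for exposition, not the panel's actual scoring rules.

```python
# Hypothetical qualitative risk matrix: the severity of a hazard and the
# likelihood of calves being exposed to it are each scored on an ordinal
# scale and multiplied into a crude risk score. All scales and thresholds
# here are assumptions made for illustration.
SEVERITY = {"limited": 1, "moderate": 2, "serious": 3, "very serious": 4}
LIKELIHOOD = {"rare": 1, "occasional": 2, "common": 3, "widespread": 4}

def risk_category(severity: str, likelihood: str) -> str:
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 12:
        return "major risk"
    if score >= 6:
        return "minor risk"
    return "negligible risk"

# e.g. a very serious hazard to which many calves are exposed:
print(risk_category("very serious", "widespread"))  # -> major risk
print(risk_category("moderate", "rare"))            # -> negligible risk
# where exposure cannot be estimated, the report instead labels the risk
# "exposure data not available" rather than assigning a category.
```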
the scahaw transport report also reviews the loading methods and handling facilities for cattle, the floor space allowance, the relationships between stocking and the density requirements, the vehicle design, space requirements and ventilation for cattle transporters (see also the ahaw scientific opinion related to standards for the microclimate inside animal road transport vehicles, efsa-q- - ), the behaviour of cattle during road transport, the road conditions, long distance transport and the travel times. recommendations for all these aspects are also given in that report.

feeding and housing systems, weaning strategies and quality of solid and liquid feed

feeding systems and weaning strategies: recommendations

without a fully functional rumen, calves will be unable to utilise nutrients provided in the post-weaning dry feed diet. attention must be paid to the type of forage and the consistency of the particle size of starter grain in order to achieve proper rumen development. calf weaning should be based on the amount of dry feed calves ingest per day, not on their age or weight, and calf starter should be made available five to days after birth. a calf consuming . kg of dry feed or more on three consecutive days is ready for weaning. when calves are fed low levels of milk to encourage early consumption of dry food, weaning can be done abruptly. in contrast, if milk is given in large amounts, weaning may require two to three weeks of slow transition to avoid a setback in growth.

the provision of solid feeds with adequate content and balance to veal calves is a prerequisite for the development of a healthy and functional rumen, the prevention of abnormal oral behaviours, and the stimulation of normal rumination activity. although some solid feeds may exacerbate problems with abomasal ulcers in milk-fed veal calves, properly balanced rations seem to moderate this effect. nutritional factors are clearly involved in the etiology of abomasal ulcers in veal calves; important elements include the consumption of large quantities of milk replacer and the interaction between a milk replacer diet and the provision of roughage. if vegetable proteins are not properly treated, milk replacers may cause hypersensitivity reactions in the gut, which may compromise calf welfare. it is recommended that solid feeds provided to veal calves, in addition to milk replacer, are adequately balanced in terms of the amount of fibrous material, which will promote rumination, and other components such as proteins and carbohydrates, which stimulate rumen development and support a healthy function of the digestive system. since milk replacer formulations are frequently changing, it is recommended to carefully and consistently examine the allergenic properties and other possibly detrimental effects of all milk replacers before they are used on a large scale.

if the concentration of haemoglobin in the blood of calves drops below . mmol l-1, the ability of the calf to be normally active, as well as lymphocyte count and immune system function, are substantially impaired, and there is reduced growth rate. below . mmol l-1, veal calves exhibit a number of adaptations to iron deficiency, including elevated heart rate, elevated urinary noradrenaline and altered reactivity of the hpa axis. there is a lack of data on the variability in groups of calves. hence, when haemoglobin levels are found to be below . mmol l-1 in groups of young veal calves, it is field practice to give supplementary iron.
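the reason a group-mean haemoglobin check must target a mean well above the individual welfare threshold can be illustrated with a simple distributional argument. in the python sketch below, all numbers are placeholders (the report's actual thresholds are elided in this text): blood haemoglobin in a group is treated as approximately normally distributed, and the expected share of individual calves falling below a minimum is computed from the group mean and spread.

```python
from statistics import NormalDist

def fraction_below(threshold: float, group_mean: float, group_sd: float) -> float:
    """Expected share of calves whose blood haemoglobin falls below
    `threshold`, assuming an approximately normal distribution within
    the group. Units: mmol/l. Illustrative model only."""
    return NormalDist(mu=group_mean, sigma=group_sd).cdf(threshold)

# Placeholder values: individual minimum 4.5 mmol/l, within-group sd 0.8.
for mean in (5.0, 5.5, 6.0, 6.5):
    share = fraction_below(4.5, mean, 0.8)
    print(f"group mean {mean}: ~{share:.1%} of calves below the minimum")
# the higher the group mean for a given spread, the smaller the share of
# individuals below the welfare threshold; hence a mean-based target must
# sit substantially above the individual minimum, with an adequate sample size.
```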
for older calves, including those in the last four weeks before slaughter, efficient production is possible in individual calves whose haemoglobin concentration is above . mmol l-1. if the concentration of haemoglobin in blood is not checked at all, there is a high risk of anaemia, which is associated with poor welfare, for all calves fed a diet with a very low iron content. anaemia can be identified and quantified adequately if checks are carried out on veal production calves of - weeks, for example when the calves are brought into a unit, between - weeks of fattening, and during the last four weeks before slaughter. if the concentration of haemoglobin in the blood of a group of calves during the last four weeks before slaughter is a mean of . mmol l-1, some calves may have a concentration substantially lower than the group mean, and hence their welfare may be poor.

in order to avoid anaemia levels that are associated with poor welfare, because normal activity is difficult or not possible and other functions are impaired, it is advisable that diets be provided that result in blood haemoglobin concentrations of at least . mmol l-1 throughout the life of the calf. in order to avoid serious impairment of immune system function, and hence poor welfare, no individual calf should have a blood haemoglobin concentration lower than . mmol l-1. in most cases this is achieved by adjusting the concentration of iron in the diet and having an adequate checking system so that the above condition is avoided. other treatment may be needed for calves with clinical conditions which cause anaemia but which are not related to diet.

since the lowest haemoglobin concentrations in the blood of veal calves are usually reached during the last four weeks before slaughter, these blood concentrations should be checked at this time. such controls would help to see whether measures need to be taken or not. a checking system using a mean level, but whose aim is to avoid the risk of a low haemoglobin concentration in any individual lower than . mmol l-1, would have to use a mean substantially higher than . mmol l-1, probably mmol l-1, and an appropriate sample size. in order to avoid poor welfare associated with anaemia, as explained in the conclusions above, measurements of average blood haemoglobin concentration are not a satisfactory means of avoiding poor welfare, but the use of a minimum level of . mmol l-1 for individual calves would achieve this. there is a lack of data on the haemoglobin levels and variation in groups of calves at slaughter. to gain more information as a basis for further actions and recommendations, it is advisable to perform sampling of calves at slaughter, checking the haemoglobin level on a random basis in groups of calves.

space and pen design: recommendations

space should be enough to allow animals to fulfil their needs for social behaviour, lying and grooming. as the pen shape affects the use of space by animals, pens should be rectangular rather than square, and pen space should be divided into different usable areas. as the floor type affects the resting and lying postures of calves, it should be comfortable. wet floors should be avoided due to thermal and resting problems.

degree of social contact: conclusions

group housing can help calves to acquire social skills. some experience of mixing is important, as calves that have been reared for a while in groups dominate calves that have always been in individual crates.
when calves are mixed together in the first few days of life, and then kept for some weeks in a social group, there may be poor welfare because of the following risks:
1. especially when individuals are provided with inadequate access to teats and roughage in the diet, cross-sucking and other abnormal sucking behaviour may occur.
2. some individuals may be unaccustomed to the food access method; for example, they may have only received food via a teat, and may find it difficult to drink from a bucket.
3. calves coming from different buildings, perhaps from different farms, may carry different pathogens, and hence there is a risk of disease spread in all the calves that are put in the same airspace or are otherwise exposed to the pathogens.

since calves are social animals, they should be kept in social groups wherever possible. these groups should be stable, with no mixing or not more than one mixing. it is advisable for calves in the first two weeks of life not to be mixed with other animals. if calves from different buildings, perhaps different farms, are to be mixed in a pen or are to be put in different pens in the same airspace, quarantining animals for - weeks can reduce disease in the calves and hence prevent poor welfare. although cross-sucking can sometimes be minimised by provision of teats, water and roughage, if this is not possible, mixing into groups could be delayed for three to four weeks. calves fed by various means may require careful supervision after being put into groups in order that they learn how to feed effectively.

temperature, ventilation and air hygiene

calf rearing causes significant emissions of substances such as nitrate, phosphate, heavy metals and possibly antibiotics in manure and liquid effluents. in addition, there are odours, gases, dusts, micro-organisms and endotoxins in the exhaust air from animal houses, as well as from the handling of manure in storage, during application of manure and during grazing. these effluents can have distinct impacts on air, water, soil, biodiversity in plants, forest decay and also on animals, including humans. calf houses possess a high potential for emissions of ammonia and other gases; dust, endotoxins and micro-organisms are emitted in lower amounts than from pig or poultry production. odour, bioaerosols, ammonia, nitrogen, phosphorus and heavy metals may have either a local or a regional impact. gases such as methane and nitrous oxide contribute to global warming.

respiratory disorders are the second largest reason for morbidity and mortality in calf rearing. the most important causes are environmental conditions such as hygiene, management and the physical, chemical and biological factors in the environment. ventilation plays a decisive role in reducing the incidence of respiratory disease. temperatures below °c can compromise lung function. ammonia concentrations of more than ppm seem to increase respiratory infections. relative humidity of more than % carries the risk of increased heat dissipation and can help bacteria to survive in an airborne state. air velocities close to the animals of more than . m/s can significantly increase respiratory sounds in calves. sufficient air space in confined buildings can help to reduce the concentration of airborne bacteria. calf houses contain relatively high amounts of endotoxins ( eu) (eu: endotoxin unit; see scientific report, www.efsa.eu.int). there is concern that antibiotic residues may contribute to the development of bacterial resistance.
local and regional environmental problems are enhanced by high animal densities and insufficient distances between farms and residential areas. the exact quantitative contribution of calf rearing to environmental pollution, and its impact on water, air, soil, vegetation and nearby residents, is not yet well understood. when housing systems are compared, although dust emission levels will seldom pose problems for the health of calves, ammonia emission levels may be high enough to exacerbate calf disease, especially when calves are kept in slatted floor units.

the development of low emission production systems should be encouraged, including mitigation techniques, e.g. biofilters, bioscrubbers, covered manure pits and shallow manure application. in particular there is a need to reduce ammonia emissions from slatted floor units or to reduce the usage of such systems. adequate and efficient feeding regimes are required, with minimal wastage of nitrogen and phosphorus and limited use of growth promoters and drugs. there is an urgent need for cooperative research to design appropriate ventilation systems to improve the health and welfare of calves kept in confined rearing conditions. temperatures for young calves should range between and °c. ammonia concentrations should be kept as low as possible, preferably not more than ppm. housing and management should aim at reducing dust, bacteria and endotoxin concentrations in the animal house air. minimum ventilation rates of m³ per kg live weight should be applied.

human-animal relationships: recommendation

stockpersons should be appropriately trained so that they have sufficient skills in rearing calves. they should have a positive attitude towards animals and work with them in order to minimise stress and to maintain a high quality of health control. rough contact (e.g. use of a painful device such as an electric prod, loud noises) should be avoided and gentle contacts (e.g. talking softly, stroking, offering food) should be encouraged. this sort of contact is of particular importance for calves in groups or with their dam that tend not to approach humans readily.

dehorning and castration

if cattle are to be dehorned, it is recommended to disbud young cattle rather than to dehorn older ones. disbudding by cautery is recommended over other methods. local anaesthesia (e.g. - ml lidocaïne or lignocaïne % around the corneal nerve) and analgesia with an nsaid (e.g. ml flunixin meglumine or - . mg ketoprofen % / kg body weight) should be given - min before disbudding. if cattle are to be castrated, it is recommended to castrate calves as early as possible (no later than . mo and preferably at wk of age), to use the burdizzo method, and to provide appropriate anaesthesia and analgesia (e.g. ml lignocaine % in each testicle through the distal pole and mg ketoprofen % / kg body weight injected intravenously, both min before castration).

prevention of typical calf diseases in the first months of life, such as diarrhoea and enzootic bronchopneumonia, requires a systematic approach to improving management and housing conditions: specifically, the preparation of the cow; hygiene of the calving environment, including dry clean bedding and high air quality; immediate supply with maternal antibodies; no mixing with older animals; and careful attention and a rapid response to any sign indicating disease.
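a housing-climate checklist based on the limits discussed above could look like the python sketch below. because the numeric limits are elided in this text, every threshold in the snippet is a placeholder assumption to be replaced with the report's actual figures.

```python
# Illustrative calf-house climate check; all limit values are assumptions.
LIMITS = {
    "temp_c": (10.0, 25.0),       # assumed comfort range for young calves
    "humidity_pct_max": 80.0,     # assumed; high humidity aids airborne bacterial survival
    "ammonia_ppm_max": 10.0,      # assumed; excess ammonia promotes respiratory disease
    "air_velocity_ms_max": 0.2,   # assumed draught limit at animal level
}

def climate_warnings(temp_c, humidity_pct, ammonia_ppm, velocity_ms):
    warnings = []
    lo, hi = LIMITS["temp_c"]
    if not lo <= temp_c <= hi:
        warnings.append("temperature outside range: lung function may be compromised")
    if humidity_pct > LIMITS["humidity_pct_max"]:
        warnings.append("humidity high: airborne bacteria survive longer")
    if ammonia_ppm > LIMITS["ammonia_ppm_max"]:
        warnings.append("ammonia high: respiratory infections more likely")
    if velocity_ms > LIMITS["air_velocity_ms_max"]:
        warnings.append("draught at animal level: respiratory sounds increase")
    return warnings

print(climate_warnings(temp_c=6.0, humidity_pct=85.0, ammonia_ppm=12.0, velocity_ms=0.3))
```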
main foodborne hazards associated with calf farming are salmonella spp., human pathogenic verotoxigenic escherichia coli (hp-vtec), thermophilic campylobacter spp., mycobacterium bovis, taenia saginata cysticercus and cryptosporidium parvum/giardia duodenalis. the prevalence and level of infection and/or contamination of calves with, and further spread of, foodborne pathogens on farms depend on the status and the inter-relationship of different contributing factors that are inherently highly variable. present knowledge and published data are insufficient to produce a universal risk assessment enabling quantitative food safety categorization/ranking of different types of calf farming systems. nevertheless, generic principles for risk reduction for the main foodborne pathogens at calf farm level are known and are based on the implementation of effective farm management (e.g. qa, husbandry, herd health plans, biosecurity) and hygiene measures based on gfp-ghp. for quantitative food safety risk categorization of farming systems individually, and/or their related ranking, further scientific information is needed. accordingly, related research should be encouraged.

the conclusions of the scientific veterinary committee report on the welfare of calves are presented in the table below, together with additions relevant in the light of this update of the svc report. in the entries below, c marks a conclusion, r marks a recommendation, and "update" introduces the present report's position together with any new conclusions or recommendations.

c: the best conditions for young rearing calves involve leaving the calf with the mother in a circumstance where the calf can suckle and can subsequently graze and interact with other calves.
update: agreed.

c: where the calf will be separated from its mother at an early age, evidence suggests that it is normally beneficial for the calf if the mother is allowed to lick the calf thoroughly for a few hours after birth.
update: agreed. r: whenever possible, cows should be given the opportunity to lick the calf during at least three hours after parturition.

r: it is important that the calf should receive sufficient colostrum within the first six hours of life and as soon as possible after birth, in conditions which facilitate antibody absorption, preferably by suckling from the mother, so as to ensure adequate immunoglobulin levels in the blood.
update: agreed.

r: where necessary, suckling assistance or additional colostrum should be provided for calves left to suckle from the dam.
update: agreed.

c: calves need resources and stimuli which are normally provided by their mothers. all calves should be given adequate food and water, appropriate conditions of temperature and humidity, adequate opportunities to exercise, good lying conditions, appropriate stimuli for sucking during the first few weeks of life and social contacts with other calves from one week of age onwards. specific aspects of housing and management which fulfil these conditions are detailed.
update: agreed.

r: young calves reared without their mothers should receive considerate human contact, preferably from the same stockperson throughout the growing period.
update: agreed. r: stockpersons should be appropriately trained so that they have sufficient skill in the rearing of calves. they should have positive attitudes towards animals and towards working with them, in order to handle them while minimising stress and to maintain a high quality of health control. rough contacts (e.g. use of a painful device such as an electric prod, or loud noises) should be avoided and gentle contacts (e.g. talking softly, petting, offering food) should be encouraged.
these contacts are of particular importance for calves in groups or with their dam that may tend not to approach humans easily.

c: where calves cannot be kept with their mother, the system where welfare is best is in groups with a bedded area and an adequate space allowance available to them.
update: agreed. r: see below.

c: the welfare of calves is very poor when they are kept in small individual pens with insufficient room for comfortable lying, no direct social contact and no bedding or other material to manipulate.
update: agreed. r: as the floor affects the resting and lying postures of calves, it should be comfortable; wet floors should be avoided due to thermal and resting problems.

r: tethering always causes problems for calves. calves housed in groups should not be tethered except for periods of not more than one hour at the time of the feeding of milk or milk substitute. individually housed calves should not be tethered.

c: calves are vulnerable to respiratory and gastro-intestinal disease, and welfare is poor in diseased animals. better husbandry is needed to minimize disease in group housing conditions, but results that are as good as those from individual housing can be obtained.

r: every calf should be able to groom itself properly, turn around, stand up and lie down normally, and lie with its legs stretched out if it wishes to do so.

r: in order to provide an environment which is adequate for exercise, exploration and free social interaction, calves should be kept in groups. calves should never be kept at too high a stocking density. the following requirements are based on evidence of increasingly poor welfare as space allowance decreases: the space allowance should provide, especially for allowing resting postures, an area for each calf of at least (its height at the withers) x (its body length from the tip of its nose when standing normally to the caudal edge of the tuber ischii or pin bone x . ) (see the sketch below). the length measurement takes account of the forward and backward movements involved in standing up and lying down, and this calculation takes account of differences in size among breeds and with age. as a guideline, for holstein calves this area is . m² at weeks, . m² at weeks and . m² at weeks.

r: since calves are social animals, they should be kept in social groups wherever possible. these groups should be stable, with no mixing or not more than one mixing. it is advisable for calves in the first two weeks of life not to be mixed with other animals.

c: when calves are mixed together in the first few days of life, and then kept for some weeks in a social group, there may be poor welfare because of the following risks:
- especially when individuals are provided with inadequate access to teats and roughage in the diet, cross-sucking and other abnormal sucking behaviour may occur.
- some individuals may be unaccustomed to the food access method; for example, they may have only received food via a teat, and may find it difficult to drink from a bucket.
- calves coming from different buildings, perhaps from different farms, may carry different pathogens, and hence there is a risk of disease spread in all the calves that are put in the same airspace or are otherwise exposed to the pathogens.

r: if calves from different buildings, perhaps different farms, are to be mixed in a pen or are to be put in different pens in the same airspace, a quarantine situation should be used in order to reduce disease in the calves and hence prevent poor welfare.
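the geometric space-allowance rule in the entries above can be written as a one-line formula: minimum resting area per calf = withers height x (body length x k), with body length measured from the nose tip to the pin bone. the multiplier k is elided in this text, so the value in the python sketch below is a placeholder assumption, as are the example calf dimensions.

```python
def min_area_per_calf(withers_height_m: float, body_length_m: float,
                      k: float = 1.1) -> float:
    """Minimum pen area per calf (m^2) under the report's geometric rule:
    withers height x (nose-to-pin-bone body length x k). The default k
    and the example dimensions below are placeholder assumptions."""
    return withers_height_m * (body_length_m * k)

# Hypothetical calf dimensions at increasing ages:
for age_wk, height, length in ((2, 0.80, 1.00), (8, 0.90, 1.20), (20, 1.00, 1.40)):
    print(f"~{age_wk} wk: {min_area_per_calf(height, length):.2f} m^2")
```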
c: for a given space allowance per calf, increasing group size results in a larger total area and hence better possibilities for exercise, social interaction and improved environmental complexity.

r: larger groups are preferred because of the better possibilities for providing an adequate environment, but there are limits to the numbers of animals which should be in one building section, and risks associated with mixing of calves from different sources should be considered.
update: agreed. r: the space provided for calves should be enough to allow animals to fulfil their needs for social behaviour, lying and grooming. space allowance per animal should be greater for groups of - animals and for feeding systems, pen shapes or flooring materials that necessitate extra space availability.

r: if the preferred system, group housing, is not possible, then individual pens whose width is at least the height of the calf at the withers and whose length is at least the length of the calf from the tip of its nose when standing normally to the caudal edge of the tuber ischii or pin bone x . should be used. this space requirement is calculated on the basis of the space required for normal movements and evidence of increasingly poor welfare.
update: agreed. r: as the pen shape affects the use of space by animals, pens should maximize the perimeter, and pen space should be divided into different usable areas.

r: appropriate bedding, for example straw, is recommended. bedding must be changed at appropriate intervals, and every calf should have access to a dry lying area. slatted floors must not be slippery and must not be a cause of tail tip necrosis.
update: agreed; see above.

r: buildings should be adequately ventilated, taking into account the number of animals present and the external conditions. the air space in the building should be m³ per calf up to weeks of age, and an amount of air space which increases with age is needed for older calves.
update: agreed. c:
- calf rearing causes significant emissions such as nitrate, phosphate, heavy metals and possibly antibiotics in manure and liquid effluents, as well as odour, gases, dusts, micro-organisms and endotoxins in the exhaust air from animal houses, from manure storage facilities, during application of manure and during grazing.
- these effluents can have distinct impacts on air, water and soil, and thus also on animals.
- calf houses possess a high potential for emissions of ammonia and other gases. dust, endotoxins and micro-organisms are emitted in lower amounts than from pig or poultry production.
- respiratory disorders are the second largest reason for morbidity and mortality in calf rearing. the most important reasons are environmental conditions such as hygiene, management and the physical, chemical and biological factors of the aerial environment.
- ventilation plays a decisive role in reducing the incidence of respiratory diseases. temperatures below °c can compromise lung function.
- ammonia concentrations of more than ppm seem to increase respiratory infections. relative humidity of more than % carries the risk of increased heat dissipation and can help bacteria to survive in an airborne state.
- air velocities close to the animals of more than . m/s can increase respiratory sounds in calves significantly.
- sufficient air space in confined buildings can help to reduce the concentration of airborne bacteria.
- calf houses contain relatively high amounts of endotoxins.
- there is concern that antibiotic residues may contribute to the development of bacterial resistance.
- when housing systems are compared, although dust emission levels will seldom pose problems for the health of calves, ammonia emission levels may be high enough to exacerbate calf disease, especially in slatted floor units.
-environmental problems in calf houses are exacerbated by high animal densities and insufficient distances between farms. -when housing systems are compared, although dust emission levels will seldom pose problems for the health of calves, ammonia emission levels may be high enough to exacerbate calf disease, especially in slatted floor units. r -the development of low-emission production systems should be encouraged, including mitigation techniques, e.g. biofilters, bioscrubbers, covered manure pits and shallow manure application. in particular there is a need to reduce ammonia emissions from slatted floor units or to reduce the usage of such systems. -adequate and efficient feeding regimes are required, with minimal wastage of nitrogen and phosphorus and limited use of growth promoters and drugs. -there is an urgent need for cooperative research to design appropriate ventilation systems to improve the health and welfare of calves kept in confined rearing conditions. -temperatures for young calves should range between and c. -ammonia concentrations should be kept as low as possible, preferably not more than ppm. -housing design and management procedures should aim to reduce dust, bacteria and endotoxin concentrations in the animal house air. -minimum ventilation rates of c per kg live weight should be applied. calves which lack specific nutrients, including iron, which are given a poorly balanced diet, and which are not provided with adequate roughage in the diet after four weeks of age can have serious health problems, can show serious abnormalities of behaviour, and can have substantial abnormalities in gut development. c every calf should receive a properly balanced diet with adequate nutrients. r agreed. r it is recommended that solid feeds provided to veal calves, in addition to milk replacer, are adequately balanced in terms of the amount of fibrous material, which will promote rumination, and other components such as proteins and carbohydrates, which stimulate rumen development and support a healthy function of the digestive system. c if the concentration of haemoglobin in the blood of calves drops below . mmol/l, the ability of the calf to be normally active, as well as the lymphocyte count and immune system function, are substantially impaired, and there is a reduced growth rate. below . mmol/l, veal calves exhibit a number of adaptations to iron deficiency, including elevated heart rate, elevated urinary noradrenaline and altered reactivity of the hpa axis. hence it is normal practice to identify young veal production calves with less than . mmol/l haemoglobin in plasma and to provide supplementary iron in addition to that normally included in the diet. for older calves, including those in the last four weeks before slaughter, efficient production is possible in individual calves whose haemoglobin concentration is above . mmol/l. if the concentration of haemoglobin in blood is not checked at all, there is a high risk of anaemia, which is associated with poor welfare, for all calves fed a diet with very low iron content. anaemia can be identified and quantified adequately if checks are carried out on veal production calves of - weeks, for example when the calves are brought into a unit, between - weeks of fattening, and during the last four weeks before slaughter. if the concentration of haemoglobin in the blood of a group of calves during the last four weeks before slaughter is a mean of .
mmol/l, some calves may have a concentration substantially lower than the group mean, and hence their welfare may be poor. r in order to avoid anaemia levels that are associated with poor welfare, because normal activity is difficult or not possible and other functions are impaired, it is advisable that diets should be provided that result in blood haemoglobin concentrations of at least . mmol/l throughout the life of the calf. in order to avoid serious impairment of immune system function, and hence poor welfare, no individual calf should have a blood haemoglobin concentration lower than . mmol/l. in most cases this is achieved by adjusting the concentration of iron in the diet and having an adequate checking system so that the above condition is avoided. other treatment may be needed for calves with clinical conditions which cause anaemia but which are not related to diet. r since the lowest haemoglobin concentrations in the blood of veal calves are usually reached during the last four weeks before slaughter, these blood concentrations should be checked at this time. such checks would show whether or not measures need to be taken. a checking system using a mean level, but whose aim is to avoid the risk of a haemoglobin concentration in any individual lower than . mmol/l, would have to use a mean substantially higher than . mmol/l, probably mmol/l. in order to avoid poor welfare associated with anaemia, as explained in the conclusions (above), measurements of average blood haemoglobin concentration are not a satisfactory means of avoiding poor welfare, but the use of a minimum level of . mmol/l for individual calves would achieve this. some non-milk proteins are inappropriate for use in a milk substitute fed to calves because they produce allergenic reactions. some carbohydrates cannot be easily or properly digested by calves and may cause digestive upset. no milk substitute should be fed to calves unless it can be easily digested and does not cause harmful reactions in the calves. r acidification of milk can reduce the incidence of diarrhoea, but any forms of acidified milk which are unpalatable to calves or which harm the calves should not be used. r every calf should be fed fermentable material, appropriate in quality and sufficient in quantity to maintain the microbial flora of the gut, and sufficient fibre to stimulate the development of villi in the rumen. roughage, in which half of the fibre should be at least mm in length, should be fed to calves. they should receive a minimum of g of roughage per day from to weeks of age, increasing to g per day from to weeks of age, but it would be better if these amounts were doubled. the development of the rumen should be checked by investigating villi development in a proportion of calf guts after slaughter. r agreed. r without a fully functional rumen, calves will be unable to utilise nutrients provided in the post-weaning dry feed diet. attention should be paid to the type of forage and the consistency of particle size of starter grain in order to achieve proper rumen development. calf weaning should be based on the amount of dry feed calves ingest per day, not on their age or weight, and calf starter should be made available five to days after birth. a calf consuming . kg of dry feed or more on three consecutive days is ready for weaning. when calves are fed low levels of milk to encourage early consumption of dry food, weaning can be done abruptly.
in contrast, if milk is given in large amounts, weaning may require two to three weeks of slow transition to avoid a setback in growth. there are clear signs of increased disease susceptibility and immunosuppression in calves up to weeks of age whose blood haemoglobin concentration is below . mmol/l. however, in some studies antibiotic treatment was not higher in calves whose haemoglobin was near mmol/l than in calves whose level was near mmol/l at weeks of age. studies of exercise in anaemic calves show that there can be problems during exercise at a level of . mmol/l. c all calves should be fed in such a way that their haemoglobin level does not fall below a minimum of . mmol/l. r agreed, see above. r where calves are fed a diet which is lower in iron than mg/kg, an adequate sample of animals should be checked at and weeks of age in order to find out whether the blood haemoglobin concentration is too low. r agreed, see above. young calves have a very strong preference to suck a teat or teat-like object. it is preferable for calves to be fed milk or milk substitute from a teat during the first four weeks of life. calf welfare is improved if a non-nutritive teat is provided during the first four weeks of life, especially if calves are not fed from a teat. c when young group-housed calves are fed milk or milk substitute, the social facilitation effects of having a group of teats close together are beneficial. it is also advisable for several teats to be provided in groups of older calves. transponder-controlled feeder systems have been found to work well. c the feeding to calves of large quantities of milk or milk substitute in a single daily meal can cause digestive problems. hence, when calves are fed more than % of body weight in milk or milk substitute each day, this should be fed in at least two meals per day. r calves fed ad libitum, or close to this level, should not be weaned off milk or milk replacer until they are consuming a minimum of g of concentrates per head per day in the week prior to weaning. where calves are fed restricted quantities of milk or milk replacer before weaning, they should not be weaned until they are consuming a minimum of g of concentrates per head per day in the week prior to weaning. r calves which are diseased and calves which are in hot conditions often need to drink water as well as milk or milk substitute, and all calves drink water if it is available. the provision of milk or milk substitute is not an adequate alternative to the provision of water. hence calves should be provided daily with water to drink. it is recommended that drinkers be provided in all pens. r agreed. r prevention of typical calf diseases in the first months of life, such as diarrhoea and enzootic bronchopneumonia, requires a systematic approach through improved management and housing conditions, specifically the preparation of the cow, hygiene of the calving environment (including dry clean bedding and high air quality), immediate supply of maternal antibodies, no mixing with older animals, and careful attention and early reaction to any first signs of disease. dehorning of calves between - weeks should be by cauterisation with adequate anaesthesia and analgesia (no precision given); castration of calves at months should be with adequate anaesthesia and analgesia (no precision given). r dehorning: if cattle are to be dehorned, it is recommended to disbud young cattle rather than to dehorn older ones. disbudding by cauterisation is recommended over other methods. local anaesthesia (e.g.
- ml lidocaïne or lignocaïne % around the corneal nerve) and analgesia with a non-steroidal anti-inflammatory drug ( ml flunixin meglumine or - . mg ketoprofen % per kg body weight) should be performed - min before disbudding. r castration: if cattle are to be castrated, it is recommended to castrate calves as early as possible (no later than . months and preferably at weeks of age), to use the burdizzo method, and to provide appropriate anaesthesia and analgesia (e.g. ml lignocaine % in each testicle through the distal pole and mg ketoprofen % per kg body weight injected intravenously, both min before castration). -the main foodborne hazards associated with calf farming are salmonella spp., human pathogenic-verotoxigenic escherichia coli (hp-vtec), thermophilic campylobacter spp., mycobacterium bovis, taenia saginata cysticercus and cryptosporidium parvum/giardia duodenalis. -the prevalence/level of infection and/or contamination of calves with, and further spread of, foodborne pathogens on farms depends on the status and the inter-relationship of different contributing factors that are inherently highly variable. -present knowledge and published data are insufficient to produce a universal risk assessment enabling quantitative food safety categorization/ranking of different types of calf farming systems. -nevertheless, generic principles for risk reduction for the main foodborne pathogens at calf farm level are known and are based on the implementation of effective farm management (e.g. qa, husbandry, herd health plans, biosecurity) and hygiene measures based on gfp-ghp. recommendations for future research: it is recommended that future research should be conducted within the following areas: -haemoglobin levels and iron deficiencies of veal calves aged - weeks. -the monitoring of haemoglobin in groups of calves using representative samples. -exposure to allergenic proteins. -solid and liquid food balance; exposure to too-rich diets and changes in feed composition. -space requirements. -health monitoring systems and the effect of such systems on clinical health in calves. -infection transmission (respiratory and digestive diseases) due to direct contact between calves, in relation to the social benefits of mixing. -pain relief when disbudding, dehorning and castrating calves. -design of appropriate ventilation systems for calves in confined rearing conditions. -health and environmental effects of feeding minerals as antimicrobial agents. -for quantitative food safety risk categorization of farming systems individually, and/or their related ranking, further scientific information is needed; accordingly, related research should be encouraged. references used in this scientific opinion are available and listed in the scientific report published on the efsa website (www.efsa.eu.int). the ahaw panel wishes to thank the members of the working group chaired by panel member prof. bo algers. summary: efsa has been requested by the european commission to issue a scientific opinion on animal health and welfare aspects of intensive calf farming systems and their ability to comply with the requirements of the well-being of calves from the pathological, zootechnical, physiological and behavioural points of view. in particular the commission asked efsa to update the findings of the scientific veterinary committee (animal welfare section) report on the welfare of calves of november in light of more recent data on this issue. where relevant, the possible food safety implications of different farming systems should also be considered.
in this report a risk assessment was made, and the relevant conclusions and recommendations form the scientific opinion by the ahaw panel. the svc ( ) report contains information on measurements of welfare, needs of calves, descriptions of current housing systems, chapters on types of feed and feeding systems, weaning of calves, housing and pen design, climate, man-animal relationships, and dehorning and castration. further chapters covered economic considerations of systems and of improving welfare. in the report, conclusions were made on general management, housing, food and water, and economics. the present report, "the risks of poor welfare in intensive calf farming systems", is an update of the previous svc report, with the exception of economic aspects, which are outside the mandate for this report. the various factors potentially affecting calves' health and welfare, already extensively listed in the report of the scientific veterinary committee animal welfare section (svc, ), are updated, and it is subsequently systematically determined whether they constitute a potential hazard or risk. to the latter end, their severity and likelihood of occurrence in animal (sub)populations were evaluated and associated risks to calf welfare estimated, hence providing the basis for risk managers to decide which measures could be contemplated to reduce or eliminate such risks. in line with the terms of reference, the working group restricted itself to an (in essence qualitative) risk assessment. although it is agreed that the welfare and health of calves can be substantially affected in the course of, and as a result of, transport and slaughter, this report does not consider animal health and welfare aspects of calves during transport and slaughter; such information can be found in a recently issued comprehensive report of the scientific committee on animal health and animal welfare (scahaw) on "the welfare of animals during transport (details for horses, pigs, sheep and cattle)", which was adopted on march (dg sanco, ), and in the efsa report "welfare aspects of animal stunning and killing methods" (efsa, b). in relation to the food safety aspects, the main foodborne hazards associated with calf farming are salmonella spp., human pathogenic-verotoxigenic escherichia coli (hp-vtec), thermophilic campylobacter spp., mycobacterium bovis, taenia saginata cysticercus and cryptosporidium parvum/giardia duodenalis. present knowledge and published data are insufficient to produce a universal risk assessment enabling quantitative food safety categorization/ranking of different types of calf farming systems. nevertheless, the main risk factors contributing to increased prevalence/levels of the above foodborne pathogens, as well as generic principles for risk reduction, are known. the latter are based on the implementation of effective farm management (e.g. qa, husbandry, herd health plans, biosecurity) and hygiene measures (e.g. gfp-ghp). in general, the conclusions made in the previous svc report remain. however, recent research has provided some additional conclusions. the risk analysis is presented in the tables of annex . the graphics in this table are not intended to represent numerical relationships but rather qualitative relations. in some instances the exposure could not be estimated due to lack of data, in which cases the risks were labelled "exposure data not available".
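to make the qualitative risk-characterisation logic concrete, the sketch below combines a hazard's severity with the estimated exposure into a descriptive risk category and labels the risk "exposure data not available" when exposure cannot be estimated. the category labels and the combination rule are illustrative assumptions, not the ahaw panel's actual scoring scheme from annex.

```python
# minimal sketch of a qualitative risk characterisation in the spirit of the
# approach described above: severity and exposure are combined into a
# descriptive risk category. the labels and the combination rule are invented
# for illustration; they are not the panel's actual scheme.

from typing import Optional

SEVERITY = ["minor", "moderately serious", "serious", "very serious"]
EXPOSURE = ["rare", "occasional", "common"]

def characterise_risk(severity: str, exposure: Optional[str]) -> str:
    if exposure is None:
        # exposure could not be estimated due to lack of data
        return "exposure data not available"
    score = SEVERITY.index(severity) + EXPOSURE.index(exposure)
    return ["minor risk", "minor risk", "moderate risk",
            "major risk", "major risk", "major risk"][score]

print(characterise_risk("very serious", "common"))    # -> major risk
print(characterise_risk("moderately serious", None))  # -> exposure data not available
```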
the following major and minor risks for poor animal health and welfare have been identified for one or several of the various husbandry systems considered: the hazards of iron deficiency and insufficient floor space are considered to be very serious, the hazard of inadequate health monitoring is considered to be serious, and the hazards of exposure to inadequate haemoglobin monitoring, allergenic proteins and a too-rich diet are considered to be moderately serious. for these hazards there is no consensus on the exposure of calves, mainly due to lack of data, and that is why it is recommended that further studies should be made to provide evidence for an exposure assessment. regarding castration and dehorning (and disbudding) without anaesthetic drugs, there is variation in national legislation, which is why the risk of poor welfare in relation to castration and dehorning has a wide range between countries. tables which clarify the risk assessment have been included in annex . calf: a calf is a young bovine which is significantly younger and smaller in size than an adult of the same species and breed and which is not reproductively active. there is a gradual transition from a newborn animal, dependent on milk, to an animal with many adult characteristics. few people would use the term calf for domestic cattle of - months, whilst most would call an animal of months or somewhat older a calf. in this report, calf is used for animals of up to months of age. however, in deciding on the end of the calf stage, any definition based on age or weight is arbitrary. the term calf is not normally restricted to animals that are unweaned or monogastric rather than having some degree of development of the rumen for its specialist function. dehorning (disbudding): the removal of the horn bud or the actual horn, depending on the breed and the age of the animal. endotoxin unit (eu): endotoxin activity of . ng of reference endotoxin standard, ec- or eu/ng (fda). to convert from eu into ng, the conversion is eu/ng. eutrophication: a process where water bodies receive excess nutrients that stimulate excessive plant growth (i.e. water pollution). intensively reared calf: a calf which is not kept extensively at pasture. according to the council of europe european convention for the protection of animals kept for farming purposes (chapter i, article ), "modern intensive animal farming systems are systems in which mainly technical facilities are used that are primarily operated automatically and in which the animals depend on the care of and supply from the farmer". nsaid: non-steroidal anti-inflammatory drug. nursing: the process by which a mother mammal allows a young animal to obtain milk from its teats. odds ratio (or): the odds ratio is a measure of effect size, particularly important in bayesian statistics and logistic regression. omphalitis: infection of the navel. pink veal: meat produced from animals slaughtered at - weeks of age and supplied with roughage from at least months of age onwards. there is no classification system for veal carcasses agreed across the eu. the only existing classification system relates rather to a general beef carcass classification system; however, its categories are valid only for cattle having a live weight of more than kg. consequently, some member states have issued their own national schemes for veal carcass classification.
in trade, there is agreement between importing and exporting countries that veal originates from calves which were fed predominantly milk replacers, and which displays a light colour. the age limit is around months. some countries, such as the netherlands, market meat of animals of the age of to months as pink veal. the eu subsidies scheme represents an important incentive for pink veal production. dose-response assessment: the determination of the relationship between the magnitude of exposure of calves to a certain hazard and the severity and frequency of associated adverse effects on calf welfare. exposure assessment: the quantitative and qualitative evaluation of the likelihood of hazards to welfare occurring in a given calf population. hazard: any factor, occurring from birth to slaughter, with the potential to cause an adverse effect on calf welfare. hazard characterisation: the qualitative and quantitative evaluation of the nature of the adverse effects associated with the hazard. considering the scope of the exercise of the working group, the concerns relate exclusively to calf welfare. hazard identification: the identification of any factor, from birth to slaughter, capable of causing adverse effects on calf welfare. risk: a function of the probability of an adverse effect and the severity of that effect, consequent to a hazard for calf welfare. risk assessment: the process of determining the qualitative or quantitative estimation, including attendant uncertainties, of the probability of occurrence and severity of known or potential adverse effects on welfare in a given calf population, based on hazard identification, hazard characterisation and exposure assessment. the following cac (codex alimentarius commission) definitions remain unaltered (note: for completeness, all definitions used by cac -while not necessarily used in this document -have been included): quantitative risk assessment: a risk assessment that provides numerical expressions of risk and an indication of the attendant uncertainties (stated in the expert consultation definition on risk analysis). qualitative risk assessment: a risk assessment based on data which, while forming an inadequate basis for numerical risk estimations, nevertheless, when conditioned by prior expert knowledge and identification of attendant uncertainties, permits risk ranking or separation into descriptive categories of risk. risk analysis: a process consisting of three components: risk assessment, risk management and risk communication. risk assessment (cac): a scientifically based process consisting of the following steps: i) hazard identification, ii) hazard characterisation, iii) exposure assessment and iv) risk characterisation. risk communication: the interactive exchange of information and opinions concerning the risk and risk management among risk assessors, risk managers, consumers and other interested parties. risk estimate: the output of risk characterisation. risk management: the process of weighing policy alternatives in the light of the results of risk assessment and, if required, selecting and implementing appropriate control options (i.e. prevention, elimination, or reduction of hazards and/or minimization of risks), including regulatory measures. sensitivity analysis: a method to examine the behaviour of a model by measuring the variation in its outputs resulting from changes to its inputs. transparency: characteristics of a process where the rationale, the logic of development, constraints, assumptions, value judgements, decisions, limitations and uncertainties of the expressed determination are fully and systematically stated, documented, and accessible for review. uncertainty analysis: a method used to estimate the uncertainty associated with model inputs, assumptions and structure/form.
suckling: the process by which a young mammal obtains milk from the teat of its mother or another lactating female by sucking. veal: the term veal refers to the meat produced from calves, principally those of the species bos taurus and bos indicus. there are several meat products from calves. they are generally distinguished by their colour: "pale" or "white" veal is generally produced from an animal under months of age and fed mostly milk or milk replacer; "pink" veal is generally produced from an animal of up to months fed larger amounts of solid foods and possibly weaned. meat from calves of - months is called young beef. weaning, weaned: in mammals, weaning is a gradual process during which the young animal receives less and less milk from its dam and consumes more and more solid food. it is accompanied by changes in the dam-offspring relationship. in farming, calves are often separated from their dams soon after birth and receive milk (or milk replacer) from humans or a machine. although separated from the dam, calves are considered unweaned as long as they are fed milk. suckler calves are left with their dam for some months and are generally weaned some time before the next calving by separating them suddenly from the dam. calves normally commence eating solid food at - weeks, although some start earlier, and they eat enough solid food for development of a functional rumen to start by about weeks of age. a weaned animal is one that no longer needs to suckle and so does not consume milk in any significant quantity, indicating that the weaning process has finished. council directive / /eec laying down minimum standards for the protection of calves, as amended by council directive / /ec, requires the commission to submit to the council a report, based on a scientific opinion, on intensive calf farming systems which comply with the requirements of the well-being of calves from the pathological, zootechnical, physiological and behavioural points of view. the commission's report will be drawn up also taking into account the socio-economic implications of different calf farming systems. it should be noted that the scientific veterinary committee (animal welfare section) adopted a report on the welfare of calves on november (svc, ), which should serve as background to the commission's request and the preparation of the new efsa scientific opinion. in particular, the commission requires efsa to consider the need to update the findings of the scientific veterinary committee's opinion in light of the availability of more recent data on this issue. where relevant, the possible food safety implications of different farming systems should also be considered. efsa has been requested by the european commission to issue a scientific opinion on animal health and welfare aspects of intensive calf farming systems and their ability to comply with the requirements of the well-being of calves from the pathological, zootechnical, physiological and behavioural points of view. in particular, the commission requires efsa to update the findings of the scientific veterinary committee (animal welfare section) report on the welfare of calves of november in light of more recent data on this issue. where relevant, the possible food safety implications of different farming systems should also be considered. the mandate outlined above was accepted by the panel on animal health and welfare (ahaw) at the plenary meeting on / march . it was decided to establish a working group of ahaw experts (wg) chaired by one panel member.
therefore the plenary entrusted a scientific report and risk assessment to a working group under the chairmanship of prof. bo algers. the members of the working group are listed at the end of this report. this report is considered in the discussion to establish a risk assessment, with the relevant conclusions and recommendations forming the scientific opinion by the ahaw panel. according to the mandate of efsa, ethical, socio-economic, cultural and religious aspects are outside the scope of this scientific opinion. in , the scientific veterinary committee of the european commission published the report on the welfare of calves. the svc ( ) report contains information on measurements of welfare, needs of calves, descriptions of current housing systems, chapters on types of feed and feeding systems, weaning of calves, housing and pen design, climate, man-animal relationships, and dehorning and castration. further chapters covered economic considerations of systems and of improving welfare. in the report, conclusions were made on general management, housing, food and water, and economics. the present report, "the risks of poor welfare in intensive calf farming systems", is an update of the previous svc report, with the exception of economic aspects, which are outside the mandate for this report. this report represents an update of the previous svc report ( ) with a risk assessment perspective. factors which are important for calf welfare include housing (space and pen design, flooring and bedding material, temperature, ventilation and air hygiene), feeding (liquid feed, concentrates, roughage) and management (grouping, weaning, human-animal relations). the measures used to assess welfare include behavioural and physiological measures, patho-physiological measures and clinical signs, as well as production measures. as explained in the glossary, in this report young bovines are called calves up to a maximum of eight months of age, and veal is the meat of a calf. countries with substantial production of veal are france, italy, the netherlands, belgium, spain and germany. significant veal production also exists in portugal, austria and denmark. the production of white veal, from calves that have been fed predominantly milk replacer and whose meat has a light colour, takes place largely in france, the netherlands, belgium and italy. the eu subsidies scheme represents an important incentive for pink veal production. most calves produced for further rearing are in france, germany, uk, ireland and italy. the ways of keeping calves vary considerably from country to country and between breeds. most dairy calves are separated from their dam at birth and artificially fed, whereas calves from beef breeds generally suckle their dam. according to eu statistics, in in the eu ( ), , , calves were reared for slaughter (table ) and , , calves were reared for reasons other than slaughter (table ). in total (table ), , tonnes of calf meat were produced in the eu ( ), which probably implies that about , tonnes were produced in the eu ( ) during . human consumption of meat from calves decreased slightly from to in the eu ( ) (table ).
the working group set out to produce a document in which the various factors potentially affecting calves' health and welfare, already extensively listed in the report of the scientific veterinary committee animal welfare section (svc, ), are updated, and subsequently to determine systematically whether these factors constitute a potential hazard or risk. to the latter end, their severity and likelihood of occurrence in animal (sub)populations were evaluated and associated risks to calf welfare estimated, hence providing the basis for risk managers to decide which measures could be contemplated to reduce or eliminate such risks. it should be noted, however, that this does not imply that a hazard that has a serious effect on just a few animals should not be dealt with by managers at farm level, as the suffering imposed on some animals constitutes a major welfare problem for those individuals. in line with the terms of reference, the working group restricted itself to an (in essence qualitative) risk assessment, i.e. only one of the three elements essential to risk analysis. a risk assessment approach was followed similar to the one generally adopted when assessing microbiological risks, i.e. along the lines suggested at the nd session of the codex alimentarius commission (cac, ). incidentally, these guidelines have been characterized by the cac as 'interim' because they are subject to modification in the light of developments in the science of risk analysis and as a result of efforts to harmonize definitions across various disciplines. cac's guidelines are in essence exclusively formulated for the purpose of assessing risks related to microbiological, chemical or physical agents of serious concern to public health. consequently, considering their disciplinary focus, the working group had to adapt the cac definitions to serve its purpose. these adapted definitions have, in alphabetical order, been included in chapter (see risk analysis terminology). the objectives of this report are to review and report recent scientific literature on the welfare, including the health, of intensively reared calves, to report on recent findings as an update to the scientific veterinary committee's previous report, and to make a qualitative risk assessment concerning the welfare of intensively kept calves. where relevant, food safety implications of different farming systems are also considered. the report is structured in five major parts. the first three follow the scientific veterinary committee's previous report "on the welfare of calves", with introductory chapters - on background, measurements and needs in relation to calf welfare, chapter describing housing, diet and management, and chapter describing the comparison of systems and factors. in chapter , common diseases and the use of antibiotics are described. the other two parts involve aspects of meat quality and food safety (chapter ) and the risk assessment (chapter ). conclusions and recommendations from the previous svc document, together with updated conclusions derived from recent research findings, are presented in the scientific opinion (www.efsa.eu.int).
effect of transport and slaughter on calves' health and welfare: although it is agreed that the welfare and health of calves can be substantially affected in the course of, and as a result of, transport, this report does not consider animal health and welfare aspects of calves during transport, because there is already a comprehensive recent report of the scientific committee on animal health and animal welfare (scahaw) on "the welfare of animals during transport (details for horses, pigs, sheep and cattle)", which was adopted on march (dg sanco, ). the report takes into account all aspects related to transport that could affect the health and welfare of cattle and calves, including the direct effects of transport on the animals and the effects of transport on disease transmission. the loading methods and handling facilities for cattle, the floor space allowance, the relationship between stocking and density requirements, the vehicle design, space requirements and ventilation for cattle transporters (see also the ahaw scientific opinion related to standards for the microclimate inside animal road transport vehicles; efsa, ), the behaviour of cattle during road transport, the road conditions, long distance transport and the travel times are also reviewed. recommendations for all these aspects are also given in that report. the following general requirements in relation to animal welfare were annexed as a protocol to the eu treaty of amsterdam in : "in formulating and implementing the community's agriculture, fisheries, transport, and internal market policies, the community and the member states shall pay full regard to the welfare requirements of animals, while respecting the legislative provisions and customs of member states relating to religious rites, cultural traditions and regional heritage." in the introduction to the proposed eu constitution, the following extended wording is included: "in formulating and implementing the european union's agriculture, fisheries, transport, internal market, research and technological development and space policies, the union and the member states shall pay full regard to the welfare requirements of animals, as sentient beings, while respecting the legislative provisions and customs of member states relating to religious rites, cultural traditions and regional heritage." this wording reflects the ethical concerns of the public about the quality of life of animals. it also takes into account customs and cultural traditions. farm animals are subject to human-imposed constraints, and for a very long time the choice of techniques has been based primarily on the efficiency of production systems for the provision of food. however, it is an increasingly held public view that we should protect these animals against mistreatment and poor welfare. in order to promote good welfare and avoid suffering, a wide range of needs must be fulfilled. these needs may require the animal to obtain resources, receive stimuli or express particular behaviours (hughes and duncan, ; jensen and toates, ; vestergaard, ). to be useful in a scientific context, the concept of welfare has to be defined in such a way that it can be scientifically assessed. this also facilitates its use in legislation and in discussions amongst farmers and consumers. welfare is clearly a characteristic of an individual animal and is concerned with the effects of all aspects of its genotype and environment on the individual (duncan, ).
broom ( ) defines it as follows: the welfare of an animal is its state as regards its attempts to cope with its environment. welfare therefore includes the extent of failure to cope, which may lead to disease and injury, but also ease of coping or difficulty in coping. furthermore, welfare includes pleasurable mental states and unpleasant states such as pain, fear and frustration (duncan, ; fraser and duncan, ). feelings are a part of many mechanisms for attempting to cope with good and bad aspects of life, and most feelings must have evolved because of their beneficial effects (broom, ). although feelings cannot be measured directly, their existence may be deduced from measures of physiology, behaviour, pathological conditions, etc. because feelings cannot be directly measured, care is necessary to avoid uncritical anthropomorphic interpretations (morton et al., ). good welfare can occur provided the individual is able to adapt to or cope with the constraints to which it is exposed. hence, welfare varies from very poor to very good and can be scientifically assessed. measures which are relevant to animal welfare during housing, i.e. largely long-term problems, are described by broom and johnson ( ) and by broom ( , a). production criteria have a place in welfare assessment. however, although failure to grow, reproduce etc. often indicates poor welfare, high levels of production do not necessarily indicate good welfare. physiological measurements can be useful indicators of poor welfare. for instance, increased heart rate, adrenal activity, adrenal activity following acth challenge, reduced heart-rate variability, or immunological response following a challenge can all indicate that welfare is poorer than in individuals which do not show such changes. impaired immune system function and some of the physiological changes can indicate a pre-pathological state (moberg, ). in interpreting physiological measurements such as heart rate and adrenal activity, it is important to take account of the environmental and metabolic context, including activity level. behavioural measures are also of particular value in welfare assessment (wiepkema, ). the fact that an animal strongly avoids an object or event gives information about its feelings and hence about its welfare (rushen, ). the stronger the avoidance, the worse the welfare whilst the object is present or the event is occurring. an individual which is completely unable to adopt a preferred lying posture despite repeated attempts will be assessed as having poorer welfare than one which can adopt the preferred posture. other abnormal behaviour, which includes excessively aggressive behaviour and stereotypies such as tongue-rolling in calves, indicates that the perpetrator's welfare is poor. very often abnormal activities derive from activities that cannot be expressed but for which the animal is motivated. for example, calves deprived of solid foods, and hence lacking the possibility of nutritive biting, develop non-nutritive biting. whether physiological or behavioural measures indicate that coping is difficult or that the individual is not coping, the measure indicates poor welfare. studies of the brain inform us about the cognitive ability of animals; they can also tell us how an individual is likely to be perceiving, attending to, evaluating, coping with, enjoying, or disturbed by its environment, and so can give direct information about welfare (broom and zanella, ).
in studies of welfare, we are especially interested in how an individual feels. as this depends upon high-level brain processing, we have to investigate brain function. abnormal behaviour and preferred social, sexual and parental situations may have brain correlates. brain measures can sometimes explain the nature and magnitude of effects on welfare. the word "health", like "welfare", can be qualified by "good" or "poor" and varies over a range. however, health refers to the state of body systems, including those in the brain, which combat pathogens, tissue damage or physiological disorder (broom and kirkden, ; broom, ). welfare is a broader term than health, covering all aspects of coping with the environment and taking account of a wider range of feelings and other coping mechanisms than those associated with physical or mental disorders. disease, implying that there is some pathology rather than just pathogen presence, always has some adverse effect on welfare (broom and corke, ). the pain system and responses to pain are part of the repertoire used by animals to help them cope with adversity during life. pain is clearly an important part of poor welfare (broom, b). however, prey species such as young cattle and sheep may show no behavioural response to a significant degree of injury (broom and johnson, ). in some situations, responses to a wound may not occur because endogenous opioids which act as analgesics are released. however, there are many occasions in humans and other species when suppression of pain by endogenous opioids does not occur (melzack et al., ). the majority of indicators of good welfare which we can use are obtained by studies demonstrating positive preferences by animals (dawkins, ). methods of assessing the strengths of positive and negative preferences have become much more sophisticated in recent years. the price which an animal will pay for resources, or pay to avoid a situation, may be, for example, a weight lifted or the amount of energy required to press a plate on numerous occasions. the demand for the resource, i.e. the amount of an action which enables the resource to be obtained, at each of several prices can be measured experimentally. this is best done in studies where the income available, in the form of time or energy, is controlled in relation to the price paid for the resource. when demand is plotted against price, a demand curve is produced. in some studies, the slope of this demand curve has been measured to indicate price elasticity of demand, but in recent studies (kirkden et al., ) it has become clear that the area under the demand curve up to a particular point, the consumer surplus, is the best measure of strength of preference.
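the consumer-surplus idea lends itself to a short numerical sketch: given measured (price, demand) points, the area under the demand curve can be approximated with the trapezoidal rule. the data points below are invented for illustration; the function name is an invented convenience, not a term from the literature cited above.

```python
# sketch: strength of preference as consumer surplus, i.e. the area under an
# empirical demand curve. the (price, demand) points are invented for
# illustration only.

def area_under_curve(xs: list[float], ys: list[float]) -> float:
    """Trapezoidal approximation of the area under y(x)."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]))

# price = e.g. weight lifted or number of plate presses required; demand = the
# amount of the action performed to obtain the resource at that price:
price  = [0.0, 1.0, 2.0, 3.0, 4.0]
demand = [10.0, 8.0, 5.0, 2.0, 0.5]

surplus = area_under_curve(price, demand)
print(f"consumer surplus (strength of preference): {surplus:.1f}")
```

the same trapezoidal computation applies to the overall welfare assessment discussed just below, where the degree of good or poor welfare is plotted against time and the area under the resulting curve is taken.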
once we know what animals strongly prefer, or strongly avoid, we can use this information to identify situations which are unlikely to fulfil the needs of animals and to design better housing conditions and management methods (fraser and matthews, ). however, as pointed out by duncan ( , ), all data from preference studies must be interpreted taking account of the possibilities that, firstly, an individual may show a positive preference for something in the short term which results in its poor welfare in the long term, and secondly, that a preference in a simplified experimental environment needs to be related to the individual's priorities in the more complicated real world. each assessment of welfare will pertain to a single individual and to a particular time range. in the overall assessment of the impact of a condition or treatment on an individual, a very brief period of a certain degree of good or poor welfare is not the same as a prolonged period. however, a simple multiplicative function of maximum degree and duration is often not sufficient. if there is a net effect of poor welfare and everything is plotted against time, the best overall assessment of welfare is the area under the curve thus produced (broom, c). the needs and functioning of calves. the concept of needs: in assessing the needs and functioning of calves, many different approaches can be taken. one is to study, at a fundamental level, the physiology and behaviour of cattle and the ways in which they have evolved, in order to try to understand their causation and function. needs are in the brain but may be fulfilled by obtaining resources, by physiological change, or by carrying out a behaviour. in order to conclude that a need exists to show certain behaviour, it is necessary to demonstrate that the calves used in modern production systems are strongly motivated to show the behaviour and that, if the need is not provided for, there are signs of poor welfare such as abnormal behaviour or physiology or pathological effects (see chapter ). where the housing design allows the animals to show the behaviour that they need to show, this will promote the avoidance of poor welfare. a need is a requirement, which is a consequence of the biology of the animal, to obtain a particular resource or respond to a particular environmental or bodily stimulus. an animal may have a need that results in the existence at all times of mechanisms within the brain and abilities to perceive stimuli and respond appropriately. however, this does not mean that every individual at all times needs to carry out the response. for example, a calf has a need to avoid attack by a predator, but it does not need to carry out anti-predator behaviour if no individual perceived as a predator is present. there are some needs which require urgent fulfilment, otherwise body functioning will be impaired and in the medium or long term the animal may suffer; for example, an adequate amount of an essential nutrient or avoidance of exposure to a serious disease. there are other needs which, if not fulfilled, lead to frustration and excessive activities in an attempt to fulfil the need. the resulting poor welfare may be extreme and prolonged. needs to avoid predation and other danger mean that animals have a negative experience in some situations. close human presence and handling of animals may elicit physiological and behavioural anti-predator responses. the avoidance of such situations can also be considered as a need.
calves require space to perform activities such as resting, feeding, exploring, interacting and escaping from perceived danger. to assess what risks of poor welfare are involved when the housing circumstances do not allow certain activities, it can be helpful to consider why the calves are intrinsically motivated to perform the activities. the selection criteria applied to modern cattle genotypes have resulted in changes in morphological phenotype. although these have not altered the categories of needs of calves, they may have altered rates of growth and energy partitioning, so that the timing of problems and the probability that they will arise may be changed. the overall need of calves is to maintain bodily integrity while growing and preparing for adult life. in order to do this, calves have a series of needs that are relevant to the housing and management conditions imposed upon them by humans. the needs of calves are described in detail by broom ( , ). in listing needs, and in later consideration of how to provide for them, it is assumed that extreme human actions, such as deliberately creating a large wound or infecting an animal with a dangerous pathogen, will not occur. the list of needs is not in order of importance. some of the needs mentioned here are discussed at greater length in the previous report. to breathe: calves need air that has sufficient oxygen and a low level of noxious gases in it. calves may be adversely affected by some of the gaseous products of the breakdown of animal faeces, and they show preferences that help them to avoid any harm that these may cause. calves need to rest and sleep in order to recuperate and avoid danger. they need to use several postures, which include one in which they rest the head on the legs and another in which the legs are fully stretched out (de wilt, ; smits, , ). sleep disruption may occur if comfortable lying positions cannot be adopted or if there is disturbance to lying animals because they are trodden on or otherwise disturbed by other calves. exercise is needed for normal bone and muscle development. calves choose to walk at intervals if they can, show considerable activity when released from a small pen, and have locomotor problems if confined in a small pen for a long period (warnick et al., ; dellmaier et al., ; trunkfield et al., ). calves living in natural conditions would be very vulnerable to predation when young. as a consequence, the biological functioning of calves is strongly adapted to maximise the chance of recognition of danger and escape from it. calves respond to sudden events and approaches by humans or other animals perceived to be potentially dangerous with substantial sympathetic nervous system and hypothalamic-pituitary-adrenocortical (hpa) changes. these physiological changes are followed by rapid and often vigorous behavioural responses. fear is a major factor in the life of calves and has a great effect on their welfare. to feed and drink. sucking: the calf needs to attempt to obtain nutrients at a very early stage after birth and shows behavioural responses that maximise this chance. as a consequence, from an early age, calves have a very strong need to show sucking behaviour, and if a calf is not obtaining milk from a real or artificial teat, it sucks other objects (broom, , ; metz, ; hammell et al., ; jung and lidfors, ). the need of the calf is not just to have the colostrum or milk in the gut but also to carry out the sucking behaviour on a suitable object (jensen, ).
further, sucking is of importance for the release of gastrointestinal hormones. it has been shown in calves that oxytocin is released during milk ingestion. the amount released, however, was less in calves drinking their milk from a bucket compared with calves suckling the dam (samuelsson, ). peripheral oxytocin stimulates the release of glucagon from the pancreas, whereas central oxytocin increases hunger and the release of gastrointestinal hormones promoting growth (stock et al., ; björkstrand, ). in the early days after birth, calves are motivated to suck and obtain milk. however, calves also have a need to obtain sufficient water and will drink water even when fed milk. if the temperature is high, calves will drink water if it is available, and sick calves will also choose to drink water. if water is not available, over-heated calves and sick calves may become dehydrated. sick calves may become dehydrated even when water is offered. calves with acidosis, with or without diarrhoea, often lose their suckling reflex. this may also happen in calves with hypoglycaemia and septicaemia (berthold, pers. comm.). after the first few weeks of life, calves attempt to start ruminating. if they have received no solid material in their diet, calves still try to ruminate but cannot show the full rumination behaviour. in addition to the need to suck when young, calves need to manipulate material with their mouths. they try to do this whether or not they have access to solid material, and they will seek out solid material that they subsequently manipulate (van putten and elshof, ; webster, ; webster et al., ). calves eat solid food better when water is offered simultaneously. certain rapidly digestible carbohydrates are necessary for the development of ruminal papillae, with associated physiological development, and fibrous roughage helps the anatomical development of the rumen. so it is clear that calves need appropriate solid food in their diet after the first few weeks of life: first, food that is digested rapidly and provides fatty acids, then fibrous foods. rumen development is enhanced when calves are fed with concentrates, water and roughage such as hay. to explore: exploration is important as a means of preparing for the avoidance of danger and is a behaviour shown by all calves (kiley-worthington and de la plain, ; fraser and broom, ). exploration is also valuable for establishing where food sources are located. calves need to explore, and it may be that higher levels of stereotypies (dannemann et al., ) and fearfulness (webster and saville, ) in poorly lit buildings or otherwise inadequate conditions are a consequence of an inability to explore. to have social contact. maternal contact: the needs of young calves are met most effectively by the presence and actions of their mothers. in the absence of their mothers, calves associate with other calves if possible and show much social behaviour. the need to show full social interaction with other calves is evident from calf preferences and from the adverse effects on calves of social isolation (broom and leaver, ; dantzer et al., ; friend et al., ; lidfors, ). to minimise disease during the first few hours of life, the vigorous attempts of the calf to find a teat and suckle should result in obtaining colostrum from the mother. this colostrum includes immunoglobulins that provide passive protection against infectious agents. hence the needs of the calf have an evident function that is not just nutritional.
calves also show preferences to avoid grazing close to faeces. they also react to some insects of a type which may transmit disease. if infected with pathogens or parasites, calves will show sickness behaviour that tends to minimise the adverse effects of disease (broom and kirkden, ). young calves, less than four weeks of age, are not well adapted to cope with stressful events such as handling and transport, often suffering very high rates of mortality; the younger the calves are, the higher their mortality (staples and haugse, ; mormède et al., ), with many succumbing to pneumonia or scouring within four weeks of arrival at the rearing unit (staples and haugse, ). an inability to mount an effective glucocorticoid response, which is adaptive in the short term, may be a contributing factor to the high levels of morbidity and mortality which occur in young calves (knowles et al., ), as may neutrophilia (simensen et al., ; kegley et al., ), lymphopaenia (murata et al., ) and suppression of the cell-mediated immune response (kelley et al., ; mackenzie et al., ). to groom: grooming behaviour is important as a means of minimising disease and parasitism, and calves make considerable efforts to groom themselves thoroughly (fraser and broom, ). calves need to be able to groom their whole bodies effectively. to thermoregulate: calves need to maintain their body temperature within a tolerable range. they do this by means of a variety of behavioural and physiological mechanisms. selection of location: when calves are over-heated, or when they detect that they are likely to become over-heated, they move to locations that are cooler. if no such movement is possible, the calf may become disturbed, thus exacerbating the problem, and other changes in behaviour and physiology will be employed. responses to a temperature that is too low will also involve location change if possible. body position: over-heated, or potentially over-heated, calves adopt positions that maximise the surface area from which heat can be lost. such positions often involve stretching out the legs laterally if lying and avoiding contact with other calves and with insulating materials. if too cold, calves fold the legs and lie in a posture that minimises surface area. water drinking: over-heated calves will attempt to drink in order to increase the efficiency of methods of cooling themselves. to avoid harmful chemical agents: calves need to avoid ingesting toxic substances and to react appropriately if harmful chemical agents are detected within their bodies. to avoid pain: calves need to avoid any environmental impact or pathological condition that causes pain. the text in this section refers to the current situation in eu countries; calf housing in other countries may be different. replacement dairy calves. diet: the diet of replacement heifer calves has not really changed since the report. following birth, calves receive (or should receive) colostrum and are then reared on whole milk or milk replacer. calves are weaned; weaning ages and weaning strategies may differ according to region or country. calves receive starter and, for example, hay and maize silage to promote rumen development. according to the latest eu regulation on the housing of calves (council directive eu / /ec), group housing is compulsory for calves older than weeks, unless there is a need for isolation certified by a veterinarian; a small sketch of this rule follows.
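the directive's group-housing rule can be written as a one-line check. in this sketch the age threshold is a parameter with a placeholder default, since the number of weeks is elided in the text above, and the function name is invented for illustration.

```python
# sketch of the group-housing rule described above: group housing is compulsory
# above an age threshold unless isolation is certified by a veterinarian.
# threshold_weeks is a placeholder default, as the figure is elided in the text.

def group_housing_required(age_weeks: float,
                           vet_certified_isolation: bool,
                           threshold_weeks: float = 8.0) -> bool:
    """True if the calf must be group-housed under the rule sketched above."""
    return age_weeks > threshold_weeks and not vet_certified_isolation

print(group_housing_required(10, vet_certified_isolation=False))  # True
print(group_housing_required(10, vet_certified_isolation=True))   # False
print(group_housing_required(4,  vet_certified_isolation=False))  # False
```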
individual housing of rearing calves younger than weeks is quite common in the european dairy industry. below, the most important housing systems for replacement heifer calves are briefly listed.
. . . . hutches: partially closed, outside area
hutches are made of plywood, plastic or fibre glass. if hutches are made from a synthetic opaque material, this prevents the greenhouse effect inside the hutch and reduces heat stress. if reflective (light-coloured) material is used, the sun's rays are reflected, which reduces the risk of overheating. the size of hutches may vary from . - . m in width and . - . m in length. a layer of sand, e.g. cm of gravel or crushed stone, can be placed under the calf hutch. litter may be provided, preferably as straw, as it provides the warmest surface temperature (panivivat et al., ), but wood shavings, sawdust or newspapers are also used; the layer should be thick enough to provide a comfortable and dry bed. calf hutches provide three different environments: the inside is dry and protected from the weather; outside, the calf is able to get limited exercise and sunlight; and the calf can also position itself half in and half out, getting sunlight while being protected from wind. hutches should be placed where they catch the most sunlight, avoiding hot, windy and wet locations. nevertheless, during hot summer conditions hutches should be placed in a shady area to avoid overheating. a hole in the rear wall that can be closed provides better air ventilation within the hutch in warm weather. in the hutches, the calf can be kept using wire panels, with an outdoor run, preferably of more than . m , enabling some contact with other calves. calves can also be fed outside using a milk bucket support, a dry feed recipient support and a hay rack. other hutch types locate feed and water pails inside the hutch.
individual pens are situated in a roofed building. the area should be well-ventilated so that the air is dry and fresh, but draught has to be avoided. separation from adult cows is advantageous with respect to disease prevention. pens are either made from hard material with concrete walls, or are dismountable with three solid sides (i.e. plywood) and an open front (see figure ). walls have to be perforated, according to council directive eu / /ec, in holdings with more than five calves, which allows at least limited social contact with other calves, one of the key needs of calves. the open front gets fresh air to the calf and makes feeding easier through a bucket support provided on the front. hardwood is normally used for the floor, which is covered with litter that is thick enough, dry and clean. totally slatted floors, made of wood, plastic or metal, are also in use, but require more attention to air temperature. the . - . m x . - . m pen can be raised mm above the ground, allowing for drainage and the removal of urine. dismountable individual pens should be designed in such a way that they can be taken apart and stored when they are not needed, and easily cleaned with a skid-steer loader or small bucket tractor. in cold weather, a plywood cover can be placed over the rear portion of the pen to preserve the heat produced by the calf. in hot weather, a removable panel at the rear of the shelter can be opened to provide additional air exchange.
collective hutches may house a group of between and calves. the hutches are made of synthetic materials or wood. the inside of the hutch is provided with litter, and some hay may be put in a rack.
roughage is distributed at a feeding barrier, and anti-freeze drinking devices are needed if freezing temperatures may occur. with collective hutches fastened on concrete, a good outdoor run has a non-slippery surface. manure and bedding have to be removed manually, or the collective hutch has to be moved over a few metres by means of a tractor and guide-blocks. as for the individual hutches, the location has to be chosen carefully to avoid overheating during summer and to provide protection from wind and rain entering the hutch during cold seasons, while giving as much sunlight as possible. when sufficient straw and proper ventilation are provided, these are the most suitable facilities for young replacement heifer calves. if the calves stay there for several months, it is necessary to provide a passage on non-slippery concrete; a fairly rough floor surface will prevent slipping. the concrete floor may be replaced by a slatted floor, provided that the spacing between slats is appropriate to the age of the animals. the lying area can be built in different ways and littered with different materials. in the deep litter system, the dung is removed at regular intervals, from every few weeks to twice per year.
. . . . group pens inside
another common system for group housing of replacement heifer calves is group housing inside, in straw-littered pens, usually with - calves per pen. calves may enter such group housing already after weeks of individual housing. the regulatory change with regard to calf housing, together with a general trend towards larger dairy farms, has increased the interest in group housing systems for rearing calves during the milk feeding period (hepola, ; jensen, ). in addition to systems with small groups of calves ( - animals per group) kept on straw and usually bucket-fed, calves are increasingly kept in larger groups (up to about calves) with computer-controlled automatic milk feeders. an automatic milk feeder may contain two milking dispensers, and each milking dispenser can be used for about calves. to prevent hierarchy-related and health problems within the group, calves are grouped with a limited age difference between the animals. calves receive milk replacer according to their needs or ad libitum. when calves are fed according to their needs, a radio-frequency electronic identifier can be used, with a transponder inserted in the collar, in an ear tag, injected under the skin, or inside a ruminal bolus swallowed by the animal.
the diet of the vast majority of veal calves in the european union is determined by the market demand for "white meat", i.e. meat with a low myoglobin content. the production of white veal meat comes from a tradition of fattening calves on a milk-based diet, which is naturally poor in iron, and slaughtering the animals when they are young. nowadays most veal calves are fed milk replacers that contain a variable proportion of milk powder and whose iron content is maintained at a low level. this results in relatively low blood haemoglobin levels. an average blood haemoglobin level at slaughter between . and . mmol/l is compatible with an acceptable meat colour. as haemoglobin levels increase, the number of animals whose meat is darker in colour increases.
in order to prevent calves from having haemoglobin levels that are too low early in the production phase, the iron supply in the milk replacer fed during the first - weeks of the fattening period (starter) is usually about ppm, whereas the iron supply in the milk replacer fed during the remainder of the fattening period (fattener) is ppm. moreover, blood haemoglobin levels are generally monitored, most intensively upon arrival at the fattening unit, and calves with levels below age-dependent thresholds are treated with iron, either individually or group-wise. blood haemoglobin levels thus usually decline gradually across the fattening period, and the lowest average levels are expected during the last four weeks prior to slaughter. some veal calves are still fed raw milk. in the case of dairy breeds, the cows are generally milked and the milk is given to the calves in buckets. in the case of beef breeds, the calves are led twice a day to suckle their dam or another cow. according to the latest amendment to the annex of council directive / /eec (commission decision / /ec), calves should receive sufficient iron to ensure an average blood haemoglobin level of at least . mmol/l, and calves over two weeks old should be provided daily with some fibrous feed, the amount of which should increase from to a minimum of grams per day from the beginning to the end of the fattening period. the main types of solid feed given to veal calves differ somewhat between the veal producing countries in europe. in france and italy, solid feeds for veal calves usually consist of chopped straw or pelleted dry feed consisting of both fibrous (e.g. straw) and concentrate-like (e.g. cereal) materials. in the netherlands, maize silage is a popular roughage source for white veal calves, provided that the iron content is not too high (an upper limit of - ppm in dry matter is generally imposed). maize silage is usually fed in relatively high amounts, with maximum daily amounts of up to . kg ( g dry matter) per calf per day. other feeds used in the veal industry include chopped straw and rolled barley. white veal calves are fattened for approximately weeks in italy and the netherlands, and for - weeks in france. besides the production of white veal meat, several systems exist across europe that lead to the production of so-called "pink veal meat". the main differences from the more conventional production of white veal are that the calves are reared for a longer period and receive higher amounts of solid food. as a consequence, the muscles have a higher content of myoglobin, hence the darker colour of the meat. in france, the calves are most often from suckler beef breeds; they are reared with their dam and may be weaned before the end of rearing. in the netherlands, pink veal meat is generally produced from calves of dairy breeds. pink veal calves are weaned at - weeks of age. after weaning, they receive a diet of ad lib roughage (frequently maize silage) and by-products. pink veal calves are not restricted with regard to dietary iron supply and, consequently, develop normal haemoglobin levels and the associated "red" (pink) meat colour. the age at slaughter can vary from - months for calves to - months for young bred animals, with the slaughter age of individuals depending on the production rate. these products are labelled to help consumers distinguish them from white veal meat. in line with the latest eu regulation (council directive eu / /ec), individual housing of veal calves has been officially abolished in the european union.
extensive studies were initiated already in the s with the aim of developing a practically feasible husbandry system for group housing of veal calves. at present the systems involve both large and smaller groups. housing of calves is in groups of - animals, with a slight trend towards larger group sizes ( - calves per group). the floor can be bedded with straw or wood shavings but is more commonly made of wooden slats. wooden slats require less labour, and straw or wood shavings easily become dirty and wet. calves are kept in individual pens, sometimes called "baby-boxes", for a period of - weeks upon arrival at the fattening unit, to prevent overt preputial sucking thereafter and to allow closer monitoring of the health of the calves. baby-boxes are usually made of galvanised or wooden partitions placed inside the group pen. in these boxes, calves are bucket-fed individually. after - weeks, these temporary partitions are removed and the calves are free to move around in the pen. calves are fed milk replacer in a trough or in individual buckets. a crucial management procedure associated with trough feeding is the regular re-grouping of calves, to maintain homogeneous groups in terms of calf weight and particularly drinking speed throughout fattening. experimental work confirmed the feasibility of this procedure, in that calves could be repeatedly regrouped without effects on their health, growth rate or a number of physiological measures of stress. in this latter study, aggression between calves was rare, and calves seemed to habituate to repeated mixing. individual calves not thriving on milk replacer because of drinking problems are provided with floating teats or with a teat-bucket. veal calves are sometimes kept in pairs. this type of housing results in less space for movement and fewer social opportunities than in larger groups of calves, but is reported to have no disadvantages in health, weight gain or the occurrence of cross-sucking (chua et al., ). suckling veal calves are generally accommodated in small groups. as in the rearing of dairy calves, automatic feeding systems have been extended to veal production systems, particularly since increasingly sophisticated computer technology is becoming available for sensor-aided recognition of individual animals and for controlling feeding times and intake. calves are usually housed in large groups ( - calves) and receive milk replacer via an automatic feeding machine. with such feeders, calves suck to obtain their milk. the floor generally consists of wooden slats, or concrete in combination with wooden slats. some veal calves are kept on straw-bedded floors, or have access to rubber mats or concrete covered by rubber.
calf rearing and animal environmental pollution
. . . general introduction
in the report there was a short chapter on calf production and environmental pollution referring to gases (ammonia, nitrous oxide, carbon dioxide). manure resulting from calf production was seen as a fertiliser only. this chapter briefly describes, in a condensed way, how modern calf production affects the environment of the animals. modern animal production is a source of solid, liquid and gaseous emissions which, inter alia, can be harmful to the animals.
solid and liquid manure and waste water contain nitrogen and phosphorus, the most important plant nutrients, which are harmful when applied to agricultural land in excess amounts. this leads to pollution of ground water by nitrates, of surface water by phosphorus (causing eutrophication), and of soil by heavy metals such as zinc and copper, which are used as growth promoters in feed; all of these can affect the animals if returned to them. a third group of potentially hazardous effluents are drug residues, such as antibiotics, which may be present in the excreta of farm animals after medical treatment and which are passed to the environment during grazing or the spreading of animal manure, where they may conceivably contribute to the formation of antibiotic resistance in certain strains of bacteria. the same risk arises when sludge and waste water from sewage plants, containing residues of antibiotics and other drugs from human consumption, are discharged as fertiliser onto the soil and water body of agricultural land. the most important aerial pollutants from calf rearing systems are odours, some gases, dust, micro-organisms and endotoxins, collectively also addressed as bioaerosols (seedorf and hartung, ), which are emitted with the exhaust air into the environment from buildings and during manure storage, handling and disposal. aerial pollutants can give cause for concern for several reasons; for example, an animal's respiratory health may be compromised by these pollutants. in fattening units, up to % of all calves may show signs of pneumonia, pleuritis or other respiratory disease within the first three weeks of housing, when the calves come together from different herds (see chapter on temperature, ventilation and air hygiene). the travel distance of viable bacteria from animal houses via the air is presently estimated at m downwind (müller and wieser, ), which is why transmission between animal houses is possible. very little is known about the distribution characteristics of dust particles, endotoxins, fungi and their spores in the air surrounding animal houses. recent investigations showed dispersion of staphylococcus sp. (bioaerosols) up to m from a broiler barn (schulz et al., ). the contribution of calf production is presently unknown. it is estimated that calves produce about . kg fresh manure and . kg slurry per animal per day. this is a share of . % of the total amount of fresh manure produced in cattle farming (richter et al., ). manure suspected of containing pathogens such as salmonella should be stored for at least months without adding or removing material and subsequently applied to arable land where it is ploughed in, or it should be disinfected before any further use. the second area of concern is the emission of compounds such as gases, odours, dust, micro-organisms and other agents like endotoxins, which are regularly present in calf house air, where they can cause or exacerbate respiratory disorders in the animals and the work force. the quantities emitted from calf houses are summarised in seedorf et al. ( b); the amounts emitted from calf husbandry are considerable. the emissions of micro-organisms are higher than from dairy or beef barns but distinctly lower than from pig or poultry production (seedorf et al., a). the same is true for endotoxins, which are one log lower in cow barns but distinctly higher in pig and poultry houses. the dust emissions can be times higher in piggeries and times higher in broiler barns.
the ammonia concentration is usually lower than in piggeries or laying hen houses; however, this depends greatly on the housing and manure management system. in a us study, johnson et al. ( ) reported that the cow-calf, stocker and feedlot phases contribute considerable amounts of nitrous oxide and methane to the emissions from cattle production.
. comparison of systems and factors
. . feeding and housing systems, weaning strategies and quality of solid and liquid feed
. . . feeding systems
the main potential problems associated with the housing of calves in large groups with automatic feeders include: cross-sucking, i.e. non-nutritive sucking of parts of another calf's body (in particular the ears, mouth, navel, udder-base and, in the case of bull calves, the scrotum and prepuce) (plath et al., ; bokkers and koene, ; jensen, ); competition for access to the feeder (jensen, ); and health problems, in particular a high incidence of respiratory disease (maatje et al., ; plath et al., ; svensson et al., ; hepola, ; engelbrecht pedersen et al., ). a number of factors have been identified that are likely associated with some of these problems, although conflicting results have been reported. cross-sucking is linked with the sucking motivation of calves and, hence, measures to reduce the motivation of calves for non-nutritive sucking may reduce the occurrence of cross-sucking (de passillé, ). an increased milk allowance also reduced non-nutritive sucking on a teat, as well as cross-sucking, in group-housed calves in one experiment (jung and lidfors, ), but did not affect cross-sucking in another (jensen and holm, ). reducing the milk flow rate decreased non-nutritive sucking on a teat in individually housed calves (haley et al., ), but failed to influence cross-sucking in group-housed ones (jung and lidfors, ; jensen and holm, ). alternatively, it has been suggested that hunger may also control the level of non-nutritive sucking and possibly cross-sucking (jensen, ). this idea is consistent with the observations that the duration of unrewarded visits to an automatic feeding station increased during gradual weaning (jensen and holm, ), and that under practical farm conditions the frequency of cross-sucking among dairy calves around weaning increases with decreasing availability or energy density of solid feeds (keil et al., ; keil and langhans, ). in contrast to other calves, white veal calves are not weaned; they receive large amounts of milk replacer and usually obtain only restricted amounts of solid food. these additional factors may also affect and perhaps exacerbate cross-sucking in systems with an automatic feeder (jensen, ). results by veissier et al. ( ), showing that bucket-fed group-housed veal calves show less cross-sucking than those fed by an automatic feeder, again seem to implicate factors other than sucking motivation per se in the development and expression of cross-sucking. on the other hand, rearing calves in large groups with an automatic feeder allows more interactions between calves and offers calves the possibility to suck milk. competition for access to an automatic milk feeder was increased in groups of or calves in comparison with groups of or , respectively (herrmann and knierim, ; jensen, ), and under dietary conditions of relatively low milk allowance and reduced milk flow rate (jensen and holm, ). protecting calves from displacement at the feeder may also be accomplished by fitting a closed feeding stall to the station (weber and wechsler, ).
in comparison with the usual setup, this modification increased the duration of visits to the feeder as well as the duration of non-nutritive sucking on the teat after milk ingestion, and significantly reduced the frequency of cross-sucking within minutes after milk ingestion. however, the incidence of cross-sucking performed without prior milk ingestion was not affected by the design of the feeder (weber and wechsler, ). in a recent comprehensive review, jensen ( ) observes that there is a lack of knowledge on the effect of different weaning methods on cross-sucking. she also concludes that future research should focus on preventive measures to reduce cross-sucking and problems with aggression in automatically fed calves, including the establishment of appropriate numbers of calves per feeder. the apparent increase in health problems of calves kept in large groups with automatic feeders might be related to group size rather than to the feeding system. a comparison of two different group sizes of calves fed by an automatic milk feeder showed that calves housed in groups of - had a higher incidence of respiratory illness and grew less than calves housed in groups of - (svensson and liberg, ). similarly, placement of preweaning heifer calves in groups of or more was associated with high calf mortality in a large-scale epidemiological survey (losinger and heinrichs, ). interestingly, in a study by kung et al. ( ), group-housed calves fed by an automatic feeding system for milk supply had fewer days of medication than those kept individually in separate calf hutches. these authors also emphasise the importance of good management and frequent observation of calves as an integral part of a successful rearing programme. likewise, howard ( ) specifically links good and correct management practices with the prevention of disease and successful group housing of dairy calves. natural weaning in cattle takes place when the young animals are around - months of age. depending on the production system, weaning can usually occur between and months of age. dairy calves are usually reared away from their dams and are given milk or milk replacer until weaning at to weeks of age. however, holstein calves can be weaned at to weeks of age (early weaning). beef calves are usually weaned at to months of age depending on the season of birth. early weaning of beef calves may be considered as a management practice in poor climatic conditions and where forage quality is poor later in the grazing season. several studies have shown that it is possible to wean calves at very young ages based on concentrate intake (svc, ). however, regardless of the production system, weaning is effective and does not cause health and welfare problems to calves when it occurs as a smooth transition from an immature to a mature ruminant, with an adequate size and development of the reticulo-rumen for efficient utilisation of dry and forage-based diets. at birth, the reticulum, rumen and omasum of the calf are undeveloped, non-functional and small in size compared to the abomasum, and the rumen remains underdeveloped during the first - months of age. calves, being ruminant animals, require a physically and functionally developed rumen to consume forages and dry feeds. however, the rumen will remain undeveloped if the dietary requirements for rumen development are not provided. solid feed intake stimulates rumen microbial proliferation and the production of microbial end products, volatile fatty acids, which initiate rumen epithelial development.
solid feeds are preferentially directed to the reticulo-rumen for digestion; however, they differ in their efficacy in stimulating rumen development. recent studies have shown that the addition of yeast culture ( %) increased calf grain intake but did not affect rumen development in young calves, while papillae length and rumen wall thickness were significantly greater in week old calves fed calf starters containing steam-flaked corn than in those fed dry-rolled or whole corn, when these corn supplements made up % of the calf starter, showing that the type of grain processing can influence rumen development in young calves. forages seem to be the primary stimulators of rumen muscularization and increased rumen volume (zitnan et al., ). large particle size, high effective fibre content, and the increased bulk of forages or high-fibre sources physically increase rumen wall stimulation, subsequently increasing rumen motility, muscularization and volume (coverdale et al., ). besides forages or bulky feedstuffs, other solid feeds can also be effective in influencing rumen capacity and muscularization. coarsely or moderately ground concentrate diets have been shown to increase rumen capacity and muscularization more than finely ground or pelleted concentrate diets, indicating that the extent of processing and/or concentrate particle size affects the ability of concentrates to stimulate rumen capacity and muscularization (beharka et al., ; greenwood et al., ). therefore, it seems that concentrate diets with increased particle size may be the most desirable feedstuff for overall rumen development, due to their ability to stimulate epithelial development, rumen capacity and rumen muscularization. calf weaning should be based on the amount of dry feed calves ingest per day, not on their age or weight, and calf starter should be made available five to days after birth. however, as recent research points out, attention must be paid to the type of forage and to a consistent particle size of the starter grain in order to achieve proper rumen development. a calf consuming . kg of dry feed or more on three consecutive days is ready for weaning. when calves are fed low levels of milk to encourage early consumption of dry food, weaning can be done abruptly. in contrast, if milk is given in large amounts, weaning may require two to three weeks of slow transition to avoid a setback in growth. early weaning systems should not be used if the animals are in a negative energy balance.
. . . quality of solid and liquid feed
. . . . solid feed: concentrates and roughage
traditionally, veal calves were fattened on a diet consisting exclusively of milk replacer. calves fed in this manner show a number of welfare problems (reviewed in the previous report), including abnormal behaviours and disease associated with a lack of rumen development. to better safeguard the welfare of calves, the provision of (some) solid feed to veal calves has become compulsory according to the latest amendment to the annex of council directive / /eec (commission decision / /ec). however, the provision of roughage to veal calves fed a regular milk replacer diet has clearly been demonstrated to increase the incidence of abomasal ulcers, in particular in the pyloric part (which connects to the duodenum) (wensink et al., ; welchman and baust, ; breukink et al., ).
thus, recent studies have largely focussed concurrently on the effects of the provision of roughage on calf behaviour, abomasal lesions and rumen development, in an attempt to identify feeds that may benefit veal calf welfare without compromising abomasal integrity. in a comprehensive eu-funded project, a range of different types of roughage/solid feed (straw, maize silage, maize cob silage, rolled barley and beet pulp), in different amounts ( versus g dry matter) and of different particle sizes and physical characteristics (i.e. chopped versus ground, dried versus fresh, un-pelleted versus pelleted), were given to veal calves in addition to milk replacer in large-scale multifactorial trials (chain management of veal calf welfare, ; cozzi et al., ; mattiello et al., ). control treatments consisted of milk replacer only, and milk replacer with ad lib access to hay. another control group consisted of bull calves reared in a similar way to normal dairy calves, i.e. the animals received ad libitum hay and concentrates and were weaned at weeks of age. in comparison with milk replacer only, those types of roughage that were richest in fibrous material, i.e. straw (regardless of amount and physical structure) and hay, significantly reduced the level of abnormal oral behaviours (composed of tongue rolling, tongue playing and compulsive biting/sucking of substrates), and concomitantly increased the level of rumination. weaned calves exhibited no abnormal oral behaviours. higher levels of rumination in veal calves as a function of the fibre content of the solid feed were also reported by morisse et al. ( , ). in line with these findings, veissier et al. ( ) observed reduced levels of biting at substrates and more chewing behaviour in veal calves provided with straw compared with un-supplemented controls. previously, it had been suggested that a sucking deficit causes abnormal oral behaviours in calves (sambraus, ). more recent data, however, clearly identify the lack of appropriate roughage as a major determinant of abnormal oral behaviours in veal calves. correspondingly, bokkers and koene ( ) found no differences in abnormal oral behaviours between group-housed veal calves fed either by bucket or by an automatic feeder. results obtained in veal calves are also fully consistent with data in cows (redbo et al., ; redbo and nordblad, ) and other ruminants such as giraffes (baxter and plowman, ), which all link increased levels of abnormal oral behaviours with feeds poor in fibre. in agreement with previous data, most roughages provided to milk-fed veal calves significantly increased the incidence of abomasal lesions, particularly ulcers in the pyloric region, in comparison with the feeding condition without additional roughage (chain management of veal calf welfare, ). incidences of abomasal ulcers (expressed as the percentage of calves with one or more lesions) among weaned bull calves, calves fed milk only, and veal calves given supplemental roughage were , between - , and between - %, respectively. this suggests that the interaction between roughage and a milk replacer diet, rather than roughage per se, is involved in the aetiology of abomasal ulcers in veal calves. these findings support the hypothesis that pyloric ulcers in milk-fed veal calves may be caused by local ischaemia followed by focal necrosis as a consequence of strong contractions of the pyloric wall when large volumes of milk are consumed.
provision of roughage, in turn, would then exacerbate an existing problem, in that roughage particles exert a mechanically abrasive effect on a sensitive abomasal mucosa and delay the healing of any lesions already present (unshelm et al., ; dämmrich, ; krauser, ; welchman and baust, ; breukink et al., ). this explanation may also fit the observations that veal calves fed either hay or a combination of concentrates and straw exhibited similar incidences of abomasal ulcers to those fed milk replacer only (chain management of veal calf welfare, ; veissier et al., ). these roughages represent more balanced feeds, accompanied by better rumen fermentation. this may have improved the ruminal digestion of fibres, thereby preventing sharp undigested particles from entering the abomasum. other factors proposed or examined in relation to the pathogenesis of abomasal ulcers in calves include stress, infection with bacteria, trace mineral deficiencies, and prolonged periods of severe abomasal acidity (lourens et al., ; mills et al., ; jelinski et al., ; de groote et al., ; palmer et al., ; ahmed et al., , , ). however, so far none of these factors has been convincingly related to abomasal ulcers in veal calves. calves fed milk only showed a high incidence of ruminal hairballs: in different experiments, between - % of milk-fed veal calves had hairballs (chain management of veal calf welfare, ; cozzi et al., ). feeding roughage gave a profound reduction in hairballs; depending on the type of roughage, the incidence varied between - %. similarly, morisse et al. ( ) reported a marked reduction in ruminal hairballs in calves fed pelleted straw and cereals. this reduction was thought to result from a continuous elimination of ingested hair by improved ruminal motility. however, it may well be that abnormally high levels of self-licking behaviour are reduced when roughage is provided. it is suggested that further optimising the composition of roughage, in terms of adequate rumen development and rumen function, may eventually result in feeds that promote rumination and reduce abnormal oral behaviours without damaging the digestive apparatus (chain management of veal calf welfare, ; morisse, ; mattiello et al., ).
for all newborn calves, receiving an adequate amount of high-quality colostrum is essential for their health and survival. in comparison with mature milk, colostrum contains greater concentrations of total solids, fat, protein, vitamins and minerals. most importantly, colostrum provides the calf with immunoglobulins (igg), which are vital for its early immune protection. in addition, colostrum contains a range of other non-nutrient and bioactive components, including various types of cells, peptide hormones, hormone-releasing factors, growth factors, cytokines and other bioactive peptides, oligosaccharides and steroid hormones. these factors modulate the microbial population in the gastrointestinal tract, have profound effects on the gastrointestinal tract itself (e.g. cell proliferation, migration, differentiation; protein synthesis and degradation; digestion, absorption, motility; immune system development and function), and in part exert systemic effects outside the gastrointestinal tract on metabolism and endocrine systems, vascular tone and haemostasis, activity and behaviour, and systemic growth (waterman, ; blum, ). the highest quality colostrum, or true colostrum, is obtained from the very first milking after parturition.
thus, the provision of first colostrum to newborn calves is one critical factor for successful calf rearing. the timing of the provision of colostrum is also crucial, since the ability of the calf's small intestine to absorb large proteins such as igg decreases rapidly following birth. consumption of sufficient colostrum within the first h of life is needed not only for an adequate immune status but also to produce the additional important and favourable effects on metabolic and endocrine traits, and on vitality. finally, colostrum should be provided regularly for a sufficient length of time, preferably for the first three days after birth (hadorn et al., ; waterman et al., ; rauprich et al., ). although the importance of colostrum for calf health and survival is generally recognized, actual practices in calf rearing do not always favour adequate colostrum intake in newborn calves, and may therefore pose a risk to their welfare. after the period of colostrum feeding, calves can be switched to whole milk or a high-quality milk replacer. in the case of rearing calves, both sources of liquid feed are used, although the majority of dairy calves are currently reared on a milk replacer diet. milk replacers are usually less costly than saleable whole milk, and the feeding of raw waste milk may pose several health and contamination risks, including the transfer of infectious diseases to the calf and problems with antibiotic residues or overdoses (wray et al., ; selim and cullor, ; waltz et al., ). at present, good quality milk replacers may provide performance comparable to whole milk. however, pasteurization of waste milk prior to feeding it to calves may also represent an effective and viable alternative for minimizing health risks (stabel et al., ). results from a recent clinical survey by godden et al. ( ) even suggested that dairy calves fed pasteurized waste milk have a higher growth rate and lower morbidity and mortality rates than calves fed conventional milk replacer. with the exception of production systems involving suckler cows, veal calves are generally fattened on milk replacer diets. over time, the formulations of commercially available milk replacers for veal calves (as well as those for dairy calves) have become more and more sophisticated. at the same time, economic pressures continuously prompt the industry to reduce feeding costs and to consider alternative components and raw materials. originally, the proteins in milk replacers were milk-based, and skim milk powder constituted the major protein source. subsequently, milk replacers based on whey powder became available. over approximately the last two decades, attempts have been made to replace animal-based proteins in milk replacers with vegetable proteins, mainly from soybean and wheat and, to a lesser extent, pea and potato. initially, some of these attempts met with little success because of health problems in the calves. for example, compared to calves fed diets based on skim milk powder, calves fed milk replacers containing heated soybean flour developed severe immune-mediated gut hypersensitivity reactions characterized by partial atrophy of the small intestinal villi, malabsorption, diarrhoea, and large infiltrations of the small intestine by immune cells, accompanied by the presence of high antibody titres against soy antigens in plasma and intestinal mucous secretions (lalles et al., a, b, , ; dreau et al., ; dreau and lalles, ).
however, the nutritional utilization of vegetable proteins can be improved by a variety of technological treatments including, for example, heating, protein hydrolysis and ethanol extraction. such treatments reduce anti-nutritional factors and antigenic activity, and increase protein digestibility by denaturing three-dimensional structures (lalles et al., c, d). at present a number of processed plant proteins are successfully applied in combination with milk-based protein sources in milk replacers for (veal) calves, including hydrolysed soy protein isolate and hydrolysed wheat gluten. recent research in the area of plant proteins in milk replacer formulas is focussed on understanding the mechanisms underlying the flow of proteins in duodenal digesta, and the interaction of dietary peptides with the gut, in particular at the level of the mucus layer (montagne et al., , , ). results of this type of work may further enhance the use of plant proteins in milk replacers for calves. in addition to an enhanced risk of gut problems, low-quality milk replacers may also cause dysfunction of the oesophageal groove reflex, which may result in ruminal acidosis. in this respect, temperature is also an important quality feature; too cold a drinking temperature of the milk replacer attenuates the oesophageal groove reflex (gentile, ). if vegetable proteins are not properly treated, milk replacers may cause hypersensitivity reactions in the gut, which may compromise calf welfare.
a low dietary iron supply is a prerequisite for the production of white veal. the blood haemoglobin level in veal calves towards the end of fattening (between . and . mmol/l) is generally considered a threshold below which iron deficiency anaemia occurs (bremner, ; sprietsma, a, b; postema, ; lindt and blum, a), although some authors have argued that this level is already below a critical value (welchman et al., ). when calves were forced to walk on a treadmill, those with a mean haemoglobin level of . mmol/l consumed more oxygen and exhibited higher cortisol levels after walking than calves whose haemoglobin level was . , . or . mmol/l (piguet et al., ). on the other hand, blood lactate after transport was not significantly different between groups of calves with average haemoglobin levels of . and . mmol/l, respectively (lindt and blum, b). there is a large body of evidence showing that iron deficiency anaemia may compromise immunocompetence, in particular cellular immune function, in a range of species including laboratory rodents and humans (dallman, ; dhur et al., ; galan et al., ; latunde-dada and young, ; ahluwalia et al., ). in human children, iron-deficiency states have been epidemiologically associated with increased morbidity due to respiratory infection and diarrhoea (keusch, ; de silva et al., ; levy et al., ). this justifies the question of whether dietary iron supply and the associated haemoglobin levels are sufficient to guarantee adequate health in white veal calves. previous results concerning the relationship between clinical health and anaemia in veal calves are scarce and inconclusive. using very small numbers of calves, möllerberg and moreno-lopez ( ) found no difference between iron-anaemic and normal calves in the clinical response to infection with an attenuated parainfluenza- virus strain, whereas sárközy et al. ( ) reported a depressed immune response, as reflected in significantly lower antibody levels in anaemic calves compared with controls, following inoculation with a live adenovirus. in a study by gygax et al. ( ), cellular immune function was depressed, and disease incidence, especially of respiratory infections, was increased in calves fed low amounts of iron. however, in this particular study, haemoglobin levels dropped considerably below the value of . mmol/l. a more recent study (van reenen et al., ) therefore aimed to examine immunocompetence in a bovine herpes virus (bhv ) infection model in white veal calves with blood haemoglobin levels maintained at all times above or just at . mmol/l. calves supplemented daily with extra iron exhibited normal haemoglobin levels across the entire experiment (on average approximately . mmol/l), whereas white veal calves had average haemoglobin levels at the time of bhv infection and at slaughter of approximately . and . mmol/l, respectively. dietary iron supply did not affect the reactions of calves to bhv infection (clinical signs, viral excretion in nasal fluid, antibody response), white blood cell and lymphocyte counts, or growth rate. by contrast, in comparison with calves with high haemoglobin levels, white veal calves exhibited a higher heart rate during milk intake, had consistently elevated levels of urinary noradrenaline, and showed enhanced plasma acth and reduced plasma cortisol responses in a number of hpa-axis reactivity tests. these latter findings concur with the increased heart rate and urinary catecholamines, and the altered responsiveness of the hpa axis, seen in iron-deficient or anaemic humans and laboratory rodents (voorhess et al., ; dillman et al., ; dallman et al., ; groeneveld et al., ; saad et al., ). these physiological changes are part of an elaborate adaptive response to iron deficiency (beard, ; rosenzweig and volpe, ), which also involves alterations in glucose metabolism (blum and hammon, ). veal calves with blood haemoglobin levels clearly below . mmol/l demonstrated reduced growth rates as well as a large depression in white blood cell and lymphocyte counts (reece and hotchkiss, ; gygax et al., ). thus, it is suggested that maintaining blood haemoglobin in individual veal calves above . mmol/l induces a number of physiological adaptations which seem universal for iron-deficient mammals in general, but does not harmfully compromise biological capacities in terms of growth and immunocompetence. in actual practice, however, the haemoglobin threshold of . mmol/l is currently considered at the group rather than at the individual level. for example, an average haemoglobin level of . mmol/l in a group of finished veal calves is assumed to be exactly at the lower threshold value. however, depending on the variation between individuals, if a group of calves has an average haemoglobin level of . mmol/l, then some individuals within that group may have levels well below this lower threshold value. in fact, based on an analysis of the variation between calves in blood haemoglobin levels, it has been argued that the haemoglobin threshold for anaemia of a group of veal calves should be higher than that of an individual calf, i.e. an average level of . rather than mmol/l (van hellemond and sprietsma, a); a simple numerical sketch of this argument is given below. in order to prevent anaemia during fattening, blood haemoglobin levels are monitored to some extent in white veal calves, and animals are treated with supplemental iron according to age-dependent haemoglobin thresholds.
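to make the group-versus-individual distinction concrete, the following python sketch estimates the share of calves expected to fall below an individual haemoglobin threshold when only the group mean is controlled, assuming (purely for illustration) that between-calf haemoglobin values are roughly normally distributed; the threshold and standard deviation used are hypothetical placeholders, not values taken from the studies cited above.

from statistics import NormalDist

def fraction_below(mean_hb, sd_hb, threshold):
    # share of calves expected below the individual threshold,
    # under a normal model of between-calf variation (an assumption)
    return NormalDist(mean_hb, sd_hb).cdf(threshold)

def required_group_mean(threshold, sd_hb, max_fraction):
    # group mean needed so that at most max_fraction of calves
    # fall below the individual threshold, under the same model
    z = NormalDist().inv_cdf(max_fraction)  # negative for small fractions
    return threshold - z * sd_hb

# hypothetical example: individual threshold 4.5 mmol/l, between-calf sd 0.5 mmol/l
print(fraction_below(4.5, 0.5, 4.5))        # 0.5: a group mean sitting at the threshold leaves half the calves below it
print(required_group_mean(4.5, 0.5, 0.05))  # ~5.3: group mean needed so only ~5% fall below

the point of the sketch is qualitative: a group mean exactly at the individual threshold implies that roughly half the group is anaemic by the individual criterion, which is the essence of the argument for a higher group-level threshold.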
however, systematic monitoring generally occurs only on two occasions: within the first - weeks upon arrival at the fattening unit, in all animals, and between - weeks of fattening, in a sample of calves. outside these occasions, individual calves may receive iron supplementation in the presence of clinical signs of iron deficiency, but once clinical signs are apparent, haemoglobin levels are usually well below . mmol/l (blaxter et al., ; bremner et al., ). since blood haemoglobin levels are not routinely monitored in veal calves beyond the th week of fattening, there is a likelihood of too low haemoglobin levels occurring in some of the animals, in particular towards the end of fattening, when low haemoglobin levels are most likely to occur.
. . general housing
calves kept indoors are housed in an environment where several important factors interact, such as space, pen design, social contacts, flooring and bedding material, as well as climate. in experimental studies, usually one or a few of these factors are varied and the others controlled for. in larger epidemiological studies, however, many of these factors vary and their interactions can be measured. in a study of heifer calves in swedish dairy herds, the effects of draught, cleanliness of the animals, hygiene level of the farm, placing of the calf pens, nature of the pen walls, air volume per animal, and management factors such as the status of the caretaker and feeding routines were evaluated by means of a two-level variance-component logistic model. the placing of calf pens along an outer wall was significantly associated with the risk of diarrhoea (odds ratio (or): . , p< . ), the risk of respiratory disease was significantly associated with an ammonia concentration below ppm (or: . , p< . ), and the or for moderately to severely increased respiratory sounds was significantly associated with draught (or: . , p< . ) (lundborg et al., ). odds ratios for respiratory disease were also increased in calves housed in large-group pens with an automatic milk-feeding system (or: . ). the report highlights that the housing systems of calves and the available space affect their development and determine which behaviours the animals are able to perform. the report (svc, ) recommends minimum spaces for both single crates and group pens, and it points out how lack of space can affect the health and welfare of reared calves (maatje and verhoeff, ; dantzer et al., ; friend et al., ). the report also suggests that the shape of the pen can be important to the animal. recent studies confirmed that the space available can affect behavioural and physiological traits as well as the productive performance of cattle. however, the majority of them compare the behaviour, production or other indicators of calves reared in individual crates versus group pens (veissier et al., ; andrighetto et al., ; jensen, ; verga et al., ; cozzi et al., ; bokkers and koene, ) or tethered versus single pens (terosky et al., ; wilson et al., ), which were already discussed in chapter . little research has been done to directly compare behavioural and physiological indicators of welfare in calves reared in pens of various space allowances. in dairy calves it has been shown that the spatial environment stimulates play: calves in small group pens performed less locomotory play than those kept in larger pens (jensen et al., ; jensen and kyhn, ). it has been reported in a preliminary study that dairy calves kept, from birth to month of life, in larger stalls ( . m x . m) showed a higher percentage of lying behaviour and grooming than calves kept in smaller stalls ( . m x . m); in addition, lymphocyte proliferation was significantly higher in calves reared in the large stalls (ferrante et al., ). it is known that cattle prefer to use the perimeter of pens rather than the central area (stricklin et al., ; hinch et al., ; fraser and broom, ). the ratio between the number of corners in the pen and the number of animals seems to influence the individual space, the space that calves try to keep from other calves, as shown by simulation models (stricklin et al., ). therefore, pen shapes maximising the perimeter-to-area ratio might be preferable for cattle (jóhannesson and sørensen, ). for this reason it has been pointed out that measurements such as the pen perimeter, the number of corners and the diagonal distance of the pen could be important for dairy cattle (jóhannesson and sørensen, ). however, there is a lack of knowledge on this topic in calves. in a study on veal calves, most of the animals lay next to the wall, the quieter and drier part of the pen, stood more on the far side of the pen, and eliminated in the feeding area (stefanowska et al., ). calves kept in a large group ( animals) and fed using an automatic milk replacer feeder showed an elevated use of the area around the partitions of the pen and spent little time in the centre of the area (morita et al., ); this use of the pen space could lead to a pen design functionally divided into a walking and feeding area and a lying area. the report concludes that slatted floors must not be slippery; it also recommends appropriate bedding, for example straw, and that every calf should have access to a dry lying area. the report highlights that housing and management conditions can affect the posture adopted by calves when lying and resting.
. . . recent findings regarding the importance of floor and bedding materials
slatted floors have been used for many years as a convenient option for the intensive housing of beef cattle, but concerns have been expressed about their effects on animal welfare (scahaw, ). the type of surface affects not only the movements of getting up and lying down and the lying and resting behaviour of the fattening animals, but also other behavioural traits and physiological indicators of stress (scahaw, ). moreover, when cattle can choose between different floor types they prefer deep litter to slatted floors, especially for resting. many studies have been conducted to analyse floor comfort in the lying area in dairy and beef cattle (for a list of references see tuyttens, ; scahaw, ). the group pens for veal calves do not have separate lying areas, and the animals therefore spend all their time on the same surface. if the floor is too hard for lying, or too slippery, discomfort, distress and injury may result. a suitable floor is very important for calves, as adequate rest is essential for the good welfare of young growing animals; moreover, a positive correlation between the amount of rest and growth rates has been observed for growing cattle (mogensen et al., ; hanninen et al., ). adequate resting is important both for sleep and for temperature regulation. veal calves are often housed on slatted floors, commonly made of hardwood, a product that is controversial because it often comes from unsustainable forestry in tropical countries (stefanowska et al., ), or on concrete floors, due to the fact that bedding material is costly, requires more labour and can cause problems in manure handling systems.
wooden slatted floors can absorb liquid from manure, and a wet surface is not comfortable for moving and lying (verga et al., ). even if straw bedding provides better floor comfort to animals than slatted or concrete floors, suitable alternatives to reduce or eliminate the use of straw bedding are available for cattle (tuyttens, ). recent studies have investigated the effect of the texture (softness) and the thermal properties of the floor on the lying postures and resting behaviour of calves. on cool or draughty floors, calves spent less time resting on their side and rested curled up in order to conserve heat (hanninen et al., ). in contrast with adult dairy cows, which rested longer and lay down more frequently on softer floors, there was no effect of the type of floor (concrete floor or rubber mats) on the resting behaviour of dairy calves (hanninen et al., ). in another experiment, where veal calves could choose between a hardwood slatted floor surface and a synthetic rubber-coated floor surface, the calves preferred the wooden floor for lying (stefanowska et al., ). moreover, the animals rested in the drier part of the pen (stefanowska et al., ). from these studies it seems that the texture of the floor is not as important to calves as to older animals, whereas thermal comfort seems to affect lying and lying postures. panivivat et al. ( ) investigated the growth performance and health of dairy calves bedded with five different types of material (granite fines, sand, rice hulls, long wheat straw, wood shavings) for days from birth, during august to october. overall average daily gain and dry matter intake of the calves did not differ with bedding type, although during week , calves housed on rice hulls had the greatest dry matter intake and those housed on wood shavings had the lowest. during week , calves housed on granite fines and sand were treated more often for scours, and calves housed on long wheat straw received the fewest antibiotic treatments (week by bedding-material interaction). granite fines formed a harder surface than the other bedding types, and calves housed on granite fines scored the dirtiest. long wheat straw had the warmest surface temperature, and rice hulls and wood shavings were warmer than granite fines and sand. serum cortisol, alpha ( )-acid glycoprotein and immunoglobulin g concentrations, and the neutrophil:lymphocyte ratio, were not affected by bedding type. on day , coliform counts were greatest in rice hulls; after use, coliform counts were greatest in long wheat straw (week by bedding-material interaction). in summary, the growth rates of calves bedded for d on the different bedding types did not differ; however, the number of antibiotic treatments given for scours was greatest on granite fines and sand, and coliform counts in the bedding were highest in rice hulls before use and in long wheat straw after days of use.
. . degree of social contact
the report recommends that calves are cared for by their dam after birth, so that they are licked and receive colostrum, and that calves are not deprived of social contact, especially with other calves, because ) calves have a need for social contact; ) calves isolated from other calves express more abnormal activities (e.g. excessive grooming, tongue rolling), are hyper-reactive to external stimuli and their subsequent social behaviour is impaired; and ) in combination with restricted space or lack of straw, individual housing induces a chronic stress state, as assessed through enhanced responses to an acth challenge.
recent findings regarding contacts with the dam the bond between dam and calf is likely to develop very soon after birth: calves separated from their dam at h can recognise the vocalizations of their own dam one day later (marchant-forde et al., ) in their review about early separation between dairy cows and calves, flower and weary ( ) conclude that, on the one hand, behavioural reactions of cows and calves to separation increase with increased contacts but, on the other hand, health and future productivity (weight gain for the calf, milk production for the cow) are improved when the two animals have spent more time together. calves reared by their dam do not develop cross-sucking while artificially reared calves do so (margerison et al., ) . the provision of milk through a teat, a long milk meal, and the possibility to suck a dry teat can decrease non-nutritive sucking in artificially reared calves but do not abolish it (review by jensen, ; lidfors and isberg, ; veissier et al., ) . the presence of adult cows other than the dam do not help calves to get accustomed to new rearing conditions, as observed by schwartkorf-genswein et al. ( ) for calves submitted to feedlot conditions. . . . recent findings regarding contacts with other calves recent studies confirmed that calves are motivated for social contact. such a motivation was shown using operant conditioning by holm et al. ( ) ; furthermore calves that are housed individually engage in more contacts with their neighbours than calves housed in pairs (raussi et al., ) . the presence of a companion can reduce emotional responses of calves. this, for instance, is the case when group housed calves are exposed to a novel situation like a novel object (boivin et al., ) , a novel arena (jensen et al., ) , a sudden event (veissier et al., ) , or a lorry (lensink et al., ) . humans are not a good substitute for social contacts. individually housed calves interact more with their neighbours compared with pair-housed calves, even when they receive additional contacts from the stockperson (e.g. stroking, letting suck fingers, speaking softly) (raussi et al., ) . (see section on humananimal relationships). . . . comparison between individual housing vs. group housing individual housing can be stressful to calves as measured by adrenal responses to acth (raussi et al., ) . group housed calves are generally more active than individually housed calves as far as gross activity is concerned (more time spent moving or eating, less time spent idling or lying) (babu et al., ; raussi et al., ) . group housing can benefit production: xiccato et al. ( ) found that calves housed in fours put on more weight than calves tethered in individual crates. however, this seems not to be the case when the calves are not tethered in individual crates (veissier et al., ) . group housed calves are less easy to handle. human contact is thus essential for them to become accustomed to humans and to react less to handling (lensink et al., ; mogensen et al., ) . group housing can help calves acquiring social skills (boe and faerevik, ) . some experience of mixing is of particular importance: calves that have been reared for a while in a group dominate calves that have always been in individual crates (veissier et al., ) . by contrast, it is not clear whether repeated mixing would be beneficial or harmful to calves veissier et al., ) . recent research (e.g. 
recent research (e.g. svensson et al., ; svensson and liberg, ) suggests that transfer from an individual pen to group housing during the second week of life is disadvantageous for health reasons (see chapter ) and that a delay in mixing until the calf is weeks old may be preferable. additional research seems necessary to establish what mixing age would be preferable from a health and welfare perspective.

temperature, ventilation and air hygiene

the importance of the aerial environment inside a calf house for the health status of the animals was stressed in the report, and it still seems to be one of the major factors causing morbidity and mortality (svensson et al., ). bioaerosols (micro-organisms, dust), low air temperatures together with high air humidity, gases such as ammonia, draught, insufficient air space and poor ventilation form a complex environmental situation which can be detrimental, particularly for the respiratory health of young calves (lundborg et al., ; svc, ).

temperature and relative humidity

a healthy calf consuming a sufficient amount of feed has a wide zone of thermal neutrality. there is no difference in the performance of healthy, normally eating calves at temperatures ranging between ºC and ºC, provided the calf is dry and not exposed to draughts. above ºC, conditions in confined calf houses can start to become uncomfortable. moran ( ) suggests that the ideal temperature and relative humidity for calves are ºC and %, respectively. however, a large number of influencing factors can alter the situation for a calf substantially. the lower critical temperature for a calf in calm air and on full feed differs according to whether it is standing ( ºC) or lying on dry concrete ( ºC) or on dry straw ( ºC) (thickett et al., ). the younger the animal, the higher its demands on the thermal environment: by week of age, the lower critical temperature in still air is approximately ºC (webster, ). this temperature can be changed significantly by draught, a wet coat and feeding level. young calves start to shiver at ºC when they are exposed to draught, even if their coat is dry and they are fed sufficient feed; when fed at maintenance level only, shivering starts at ºC. if their coat is wet and they are exposed to draught, shivering starts at ºC on full feed and at ºC on a low feeding level (moran, ). no signs of shivering are observed at ºC when the coat is dry, there is no draught and feeding is at a normal level. cold stress in calves can be prevented by providing dry lying areas, appropriate feeding and draught-free ventilation. dry bedding such as straw significantly improves thermal comfort for the lying calf. in summer, reduced feeding (but with sufficient water supply), feeding calves in the cooler evening, a reduced animal density per pen and increased ventilation rates can help to lower heat stress. heat stress can also be reduced by constructing sheds with insulated roofs and well-ventilated walls. calf houses with a solid wall construction and a high capacity to store energy, combined with an efficient ventilation system, can also contribute to comfortable environmental temperatures for young calves kept indoors all year (din , ). the preferred environmental temperature for calves is thus not fixed; it largely depends on management and other environmental factors such as wind speed and humidity of the air.
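the adjustments described above are directional rules: draught, a wet coat and restricted feeding each raise the temperature below which a calf must shiver to keep warm. the sketch below encodes this qualitative logic in python; because the report's numeric thresholds are not reproduced in this extract, every figure in the code is a hypothetical placeholder chosen only to illustrate the direction of each adjustment, not the cited values.

```python
# Illustrative sketch only: the lower critical temperature (LCT) of a young
# calf rises with draught, a wet coat, and restricted feeding. All numeric
# values are hypothetical placeholders, NOT figures from the studies cited.

def lower_critical_temperature(base_lct_c: float,
                               draught: bool,
                               wet_coat: bool,
                               maintenance_feeding_only: bool) -> float:
    """Return an adjusted LCT (deg C) for a calf under the given conditions."""
    lct = base_lct_c
    if draught:
        lct += 5.0   # placeholder: draught strips the boundary air layer
    if wet_coat:
        lct += 4.0   # placeholder: a wet coat impairs insulation
    if maintenance_feeding_only:
        lct += 3.0   # placeholder: less feed means less metabolic heat
    return lct

# Example: a hypothetical base LCT of 0 deg C for a calf lying on dry straw.
print(lower_critical_temperature(0.0, draught=True, wet_coat=True,
                                 maintenance_feeding_only=False))
```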
the generally accepted range of relative humidity for calf barns is between and %, with an optimum around %, which is not so humid as to impede the dissipation of excess heat and not so dry as to dry out the respiratory pathways and thereby predispose the mucous membranes to infectious and noxious agents present in the inhaled air. air humidities of more than % can lead to condensation on the walls and ceiling, increasing the risk of wetting the animals by water dripping off these surfaces. high relative humidities can also impair the insulation properties of the walls, increasing heat losses. cold and humid air at high velocities can considerably increase the heat loss of animals: lundborg et al. ( ) showed that draughts greater than . m/s, measured close to the animal, significantly increased the odds ratio for moderate to severe respiratory sounds. the higher the humidity, the higher the risk of wet skin, cooling and shivering. high air humidity also increases the probability that bacteria survive in an airborne state and are transmitted between animals in the same pen and between pens (wathes, ). numerous reports, from the s onwards, describe the survival of airborne bacteria and viruses using simple regression models for the loss of microbial viability over time; however, these models offer no insight into the biophysical and biochemical mechanisms of cell and virus death. as long as we do not understand these mechanisms, the measures available to reduce air pollutants are limited to either increased ventilation or increased air space. the smaller the air space per animal, the more sophisticated the ventilation system must be. the influence of air space was demonstrated by wathes ( ), who showed that doubling the air space in a calf barn from to m³ per calf had the same effect on the concentration of airborne bacteria as a six-fold increase in ventilation rate (air exchange rate). an air space of to m³ per calf was recommended by hilliger ( ) from experience.

air quality

aerial pollutants in confined animal houses are widely recognised as detrimental to respiratory health. primary and opportunistic microbial pathogens may directly cause infectious and allergic diseases in farm animals, and chronic exposure to some types of aerial pollutants may exacerbate multifactorial environmental diseases such as enzootic bronchopneumonia. contributing factors can include inadequate environmental conditions, e.g. too low temperatures, high ammonia concentrations and poor ventilation resulting in low air quality. poor air and surface hygiene in calf buildings are nearly always associated with intensive systems of husbandry, poor standards of management and high stocking densities (wathes, ). the most common aerial pollutants in calf housing are summarised in table . . gases such as ammonia (NH3), hydrogen sulphide (H2S), carbon dioxide (CO2) and more than a hundred trace gases form an airborne mixture of bioaerosols composed of about % organic compounds, which can also contain endotoxin, antibiotic residues and further trace components. significantly high amounts of endotoxin have been found in calf house air, while bacteria and dust levels are relatively low compared with pig and poultry houses, suggesting that a high number of gram-negative bacteria are present in the air. the average concentration of ng/m³ endotoxin given in table . represents about eu (endotoxin units) according to the new nomenclature; this seems rather high in comparison to a formerly proposed occupational threshold of eu for humans at the workplace (rylander and jacobs, ).
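the trade-off between air space and air exchange described above can be made concrete with a steady-state mass balance. the sketch below is a minimal, perfectly mixed single-compartment model with hypothetical inputs. note that in such an idealised model, doubling the air space (at a fixed exchange rate) and doubling the exchange rate are equivalent, whereas the empirical finding cited above (doubling air space matching a six-fold ventilation increase) indicates that real barns depart considerably from perfect mixing.

```python
# Minimal steady-state mass-balance sketch for an airborne contaminant in a
# calf house: at equilibrium, concentration ~ emission / ventilation flow,
# where flow = air space per calf x air-exchange rate. All inputs below are
# hypothetical placeholders, not measured values.

def steady_state_concentration(emission_per_calf: float,     # e.g. cfu/h
                               air_space_m3: float,          # m3 per calf
                               air_exchanges_per_h: float) -> float:
    """Concentration (e.g. cfu/m3), assuming perfect mixing and clean inlet air."""
    ventilation_flow = air_space_m3 * air_exchanges_per_h    # m3/h per calf
    return emission_per_calf / ventilation_flow

baseline   = steady_state_concentration(1e6, air_space_m3=5.0,  air_exchanges_per_h=4.0)
more_space = steady_state_concentration(1e6, air_space_m3=10.0, air_exchanges_per_h=4.0)
more_air   = steady_state_concentration(1e6, air_space_m3=5.0,  air_exchanges_per_h=8.0)
print(baseline, more_space, more_air)  # doubling either input halves the concentration
```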
it can be assumed that high endotoxin concentrations in calf house air, together with the other bioaerosol components and the physical environment, contribute substantially to the high level of respiratory disorders in young calves up to days (assie et al., ). in general, there is little detailed knowledge of the specific composition of bioaerosols in calf keeping systems and of which factors cause respiratory disease. assie et al. ( ) found, for example, a tendency towards more respiratory disorders in non-weaned calves reared in loose-housing yards compared with tied-cow stalls. the highest incidence rates were observed between november and january, while daily meteorological conditions apparently did not influence incidence rates. one of the most detrimental gases in calf barns is ammonia, which is formed by bacterial degradation of nitrogen-containing compounds in urine and faeces. it is the most widespread naturally occurring alkaline gas in the atmosphere and a strong irritant in animals and humans. concentrations in the air of more than ppm can impair the proper functioning of the mucous membranes of the respiratory tract and predispose to infection. in a recent study, lundborg et al. ( ) found that the risk of respiratory disease was significantly associated with an ammonia concentration below ppm (or: . ; p< . ). hydrogen sulphide can be released in high amounts from stored liquid calf manure when it is stirred up before removal from the slurry pit of the barn; concentrations of about ppm are acutely fatal. the composition of the inhalable and respirable particles in animal houses reflects compounds such as dried dung and urine, skin dander and undigested feed. the majority of bacteria found in shed airspace have been identified as gram-positive organisms, with staphylococcus spp. predominating (cargill et al., ). a survey by heinrichs et al. ( ) showed the importance of good ventilation, which removes dust and other respirable particles as well as noxious gases and is essential for calf health. adequate ventilation is seen as vital to help reduce the incidence of respiratory disease. air inlets should be above calf height and the outlet at least . m above the inlet (howard, ). however, heating and ventilation in combination with an air filtration system significantly improved the environment in a calf house but did not completely eliminate pneumonia (bantle et al., ); this may have to do with other factors such as ambient temperature. in a recent study by reinhold and elmer ( ), some compromise in lung function (compared with controls) was seen in calves exposed to an ambient temperature of ºC.

light

in the past, veal calves were often kept in the dark to reduce muscle activity, but the requirements for light have increased over the last years, from to lux (bogner and grauvogel, ), to over lux (irps, ), to lux for at least h and following a daylight circadian rhythm (tierschutz-nutztierhaltungsverordnung, germany, ). there is wide agreement that calves need light for orientation in their boxes or pens and for social contact; a precise threshold has not been determined. there is a need for air movement around calves to supply fresh air and to remove excess heat, moisture and air pollutants (gases, dust, micro-organisms). good ventilation systems provide this exchange. however, high air speeds close to the animals can lead to draughts and should be avoided.
draughts occur when part of an animal is hit by an air stream with a higher velocity, and a substantially lower temperature, than the ambient air movement, causing a feeling of cold and a physiological reaction in that particular part of the body. draught can lead to poor welfare and disease when it continues and the animal cannot escape, e.g. when it is tethered. it is generally recommended that wind speeds around animals should be between . m/s in winter (least value) and about . m/s in summer. these values depend strongly on the relative humidity and temperature of the air, and on whether the skin of the animals is dry or wet, in full fleece or shorn. in confined buildings, this complex relationship between the various factors is strongly influenced by the ventilation system and the ventilation rate required for the number of animals kept in the house. it seems useful to develop a more comprehensive model of the interaction between the different air quality components and the air exchange rate, to improve our understanding of the welfare and health impacts arising from the air environment.

ventilation

ventilation plays an essential role in improving air quality in calf barns. this applies to both freely ventilated and forced-ventilated houses. calculations of ventilation rates are usually based on heat removal in summer and moisture removal in winter, and give guideline temperatures and humidities of the air which should not be exceeded (e.g. cigr, ; din , ). ventilation rates in calf barns can only be calculated satisfactorily for confined buildings. minimum ventilation rates of around m³ per kg live weight should be sufficient to keep the air quality within acceptable limits, provided the air distribution system ensures an even air exchange in all parts of the building. such guidelines (cigr) and norms (din , ) cannot guarantee healthy calves, but they can substantially help in designing confined calf houses. it seems useful to standardise the air quality and ventilation requirements for confined calf houses in europe in order to reduce respiratory disorders, suffering of the animals and economic losses.

the report highlights two aspects of human-animal relationships: the skills and motivation of caretakers to raise healthy calves, which are of particular importance for indoor calves and in large groups and are linked to the health status of calves; and the physical contacts between caretakers and calves, which improve the subsequent reactions of calves to humans. it recommends careful monitoring (by the same person throughout rearing) and careful handling to habituate calves to human contact. recent studies confirmed that stockpersons have a great impact on both the productivity (e.g. growth) and the welfare of farm animals (stress responses, fear reactions during handling) (boivin et al., ; reviewed by hemsworth and coleman, ). the effect of stockmanship is two-fold: good stockmanship leads to healthy animals and to less stressful human-animal interactions. stockman skills are associated with positive attitudes towards work and towards animals. in calf production, a better health status is observed on farms where the caretaker (also the owner) believes that calves are sensitive animals and has a positive attitude towards farming tasks in calf production (lensink et al., b). the contacts given by stockpersons to animals also depend on human attitudes.
stockmen that are positive towards gentle contacts with calves (e.g. stroking, talking) are more likely to provide calves with such contacts (lensink et al., a). it is not only the duration of contact but also its nature that plays a role. gentle contacts (e.g. stroking, talking, letting a calf suck fingers, offering food) lead to calves approaching humans and showing less fear of handling (boivin et al., ; jago et al., ; lensink et al., b), whereas rough contacts (e.g. hitting with a stick, use of nose tongs or an electric prod) lead to fear reactions in the presence of humans (rushen et al., ). the electric prod seems particularly stressful to calves (croney et al., ), and noises (metal clanging, shouts by humans) also increase stress during handling (waynert et al., ). during transport to slaughter, reduced fear responses to handling (e.g. due to regular previous experience of gentle contact) not only improve the welfare of calves but also improve meat quality (lower ph and lighter colour) (lensink et al., c; lensink et al., a). cattle are able to distinguish between familiar and unfamiliar persons (rybarczyk et al., ; taylor and davis, ); among familiar persons, they distinguish between those who have been rough with them and those who have been gentle (e.g. stroking, brushing, offering food) (de passillé et al., ). compared with individually housed calves, calves housed in groups tend not to approach humans and are more difficult to handle (lensink et al., b). the presence of the dam can lower the effectiveness of gentle contacts with animals (boivin et al., ). contact early after birth can be more effective than contact provided later; however, regular contact is necessary to maintain a low level of fear responses to humans (boivin et al., ).

the report recommended: to dehorn calves at - weeks by cauterisation, with adequate anaesthesia and analgesia (no details given); and to castrate calves at months, with adequate anaesthesia and analgesia (no details given).

dehorning

dehorning means the removal of horns, while disbudding (on young animals) corresponds to the removal of the horn buds. disbudding can be performed by cautery, by rubbing or covering the horn buds with a chemical (NaOH, KOH or collodion), or by amputation with a specifically designed sharp tool, a scoop. recent publications confirmed that disbudding and dehorning are painful to cattle (stafford and mellor, ). the existence of pain is deduced from the observation of an increase in blood cortisol for several hours after dehorning and from specific pain-related behaviour: head shaking and ear flicking (faulkner and weary, ). disbudding without anaesthesia or analgesia is painful to calves, even when young, and dehorning with a wire saw is painful to cows even if anaesthesia is carried out (taschke and folsch, ). disbudding by cautery (hot iron, electric tool) and chemical disbudding (NaOH) are less painful than disbudding with a scoop (stilwell g. et al., a; stilwell g. et al., b; sylvester et al., ). local anaesthesia ( - ml lidocaine or lignocaine % around the cornual nerve, - min before disbudding) can abolish the pain that immediately follows cautery and largely diminishes the pain caused by disbudding by other methods; the effects last for a few hours, and when the nerve block has lost its effect, pain ensues (stilwell et al., b; sutherland et al., b).
local anaesthesia plus analgesia with a non-steroidal anti-inflammatory drug (nsaid) (e.g. ml flunixin meglumine or - . mg/kg ketoprofen ( %), min before disbudding) abolishes the pain caused by cautery but only reduces it in the case of disbudding with a scoop (unless this is followed by cautery) (sylvester et al., ; faulkner and weary, ; sutherland et al., a; stilwell et al., b). in their review, stafford and mellor ( ) concluded that cautery is the least painful method of disbudding and that optimal pain relief is obtained with xylazine sedation, local anaesthesia and analgesia with a non-steroidal anti-inflammatory drug.

the report recommends that when cattle are to be castrated, this should be done at around months of age and under appropriate anaesthesia and analgesia. the methods used to castrate cattle are: clamping (generally with a burdizzo clamp), constriction of the blood supply with a rubber ring, and surgical removal (cutting of the scrotum then traction on the testes and spermatic cords, or cutting across the spermatic cords). calves are castrated from as early as week of age up to over months (see the review of practices used in the uk by kent et al., ). castration is painful whatever the method used and whatever the age of the calf (molony et al., ; robertson et al., ). acute pain is deduced from the observation of increases in blood cortisol, abnormal postures (immobility), and behaviours such as foot stamping and kicking. chronic pain is deduced from the observation of activities targeted at the site of castration (e.g. licking, head turning, alternate lifting of the hind legs, and slow movements of the tail) as well as abnormal standing. burdizzo clamping and surgery induce acute pain for at least h (molony et al., ; obritzhauser et al., ; robertson et al., ). burdizzo clamping is less painful than surgery but may also cause pain for longer (at least h) due to scrotal inflammation (stilwell, pers. comm.). castration with a rubber ring causes both acute and chronic pain for at least . mo (molony et al., ). castration is less painful for wk old calves than for - wk old calves (robertson et al., ), and it is less painful at . months than at older ages (ting et al., ). local anaesthesia ( ml lignocaine % into each testicle through the distal pole) abolishes the acute pain associated with castration by a ring or a band (stafford k.j. et al., ); it reduces but does not abolish the acute pain associated with castration by surgery or clamping (fisher et al., ; stafford et al., ). analgesia with an nsaid (e.g. ketoprofen % , mg/kg body weight) reduces the pain associated with clamping (ting et al., ). some analgesics (e.g. carprofen, . mg/kg body weight) are effective for longer than h and are thus more likely to provide effective pain relief (stilwell, pers. comm.). local anaesthesia plus analgesia appears to eliminate the acute pain due to castration by surgery or clamping.

the most important diseases in young calves are diarrhoea and respiratory disease (olsson et al., ; sivula et al., ; virtala et al., a; donovan et al., ; lundborg, ). a prospective study was carried out on randomly selected beef herds in the midi-pyrenees region of france (bendali et al., ). the objective was to describe diarrhoea and mortality in beef calves from birth to days of age. calves ( , ) were followed from december to april , and a total of visits allowed records of herd management practices, individual data and environmental conditions to be collected. the incidence rate of diarrhoea during the neonatal period was . % and varied markedly between herds: eighteen herds did not suffer from diarrhoea, while five herds had an incidence of more than %.
the results indicate that % of diarrhoea cases appear during the first week of life and only % after the second week. the greatest risk of diarrhoea for a calf was during the first and second weeks of life ( . and . times, respectively). the month of birth was also significantly associated with morbidity; the highest incidence was observed in december and march ( . % and . %, respectively). the overall mortality rate was . % and was two times higher in december than in other months. forty per cent of herds did not exhibit mortality, and % had mortality rates greater than %. in a study of calf health in cow-calf herds in switzerland, busato et al. ( ) found that, of the calves included in the study, % had been treated by a veterinarian; of those treatments, % were given because of diarrhoea and % because of respiratory disease. another swiss study (frei et al., ) showed that in swiss dairy herds the incidence densities (id) per animal-year of diarrhoea, omphalitis (infection of the navel), respiratory diseases and other diseases were . , . , . and . , respectively. in a study of nine herds and calves of the swedish red and white and swedish holstein breeds and some crossbreeds, the effect of group size on health was studied (svensson and liberg, ). after transfer to group pens (at - days of age), . % of the calves had diarrhoea, . % had omphalophlebitis/umbilical abscess, . % had clinical respiratory-tract disease and . % had weak calf syndrome; in . % of all calves there was associated impairment of the general condition. antibiotics were used as treatment in . % of the diarrhoea cases and in % of the clinical respiratory-tract cases. several factors have been associated with an increased risk of infectious disease during the first days of life, particularly factors affecting serum immunoglobulin concentration. in a study of dairy herds in south-west sweden, svensson et al. ( ) clinically monitored the health of heifer calves from birth until days of age. % of the calves developed one or more diseases during this period. most of the diarrhoea cases were mild ( %), whereas % of the cases of respiratory disease were severe. the total morbidity was . cases per calf-month at risk, and the incidence rates of arthritis, diarrhoea, omphalophlebitis, respiratory disease and ringworm were . , . , . , . and . cases per calf-month at risk, respectively. increased odds ratios were found for severe diarrhoea in calves born in the summer (or: . ) and in calves receiving colostrum through suckling instead of from a bucket or nipple (or: . ); it has been shown that calves left with their mothers take longer to ingest colostrum and often fail to ingest adequate volumes (rajala and castrén, ). svensson and liberg ( ) found that the health status of the mother cow - days before calving, the length of the dry period, retained placenta and somatic cell count were predisposing risk factors for respiratory disease in the calf. svensson et al. ( ) were also able to demonstrate that the odds ratios for respiratory disease and increased respiratory sounds were increased in calves housed in large group pens with an automatic milk-feeding system (or: . and . ); similar results have been reported by maatje et al. ( ) and plath ( ). there was a decreased odds ratio for respiratory disease if calving was supervised (or: . ). if birth took place in individual maternity pens or in tie stalls instead of in a cubicle or group maternity pen, the odds ratio for increased respiratory sounds was . or . , respectively.
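two measures recur throughout the studies cited above: incidence density (cases per calf-month, or animal-year, at risk) and the odds ratio. as a reference point, a minimal sketch of both computations follows; all counts in the example are hypothetical and are not taken from the cited studies.

```python
# Minimal sketch of the two epidemiological measures used in the studies
# above. Example numbers are hypothetical placeholders.

def incidence_density(cases: int, calf_months_at_risk: float) -> float:
    """Cases per calf-month at risk (use animal-years for yearly rates)."""
    return cases / calf_months_at_risk

def odds_ratio(exposed_cases: int, exposed_noncases: int,
               unexposed_cases: int, unexposed_noncases: int) -> float:
    """Cross-product odds ratio (a*d)/(b*c) from a 2x2 exposure/disease table."""
    return (exposed_cases * unexposed_noncases) / \
           (exposed_noncases * unexposed_cases)

# Hypothetical example: 24 diarrhoea cases over 800 calf-months at risk,
# and an exposure (e.g. summer birth) with the 2x2 counts below.
print(incidence_density(24, 800.0))   # 0.03 cases per calf-month at risk
print(odds_ratio(18, 82, 9, 91))      # ~2.2, i.e. higher odds among exposed
```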
% of the diarrhoea cases were treated with antibiotics, whereas % of the respiratory cases were treated using antibiotics. in another study of nine farms, svensson and liberg ( ) found that in pens of six to nine calves there was a significantly reduced risk of clinical respiratory-tract disease (or: . - . ) compared with pens of - calves, and there was also an association with the age at transfer to the group pen. the risk of diarrhoea was not affected by housing the calves in differently sized groups; however, calves housed in large groups grew significantly more slowly (approximately g/day) than calves housed in groups of six to nine. serological responses to respiratory viruses (e.g. bovine respiratory syncytial virus, parainfluenza virus, corona virus and viral diarrhoea virus), showing that animals within a herd are usually either all seropositive or all seronegative, indicate that infections spread to all calves in the herd when introduced or activated (hägglund et al., in press) and hence that aerosol is an important means of viral spread. however, an infected animal is not equivalent to a diseased animal, and it has been shown that calves housed in adjacent pens can maintain quite different levels of disease. svensson and liberg ( ) reported that calves in a group of had a significantly higher incidence of clinical respiratory disease than calves in an adjacent pen kept in groups of . engelbrecht ( ) reported that calves transferred to group pens batchwise had a significantly higher prevalence of diarrhoea and respiratory disease than calves in adjacent pens that were transferred to and from the group pen continuously. in both studies the calves had no direct contact with calves in adjacent pens. these results indicate an important role of direct contact in the transmission of respiratory disease, and hence the importance for disease control of decreasing direct contact between calves within the same building by means of solid walls. svensson and liberg ( ) also reported that the age at transfer from single pens to group pens was associated with the risk of respiratory disease, indicating that delaying transfer until after two weeks of age might be preferable for health reasons. enteritis is the most common disease in calves less than a month old (virtala et al., b; wells et al., ; radostits et al., ). diarrhoea is caused by dietary factors or by infections due to viruses, bacteria or parasites. routines for distributing colostrum to the calf are crucial for transferring immunoglobulins to the calf and for obtaining good health (rajala and castrén, ; liberg and carlsson, ). enteritis is clinically recognised by the observation of faeces with a looser consistency than normal; the colour as well as the smell of the faeces may be affected. diseased animals exhibit fever and may be inactive, usually as a result of dehydration and possibly acidosis (radostits et al., ). it is usually not possible to differentiate between the different agents causing the diarrhoea on clinical findings alone. rotavirus is a major cause of diarrhoea worldwide and is an often detected agent among young calves with enteritis (e.g. björkman et al., ). rotavirus usually affects calves between and weeks of age, and the diarrhoea can vary from very mild to lethal (de leeuw et al., ). the virus is excreted in the faeces of infected animals and remains viable for several months; cleaning of pens is therefore necessary to break the chain of infection (saif and theil, ).
bovine coronavirus is most commonly seen in calves at about week of age (fenner et al., ). escherichia coli k + may cause diarrhoea in young calves even though it is part of the normal intestinal flora; poor routines for transferring colostrum to the calf, stress etc. may trigger a diarrhoea outbreak (wray and thomlinson, ). the severity of the disease varies, but mortality can be high (radostits et al., ). only amoxicillin is recommended for the treatment of diarrhoea caused by e. coli associated with systemic illness; in calves with diarrhoea and no systemic illness (normal appetite for milk, no fever), the health of the calves should be monitored carefully and no antibiotics should be administered (constable, ). bovine viral diarrhoea may occur at any age. the infection causes immunosuppression in infected animals, which may lead to infections with other intestinal or respiratory pathogens (elvander et al., ; de verdier klingenberg, ), and it may increase the mortality rate in the herd (ersböll et al., ). salmonella spp., mainly s. dublin and s. typhimurium, can affect calves, usually between and weeks of age. the pathogen is introduced into the herd via infected feed, water, pastures, cattle or humans, or via other animals entering the herd. calves are infected orally, and the clinical signs are fever and bloody diarrhoea (carter and chengappa, ). clostridial infections of the gastrointestinal tract are sometimes a problem in calves. usually the calf, less than days of age, develops haemorrhagic, necrotic enteritis and enterotoxaemia, often associated with clinical abdominal pain. affected calves exhibit tympany, haemorrhagic abomasitis and ulcerations in the abomasum (songer and miskimins, ). as yet, relatively little is known about the aetiology aside from the participation of c. perfringens type a; overfeeding, or feeding that decreases gut motility, is suggested to contribute to the occurrence of the disease (songer and miskimins, ). eimeria spp. are frequently present in cattle in europe (bürger, ). e. ellipsoidalis was found to predominate in housed calves in east germany (hiepe et al., ), and the distribution may differ from country to country; svensson ( ) found a predominance of e. alabamensis in swedish dairy calves. clinical signs are rarely seen, but diarrhoea can occur, usually as a result of exposure during the first grazing season in areas contaminated with oocysts (svensson, ). there is evidence that infection rates have increased since the prohibition of tethering (berthold, pers. comm.). eimeria bovis and eimeria zuernii are other intracellular protozoan parasites belonging to the same group and with a worldwide distribution (urquhart et al., ); they are often seen in calves between - months of age (holliman, ), and the disease is triggered by stress such as a very cold or hot climate (urquhart et al., ). cryptosporidial infection in calves less than days old is significantly associated with the risk of infection in the dairy herd; the risk increases when animals are grouped together and when hygiene and management practices are deficient (attwill et al., ; mohammed et al., ). factors shown to be associated with a decreased risk of infection in pre-weaning calves were the use of ventilation in calf-rearing areas, daily addition of bedding, feeding of milk replacer, daily disposal and cleaning of bedding, and the use of antibiotics. in addition, post-weaning moving of animals was associated with a decreased risk of infection with c. parvum (mohammed et al., ).
perryman et al. ( ) showed that, with an appropriate supply of immune colostrum, diarrhoea can be prevented. two species are distinguished, c. parvum and c. andersoni, although only c. parvum has been shown to be associated with diarrhoea. cryptosporidium parvum is an intracellular protozoan parasite belonging to the coccidia. in the uk, c. parvum has been considered one of the most common agents of neonatal diarrhoea in calves (reynolds et al., ). in denmark it was found mixed with other enteropathogens in % of diseased calves (krogh and henriksen, ). in two recent swedish studies it was found in % and % respectively of calves with diarrhoea (de verdier klingenberg and svensson, ; björkman et al., ). the most common form of respiratory disease affecting young calves is enzootic pneumonia (ames, ; radostits et al., ). it is considered a multifactorial disease in which causative agents, individuals and environmental factors are important components (ames, ). enzootic pneumonia usually affects calves between and months of age (radostits et al., ). the signs usually found are fever, nasal discharge, coughing and increased respiratory sounds on lung auscultation. secondary bacterial infections may occur, which may increase the fever. diagnosis of aetiological factors may be achieved by serological examination, viral examination of nasal discharge, or at autopsy. bovine respiratory syncytial virus (brsv) is present worldwide, with seasonal peaks during autumn and winter (baker and frey, ). the virus is thought to be transmitted from infected animals, by humans, or by airborne transmission (van der pohl et al., ; elvander, ). morbidity can be high but mortality is usually low. another virus with a milder course of disease is para-influenza- virus, but it can cause immunosuppression predisposing to secondary bacterial infections (adair et al., ). the most common bacterial pathogens in calves with respiratory disease are pasteurella multocida and mannheimia haemolytica (mosier, ; bengtsson and viring, ). these agents are usually found in the bovine nasopharynx and may, as a result of viral disease, proliferate and colonise the lungs of the calf (kiorpes et al., ). haemophilus somnus was shown to be commonly present in danish calves (tegtmeier et al., ), whereas no such agents were found in swedish calves (bengtsson and viring, ). arcanobacterium pyogenes and staphylococcus aureus (carter and chengappa, ) as well as mycoplasma spp. (ames, ) are other agents found in immunosuppressed calves with other infections. infections may occur in the umbilical cord of newborn calves (radostits et al., ). various bacteria are involved, and through bacteraemia the infection may spread to joints, meninges and internal organs (radostits et al., ). in omphalitis, the umbilicus is painful on palpation. arthritis is often secondary to an umbilical infection and usually affects the calf during its first month, resulting in warm and swollen joints, fever and lameness (radostits et al., ). the effect of environmental factors on the risk of diarrhoea and respiratory disease was studied by lundborg et al. ( ) in the same farms and calves as previously reported by svensson et al. ( ). they found that placing calf pens along an outer wall was associated with a significantly higher risk of diarrhoea (or: . ).
an ammonia level below ppm was significantly associated with the risk of respiratory disease (or: . ), but variations in ammonia levels were low, while the odds ratio for increased respiratory sounds was associated with a bvdv infection in the herd (or: . ) and with draught (or: . ). absence of draught was associated with the risk of infectious diseases other than diarrhoea and respiratory disease (or: . ), a finding which could not be explained by the authors. an increased calf mortality rate in herds with a bvdv infection has also been reported by ersböll et al. ( ), and the eradication of bvdv infection in a dairy herd has been demonstrated to decrease the incidence of calf diarrhoea (de verdier klingenberg et al., ). typical clinical experience is that the incidence and prevalence of infectious respiratory disease are much higher in rearing systems where the calves have been bought and transported from several farms of birth to a specific rearing farm than when calves are reared on the farm where they were born. calves reared indoors commonly develop complex respiratory diseases. bergmann ( ) reported that % of the calves of a large fattening unit with several thousand calves suffered from bronchopneumonia within the first six months of life; similar figures were reported by herrmann ( ), with a prevalence of to %, by lämke et al. ( ), %, and by busato et al. ( ), %. the disease seems to be continuously present and does not come and go in the form of isolated outbreaks; kielstein et al. ( ) therefore called it enzootic pneumonia. it is a typical multifactorial disease caused by a variety of different types of micro-organisms which are always present but become a problem only when additional factors contribute (grunert, ). the most prominent causes of losses of calves in the first weeks of life are respiratory and digestive disorders (katikaridis, ; girnus, ). losses can reach per cent in the first six months of rearing (berchtold et al., and others). estimates indicate that the financial losses reach million €/year in germany (biewer, ); this sum does not cover the costs of veterinary treatment and the reduced growth rates of the calves. there are several epidemiological studies of the different diseases of calves in the first weeks of life (katikaridis, ; biewer, ; girnus, ; svensson et al., ). heinrichs ( ) reported that % to % of , fallen calves submitted for post-mortem dissection showed digestive disorders; calves of weeks of age died predominantly of respiratory diseases ( % to %). an investigation of , calves less than months old showed that enzootic bronchopneumonia can start as early as weeks of age (buhr, ). the calves were not older than months; % displayed abomasal enteritis, % of the animals suffered from pneumo-abomasal enteritis, and only . % of the fallen animals suffered from a distinct pneumonia. an epidemiological survey of calf losses in free-range and suckler cow herds showed that the percentage of calf losses increases with herd size: % of herds with fewer than suckler cows had a calf mortality of less than %, whereas in herds with more than cows ( % of all investigated farms) calf losses were higher than % (laiblin and metzner, ). the main disorders were again diseases of the digestive tract ( %) and the respiratory tract ( %). the authors calculated that the disease risk for calves born from cows housed during the calving season was . times higher than for cows kept on free range the whole year.
the epidemiological data from the vast majority of investigations suggest considerable differences in the morbidity and mortality of calves among farms. this implies that management and housing conditions greatly influence the health, welfare and survival of calves in the first months of their life. the situation was not substantially improved by vaccination of cows against a cocktail of infectious agents causing diarrhoea. no experimental studies are available to indicate whether any advantage to calf welfare of preventing social contact between individuals in separate pens outweighs the disadvantage to calf welfare of the greater spread of disease in housing where such social contact is possible. in general, disease spread occurs in buildings with a continuous air space, and contact is not a clearly identified factor; however, recent results indicate an important role of direct contact in the transmission of respiratory disease and hence the importance for disease control of decreasing direct contact between calves within the same building by means of solid walls (see chapter / / ).

antibiotic resistance

although the use of antibiotics as growth promoters is being progressively restricted through eu regulations, antibiotics are still used in large quantities in calf rearing for both prophylactic and therapeutic purposes. in those instances where calves are not reared on site but transported to other locations and mixed in groups, the incidence of clinical illness is high and the use of antibiotics is frequent. in a study of antibiotic resistance, berge et al. ( ) found high levels of multiple resistance in the commensal faecal escherichia coli of calves, both on farms with calf production and on dairy farms. the investigators found that e. coli from calves on dedicated calf-rearing facilities was more likely to be multiple-resistant than e. coli from dairy-reared calves (or: . ) (berge et al., a). in her phd thesis, berge ( ) showed that both the prophylactic use of antibiotics in milk replacer and individual antibiotic therapy increased the resistance of faecal e. coli in calves. e. coli isolates from calf ranches were the most resistant, followed, in order of decreasing resistance, by isolates from feedlots, dairies and beef cow-calf farms. on organic dairies, fewer resistant e. coli isolates were found than on conventional dairies. e. coli isolates from beef cow-calf farms were less resistant if the farms were in remote locations rather than close to dairy-intensive areas. the use of antibiotics to treat clinical illness will improve the welfare of the animal, provided the drug has a beneficial clinical effect. however, the frequent use of antibiotics results in increasing resistance in bacteria such as e. coli and thus poses a threat to the welfare of calves in the longer term, as well as to man (aarestrup and wegener, ). in a clinical trial on a calf ranch in california, it was shown that the most important factor for decreasing morbidity and mortality was to ensure adequate passive transfer through colostrum (berge et al., c); thereafter, the ability to use antibiotics for the clinical treatment of disease was important in decreasing morbidity and mortality. the use of antibiotics in the milk replacer had only a minor protective effect on calf health. the authors concluded that, in order to minimise the prophylactic use of antibiotics, adequate passive transfer of colostrum needs to be assured.
furthermore, measures need to be taken to optimise nutrition and to decrease environmental stress and the pathogen load on farms. in the same study, the antibiotic resistance patterns of the commensal faecal e. coli of calves receiving antibiotics in the milk replacer, antibiotics for clinical disease, or no antibiotic therapy were compared (berge et al., ). the study showed the emergence of highly multiple-resistant e. coli in the calves receiving antibiotics in the milk replacer: the commensal faecal e. coli were predominantly resistant to at least of the antibiotics tested, and the resistance covered the antibiotics available for clinical therapy. antibiotic treatment for clinical disease resulted in a transitory shift to more resistant faecal e. coli, but the effect was no longer detectable approximately days post-treatment. the effect of clinical therapy with antibiotics was similarly assessed in steers in south dakota: in a feedlot study, a single dose of injectable florfenicol given to steers resulted in transitory shifts to increased levels of multiple-resistant e. coli in the faeces. the e. coli from the treated steers were not only more resistant to chloramphenicol (the same antibiotic group as florfenicol) but were increasingly resistant to several other antibiotics in other antibiotic classes (berge et al., b). in dairy cattle it has been estimated (kelton et al., ) that between and % of all lactations include a mastitis infection. most of these cases are treated with antibiotics, and milk must be withheld from sale during the treatment and for the compulsory withdrawal period. such "waste milk" is often fed to calves, as this is the most economical alternative from the farmer's perspective. earlier studies gave varying results on how antibiotic resistance develops as a result of this practice. recently, a controlled, multiple-dose experiment by langford et al. ( ) found increasing resistance of gut bacteria to antibiotics with increasing concentrations of penicillin in milk fed to dairy calves. in a multi-farm study in california (berge et al., ) including dairies, no association was found between increasing levels of antibiotic resistance in calf faecal e. coli and the consumption of waste milk; it should, however, be noted that mastitis in these dairies is predominantly treated with intra-mammary antibiotics (cephalosporins), and injectable antibiotics are rarely used for mastitis treatment (berge, unpublished data). rearing systems for calves that increase the incidence of disease, and thus the use of antibiotics for either preventive or clinical purposes, should be avoided. further, there is a risk that the use of "waste milk" for calves will increase antibiotic resistance in the gut bacteria of calves.

food safety aspects of calf farming

foodborne hazards that can be present on calf farms include biological and chemical hazards. the main biological hazards associated with calf farming are: a) the bacterial foodborne pathogens salmonella spp., human-pathogenic vtec (hp-vtec), thermophilic campylobacter spp., and mycobacterium bovis; and b) the parasitic foodborne pathogens taenia saginata cysticercus and cryptosporidium/giardia. on-farm control of chemical foodborne hazards is outside the scope of this chapter and will not be considered. faecal shedding of foodborne pathogens can occur in calves and adult bovines without symptoms of disease, but the shedding pattern may differ between the two age categories.
in the conventionally reared animal, the intestinal tract becomes colonised from birth by combinations of bacterial species until the characteristic, complex flora of the adult animal is achieved. in the early stages of this process, infections with bacterial pathogens are common; once the indigenous flora is established, it resists colonisation by pathogens and other "foreign" strains through competitive exclusion. the gut flora of the bovine changes with ruminal development, and the faecal coliform population of the adult differs markedly, both qualitatively and quantitatively, from that of the young, particularly that of veal calves fed milk replacer (smith, ). some early studies (gronstol et al., a, b), based on experimental salmonella infection of calves, described virulence, the spread of infection and the effects of stress on carrier status. hinton et al. ( , a, b) determined the incidence of salmonella typhimurium (dt and dt ) excretion by veal calves fed milk replacer and reported that, while initially low on intake at around days of age, the incidence rose to a peak by days. the level of salmonella contamination of the environment also affects the incidence of infection in housed animals (hinton et al., a, b). provided calves are reared in separate fattening units and slaughtered on separate slaughterlines, the incidence of salmonellae in calves can be maintained at a very low level (guinée et al., ). during meat inspection, clinical salmonellosis is sometimes diagnosed in a herd of veal calves; however, the prevalences are usually of the order of magnitude of per thousand, and the strains isolated are generally restricted to salmonella typhimurium (occasionally) and, more commonly, salmonella dublin (up to %). although salmonellae may not be isolated from faeces, significant proportions of commercially slaughtered calves ( . - . %) carry contamination involving the hepatic lymph nodes, liver and mesenteric lymph nodes and, because of cross-contamination, salmonellae may ultimately also be isolated from the carcass surface (nazer and osborne, ; wray and sojka, ). studies conducted in the netherlands in the late s indicate that micro-organisms may be released from lymph nodes through transport stress and may appear in the faeces. this results in young veal calves being cross-infected in transit and at markets; however, in dutch studies, faecal samples from no more than . % of the animals were found to contain salmonellae on arrival at the fattening units (van klink and smulders, ). moreover, within weeks of arrival, faecal samples became negative again (van zijderveld et al., ). subsequent studies by the same workers (van zijderveld et al., unpublished, cited by van klink and smulders, ) indicate that faecal samples from calves which had survived clinical salmonellosis also became culture-negative, albeit only after weeks. these findings suggest that, provided stressful transport conditions are avoided and sufficient hygienic care is taken to avoid cross-infection during transport to the abattoir, the extent of introduction of salmonellae to the veal slaughterline is indeed extremely low. this is substantiated by the repeated failure to isolate salmonellae from the carcass surfaces of veal calf populations, and from their livers and offal meats (van klink and smulders, ). as with other bacterial foodborne pathogens, antimicrobial resistance in salmonella shed by calves represents an additional food safety risk.
numerous studies have shown that the use of antimicrobials in food-producing animals selects for resistance in non-typhoid salmonella spp. and that such variants have spread to humans (who, ; walker et al., ; fey et al., ). in general, antimicrobial resistance in s. typhimurium isolates from bovines in the eu was widespread in , with the highest prevalence of resistance to ampicillin, sulfonamide, tetracycline and streptomycin (efsa zoonosis report, b), but the data do not relate specifically to calves.

vtec is a group of e. coli that produces one or more verocytotoxins (vt), also known as shiga toxins (stx), but not all members of this group cause foodborne disease in humans. in the opinion on verotoxigenic e. coli (vtec) in foodstuffs (scvph, c), vtec that have been associated with human disease were referred to as human pathogenic vtec (hp-vtec). foods of bovine origin (e.g. beef, milk) have been implicated in a number of foodborne outbreaks caused by hp-vtec (borczyck et al., ; chapman et al., ; martin et al., ; pennington, ; scvph, a); these include several serotypes (e.g. o , o , o , o and o ). when adult cattle were inoculated with vtec o , they showed no outward signs of infection, and the organism was cleared from the gastrointestinal tract within two weeks (wray et al., ). the organism seems to be a constituent of the naturally occurring microflora, and longitudinal studies show that most cattle occasionally carry e. coli o in their faeces (hancock et al., ; lahti, ). however, the prevalence of hp-vtec infection in cattle, and the shedding patterns, can vary with factors including age, immunocompetence, husbandry conditions, season and geographical area. the prevalence of vtec o is usually higher in younger animals (synge, ; cray and moon, ; hancock et al., ; mechie et al., ; van donkersgoed et al., ). in calves, the prevalence of e. coli o :h can range from zero to . % prior to weaning, and often increases after weaning (bonardi et al., ; laegried et al., ). calves normally show no or few signs of infection, perhaps only some excess faecal mucus (myers et al., ; synge and hopkins, ; brown et al., ; richards et al., ; wray et al., ); the shedding rate can fall rapidly in the first two weeks after inoculation and continue intermittently for several weeks. in the first three months of life on contaminated farms, faecal prevalence can increase from around % to %, possibly due to the decline in maternal immunity (busato et al., ). fasting showed little effect on the carriage and excretion of e. coli o in calves (harmon et al., ). less information is available on the non-o hp-vtec in calves that have the potential to cause enterohaemorrhagic disease in humans, so indicators of virulence need to be established and the epidemiology of such serotypes clarified.

thermophilic campylobacter spp.

according to the biohazard scientific report on campylobacter in animals and foodstuffs (efsa, a), the most important species of campylobacter are the thermophilic species c. jejuni ssp. jejuni, c. coli and c. lari; other species known to cause human illness are c. upsaliensis, c. fetus ssp. fetus and c. jejuni ssp. doylei. the species most commonly recovered in human disease is c. jejuni. campylobacter spp. can be found throughout the intestine of cattle, but the level of the organism in the rumen is significantly lower than that found in the small intestine and in faeces (stanley et al., ).
the class of cattle also has an effect on the level of campylobacter spp. found in the faeces: faeces may contain around cfu/g in calves, around cfu/g in beef cattle and around cfu/g in adult dairy cattle. campylobacter spp. are more often found in the faeces and intestine than in the rumen, while the prevalence of campylobacter jejuni is much lower than that of campylobacter hyointestinalis (atabay and corry, ; grau, ). campylobacter jejuni has been found in calves on % of beef farms, campylobacter coli on % and campylobacter hyointestinalis on % (busato et al., ). within herds, zero to over % of the calves may be excreting campylobacter spp.; % of the calves may be positive for campylobacter spp. in herds without evidence of diarrhoeal disease (myers et al., ), and in this study % of the isolates were c. jejuni. in a study of veal calves at slaughter, c. jejuni was found in % of calf rumen samples (in low numbers; < /g) and in % of calf faecal samples (grau, ), whilst c. hyointestinalis was found in % of calf rumen samples (in numbers > /g) and in % of faecal samples. the coats of the calves were also contaminated: % were positive for c. jejuni and % for c. hyointestinalis. as with other bacterial foodborne pathogens, antimicrobial resistance in thermophilic campylobacter spp. shed by calves represents an additional food safety risk. in , although the total number of isolates was relatively small and the data are not related specifically to calves, some eu member states reported relatively high prevalences of resistance to quinolones, fluoroquinolones and tetracyclines in campylobacter spp., including c. jejuni, from bovines (efsa zoonosis report, ), which can be an emerging public health concern.

mycobacterium bovis and taenia saginata cysticercus

the risk from taenia saginata cysticercus has been assessed elsewhere (efsa, a, opinion on the risk assessment of a revised inspection of slaughter animals in areas with low prevalence of cysticercus); therefore, the parasite will not be further considered here. cryptosporidium parvum and giardia duodenalis are protozoan parasites that have caused disease in humans primarily via contaminated water or foods (e.g. salads), but also via chicken salad and milk drinks. high prevalences of cryptosporidium and giardia in veal calves (the age group - weeks) have been reported (van der giessen et al., ). however, in this study all of the cryptosporidium isolates belonged to the pathogenic cryptosporidium parvum genotype , whilst only a few of the giardia isolates showed similarities with giardia isolates from humans. other authors have also reported the presence of these protozoan parasites, cryptosporidium (de visser et al., ) and giardia (trullard, ; mcdonough et al., ), in veal calves.

risk evaluation and principles of food safety assurance at calf farm level

the prevalence and level of infection and/or contamination of calves with foodborne pathogens at calf farms, and their further spread, depend on a large number of risk factors that are inherently variable even at single-farm level. the complexity of the problem is further exacerbated by the existence of a number of different farming systems for veal calves in the eu; and even within each of the main farming categories (e.g. intensive vs extensive), a large number of "epidemiological" subcategories exist that differ with respect to one or more risk factors.
therefore, at present, both knowledge and published data are insufficient to produce a universal risk assessment enabling quantitative categorisation of different types of calf farms and/or their quantitative comparison/ranking with respect to the main foodborne pathogens. nevertheless, the roles of some main factors contributing to an increase in the prevalence and/or levels of foodborne pathogens in food animals on farms (including calves) are reasonably well understood, as are the generic principles of their control; they are indicated in condensed form in table . . it is logical that calves from farming systems in which fewer of the contributing factors exist, and where the controls are more complete and efficient, will represent a lower foodborne-pathogen risk than calves from farming systems with the opposite combination of contributing factors and controls. therefore, any future food safety risk categorisation of an individual farming system, or related between-system comparisons, would depend upon obtaining and analysing quantitative information on: a) the status/levels of contributing factors; b) the status/levels of the hazards of main concern; and c) the existence and effectiveness of their controls.

contributing factor: increased "importation" and spread of pathogens via animals ("asymptomatic excretors"); control*: animal supply only from known, epidemiologically "equivalent" sources; "all in-all out" system.
contributing factor: presence of animal diseases, with spread of zoonotic agents in calves; control*: global disease control programmes; herd health plans.
*good farming practice-good hygiene practice

risk assessment

introduction to the risk assessment approach

when the ahaw panel of efsa was confronted with the task of updating the report (svc, ), the working group members were asked to base the update on a risk assessment, and particularly to consider the possible effects on the calf and, where relevant, on food safety. it appeared entirely feasible for the working group members to follow this part of a risk analysis approach, with risks defined as those concerning the welfare of calves. the risk of concern in this report is that the welfare of the calves will be poor; this may involve an increased risk of injury, of disease, of negative feelings or of failure to cope. the time span of such poor welfare may vary from short to long, and its severity from low to high. a member experienced in risk assessment procedures was included in the working group from the start. initially, the procedure adopted for the risk assessment was identified and presented to the participants of the whole working group. when identifying the hazards, it was assumed that farm managers and animal keepers have a basic knowledge, that they have undergone training, and that particular constraints on the farm (e.g. lack of facilities) do not hamper their work. however, it is pointed out that under practical conditions hazards may interact, e.g. inadequate air flow may interact with poor air quality, and inadequate clinical health monitoring may interact with inadequate haemoglobin monitoring. the identification of hazards and the consequential risks to welfare, as well as the risk assessment approach, were agreed by the working group.

steps of the risk assessment

a. multidisciplinary approach: the expert working group was selected on the basis of expertise in animal science, ethology, veterinary medicine, risk assessment and food safety.
listing of potential hazards, hazard characterization, and exposure assessment. the first step was to describe the needs of calves (listed below). then, hazards that might compromise those needs were identified and related to each specific need. the hazards were characterized in relation to the impact they have on the animal. the exposure to a hazard might vary between different rearing systems; for this purpose, a set of rearing categories was developed, as well as scoring categories for the hazard characterisation, exposure assessment, and risk evaluation. the rearing categories include:
b.a. small groups, bucket fed (not suckling) + solid foods, weaned at a defined age in months
b.b. groups with an automatic feeding system (not suckling) + solid foods, weaned at a defined age in months
b.c. feed lots (high-density groups within outside pens)
b.d. hutches outside, bucket fed (not suckling) + solid foods, weaned at a defined age in months
and, for beef calves:
c.a. suckler calves in small groups kept inside, led twice a day to the dam for suckling, up to a defined age in months
the hazards were identified and characterized, and the probable exposure was estimated. however, to ensure that these estimates of exposure correspond with current practice in the various european calf production systems, a group of veterinarians with expertise in clinical practice in calf production, named the "consultation group", was identified. the criteria for invitation were the following: predominantly engaged in clinical practice; extensive clinical experience in calf medicine; and coverage of the various geographical areas where calf production is significant in the eu. another important criterion was that a consultant should not be affiliated with the calf production industry. a number of veterinarians accepted the invitation to assist in the exposure assessment, and their experience covered the various husbandry systems and the important veal producing countries in europe. the consultation group proposed that, for the exposure assessment, a quintile distribution (i.e., five classes of 20% increments) of exposure classes be adopted. in some instances the estimates of the working group and the consultation group on exposure did not agree, in which case the opinion of the consultation group was interpreted to represent the factual situation. in other instances the exposure could not be estimated due to lack of data, in which cases the risks were labelled "uncertain"; for hazards characterized as moderately to very serious, this uncertainty is highlighted. the table in the annex shows the agreed scoring between the working group members and the consultation group for the hazard characterisation, exposure assessment, and risk evaluation. the identified hazards include: a) inadequate colostrum intake - quantity; b) inadequate colostrum intake - quality; c) inadequate colostrum intake - duration;
iron deficiency resulting in haemoglobin levels below a set threshold (mmol/l); deficiency of other minerals (cu, se); insufficient access to water (not milk), especially during the warm season; allergenic proteins; insufficient appropriately balanced solid food; overfeeding (too rich a diet); underfeeding; too low a temperature of milk or milk replacer; exposure to excessively contaminated feed that results in pathology; no access to a natural or artificial teat; and, for housing: high humidity with too high or too low a temperature; indoor draughts; inadequate ventilation, inappropriate airflow and air distribution within the house, airspeed, and temperature; and poor air quality (ammonia, bio-aerosols, and dust).
c. assessment of whether hazards pose risks (substantiation by scientific evidence). as a consequence of the hazard characterisation and exposure assessment, the risk for poor animal welfare and health was assessed by integrating the hazard character with the exposure. the risk was assessed as "major" if the hazard was judged to have a very serious effect and the exposure was frequent or very frequent, or if the hazard was serious and the exposure was very frequent. the risk was assessed as "minor" if the hazard was very serious and the exposure was rare, if the hazard was moderately serious and the exposure was moderately frequent, or if the hazard was adverse and the exposure was very frequent (a minimal sketch of this scoring rule is given at the end of this section). the hazards of iron deficiency and insufficient floor space are considered to be very serious, the hazard of inadequate health monitoring is considered to be serious, and the hazards of exposure to inadequate haemoglobin monitoring, allergenic proteins, and too rich a diet are considered to be moderately serious. for these hazards there is not enough information on the exposure of calves, mainly due to lack of data; it is therefore recommended that further studies be made to provide evidence for an exposure assessment. regarding the hazard of castration and dehorning without anaesthetic and analgesic drugs, national legislation varies, so the risk of poor welfare in relation to castration and dehorning differs widely between countries. further, there is variation in the use of analgesia in the period after the surgery is carried out, which also affects the welfare of the calf. the ahaw panel wishes to thank the members of the working group.
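the scoring rule above can be made concrete with a small sketch. this is an illustration only, not part of the opinion: the level names in code and the gradation of the intermediate cells are our assumptions, and only the "major", "minor", and "uncertain" rules quoted above are taken from the text.

```python
# qualitative risk matrix: hazard seriousness x exposure frequency.
# only the "major"/"minor"/"uncertain" rules come from the text above;
# the remaining cells (labelled "moderate") are an assumed gradation.

HAZARD = {"adverse": 1, "moderately serious": 2, "serious": 3, "very serious": 4}
EXPOSURE = {"rare": 1, "moderately frequent": 2, "frequent": 3,
            "very frequent": 4, "unknown": None}

def risk_category(hazard: str, exposure: str) -> str:
    """combine hazard characterisation and exposure into a risk category."""
    h, e = HAZARD[hazard], EXPOSURE[exposure]
    if e is None:
        return "uncertain"  # exposure could not be estimated (lack of data)
    if (h == 4 and e >= 3) or (h == 3 and e == 4):
        return "major"
    if (h == 4 and e == 1) or (h == 2 and e == 2) or (h == 1 and e == 4):
        return "minor"
    return "moderate" if h * e >= 6 else "minor"  # assumption for unspecified cells

print(risk_category("very serious", "frequent"))       # -> major
print(risk_category("moderately serious", "unknown"))  # -> uncertain
```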
annex: hazard characterisation and exposure assessment tables.
key: cord- - pm rpzj authors: parnell, gregory s.; smith, christopher m.; moxley, frederick i. title: intelligent adversary risk analysis: a bioterrorism risk management model date: - - journal: risk anal doi: . /j. - . . .x sha: doc_id: cord_uid: pm rpzj
the tragic events of / and the concerns about the potential for a terrorist or hostile state attack with weapons of mass destruction have led to an increased emphasis on risk analysis for homeland security. uncertain hazards (natural and engineering) have been successfully analyzed using probabilistic risk analysis (pra).
unlike uncertain hazards, terrorists and hostile states are intelligent adversaries who can observe our vulnerabilities and dynamically adapt their plans and actions to achieve their objectives. this article compares uncertain hazard risk analysis with intelligent adversary risk analysis, describes the intelligent adversary risk analysis challenges, and presents a probabilistic defender–attacker–defender model to evaluate the baseline risk and the potential risk reduction provided by defender investments. the model includes defender decisions prior to an attack; attacker decisions during the attack; defender actions after an attack; and the uncertainties of attack implementation, detection, and consequences. the risk management model is demonstrated with an illustrative bioterrorism problem with notional data.
the nuclear industry has moved toward risk-based regulations, specifically using pra to analyze and demonstrate lower cost regulations without compromising safety. ( , ) research in the nuclear industry has also supported advances in human reliability analysis, external events analysis, and common cause failure analysis. ( − ) more recently, leaders of public and private organizations have requested risk analyses for problems that involve the threats posed by intelligent adversaries. for example, in 2004, the president directed the department of homeland security (dhs) to assess the risk of bioterrorism. ( ) homeland security presidential directive (hspd-10): biodefense for the 21st century states that "[b]iological weapons in the possession of hostile states or terrorists pose unique and grave threats to the safety and security of the united states and our allies" and charged the dhs with issuing biennial assessments of biological threats to "guide prioritization of our on-going investments in biodefense-related research, development, planning, and preparedness." a subsequent homeland security presidential directive (hspd-18): medical countermeasures against weapons of mass destruction directed an integrated risk assessment of all chemical, biological, radiological, and nuclear (cbrn) threats. the critical risk analysis question addressed in this article is: are the standard pra techniques for uncertain hazards adequate and appropriate for intelligent adversaries? as concluded by the nrc study on bioterrorism risk analysis, we believe that new techniques are required to provide credible insights for intelligent adversary risk analysis. we will show that treating adversary decisions as uncertain hazards is inappropriate because it can provide a different risk ranking and may underestimate the risk. in the rest of this section, we describe the difference between natural hazards and intelligent adversaries and demonstrate, with a simple example, that standard pra applied to attacker intent may underestimate the risk of an intelligent adversary attack. in the second section, we describe a canonical model for resource allocation decision making for an intelligent adversary problem, using an illustrative bioterrorism example with notional data. in the third section, we describe the illustrative analysis results obtained from the model and discuss the insights they provide for risk assessment, risk communication, and risk management. in the fourth section, we describe the benefits and limitations of the model. finally, we discuss future work and our conclusions.
we believe that risk analysis of uncertain hazards is fundamentally different from risk analysis of intelligent adversaries. ( , )
some of the key differences are summarized in table i. ( )
table i. uncertain hazards versus intelligent adversary events (modified from kunreuther ( , )):
- geographic risk: some cities may be considered riskier than others (e.g., new york city, washington), but terrorists may attack anywhere, any time.
- information sharing and asymmetry of information: new scientific knowledge on natural hazards can be shared with all the stakeholders, whereas governments sometimes keep new information on terrorism secret for national security reasons.
- ability to influence the event: to date, no one can influence the occurrence of an extreme natural event (e.g., an earthquake), but governments may be able to influence terrorism (e.g., foreign policy; international cooperation; national and homeland security measures).
- mitigation: government and insureds can invest in well-known mitigation measures against natural hazards, whereas attack methodologies and weapon types are numerous and local agencies have limited resources to protect potentially numerous targets; federal agencies may be in a better position to develop better offensive, defensive, and response strategies.
a key difference is historical data. for many uncertain events, both natural and engineered, we have not only historical data on extreme failures or crises, but we can often replicate events in a laboratory environment for further study (engineered systems) or analyze them using complex simulations. intelligent adversary attacks have a long historical background, but the aims, events, and effects we have recorded may not provide a valid estimate of the future threat because of changes in adversary intent and capability. for uncertain hazards, both the risk of occurrence and the geographical risk can be narrowed down and identified concretely; intelligent adversary targets vary with the goals of the adversary and can be vastly dissimilar between attacks. information sharing between the two types of events also differs dramatically. after hurricanes or earthquakes, engineers typically review the incident, publish results, and improve their simulations. after intelligent adversary attacks, or near misses, the situation and conduct of the attack may involve critical state vulnerabilities and protected intelligence means; in these cases, intelligence agencies may be reluctant to share complete information even with other government agencies. the ability to influence the event is also different. though we can prepare, we typically have no way of influencing whether a natural event occurs. on the other hand, governments may be able to affect the impact of terrorist attacks by a variety of offensive, defensive, and recovery measures. in addition, adversary attacks can take so many forms that one cannot realistically defend against, respond to, and recover from all types of attacks. although there have been efforts to use event tree technologies in intelligent adversary risk analysis (e.g., the btra), many believe that this approach is not credible. ( ) the threat from intelligent adversaries comes from a combination of intent and capability. we believe that pra still has an important role in intelligent adversary risk analysis for the assessment of the capabilities of adversaries, the vulnerabilities of potential targets, and the potential consequences of attacks. however, intent is not a factor in natural hazard risk analysis.
the adversary will make future decisions based on our preparations, its objectives, and information about its ability to achieve its objectives that is dynamically revealed in a scenario. bier et al. provides an example of addressing an adversary using a defender-attacker game theoretic model. ( ) nrc provides three examples of intelligent adversary models. ( ) we believe it will be more useful to assess an attacker's objectives (although this is not a trivial task) than assigning probabilities to their decisions prior to the dynamic revelation of scenario information. we believe that modeling adversary objectives will provide greater insight into the possible actions of opponents rather than exhaustively enumerating probabilities on all the possible actions they could take. furthermore, we believe the probabilities of adversary decisions (intent) should be an output of, not an input to, risk analysis models. ( ) this is a principal part of game theory as shown in aghassi et al. and jain et al. ( , ) to make our argument and our proposed alternative more explicit, we use a bioterrorism illustrative example. in response to the hspd, in october , the dhs released a report called the bioterrorism risk assessment (btra). ( ) the risk assessment model contained a -step event tree ( steps with consequences) that could lead to the deliberate exposure of civilian populations for each of the most dangerous pathogens that the center for disease control tracks (emergency.cdc.gov/bioterrorism) plus one engineered pathogen. the model was extremely detailed and contained a number of separate models that fed into the main btra model. the btra resulted in a normalized risk for each of the pathogens, and rank-ordered the pathogens from most risky to least risky. the national research council (nrc) conducted a review of the btra model and provided specific recommendations for improvement to the model. ( ) in our example, we will use four of the recommendations: model the decisions of intelligent adversaries, include risk management, simplify the model by not assigning probabilities to the branches of uncertain events, and do not normalize the risk. the intelligent adversary technique we developed builds on the deterministic defenderattacker-defender model and is solved using decision trees. ( ) because the model has been simplified to reflect the available data, the model can be developed in a commercial off-the-shelf (cots) software package, such as the one we used for modeling, dpl (www.syncopation.org). other decision analysis software may work as well. ( ) event trees have been useful for modeling uncertain hazards. ( ) however, there is a key difference in the modeling of intelligent adversary decisions that event trees do not capture. as norman c. rasmussen, the director of the reactor safety study that validated pra for use in nuclear reactor safety, states in a later article, while the basic assumption of randomness holds true for nuclear safety, it is not valid for human action. ( ) the attacker makes decisions to achieve his or her objectives. the defender makes resource allocation decisions before and after an attack to try to mitigate vulnerabilities and consequences of the attacker's actions. this dynamic sequence of decisions made by first the defender, then an attacker, then again by the defender should not be modeled solely by assessing probabilities of the attacker's decisions. 
for example, when the attacker looks at the defender's preparations for their possible bioterror attack, it will not assign probabilities to its decisions; it chooses the agent and the target based on perceived ability to acquire the agent and successfully attack the target that will give it the effects it desires to achieve its objectives. ( ) representing an attacker decision as a probability may underestimate the risk. consider the simple bioterrorism event tree given in fig. with notional data. using an event tree, for each agent (a and b) there is a probability that an adversary will attack, a probability of attack success, and an expected consequence for each outcome (at the terminal node of the tree). the probability of success a useful reference for decision analysis software is located on the orms website (http://www.lionhrtpub.com/orms/surveys/das/ das.html). involves many factors, including the probability of obtaining the agent and the probability of detection during attack preparations and execution. the consequences depend on many factors, including agent propagation, agent lethality, time to detection, and risk mitigation; in this example, the consequences range from or no consequences to , the maximum consequences (on a normalized scale of consequences). calculating expected values in fig. , we would assess expected consequences of . we would be primarily concerned about agent b because it contributes % of the expected consequences ( × . = for b out of the total of ). looking at extreme events, we would note that the worst-case consequence of has a probability of . . however, adversaries do not assign probabilities to their decisions; they make decisions to achieve their objectives, which may be to maximize the consequences they can inflict. ( ) if we use a decision tree as in fig. , we replace the initial probability node with a decision node because this is an adversary decision. we find that the intelligent adversary would select agent a, and the expected consequences are , which is a different result than with the event tree. again, if we look at the extreme events, the worstcase event ( consequences) probabilities are . for the agent a decision and . for the agent b decision. the expected consequences are greater and the primary agent of concern is now a. in this simple example, the event tree approach underestimates the expected risk and provides a different risk ranking. furthermore, the event tree example underestimates the risk of the extreme events. however, while illustrating important differences, this simple decision tree model does not sufficiently model the fundamental structure of intelligent adversary risk. this model has a large number of applications for homeland security. for example, it would be easy to see the use of this canonical model applied to a dirty bomb example laid out in rosoff and von winterfeldt ( ) or any other intelligent adversary scenario. in this article, we show a use of a bioterrorism application. we believe that the canonical risk management model must have at least six components: the initial actions of the defender to acquire defensive capabilities, the attacker's uncertain acquisition of the implements of attack (e.g., agents a, b, and c), the attacker's target selection and method of attack(s) given implement of attack acquisition, the defender's risk mitigation actions given attack detection, the uncertain consequences, and the cost of the defender actions. 
we believe that the canonical risk management model must have at least six components: the initial actions of the defender to acquire defensive capabilities; the attacker's uncertain acquisition of the implements of attack (e.g., agents a, b, and c); the attacker's target selection and method of attack given acquisition; the defender's risk mitigation actions given attack detection; the uncertain consequences; and the cost of the defender actions. from this model, one can also conduct a baseline risk analysis by looking at the status quo. in general, the defender decisions can provide offensive, defensive, or information capabilities. we are not considering offensive decisions such as preemption before an attack; instead, we consider decisions that will increase our defensive capability (e.g., buy vaccine reserves) ( ) or provide earlier or more complete information for warning of an attack (add a bio watch city). ( ) in our defender-attacker-defender decision analysis model, there are two defender decisions (buy vaccine, add a bio watch city); the attacker's agent acquisition is uncertain; the agent selection and target of attack is an attacker decision; the consequences (fatalities and economic) are uncertain; the defender decides after an attack how to mitigate the maximum possible casualties; and the costs of the defender decisions are known. the defender risk is defined as the probability of adverse consequences and is modeled using a multiobjective additive model similar to multiobjective value models. ( ) we assume that the defender minimizes the risk and the attacker maximizes the risk (a minimal sketch of this nested optimization is given below). we implemented this model as a decision tree (fig. 3) and an influence diagram (fig. 4) using dpl; the mathematical formulation of our model and the notional data are provided in the appendix.
the illustrative decision tree model (figs. 3 and 4) begins with decisions that the defender (united states) makes to deter the adversary by reducing vulnerabilities, or to be better prepared to mitigate a bioterrorism attack with agents a, b, or c. we modeled and named the agents to represent notional bioterror agents using the cdc's agent categories (table ii); for example, agent a represents a notional agent from category a. table iii provides a current listing of the agents by category. there are many decisions that we could model; however, for our simple illustrative example, we chose to model notional decisions about the bio watch program for agents a and b and the bioshield vaccine reserve for agent a. bio watch is a program that installs and monitors a series of passive sensors within a major metropolitan city. ( ) the bioshield program is a plan to purchase and store vaccines for some of the more dangerous pathogens. ( ) the defender first decides whether or not to add another city to the bio watch program. if that city is attacked, this decision could affect the warning time, which influences the response and, ultimately, the potential consequences of an attack. of course, the bio watch system does not detect every agent, so we modeled agent c as the most effective agent that the bio watch system does not sense and provide additional warning of. adding a city also incurs a cost in dollars for the united states. the second notional defender decision is the amount of vaccine to store for agent a, the notional agent that we have modeled with the largest probability of acquisition and the largest potential consequences. the defender can store a percentage of what experts think would be needed in a large-scale biological agent attack. the more vaccine the united states stores, the fewer the consequences if the adversaries use agent a and we have sufficient warning and capability to deploy the vaccine reserve; however, as more vaccine is stored, the costs for purchasing and storage increase.
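the defender-minimizes/attacker-maximizes structure can be sketched as a nested optimization. this is our illustration, not the article's dpl implementation: the option names, the placeholder risk table, and the exhaustive enumeration are assumptions, and the real model inserts chance nodes (acquisition, detection, consequences) between the decision stages.

```python
# defender-attacker-defender sketch: the defender picks the investment
# whose worst-case (attacker-optimal) expected risk is smallest.
# RISK[investment][agent] stands in for the expected risk obtained by
# rolling back the full tree; all values are placeholders.
RISK = {
    "no investment":      {"A": 0.9, "B": 0.6, "C": 0.5},
    "add bio watch city": {"A": 0.7, "B": 0.4, "C": 0.5},
    "store vaccine A":    {"A": 0.5, "B": 0.6, "C": 0.5},
}

def attacker_best(agent_risks):
    """inner problem: the attacker maximizes risk over agents."""
    return max(agent_risks, key=agent_risks.get)

def defender_best(risk_table):
    """outer problem: the defender minimizes the attacker's maximum."""
    return min(risk_table,
               key=lambda d: risk_table[d][attacker_best(risk_table[d])])

best = defender_best(RISK)
print(f"defender: {best}; attacker response: {attacker_best(RISK[best])}")
# -> defender: store vaccine A; attacker response: B (risk 0.6)
```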
for simplicity's sake, each of the defender decisions costs the same amount; therefore, at the first budget level of us$ million, the defender can choose only one decision. after the defender has made its investment decisions, which we assume are known to the attacker, the attacker makes two decisions: the type of agent and the target. we will assume that the attacker has already made the decision to attack the united states with a bioterror agent. in our model, there are three agents it can choose, although this can be increased to represent the other pathogens listed in table iii. as stated earlier, if we only looked at the attacker decision, agent a would appear to be the best choice. agents b and c are the next two most attractive agents to the attacker, respectively. again, agents a and b can be detected by biowatch whereas agent c cannot. the attacker has some probability of acquiring each agent. if the agent is not acquired, the attacker cannot attack with that agent. in addition, each agent has a lethality associated with it, represented by the agent casualty factor. finally, each agent has a different probability of being detected over time. generally, the longer it takes for the agent to be detected, the more consequences the united states will suffer. the adversary also decides what size of population to target. generally, the larger the population targeted, the more potential consequences could result. the attacker's decisions affect the maximum possible casualties from the scenario. however, regardless of the attacker's decisions, there is some probability of actually attaining a low, medium, or high percentage of the maximum possible casualties. this sets the stage for the next decision by the defender. after receiving warning of an attack, the defender decides whether or not to deploy the agent a vaccine reserve. this decision depends upon how much of the vaccine reserve the united states chose to store, whether the attacker used agent a, and the potential effectiveness of the vaccine given timely attack warning. in addition, there is a cost associated with deploying the vaccine reserve. again, for simplicity's sake, the cost for every defender decision is the same, thus forcing the defender to choose only the best option(s) for each successive us$ million increase in budget up to the maximum of us$ million.
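a minimal sketch of this defender-attacker-defender sequence is given below, assuming a toy multiplicative risk function and invented acquisition probabilities (none of these values or functional forms come from the article); the budget constraint and decision costs are omitted for brevity.

```python
# sketch of the defender-attacker-defender sequence with hypothetical data:
# defender invests, attacker best-responds, defender mitigates after warning.
from itertools import product

AGENTS = ["A", "B", "C"]
P_ACQUIRE = {"A": 0.8, "B": 0.6, "C": 0.5}   # invented acquisition probabilities
BASE_RISK = {"A": 1.0, "B": 0.8, "C": 0.6}   # invented undefended risk levels

def path_risk(agent, biowatch, vaccine_pct, deploy):
    """illustrative risk for one attack path after defenses are applied."""
    r = BASE_RISK[agent]
    if biowatch and agent in ("A", "B"):  # biowatch senses agents a and b only
        r *= 0.7                          # earlier warning lowers consequences
    if agent == "A" and deploy:
        r *= 1.0 - 0.5 * vaccine_pct      # the vaccine reserve only blunts agent a
    return r

def defender_risk(biowatch, vaccine_pct):
    """attacker observes investments and best-responds; defender then mitigates."""
    def attack_value(agent):
        mitigated = min(path_risk(agent, biowatch, vaccine_pct, d)
                        for d in (False, True))
        return P_ACQUIRE[agent] * mitigated   # failed acquisition -> no attack
    return max(attack_value(a) for a in AGENTS)

# first-stage defender decision: minimize the attacker's best response.
options = list(product([False, True], [0.0, 0.5, 1.0]))
best = min(options, key=lambda o: defender_risk(*o))
print("best investment (biowatch, vaccine fraction):", best,
      "-> residual risk:", round(defender_risk(*best), 3))
```

with these placeholder numbers the defender buys both capabilities and the attacker's best response shifts away from agent a, a small-scale version of the risk shifting discussed in the results below.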
in our model (fig ) , we have two types of consequences: casualties and economic impact. given the defender-attacker-defender decisions, the potential casualties and the economic impact are assessed. casualties are based on the agent, the population attacked, the maximum potential casualties, the warning time given to the defender, and the effectiveness of the vaccine for agent a (if agent a is the agent used and the vaccine is deployed). economic effects are modeled using a linear model with a fixed economic cost that does not depend on the number of casualties and a variable cost equal to the number of casualties multiplied by the cost per casualty. of course, the defender would like potential consequences (risk) given an attack to be low, whereas the attacker would like the potential consequences (risk) to be high. our economic consequences model was derived using a constant and an upper bound from wulf et al. ( ) the constant cost we used is $ billion, and from the upper bound, also given in wulf et al., we derived the cost per casualty. ( ) we believe this fixed cost is reasonable because when looking at the example of the anthrax letters of , experts estimate that although there were only infected and five killed, there was a us$ billion cost to the united states. ( ) in this tragic example, there was an extremely high economic impact even when the casualties were low. each u.s. defender decision incurs a budget cost. the amount of money available to homeland security programs is limited by a budget determined by the president and congress. the model tracks the costs incurred and only allows spending within the budget (see the appendix). we considered notional budget levels of us$ million, us$ million, us$ million, and us$ million. normally, a decision tree is solved by maximizing or minimizing the expected attribute at the terminal branches of the tree. in our model, however, the defender risk depends on the casualty and economic consequences given an attack. we use multiple objective decision analysis with an additive value (risk) model to assign risk to the defender consequences. the defender is minimizing risk and the attacker is maximizing risk. we assign a value of . to no consequences and a value of . to the worst-case consequences in our model. we model each consequence with a linear risk function and a weight (see the appendix). the risk functions measure returns to scale on the consequences. of course, additional consequences could be included and different shaped risk curves could be used. some of the key assumptions in our model are listed in table iv (the details are in the appendix) along with some possible alternative assumptions. given different assumptions, the model may produce different results. we model the uncertainty of the attacker's capability to acquire an agent with a probability distribution, and we vary detection time by agent. clearly, other indications and warnings exist to detect possible attacks. these programs could be included in the model. we model three defender decisions: add a biowatch city for agents a and b, increase the vaccine reserve for agent a, and deploy the agent a vaccine reserve. we assume limited decision options (i.e., % storage of vaccine a, % storage, % storage), but other decisions could be modeled (e.g., other levels of storage, storing vaccines for other agents, etc.). we used one casualty model for all agents. other casualty and economic models could be used.
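the additive risk model and the linear economic model described above can be sketched as follows; the weights, bounds, and cost figures are assumptions made for illustration, not the article's values.

```python
# sketch of the additive multiobjective risk model with placeholder values.

W_CASUALTY, W_ECONOMIC = 0.6, 0.4   # objective weights (assumed; sum to 1)
MAX_CASUALTIES = 100_000            # assumed worst case on the casualty scale
FIXED_COST = 1e9                    # fixed economic impact of any attack
COST_PER_CASUALTY = 1e6             # slope of the linear economic model
MAX_ECONOMIC = FIXED_COST + COST_PER_CASUALTY * MAX_CASUALTIES

def linear_value(x, worst):
    """linear single-dimension risk function: 0 at no consequence, 1 at worst."""
    return min(x / worst, 1.0)

def total_risk(casualties):
    # economic impact = fixed cost + variable cost (casualties x cost each)
    economic = FIXED_COST + COST_PER_CASUALTY * casualties
    return (W_CASUALTY * linear_value(casualties, MAX_CASUALTIES)
            + W_ECONOMIC * linear_value(economic, MAX_ECONOMIC))

print(total_risk(0))        # > 0: the fixed economic cost alone creates risk
print(total_risk(100_000))  # 1.0: worst case in both dimensions
```

even with zero casualties, the fixed economic cost keeps the computed risk above zero, echoing the anthrax-letters observation that economic impact can be high when casualties are low.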
finally, our model makes some assumptions about objectives. first, we assume that the risks important to the defender are the number of casualties and the economic impact, but additional measures could be used. second, we assume defenders and attackers have a diametrically opposed view of all of the objectives. clearly, we could model additional objectives. in addition, we made some budget assumptions, which could be improved or modified. we assumed a fixed budget, but this budget could be modeled with more detailed cost models (e.g., instead of a set amount to add a city to the biowatch program, the budget could reflect different amounts depending upon the city and the robustness of the sensors installed). finally, our model results in a risk of a terrorist attack; the same defender-attacker-defender decision tree methodology can be used to determine a utility score instead of a risk; an example of this is in keeney. ( ) one thing to consider when altering or adding to the assumptions is the number of strategies the model evaluates. currently, the canonical model has different strategies to evaluate (table v) . with more complexity, the number of strategies that would need to be evaluated could grow significantly. large-scale decision trees can be solved with monte carlo simulation. after modeling the canonical problem, we obtained several insights. first, we found that in our model economic impact and casualties are highly correlated. higher casualties result in higher economic impact. other consequences, for example, psychological consequences, could also be correlated with casualties. second, a bioterror attack could have a large economic impact (and psychological impact), even if casualties are low. the major risk analysis results are shown in fig . risk shifting occurs in our decision analysis model. in the baseline (with no defender spending), agent a is the most effective agent for the attacker to select and, therefore, the first agent against which the defender protects as the budget is increased. as we improve our defense against agent a, at some point the attacker will choose to attack using agent b. the high-risk agent will have shifted from agent a to agent b. as the budget level continues to increase, the defender adds a city to the biowatch program and the attacker chooses to attack with agent c, which biowatch cannot detect. we use notional data in our model, but if more realistic data were used, the defender could determine the cost/benefit ratios of additional risk reduction decisions. this decision model uses cots software to quantitatively evaluate the potential risk reductions associated with different options and make cost-benefit decisions. fig provides a useful summary of the expected risk. however, it is also important to look at the complementary cumulative distribution (fig ) to better understand the likelihood of extreme outcomes. for example, the figure shows that spending us$ or us$ million gives the defender a % chance of zero risk, whereas spending us$ or us$ million gives the defender an almost % chance of having zero risk. the best risk management result would be that option deterministically or stochastically dominates (sd) option , option sd option , and option sd option . the first observation we note from fig is that options , , and stochastically dominate because option has a higher probability for each risk outcome. a second observation is that while option sd option , option does not sd option because option has a larger probability of yielding a risk level of . .
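the dominance check just described is straightforward to express in code; the two outcome distributions below are invented for illustration. one option stochastically dominates another when its complementary cumulative distribution is nowhere higher.

```python
# sketch: complementary cumulative distributions and a first-order
# stochastic dominance check for two budget options (made-up outcomes).

def ccdf(outcomes, x):
    """p(risk > x) for a discrete distribution of (risk, probability) pairs."""
    return sum(p for risk, p in outcomes if risk > x)

option_1 = [(0.0, 0.30), (0.4, 0.50), (0.9, 0.20)]   # (risk level, probability)
option_2 = [(0.0, 0.55), (0.4, 0.30), (0.9, 0.15)]

grid = [0.0, 0.4, 0.9]
# option 2 stochastically dominates option 1 if its ccdf is never higher
# (lower risk is better for the defender):
dominates = all(ccdf(option_2, x) <= ccdf(option_1, x) for x in grid)
print("option 2 sd option 1:", dominates)
for x in grid:
    print(f"p(risk > {x}): opt1={ccdf(option_1, x):.2f} opt2={ccdf(option_2, x):.2f}")
```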
along the x-axis, one can see the expected risk (er) of each alternative. this expected risk corresponds to the expected value of risk from the budget versus risk rainbow diagram in fig . this example illustrates a possibly important relationship necessary for understanding and communicating how the budget might affect the defender's risk and choice of options. risk managers can run a value of control or value of correlation diagram to see which nodes most directly affect the outcomes and which are correlated (fig ) . because we only have two uncertainty nodes in our canonical model, the results are not surprising. the graphs show that the ability to acquire the agent is positively correlated with the defender risk. as the probability of acquiring the agent increases, so does defender risk. in addition, the value of control shows the amount of risk that could be reduced given perfect control over each probabilistic node, and it is clear that acquiring the agent would be the most important variable for risk managers to control. admittedly, this is a basic example, but with a more complex model, analysts could determine which nodes are positively or negatively correlated with risk and which uncertainties are most important. using cots software also allows us to easily perform sensitivity analysis on key model assumptions. from the value of correlation and control above, the probability of acquiring the agent was highly and positively correlated with defender risk and had the greatest potential for reducing defender risk. we can generate sensitivity analyses such as rainbow diagrams. the rainbow diagram (fig ) shows how the decisions change as our assumption about the probability of acquiring agent a increases. the different shaded regions represent different decisions, for both the attacker and the defender. this rainbow diagram was produced using a budget level of us$ million, so in the original model, the defender would choose not to add a city to biowatch and to store % of the vaccine for agent a, but not to deploy it because the attacker chose to use agent b. if the probability of acquiring agent a were low enough (section a in fig ) , we see that the attacker would choose to use agent c because we have spent our money on adding another city to biowatch, which affects both agents a and b but not agent c. as the probability of acquiring agent a increases, both the attacker's and the defender's optimal strategies change. our risk management decision depends on the probability that the adversary acquires agent a.
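a rainbow-diagram style sweep can be approximated by re-solving the attacker's best response while varying the acquisition probability; the post-defense payoffs below are invented solely to reproduce the qualitative regime shift described above.

```python
# sketch of the rainbow-diagram sensitivity analysis: sweep the probability
# that the attacker acquires agent a and watch the best response change regime.

def attacker_choice(p_acquire_a):
    # post-defense expected consequences for each agent (placeholder numbers):
    payoff = {
        "A": 0.6 * p_acquire_a,  # strong agent, but vaccine + biowatch blunt it
        "B": 0.25,               # biowatch detects b, so its payoff stays low
        "C": 0.30,               # weaker agent, but biowatch cannot sense it
    }
    return max(payoff, key=payoff.get)

for k in range(11):
    p = 0.1 * k
    print(f"p(acquire a) = {p:.1f} -> attacker picks agent {attacker_choice(p)}")
```

at a low acquisition probability the undetectable agent c is the attacker's best choice; past a threshold the attacker switches to agent a, mirroring how both players' optimal strategies change with this single assumption.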
risk analysis of intelligent adversaries is fundamentally different than risk analysis of uncertain hazards. as we demonstrated in section . , assigning probabilities to the decisions of intelligent adversaries can underestimate the potential risk. decision tree models can provide insights into the risk posed by intelligent adversaries. the defender-attacker-defender decision analysis model presented in this article provides four important benefits. first, it provides a risk assessment (the baseline or status quo) based on defender and attacker objectives and probabilistic assessment of threat capabilities, vulnerabilities, and consequences. second, it provides information for risk-informed decision making about potential risk management options. third, using cots software, we can provide a variety of very useful sensitivity analyses. fourth, although the model would be developed by a team, the risk analysis can be conducted by one risk analyst with an understanding of decision trees and optimization and training on the features of the cots software. the application of risk assessment and risk management techniques should be driven by the goals of the analysis. in natural hazard risk analysis, there is value in performing risk assessment without risk management. some useful examples are "unfinished business," a report from the epa, and the u.k. national risk register. ( , ) in intelligent adversary risk analysis, the defender-attacker-defender decision analysis can provide essential information for risk management decision making. in our example, risk management techniques are important, and this type of model provides insights about resource allocation decisions to reduce or shift risk. in addition, with the budget set to us$ , the model can be used to assess the baseline risk. as the budget increases, the model clearly shows the best risk management decisions and the associated risk reduction. this model enables the use of cots risk analysis software. in addition, the use of cots software enables the use of standard sensitivity analysis tools to provide insights into areas in which the assumptions are critical or where the model should be improved or expanded. currently, many event tree models, including the dhs btra event tree, require extensive contractor support to run, compile, and analyze. ( ) although one would still need a multidisciplinary team to create the model, once completed the defender-attacker-defender decision analysis model is usable by a single risk analyst who can provide near real-time analysis results to stakeholders and decision-makers, as long as the risk analyst understands the risk management options, decision trees, and optimization, and has training in the cots tool. the technique we advocate in this article has limitations. some of the limitations of this model are the same as those of event trees. there are limitations on the number of agents used in the models. we easily modeled bioagents with reasonable run times, but more agents could be modeled. in addition, there are challenges in assessing the probabilities of uncertain events, for example, the probability that the attacker acquires agent a. next, there is a limitation in the modeling of the multiple consequences. another limitation may be that to get more realistic results, we may have to develop "response surface" models of more complex consequence models. these limitations are shared by event trees and decision trees. however, decision trees also have some limitations that are not shared by event trees. first, only a limited number of risk management decisions can realistically be modeled. therefore, care must be used to choose the most appropriate set of potential decisions. ( , ) in addition, there may be an upper bound on the number of decisions or events that can be modeled in cots software. it is important to note that it may be difficult to determine an objective function for the attacker. as mentioned before, there is a tradeoff in replacing the probabilities assigned to what an attacker might do (event tree approach) with attacker objectives (decision tree approach). we believe it is easier to make informed assessments about the objectives of adversaries than to assess the probabilities of their future actions.
however, we need more research on assessing the robustness of risk management decisions to assumptions about adversary objectives. finally, successful model operation and interpretation requires trained analysts who understand decision analysis and defender-attacker-defender optimization. this article has demonstrated the feasibility of modeling intelligent adversary risk using defender-attacker-defender decision analysis. table iv and section . identified several alternative modeling assumptions that could be considered. we can modify and expand our assumptions to increase the complexity and fidelity of the model. the next step is to use the model with the best data available on the agents of concern and a proposed set of potential risk management options. use of our defender-attacker-defender model does not require a major intelligent adversary research program; it requires only the willingness to change. ( ) much of the data used for event tree models can be used in the decision analysis model. assessing probabilities of attacker decisions will not increase our security, but defender-attacker-defender decision analysis models can provide a sound assessment of risk and the essential information our nation needs to make risk-informed decisions. g.s.p. is grateful for the many helpful discussions on intelligent adversary risk analysis with his colleagues on the nrc committee and for the defender-attacker-defender research of jerry brown and his colleagues at the naval postgraduate school. the authors are grateful for the dpl modeling advice provided by chris dalton of syncopation. the authors thank roger burk at the united states military academy for his useful reviews and suggestions. finally, the authors thank the area editor and reviewers for very detailed comments and suggestions that have helped us improve our article. this model is a multiobjective decision analysis/game theory model that allows for risk management at the u.s. governmental level in terms of budgeting and certain bioterror risk mitigation decisions. the values for probabilities as well as factors are notional and could easily be changed based on more accurate data. it uses the starting u.s. (defender) decisions of adding a city to the biowatch program (or not) and the percentage of vaccine for an agent to store in the nation's vaccine reserve program to set the conditions for an attacker decision. the attacker can choose which agent to use as well as what size of population to target. there is some unpredictability in the ability to acquire the agent as well as in the effects of the agent given the defender and attacker decisions. finally, the defender gets to choose whether to deploy the vaccine reserve to mitigate casualties. the model tracks the cost for each u.s. decision and evaluates them over a specified budget. the decisions cannot violate the budget without incurring a dire penalty. the objectives that the model tracks are u.s. casualties and impact to the u.s. economy. they are joined together using a value function with weights for each objective. we outline our model using a method suggested by brown and rosenthal. ( )
references:
probabilistic risk assessment: reliability engineering, design, and analysis
risk analysis in engineering and economics
risk modeling, assessment, and management
reactor safety study: assessment of accident risk in u.s. commercial nuclear plants. nuclear regulatory commission (usnrc)
fault tree handbook
understanding risk management: a review of the literature and industry practice
a survey of risk assessment methods from the nuclear, chemical, and aerospace industries for applicability to the privatized vitrification of hanford tank wastes
procedural and submittal guidance for the individual plant examination of external events (ipeee) for severe accident vulnerabilities
procedure for analysis of common-cause failures in probabilistic safety analysis
a technique for human error analysis (atheana)
homeland security presidential directive
homeland security presidential directive [hspd- ]: medical countermeasures against weapons of mass destruction
guiding resource allocations based on terrorism risk
modeling values for anti-terrorism analysis
department of homeland security's bioterrorism risk assessment: a call for change. committee on methodological improvements to the department of homeland security's biological agent risk analysis
insurability of (mega)-terrorism risk: challenges and perspectives. report prepared for the oecd task force on terrorism insurance, organization for economic cooperation and development
integrating risk management with homeland security and antiterrorism resource allocation decision-making. in kamien d (ed), the mcgraw-hill handbook of homeland security
fort detrick, md: biological threat characterization center of the national biodefense analysis and countermeasures center
choosing what to protect: strategic defensive allocation against an unknown attacker
robust game theory
robust solutions in stackelberg games: addressing boundedly rational human preference models. association for the advancement of artificial intelligence workshop: -
syncopation software. available at
probabilistic modeling of terrorist threats: a systems analysis approach to setting priorities among countermeasures
probabilistic risk assessment: its possible use in safeguards problems. presented at the institute for nuclear materials management meeting
nature plays with dice-terrorists do not: allocating resources to counter strategic versus probabilistic risks
a risk and economic analysis of dirty bomb attacks on the ports of los angeles and long beach
project bioshield: protecting americans from terrorism
the biowatch program: detection of bioterrorism
strategic decision making: multiobjective decision analysis with spreadsheets
bioterrorist agents/diseases definitions by category. centers for disease control and prevention (cdc)
emerging and re-emerging infectious diseases
strategic alternative responses to risks of terrorism
world at risk: report of the commission on the prevention of wmd proliferation and terrorism
unfinished business: a comparative assessment of environmental problems
optimization tradecraft: hard-won insights from real-world decision support. interfaces

key: cord- -gex zvoa authors: abdulkareem, shaheen a.; augustijn, ellen-wien; filatova, tatiana; musial, katarzyna; mustafa, yaseen t. title: risk perception and behavioral change during epidemics: comparing models of individual and collective learning date: - - journal: plos one doi: . /journal.pone. sha: doc_id: cord_uid: gex zvoa modern societies are exposed to a myriad of risks ranging from disease to natural hazards and technological disruptions. exploring how the awareness of risk spreads and how it triggers a diffusion of coping strategies is prominent in the research agenda of various domains.
it requires a deep understanding of how individuals perceive risks and communicate about the effectiveness of protective measures, highlighting learning and social interaction as the core mechanisms driving such processes. methodological approaches that range from purely physics-based diffusion models to data-driven environmental methods rely on agent-based modeling to accommodate context-dependent learning and social interactions in a diffusion process. mixing agent-based modeling with data-driven machine learning has become popular. however, little attention has been paid to the role of intelligent learning in risk appraisal and protective decisions, whether used in an individual or a collective process. the differences between collective learning and individual learning have not been sufficiently explored in diffusion modeling in general and in agent-based models of socio-environmental systems in particular. to address this research gap, we explored the implications of intelligent learning on the gradient from individual to collective learning, using an agent-based model enhanced by machine learning. our simulation experiments showed that individual intelligent judgement about risks and the selection of coping strategies by groups with majority votes were outperformed by leader-based groups and even by individuals deciding alone. social interactions appeared essential for both individual learning and group learning. the choice of how to represent social learning in an agent-based model could be driven by the existing cultural and social norms prevalent in a modeled society. when facing risks, people go through a complex process of collecting information, deciding what to do, and communicating with others about the effectiveness of their actions. social influence may interfere with personal experiences, making peer groups and group interactions important factors. this is especially important in understanding disease diffusion and the emergence of epidemics, as these phenomena annually take thousands of lives worldwide [ ] . hence, good responsive and preventive strategies at both the individual and government levels are vital for saving lives. a choice of strategy depends on behavioral aspects, complex interactions among people [ ] , and the information available about a disease [ ] . perceiving the risk of an infectious disease may trigger behavioral change, as during the sars epidemic [ ] . gathering information and experience through multiple sources is essential for increasing risk awareness about the disease and taking protective measures [ ] . to help prevent epidemics, we need advanced tools that identify the factors that help spread information about life-threatening diseases and that change individual behavior to curb the diffusion of disease. various scientific approaches have been developed to tackle this challenge. network science is prominent in studying how epidemics propagate and how different awareness mechanisms can help to prevent the outbreak of disease. some researchers propose a framework with different mechanisms for spreading awareness about a disease as an additional contagion process [ ] . others model populations as multiplex networks where the disease spreads over one layer and awareness spreads over another [ ] . the influence of the perception of risk on the probability of infection also has been studied [ ] . several recent studies have shown how information spreads in complex networks [ , ] .
however, a different approach is needed to account for individual heterogeneity (such as income and education levels), the richness of information on social and spatial distance, or media influence. here, a combination of modeling with data-driven machine learning becomes particularly attractive. simulation tools are commonly used to assess the effects of policy impacts in the health domain [ , , ] . among the models for policy-making, agent-based modeling (abm) is recommended as the most promising modeling approach [ ] . abm studies the dynamics of complex systems by simulating an array of heterogeneous individuals that make decisions, interact with each other, and learn from their experiences and the environment. the method is widely used to analyze epidemics [ ] [ ] [ ] [ ] . its advantage is in analyzing the factors that influence the spread of infectious diseases and the actions of individual actors [ ] . as a bottom-up method, abm integrates micro-macro relationships while accommodating agents' heterogeneity and their adaptive behavior. it captures the interaction between the spatial environment and the behavior of agents and can integrate a variety of data inputs, including aggregated, disaggregated, and qualitative data [ ] [ ] [ ] [ ] . two processes are essential in representing agents' health behavior and disease dynamics: the evolution of risk perception and the selection of a coping strategy. hence, the core of a disease abm lies in defining the learning methods that steer these two processes. sensing of information (global, from the environment, and social, i.e., from other agents), exchanging information (i.e., interactions between agents), and processing of information (i.e., decision making) are critical. machine learning (ml) techniques can support these three elements and offer a more realistic way to adjust agents' behavior in abm [ ] [ ] [ ] [ ] . as more data become available in the analysis of the spread of disease, supporting abm with data-driven approaches becomes a prominent research direction. ml has the potential to enhance abm performance, especially when the number of agents is large (e.g., pandemics) and the decision-making process is complex (e.g., depending on both past experience and new information from the environment and peers). ml approaches in abm can provide agents with the ability to learn by adapting their decision-making process in line with new information. people make decisions both as individuals and as members of a group who imitate the decisions taken by the group or its leader [ ] . information about social networks is becoming increasingly available, e.g., through social media analysis. it may reveal collective behavior in various domains, including health [ ] . for example, people are not entirely rational and imitate others in their views about vaccines [ ] . many abms rely solely on the decisions of individuals, paying little attention to group behavior [ ] . yet, mirroring the emotions, beliefs, and intentions of crowds in an abm with collective decision making affects social contagion in abms [ ] . agents, whether individuals or groups, may learn in isolation or through interactions with others, such as their neighbors [ ] . in isolated learning, agents learn independently, requiring no interaction with other agents. in interactive learning, several agents are engaged in sensing and processing information and communicating and cooperating to learn effectively.
interactive learning can be done in multiple ways, i.e., based on different social learning strategies [ ] . agents might be represented as members of local groups, learning together and mimicking behavior from other group members (i.e., collective learning) [ ] . yet, the impact of different types of interactive learning in groups compared to learning by an individual is an under-explored domain in the development of abms of socio-environmental systems. this article examines the influence of individual vs group learning on a decision-making process in abms enhanced with ml. to illustrate the implications of individual and collective intelligence in abms, we used a spatially explicit disease model of cholera diffusion [ ] as a case study. bayesian networks (bns) steer agents' behavior when judging risk perception (rp) and coping appraisal (ca). we quantitatively tested the influence of agents' ability to learn, individually or in a group, on the dynamics of disease. the main goal is, therefore, methodological: to introduce ml into a spatial abm with a focus on comparing individual learning to collective learning. the added value of the analysis of alternative implementations of learning in abms goes beyond the domain of disease modeling. it illustrates the effects of individual learning and collective learning for the field of abms of socio-environmental systems as a whole. therefore, our main objectives are to ( ) simulate the learning processes of agents on a gradient of learning from individual to collective, and ( ) understand how these learning processes reveal the dynamics of social interactions and their emergent features during an epidemic. to address these objectives, the article aims to answer the following research questions: (rq ) what is the impact of social interactions on the perceptions and decisions of intelligent individuals facing a risk? (rq ) how do different implementations of group learning (deciding by majority voting vs by leaders) impact the diffusion process? (rq ) what are the implications of implementing collective learning for risk assessment combined with individual coping strategies? by answering these methodological questions for our case study, we reveal whether individuals perform better than groups at perceiving risks and at coping during epidemics. to explore the implications of intelligent learning on the gradient from individual to collective, we advance the existing cholera abm (cabm) originally developed to study cholera diffusion [ ] . in cabm, mls steer agents' behavior [ , , ] , helping them to adjust risk perception and coping during an epidemic outbreak. for this study, we ran eight abms to test various combinations of individual and group learning, using different information sources (with or without interactions among agents) as factors in the bns. we investigate the extent to which the epidemic spreads, depending on these different learning approaches regarding risk perception and coping decisions. s appendix provides a technical description of the model and the mls. below we briefly outline the processes in cabm essential to understand the performed simulation experiments. nowadays, countries worldwide are labeled as cholera-endemic, with . million cases each year leading to , deaths [ ] . people in urban slums and refugee camps are at high risk of cholera because of limited or no access to clean water and adequate sanitation. cabm is an empirically and theoretically grounded model developed to study the cholera outbreak in kumasi, ghana [ ] .
the open-source code for the model is available online. cabm is grounded in the protection motivation theory (pmt) in psychology [ , ] . the empirically-driven bns model a two-stage decision process of people facing a disease risk: learning to update risk perceptions (threat appraisal, bn in fig ) and making decisions about how to adapt their behavior during the epidemic (coping appraisal, bn in fig ) . according to pmt, threat appraisal depends on individual perceptions of the severity of the disease (evaluating the state of the environment and observing what happens to others) and one's own susceptibility. the coping appraisal is driven by the perceived response efficacy (the belief that the recommended behavior will protect) and one's own self-efficacy (the ability to perform the recommended behavior). cabm simulates individuals who are spatially located in a city. these agents differ by income and education level. individual agents form households and neighborhood groups and are susceptible to cholera at the beginning of the simulation. cabm implements an adjusted seir model [ ] as explained in fig below . instead of going directly from susceptible to exposed, we introduced an awareness component in which agents can assess their risk. options include: no risk perception, in which case the agent will be exposed (arrow , fig ) ; no risk perception yet no exposure (arrow , fig ) ; and risk perception leading agents to the coping phase (arrow , fig ) . exposure to cholera takes place through the use of unsafe river water. agents can influence their exposure by selecting alternative water sources. these alternative water sources can either reduce their exposure to zero (arrow , fig ) or have no effect on their infection risk (arrow , fig ) . their actions are contingent on income and education levels, as well as on the information that they retrieve from their own experience, information received from others, or observations of the environment. it is not possible to judge by sight whether surface water is infected with cholera, but the agents use other types of visual pollution, e.g., floating garbage, as a proxy. when household agents find the visual pollution level too high, they may decide on an alternative. household agents with high incomes do not take a risk and will buy safe water. in cabm, risk perception is updated using bn , which depends on the agent's memory (me), the visual pollution at the water fetching point (vp), and evidence of the severity of the epidemic based on communication from the media (m) and potentially with neighbor households (cnh). media broadcast news about cholera from day onward (see: ghana news archive). during a simulation, household agents may also interact with their neighbors zero to seven times a day (applied randomly) [ ] . when interactive learning is activated, social interactions among household agents help to share information on cholera cases that occurred in their communities and on the effectiveness of coping decisions. if risk perception is positive (bn returns a value above . ), household agents activate bn to decide which action (d -d , fig ) to take given their income (i) and education (e) level, the experience of their own household with cholera (oe), and possibly their neighbors' experiences with cholera (ne) [ ] . s appendix provides further details on how the bns are implemented, together with tables of the parameters.
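a toy version of this two-stage appraisal is sketched below. the real cabm uses trained bayesian networks; here invented log-odds weights stand in for bn and bn , and the decision labels and the activation threshold are illustrative assumptions only.

```python
import math

def risk_perception(memory, visual_pollution, media, neighbor_cases):
    """stage 1 (threat appraisal): combine four evidence sources into p(risk)."""
    score = -1.0                      # prior: on most days the water feels safe
    score += 1.2 * memory             # own past experience with cholera (0/1)
    score += 0.8 * visual_pollution   # garbage floating at the fetching point
    score += 0.6 * media              # broadcasts once the outbreak is news
    score += 1.0 * neighbor_cases     # cases reported by neighbor households
    return 1.0 / (1.0 + math.exp(-score))   # squash log-odds into [0, 1]

def coping_decision(income, education, own_exp, neighbor_exp):
    """stage 2 (coping appraisal): pick a water strategy, gated by resources."""
    if income == "high":
        return "buy bottled water"
    if own_exp or neighbor_exp:
        return "boil water" if education != "low" else "walk to cleaner point"
    return "use river water as is"

p = risk_perception(memory=1, visual_pollution=1, media=0, neighbor_cases=0)
if p > 0.5:                           # the activation threshold is illustrative
    print(coping_decision("middle", "medium", own_exp=True, neighbor_exp=False))
```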
sensitivity analysis of the aggregated model dynamics on the bns inputs and training alternatives can be found in [ , ] . a feeling of risk among individuals is fueled by the type of information, the amount of information communicated, and the attention to specific information that may trigger fear and stimulate a learning process regarding a new response strategy [ ] . the information gained helps individuals (i) to estimate the severity of the emerging event, (ii) to assess the probability of being exposed to infection, and (iii) to evaluate the efficiency of their coping responses. we used a complex network approach to illustrate the gradual processes from individual to collective learning in cabm (fig ) . each stage is presented as a single network over which a given learning process spreads. each network in fig has the same set of nodes and connections to show how different processes can lead to different outcomes in the same network structure when different information is used to make decisions. in individual learning (fig , process a and process b), agents depend on their prior knowledge (memory, experience, and/or the perceived risk of the environment, such as visual pollution). such learning is the process of gaining skills or knowledge, which an agent pursues individually to support a task [ ] . group learning is the process of acquiring new skills or knowledge that is undertaken collectively in a group of several individual agents and driven by a common goal [ ] . group learning can be realized by making all group members use their own ml algorithms to gather information to perform a specific sub-task (decentralized), and then pool their opinions collectively by making one decision for the entire group (fig , process a and process b). here, we adopt a "majority vote" as the resolution mechanism in decentralized group decision-making. however, group learning can also be realized by introducing a single agent (leader) who uses ml to learn for the whole group to help it accomplish its group task (centralized). in centralized group learning, agents in the group copy the decisions of their leader. in both cases, all agents that belong to a group share the same decision, but the information on which this decision is based varies considerably (fig , process a and process b). both individuals and groups may learn either by taking information from their social networks (i.e., using it as an additional source of information in their ml algorithms) or not. when individual agents are isolated learners (fig , process a) , they do not have a social network and use only the information they possess to make a decision. when individuals learn interactively (fig , process b) , they gain new skills or knowledge by perceiving the information, experience, and performance of other agents through their social network. like individual agents, groups can also learn in isolation or interactively. in isolated learning, agents learn independently within their groups, without exchanging any information with each other or with their neighbors (fig , process a and process a). in interactive learning, agents communicate with their neighbors to learn effectively within their groups (fig , process b and process b).
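before turning to how groups are composed, note that the three aggregation modes just described (individual choice, decentralized majority vote, and centralized leader copying) reduce to small functions; this schematic sketch is an illustration and not the cabm implementation.

```python
from collections import Counter

def individual(decisions):
    """each agent keeps its own appraisal (individual learning)."""
    return list(decisions)

def majority_vote(decisions):
    """decentralized group: the plurality option binds every member."""
    winner, _ = Counter(decisions).most_common(1)[0]
    return [winner] * len(decisions)

def leader_copy(decisions, leader_idx=0):
    """centralized group: every member copies the leader's appraisal."""
    return [decisions[leader_idx]] * len(decisions)

group = ["risk", "no-risk", "no-risk", "risk", "risk"]  # members' own views
print(majority_vote(group))   # ['risk', ...]: the plurality wins
print(leader_copy(group, 1))  # ['no-risk', ...]: one agent decides for all
```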
neighbors could be members of the same group or belong to other income/education groups but live in the same community and share the water collection points. therefore, there might be communication across the groups (fig , process b and process b). groups can be defined in different ways and at different hierarchical levels. this model uses three levels of organization: the individual agent, groups of agents, and communities that comprise several groups. in cabm, household agents living in the same community are grouped based on their income and education level since their coping behavior depends on these factors. agents' behavior in the disease abm is also contingent on their geographic location. hence, all neighbors that share the same water fetching point may make contact and exchange information between their groups in cabm. the size and composition of the groups impact the results of the different learning strategies. when applying interactive learning, a group's decision can be influenced by information retrieved from neighbors inside the group and neighbors outside the group but inside the community. for interactive groups, process b (fig ) shows a situation in which individual household agents make decisions that account for interactions in their social networks (as in process b). then each group conducts a majority vote, allowing it to proceed with the option chosen by the majority of its members. process b (fig ) shows a situation in which the leaders of each group make decisions based on their interactions with others (nodes a, b, and c are the leaders of groups g , g , and g respectively in fig ) . the decisions of group leaders are adopted by the household agents of the group. we designed eight simulation scenarios to answer the research questions about the influence of isolated vs interactive individual learning (rq ); centralized vs decentralized group learning during both the risk perception (rp, bn ) and coping appraisal (ca, bn ) processes (rq ); and collective learning about risk perception combined with individual coping appraisal (rq ) on the dynamics of the epidemic and the performance of the model (table ) . we systematically vary cabm settings following the steps in fig to change the gradient of intelligent learning (steps and ) in different cognitive stages corresponding to our decisions of interest: risk and coping appraisal (step ). table shows the setup of the eight scenarios that reflects the three stages shown in fig . the area of the case study captured in cabm is . km and comprises communities. we assumed that high-income households bought water, so they were excluded from intelligent learning. communities can have up to four groups based on their income and education levels. ten to fifteen percent of the household agents in the case study area usually fetch water from the river. two communities in our dataset (# and # ) hosted only high-income households, so they were excluded from intelligent learning. hence, we simulated groups spread over communities. each simulation was run for days with a time step equal to one hour. given the inherent randomness of abms, we ran each model times, generating a new synthetic population every runs. besides the extensive gis data and aggregated data on disease dynamics, we ran a survey via a massive open online course (mooc) geohealth in two rounds ( and ) to gain data on individual behavior. the participants, primarily students from developing countries, were introduced to the problem of cholera disease, saw pictures of water, and were asked if they would use the water as it is (d in fig ) , walk to a cleaner water point (d ), or use the water after boiling it (d ).
the survey data were used to construct and train our bns [ ] . we also used these data to evaluate the results of expert-driven bns in cabm [ ] . table shows that trust in boiled water was much higher than trust in unboiled water. agents also changed their behavior and began boiling water in the model. to evaluate the impact of individual and social intelligence on agents' learning processes regarding risk perception and coping appraisal and the resulting patterns of disease spread, we used four output measures: disease diffusion, risk perception, spatial patterns, and model performance. these aspects are described in more detail in the odd protocol (s appendix). we also measured the performance of models m -m in terms of run time and the number of intelligent decision steps, i.e., when agents called their bn and/or bn . given the stochastic nature of abms, we ran each of the eight models times. the averages and standard deviations of the results of these runs for each output measure are listed in table . behavioral changes can lead to different durations of the epidemic and reduce the number of infected cases [ ] . this was shown by running cabm with the eight models. models m , m , m , and m recorded a longer duration of active infection during the epidemic ( - days, table ). these results are closer to the real duration of the epidemic in ( days, table ). m , m , and m applied centralized learning, while m applied decentralized learning, but only for the risk perception stage. m , which is individual learning with social interactions, also recorded a shorter duration when compared to the real data ( days in m ). however, isolated learning, and decentralized learning for both risk perception and coping appraisal, recorded shorter epidemic durations, with an average difference of - % compared to the empirical data (table ) . all eight scenarios generated more infected cases than the empirical data. this is because infection with cholera bacteria leads to a clinical spectrum that ranges from asymptomatic cases to symptomatic cholera cases. asymptomatic cases are not reported, although they represent roughly half of all cases [ ] . in our simulations, we did not differentiate between symptomatic and asymptomatic cases; we considered all infected cases to be symptomatic. therefore, following [ ] , in table we reported % of the total infected cases that occurred when running the eight models. m , which uses centralized learning for risk perception and individual interactive learning for coping appraisal, reported the fewest infected cases ( , against , in reality). this was followed by m (individual social learning) with , cases and m (individual isolated learning) with , occurrences. these three values reflect the fact that when household agents learned to cope and make decisions individually, they were more efficient than when they were in groups. when these decisions were combined with social interactions, they led to better protection (m and m ). in general, group behavior had a negative effect, although centralized groups had a less negative impact compared to decentralized ones. finally, in m , where household agents learned risk perception in decentralized groups and learned to cope individually, , infected cases were recorded (table ) . hence, cabm household agents' engagement in decentralized groups for appraising disease risk hindered the perception of risk, lowering agents' motivation to change their behavior to more protective alternatives.
m reported the spatial distribution of infected cases (spi) over the communities closest to the empirical data ( . ), followed by m with . (table ) . the spatial patterns of the two collective learning models (m and m ) reflect their similarity to the spatial patterns in the empirical data. the correlation between the peak of the epidemic and the peak of risk perception reflects the responsiveness of the household agents' risk perception to the epidemic. scenarios m , m , m , and m were more responsive. that is, the peak of risk perception in m came three days after its epidemic peak, and the peaks in m , m , and m came seven days after their epidemic peaks (table ) . m , m , m , and m showed peaks for risk perception near the end of the simulation time. individuals in m were isolated, along with individuals in m ; therefore, they kept following their usual behavior of fetching water and using it as it is. in m and m , household agents depended on majority votes in their groups to make their decisions on risk and to change behavior. these patterns are illustrated visually in the next sections. table shows the number of steps and the time required to run one simulation of each model. the number of agents that were supposed to go for risk perception daily was % of the total number of household agents (which totaled , ). this percentage was derived from national statistical data from ghana statistical services [ ] . over the days of the epidemic, , agents appraised their risk perception (i.e., used their bn ). table also shows the number of steps during which agents perceived the risk of disease (i.e., risk perception equals ). notably, in m -m , if a group at large assessed the risk perception as zero, then none of its members did the coping appraisal, i.e., the number of steps when bn was activated is zero. in such cases, only the total number of steps with activated bn assessing risk perceptions was included in table . models with centralized learning required the shortest computation times (table ) . for example, m , where only the isolated leaders consult their bns under centralized learning, had the best performance with the shortest runtime. moreover, m and m recorded the fewest steps across all models. although the average number of agents with risk perception per simulated day was high ( agents), there were only , steps in risk perception and the same number of steps when coping appraisal mls were activated. that is, only leaders activated their bn and bn . this is only % compared to what it would be if agents decided individually. without voting, only one agent per group assessed the situation and made decisions. this made m and m time-efficient. on the opposite end, m recorded the highest computational time because of the intensive calculations required in the individual agents' network and the decentralized group network. among all models, m recorded the longest process time. agents individually perceived risk (bn ) before going back to their groups to negotiate a final decision on risk perception and then repeating the same individual-group sequence for the coping appraisal. in models m , m , and m , only one agent per group (a total of leaders) assessed risk perception daily, leading to , steps over the -day epidemic. in m and m only the leaders also went for coping appraisal, while in m the group members individually assessed the coping appraisal ( , steps in m vs , steps in m and m ).
calibration of the original model was conducted in two steps: first the hydrological submodel was calibrated, followed by a calibration of the complete model [ ] . after the calibration, a stability check was performed [ ] . for the current work, the objective of this epidemic model is not to reproduce the real data. it focuses on the impact of social interactions (present or not, at the level of individuals or groups) on both the risk perception and the coping appraisal of the individual agent. to calibrate a scenario further, one would need risk perception data for that area for the duration of the epidemic. however, such data are very scarce, not only for kumasi but worldwide. hence, risk perception was randomized at initialization. therefore, the eight models cannot be calibrated individually because they need to be comparable at initialization. s appendix shows the statistical analysis that was performed on the output data of the eight models to show and analyze the distribution of the obtained results. when household agents evaluated the risks of getting cholera and made coping decisions individually (m ), they relied only on their own experience. that is, each had an individual bn and bn and did not communicate with neighbors. scenario m extends this stylized isolated benchmark case by assuming that while agents continued to make decisions individually, they did share information with neighbors about the perception of risk and protective behavior. that is, both bn and bn included neighbors' experiences among the information input nodes. fig shows the epidemic curves and the dynamics of risk perception for all scenarios. in the absence of social interactions, more agents became infected with cholera. the peak of the epidemic curve in m (in-i) is higher than in m (in-n), leading to % more cases of disease ( fig and table ). overlaying the risk perception and epidemic curves suggests that when agents made decisions in isolation (m : in-i), the dynamics of risk perception were hardly realistic (fig a) . namely, when the epidemic was at its peak, household agents in m responded very slowly, with bn delivering a wrong evaluation of risk perception (fig a) . they became aware of the risks very late, so when the epidemic vanished, the number of agents with risk perception = kept increasing. in the absence of communication and experience sharing among peers (in-i), the information about the disease spread slowly and there was a significant time-lag between the occurrence of the disease and people's awareness. the small stepwise increase, around day , occurred because the media started to broadcast information about the epidemic on that day. in m , household agents behaved according to the expected pattern: risk perception became amplified by media coverage and social interactions and then vanished as disease cases became rare (fig b) . only those who experienced cholera infection in their households remained alert. household agents in m responded more to the media's news after day than isolated agents did. media supported the agents' social interactions with their neighbors, which led to more agents perceiving risk, especially when the number of infected cases reached its peak (fig b) .
even in m , there were limitations to making decisions about risk perception individually: risk perception fell too quickly, implying that people stopped worrying about the epidemic although it continued. since household agents in m did not have interactions with other agents, running this model required less time than m (creating a % increase in performance, table ). the interaction between household agents required time to process the information exchanged between agents. in addition, m (in-i) and m (in-n) were approximately the same in terms of the realistic spatial distribution of infected cases over the communities, with values of . and . , respectively (table ) . fig presents the spatial distribution of decision types over the study area in both m (in-i) and m (in-n). the household agents in isolated learning were not aware of the cholera-infected cases in their neighbors' households. household agents in m took an unsafe decision and trusted more in using the water fetched from the river as it is (d in fig a) . household agents in m were more rational and mostly boiled the water that they fetched from the river (d in fig b) . in decentralized learning, groups of household agents vote on risk perception and coping appraisal. the final decision of the group is the outcome of the majority vote. thus, all group members follow the final decision of the group. these groups represent a democratic system, which depends very much on the composition of the group. decentralized groups with a majority vote can lead to a negative perception of risk. besides, a coping appraisal that depends on a majority vote can lead to inappropriate decisions regarding protection from cholera. when individuals are engaged in social groups, their behaviors are not independent anymore [ ] . this increases the randomness of the decentralized learning models (m and m ). these two models had higher standard deviations in all measures ( table ) . the qualitative patterns of the three scenarios (m , m , and m ) were the same regardless of the social interactions that added new information to the ml (fig ) . for the development of the disease, the voting mechanisms seemed to override individual judgments. the m scenario assumes that household agents were isolated when performing risk perception and coping appraisals. in contrast, m and m allowed household agents to communicate with neighbors during the process of risk perception and before making a coping decision. as a result, m and m generated greater risk perception than m (fig c, d and e ). this suggests that social interactions still amplify both the awareness of risks and the diffusion of preventive actions. given approximately the same peak heights, the epidemic curves in the three majority-voting scenarios reported more infected cases than the other models. among the majority-vote models, m reported the fewest infected cases, since household agents in their coping appraisal relied on themselves rather than on their decentralized groups. overall, it seems that all three models (m , m , and m ) got the process of disease risk evaluation wrong. in those cases, risk perception grew slowly in the days when the epidemic was peaking (fig c, d and e) and did not react to the peak in any way, which is unrealistic. moreover, risk perception in the three models continued to grow when the epidemics were almost over.
risk perception peaked when there was no longer a risk, i.e., in the last days of the simulation, as shown in table . hence, group voting on risk perception operated with a major time lag: household agents ignored early signals of disease that occurred in just a few households, increased their awareness of risk only when most of them were already infected, and continued to be falsely alerted when the epidemic was over. in m , the small stepwise increase in risk perception represents the response to media, and it is similar in its development to m (in-i) (fig c). the household agents in their decentralized groups did not have contact with neighbors; therefore, no cases were reported to them from their neighborhoods, and they were disconnected from what was happening around them. in m and m , which included social interactions, the development of risk perception seems more responsive, especially after the activation of media on day . nevertheless, the response time was still slow (fig d and e). in these models, the group decisions depended very much on the composition of the group members' opinions, which varied from one another and drew on different information sources for the final decisions about risk perception (in both m and m ) and coping appraisal (in m ). thus, majority voting led to unsecured decisions. groups in these models were heterogeneous in that household agents had different levels of exposure from the other group members with whom they voted. decentralized groups with isolated input information (m ) led household agents to vote to use the water fetched from the river (d ) most of the time (fig , map a). because of their lack of communication with neighbors, household agents missed the opportunity to get information about infections in their neighborhoods. this explains the higher numbers of infected cases in the majority-vote models. social interactions in both m and m helped agents make better decisions, although following the majority still biased their choices. for instance, in m , household agents in high-income communities (upper communities in maps b and c, fig ) mostly used the river water as it was, even though they were rich enough to boil it before use (d ) or to buy bottled water (d ). the opposite also occurred when a majority vote forced low-income households to buy bottled water, an expensive decision for them. the group voting on the coping appraisal in m might have made individual members uncomfortable, as they followed the decisions of their groups even though those decisions might not protect them. in reality, household agents seek a balance between preventive behavior and their capability to implement it. moreover, there is always the possibility of routinely changing one's mind based on daily updates of information regarding the epidemic and updates from neighbors. as in m , the household agents in m relied on their decentralized groups for risk perception, which often led to risk ignorance (fig e). however, since the agents in m decided on coping appraisals individually, more agents adopted d (fig c). when they perceived risk during the last days of the epidemic, household agents at the middle-income level switched to boiling water or buying bottled water (d , d in fig c), while those at the low-income level walked to another water fetching point (d ). in centralized groups, one household agent is randomly selected to be the group leader. the leader is responsible for the risk perception and the coping appraisal of the group.
group members copy the risk perception and disease-preventive decisions of their leaders. it has been argued that group leaders may improve their group's performance if they model the responses to the situation the group faces [ ]. in this article, we considered two types of leaders: a dictator making top-down decisions about both risk perception and coping strategy (m and m ), and an opinion leader evaluating risk perception top-down but giving group members the freedom to pursue their own disease coping behavior (m ). the qualitative trends of all three models coincided with what is expected: peaks caused by amplification of risk perception, followed by a gradual decrease as the epidemic plateaus (fig f, g and h). the centralized group learning on average represented the processes well, as the leader alerted the group members about the disease. however, since no real data are available on risk perception dynamics or the actual coping behaviors that people pursued during the epidemic, we cannot determine which of the models m , m , and m is the best. the following subsections compare the models with a dictator-leader (m and m ) to the one with an opinion leader (m ). a dictator-leader decides on behalf of his or her group regarding disease risk and coping strategies, and both decisions are adopted top-down. a dictator-leader learns either in isolation (m ) or in interaction with his or her neighbors (m ). isolated dictators in m overestimated disease risks (fig f). for example, if such a leader had his or her own bad experience with cholera, s/he would keep warning the group. with social interactions (m ), there is less uncertainty in the process of updating the risk perception than in m ; for example, compare the risk perception assessments around the epidemic peak (fig g). fig illustrates the impact of social interactions on the dictator's decisions regarding coping appraisal. isolated leaders guided their groups to various types of decisions (fig a), which were sometimes less secure (e.g., d ). with social interactions, leaders relied on their neighbors and decided more often to walk to a point along the river where the water was cleaner (d ); very few dictators directed their groups to boil the fetched water (d ) or buy bottled water (d ) (fig b). this shows how centralized decision making undermines heterogeneity in individual circumstances, such as disease exposure or coping capacity. in m , the leaders in the centralized groups were responsible for evaluating disease risks for their groups, but they interacted with neighbors during the risk perception process. for the coping appraisal, the group members made their own decisions, using the information from their social networks. as a result of this combination of fast centralized alerting on risk perception and individual coping strategies, m generated the fewest infections. the shape of the epidemic curve (except for its height) is very close to the empirical data of , (fig h). as in m , the uncertainty in the process of risk perception in m is lower than in m (fig h). the risk perception curve around the epidemic peak followed the dynamics of the epidemic (fig g). when group members relied on social interaction to learn about the effectiveness of various coping strategies but eventually chose one themselves (m ), there was a diversity of coping strategies. fig c shows the spatial distribution of the different types of decisions during the simulation.
more household agents went for d and d , which were considered to be the most protective decisions. consequently, communities pursued at least three types of decisions, reflecting the diversity in disease coping that is so important for resilience. the goal of this paper was to perform a systematic comparison of individual vs. group learning. the methodological advancements showed that different implementations of individual and collective decision-making in agents' behavior led to different model outcomes. in particular, the stepwise approach of testing how learning (on a gradient from individual learning, without any interactions, to collective learning with social networks) affects an abm's dynamics is generic and can be used for other models. to illustrate the subtle differences in implementing learning in abms, we used the example of a spatial empirical abm of cholera diffusion with intelligent agents that employ ml to assess disease risk and decide on protective strategies, which define the dynamics of the epidemic. interactive learning, which assumes that agents share information about risks and potential protective actions, outperformed isolated learning both for individuals and in groups. this underlines the fact that social learning in the decision-making process is very important in abms. while we used disease modeling as a case study, the results may be contingent on the endogenous dynamics of this particular cholera abm. notably, simulation results may differ for abms with other underlying dynamics. this calls for further scrutiny in testing and reporting cases of intelligent social and individual learning in other models. the results indicate that decentralized groups with majority votes are less successful than groups with leaders, whether dictators or opinion leaders. when evaluating current disease risks, majority voting appears to be the worst mechanism for group decisions, often arriving at a wrong decision because of time lags relative to the dynamics of objective disease risks. perceiving risk is a very personal decision-making process [ ]. in contrast, when leaders develop risk perception and propose it to the group, such groups perform better in terms of risk appraisal. moreover, opinion leaders are very effective in helping their group members stay alert about disease while giving them the freedom to make coping decisions that accommodate heterogeneity in their socio-economic status and geographical locations. in contrast, dictator-leaders and majority votes that impose a decision that all group members must follow are less effective in reducing the incidence of disease. in our simulation experiments, the structure of the groups is simple and is formed based on the spatial and socio-demographic characteristics of the agents. as grouping seems to have an impact on the spatio-temporal diffusion of the disease, a careful evaluation of the social structures in the case study area should be conducted for this type of model in order to generate trustworthy results. future research should focus on constructing groups based on different variables (family ties, religion, tribes). also, in our abm the leaders had no particular knowledge but were randomly selected and assigned to groups. in reality, this may not be the case: leaders may have access to better information or may have already earned the group's trust and respect. in addition, decentralized groups could be improved by giving greater weights to more trusted partners, helping them make wiser decisions, as sketched below.
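a minimal sketch of that trust-weighted extension, with hypothetical trust weights, could look as follows.

```python
# minimal sketch of trust-weighted group voting; the trust weights are
# hypothetical, not taken from the paper.
def weighted_vote(votes, weights):
    tally = {}
    for vote, weight in zip(votes, weights):
        tally[vote] = tally.get(vote, 0.0) + weight
    return max(tally, key=tally.get)

votes = ["use river water", "boil water", "boil water", "use river water"]
trust = [0.5, 2.0, 1.5, 0.5]  # hypothetical trust in each member

# an unweighted majority would tie at 2-2; weighting by trust lets the
# more trusted members carry the decision.
print(weighted_vote(votes, trust))  # -> "boil water"
```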
the model's performance can be a strong argument when the number of agents is massive, e.g., when simulating a pandemic, or when epidemics within a very large population must be simulated to detect a worldwide diffusion mechanism. in that case, social group learning, as described in model m , is a very good alternative to individual interactive behavior. moreover, m shortens the computation time by % while maintaining good-quality model output. the number of contacts each household agent has during collective learning may impact the diffusion of cholera; using a fat-tailed distribution for the number of contacts would be an interesting topic for future study. different considerations steer the ultimate decision on which type of social behavior to use. besides the technical model performance metrics discussed here, the choice of a particular type of social behavior can also be based on the society that is being modeled. different political systems, the presence of tribes, and different ethnic groups or religious leaders require careful consideration of the social interactions in a model. one should make sure that the implementation of social learning represents the cultural and social norms of the society being modeled. in this article, it was not possible to define which implementation (m -m ) represented the situation in kumasi most closely. to validate the risk perception-behavior relationship, one would need risk perception data for that area for the duration of the epidemic. however, such data are very scarce, not only for kumasi but worldwide. as we illustrated in this study, many different implementations of social behavior using ml are technically possible, but data are needed to validate the alternative implementations. yet research on risk perception during epidemics is often conducted too late (when the peak is over) or at a distance (not in the area where the disease spreads). hence, such research provides little empirical evidence of people's behavior and risk perception. more research on risk perception during epidemics, including other variables such as cultural aspects and group behavior, could be very helpful in generating a model that represents a specific society realistically. on a technical note, agent-based modeling software does not always include ml toolkits and libraries, which complicates the implementation of different types of social intelligence. hence, better integration of abm and ml in one software package, or linkable libraries, could eliminate this problem in the future. finally, an important direction of future research is to implement other ml techniques besides bns, such as decision trees and genetic algorithms. in addition, modeling groups with different ml algorithms may lead to different results, since groups will be heterogeneous in terms of members' learning algorithms. several developments in health research drew our attention to the implementation of learning in disease models. one is the impact of fake news on the behavior of people. the other is the fact that human behavior toward vaccination can change radically based on (fake) news about it. therefore, including these factors and testing their impact on the behavior of agents may lead to more conclusions for policymakers to consider in their efforts to control epidemics. managing epidemics: key facts about major deadly diseases. world health organization
learning from each other: where health promotion meets infectious diseases
modeling infection spread and behavioral change using spatial games
severe acute respiratory syndrome epidemic and change of people's health behavior in china
the role of risk perception in reducing cholera vulnerability
towards a characterization of behavior-disease models
dynamical interplay between awareness and epidemic spreading in multiplex networks
epidemic spreading and risk perception in multiplex networks: a self-organized percolation method
spreading processes in multilayer networks
interacting spreading processes in multilayer networks
epidemic and intervention modelling: a scientific rationale for policy decisions? lessons from the influenza pandemic
"wrong, but useful": negotiating uncertainty in infectious disease modelling
models for policy-making in sustainable development: the state of the art and perspectives for research
using data-driven agent-based models for forecasting emerging infectious diseases
out of the net: an agent-based model to study human movements influence on local-scale malaria transmission
modelling the transmission and control strategies of varicella among school children in shenzhen
a taxonomy for agent-based models in human infectious disease epidemiology
an open-data-driven agent-based model to simulate infectious disease outbreaks
modeling human decisions in coupled human and natural systems: review of agent-based models
spatial agent-based models for socio-ecological systems: challenges and prospects
agent-based models
global sensitivity/uncertainty analysis for agent-based models
intelligent judgements over health risks in a spatial agent-based model
a multi-agent based approach for simulating the impact of human behaviours on air pollution
simulating exposure-related behaviors using agent-based models embedded with needs-based artificial intelligence
agent-based modeling in supply chain management: a genetic algorithm and fuzzy logic approach
evidence for a relation between executive function and pretense representation in preschool children
scalable learning of collective behavior based on sparse social dimensions
the impact of imitation on vaccination behavior in social contact networks
agent based simulation of group emotions evolution and strategy intervention in extreme events
modelling collective decision making in groups and crowds: integrating social contagion and interacting emotions, beliefs and intentions
learning in multi-agent systems
simulate this! an introduction to agent-based models and their power to improve your research practice
do groups matter? an agent-based modeling approach to pedestrian egress
agent-based modelling of cholera
bayesian networks for spatial learning: a workflow on using limited survey data for intelligent learning in spatial agent-based models. geoinformatica
the global burden of cholera
a comprehensive review of the applications of protection motivation theory in health related behaviors
seasonality and period-doubling bifurcations in an epidemic model
social contact structures and time use patterns in the manicaland province of zimbabwe
cognitive and physiological processes in fear appeals and attitude change: a revised theory of protection motivation. social psychophysiology: a sourcebook
artificial intelligence: a modern approach. third edition
integrating spatial intelligence for risk perception in an agent based disease model
resilience management during large-scale epidemic outbreaks
susceptibility to vibrio cholerae infection in a cohort of household contacts of patients with cholera in
behavioral modeling and simulation
risk perception and human behaviors in epidemics
risk perception: it's personal. environmental health perspectives. national institute of environmental health science

key: cord- - rv xof authors: levintow, sara n.; pence, brian w.; powers, kimberly a.; sripaipan, teerada; ha, tran viet; chu, viet anh; quan, vu minh; latkin, carl a.; go, vivian f. title: estimating the effect of depression on hiv transmission risk behaviors among people who inject drugs in vietnam: a causal approach date: - - journal: aids behav doi: . /s - - - sha: doc_id: cord_uid: rv xof

the burden of depression and hiv is high among people who inject drugs (pwid), yet the effect of depression on transmission risk behaviors is not well understood in this population. using causal inference methods, we analyzed data from pwid living with hiv in vietnam – . study visits every months over years measured depressive symptoms in the past week and injecting and sexual behaviors in the prior months. severe depressive symptoms (vs. mild/no symptoms) increased injection equipment sharing (risk difference [rd] = . percentage points, % ci − . , . ) but not condomless sex (rd = − . , % ci − . , . ) as reported months later. the cross-sectional association with injection equipment sharing at the same visit (rd = . , % ci . , . ) was stronger than the longitudinal effect. interventions on depression among pwid may decrease sharing of injection equipment and the corresponding risk of hiv transmission. clinical trial registration: clinicaltrials.gov nct . electronic supplementary material: the online version of this article ( . /s - - - ) contains supplementary material, which is available to authorized users. despite global progress in combating the hiv epidemic, people who inject drugs (pwid) remain disproportionately at risk of hiv infection in southeast and central asia and eastern europe [ ] [ ] [ ] [ ] [ ]. sharing injection equipment is one of the most efficient means of hiv transmission [ , ], and in these regions, pwid have limited access to and suboptimal use of harm reduction services and antiretroviral therapy (art) [ , ]. the persistence of injection drug use and viremia, without adequate preventive services, results in a high risk of hiv transmission to injecting or sexual partners [ ]. the burden of depression is high among pwid and may further interfere with hiv prevention efforts. up to % of pwid suffer from severe depressive symptoms [ ] [ ] [ ] [ ] [ ], and the presence and severity of depressive symptoms are closely linked to frequency of injection and risk of relapse, suggesting a bidirectional relationship between depression and injection drug use [ ] [ ] [ ]. comorbid depression consistently results in poor hiv treatment outcomes, such as lowering art use and viral suppression [ ] [ ] [ ] [ ] [ ] [ ].
however, while the deleterious effect of depression on hiv treatment outcomes is well established across populations, its effect on the injecting and sexual behaviors that can facilitate hiv transmission or acquisition is not well understood among pwid. although there is substantial evidence that depression increases sexual risk behaviors among men who have sex with men (msm) [ ] [ ] [ ] [ ] [ ] , few studies have focused on pwid and assessed injecting behaviors. specifically, in vietnam, the focus of this study and a setting where the hiv epidemic is concentrated among men who inject drugs [ , ] , there have been no prior studies of the relationship of depression with injecting and sexual behaviors. existing studies on depression and hiv transmission risk behaviors among pwid have suffered from several methodological limitations. to our knowledge, all previous studies that include pwid populations have assessed only correlations between depression and transmission risk behaviors, without inferring causality [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] . in these studies, depression and risk behaviors have typically been evaluated for the same time period (e.g., self-report covering the last month), without the ability to infer whether depression preceded risk behaviors or vice versa [ - , , ] . potential confounders of the relationship between depression and risk behaviors were also measured for the same retrospectively assessed time period. studies that used traditional statistical adjustment for these contemporaneous covariates [ , ] may have induced bias if these variables acted as causal mediators rather than confounders [ ] . in addition, although depression is known to be episodic [ ] , prior analyses have primarily relied on a single assessment rather than accounting for changes in both depressive symptoms and time-varying confounders [ , , , ] . possibly stemming from these methodological issues, existing evidence for an association between depression and transmission risk behaviors in pwid is inconsistent. while an early meta-analysis (that included studies among pwid) found little evidence for an association between depression and sexual risk behaviors [ ] , more recent studies in pwid and msm populations have found higher sexual risk associated with depression [ , , ] or a non-linear association [ ] in which mild symptoms are most predictive of sexual risk. the few studies that have evaluated the association of depression with injecting risk behaviors among pwid have suggested that depressive symptoms were associated with greater injecting risk behaviors [ , [ ] [ ] [ ] . we sought to overcome past methodological issues by using a causal approach to estimate the effect of depressive symptoms on hiv transmission risk behaviors among pwid. we used marginal structural models, a tool for causal inference that accounts for time-varying exposures and confounders [ ] [ ] [ ] , with longitudinal data from male pwid living with hiv in vietnam. we hypothesized that depression would increase behaviors associated with risk of hiv transmission to injecting partners (sharing injection equipment) and sexual partners (condomless sex). by examining depression as a potential underlying cause of hiv transmission through these behavioral mechanisms, we sought to provide clearer evidence about the potential for interventions against depression to avert future hiv infections among pwid. 
we used longitudinal data from a randomized controlled trial of an hiv stigma and risk reduction intervention among pwid living with hiv in thai nguyen, vietnam, from through [ ]. thai nguyen is a province in northeastern vietnam with an estimated hiv prevalence of % among its approximately pwid [ ] [ ] [ ]. participants were recruited via snowball sampling from the thai nguyen sub-districts (of total) with the most pwid. recruiters (former and current pwid) approached members of drug networks in private places to discuss study enrollment and then accompanied or referred interested participants to the study site for screening. at screening, all participants were tested for hiv using two rapid enzyme immunoassay tests run simultaneously (determine: abbott laboratories, abbott park, il and bioline: sd, toronto, canada), with discordant results resolved with a third rapid assay (hiv rapid test: acon, san diego, ca). the trial enrolled participants who met the following eligibility criteria: ) hiv-positive according to study test results, ) male (given that % of pwid in thai nguyen are male), ) age ≥ years, ) had sex in the past months, ) injected drugs in the previous six months, and ) planned to live in thai nguyen for the next months (the duration of the trial). questionnaire and laboratory data were collected at study visits every months during months of follow-up. the questionnaire collected information on demographics, injection drug use and other substance use, sexual behavior, depressive symptoms, quality of life, pre-study hiv diagnoses (baseline only), and art use. blood specimens were collected to confirm hiv infection at baseline and measure cd cell count at baseline and over follow-up. the exposure of interest was depressive symptoms over the past week, as assessed by the -item center for epidemiologic studies depression scale (ces-d), which has been validated as a reliable measure of depressive symptoms in vietnam [ , ]. consistent with past work, we defined severe depressive symptoms as ces-d scores ≥ , mild depressive symptoms as scores in the intermediate range, and no symptoms as scores < [ , ]. the transmission risk behavior outcomes were any sharing of injection equipment (needles, syringes, solutions, or distilled water) and any condomless sex with a female partner, reported for the prior months. we also descriptively examined the numbers of injection equipment sharing and condomless sex acts in the prior months reported at each visit. questionnaire and laboratory data included potential confounders of the depression-risk behavior relationship. time-fixed covariates, which were reported at baseline and assumed to be stable throughout the study period, were marital status, age, employment status, intervention arm, history of overdose, alcohol use, hiv diagnosis prior to enrollment, and previous art use. employment and alcohol use could, in theory, vary over time, but these variables remained fairly constant in our population, motivating our decision to treat them as time-fixed. time-varying covariates measured at one time point may affect subsequent depression and risk behaviors; they may also be influenced by depression and risk behaviors from a previous time point. thus, time-varying covariates may act as either confounders or mediators, depending on the time point assessed [ , ].
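the three-level ces-d categorization described above amounts to a simple threshold rule; a minimal sketch follows, with the study's numeric cutoffs left as parameters (the values shown are placeholders, not the study's).

```python
# minimal sketch of the three-level ces-d categorization; the study's
# numeric cutoffs are not reproduced here, so they are parameters.
def categorize_cesd(score, mild_cutoff, severe_cutoff):
    """return 'none', 'mild', or 'severe' for a ces-d score."""
    if score >= severe_cutoff:
        return "severe"
    if score >= mild_cutoff:
        return "mild"
    return "none"

# placeholder cutoffs; substitute the study's values
print([categorize_cesd(s, mild_cutoff=16, severe_cutoff=23)
       for s in (5, 18, 30)])  # -> ['none', 'mild', 'severe']
```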
for this analysis, time-varying covariates were cd cell count category (< , - , ≥ cells/μl), depressive symptoms at the visit prior to exposure measurement, and transmission risk behaviors at the visit prior to exposure measurement. in the main analysis, we used marginal structural models to estimate the average causal effect of severe depressive symptoms on the risks of any injection equipment sharing or any condomless sex (separately) in the period three to months later, controlling for time-fixed and time-varying confounders. we evaluated each risk behavior outcome (reported with respect to the prior -month period) at the next -month visit in order to temporally separate it from the exposure of depressive symptoms (hereafter referred to as the "longitudinal effect"). in a second analysis, to facilitate comparison with prior research, we used marginal structural models to estimate the association between depressive symptoms and risk behaviors reported at the same visit, where temporal ordering could not be differentiated ("cross-sectional association"). we repeated both analyses using three levels of depressive symptoms (severe, mild, none) in addition to the binary categorization (severe, not severe). we used inverse probability weighted estimation of marginal structural models [ , ]. weights were estimated from a propensity score model for the probability of severe depressive symptoms as a function of time-fixed and time-varying confounders. time-fixed confounders had a constant (baseline) value over all visits; time-varying confounders used the value from the visit immediately preceding the visit at which depressive symptoms were assessed. in the main analysis, the propensity score model was estimated using logistic regression to model the probability of severe (vs. mild or no) depressive symptoms. in a second set of analyses, we used ordinal logistic regression to separately model the three levels of depressive symptoms (severe vs. mild vs. none). propensity score model diagnostics assessed positivity for all confounder-defined subsets of the study population. the denominator of the weights was the predicted probability of depressive symptoms from the propensity score model, and weights were stabilized using the marginal probability of depressive symptoms in the numerator. application of the weights to the study population removes the association between depressive symptoms and potential confounding variables included in the propensity score model, permitting estimation of a causal effect under key assumptions [ , ] (see discussion). in the weighted study population, we estimated the risk difference (rd) for the risk behavior outcomes using generalized estimating equations (binomial regression models with an identity link) to account for repeated observations on participants [ ] [ ] [ ]. for the longitudinal analysis, this weighted rd can be interpreted as the causal effect of depressive symptoms on the risk behavior outcome: that is, the difference in risk of the behavior in the period three to months later if all participants had depressive symptoms compared with the risk if they all did not have depressive symptoms. to account for missing data due to missed study visits, we used multiple imputation by chained equations (mice) [ , ], imputing and analyzing datasets.
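the weighting scheme described above can be sketched on synthetic data as follows. this is a minimal illustration, not the study's code: variable names and coefficients are invented, a single time point stands in for the repeated visits, and a weighted risk difference replaces the weighted gee fit.

```python
# minimal sketch of stabilized inverse-probability weighting for a
# marginal structural model, on synthetic single-visit data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "cd4_low": rng.integers(0, 2, n),        # time-varying confounder
    "prior_sharing": rng.integers(0, 2, n),  # behavior at the prior visit
})
# exposure (severe depressive symptoms) depends on the confounders
p_exp = 1 / (1 + np.exp(-(-1.0 + 0.8 * df.cd4_low + 0.6 * df.prior_sharing)))
df["severe_dep"] = rng.binomial(1, p_exp)
# outcome (any sharing at the next visit) with a built-in exposure effect
p_out = 0.15 + 0.05 * df.severe_dep + 0.10 * df.prior_sharing
df["sharing_next"] = rng.binomial(1, p_out)

# denominator: p(exposure | confounders); numerator: marginal p(exposure)
denom = smf.logit("severe_dep ~ cd4_low + prior_sharing", data=df).fit(disp=0)
numer = smf.logit("severe_dep ~ 1", data=df).fit(disp=0)
p_d, p_n = denom.predict(df), numer.predict(df)
sw = np.where(df["severe_dep"] == 1, p_n / p_d, (1 - p_n) / (1 - p_d))

# weighted risk difference: in the weighted pseudo-population the exposure
# is no longer associated with the modeled confounders
exposed = df["severe_dep"] == 1
r1 = np.average(df.loc[exposed, "sharing_next"], weights=sw[exposed])
r0 = np.average(df.loc[~exposed, "sharing_next"], weights=sw[~exposed])
print(f"ipw risk difference: {r1 - r0:.3f}")  # ~0.05 by construction
```

in the study itself, the analogous weights would enter a weighted gee fit with an identity link to obtain the rd while accounting for repeated observations.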
we included participants who were incarcerated or died during the study period in the main analysis up until the start of the -month follow-up interval during which incarceration or death occurred, censoring them after their final visit preceding death or incarceration. in sensitivity analyses, we instead used the imputed risk behavior outcome for that -month interval (and censored them at the start of the following interval), given the possibility of engaging in unmeasured risk behaviors prior to incarceration or death. for all estimates, our interpretation focuses on the point estimate and confidence interval, rather than statistical significance [ ]. analyses were conducted using r version . . [ ]. this study was approved by the ethical review committees at all participating institutions. written informed consent was obtained from all participants. as required by inclusion criteria, all participants were male, hiv-positive, and reported being sexually active and using injection drugs at baseline. the median age of participants was years (interquartile range [iqr]: , ), and half were married or cohabiting ( %) (table ). one-third had a high school education ( %), and the majority were employed full-time ( %). most participants were newly hiv-diagnosed at baseline ( %), while % had been previously diagnosed and were not taking art, and % reported being previously diagnosed and currently using art. the median cd cell count was cells/µl (iqr: , ). general health was rated as poor by %. nearly half reported injecting heroin daily ( %), % had a history of overdose, and % reported current alcohol use. participants completed between zero and four follow-up study visits (median = , iqr: , ) at -month intervals over months, with % completing at least one follow-up visit. at baseline, % of participants reported severe depressive symptoms (ces-d ≥ ), % had mild symptoms ( ≤ ces-d ≤ ), and % had no symptoms (ces-d < ). one quarter of participants reported sex without a condom in the prior months ( %), with a median of condomless sex acts reported for that period (iqr: , ). most participants reported sharing injection drug use equipment with injecting partners over the past months ( %); these participants reported a median of sharing acts during that period (iqr: , ). after the baseline visit, when all participants received risk reduction counseling and the majority became newly hiv-diagnosed, sharing of injection equipment and condomless sex decreased across trial arms (previously reported in [ ]). however, among the participants attending ≥ follow-up visit, % reported sharing injection equipment at ≥ visit, and % reported condomless sex at ≥ visit. the severity of depressive symptoms varied over time (supplemental fig. ), with % of those attending ≥ follow-up visit reporting severe depressive symptoms at least once. the percentage of participants experiencing competing events increased over time, with % incarcerated and % deceased at months. in our main analysis, we estimated that severe depressive symptoms (compared to no or mild symptoms) increased the risk of sharing injection equipment by . percentage points (rd = . %, % ci − . %, . %) and decreased the risk of condomless sex by . percentage points (rd = − . %, % ci − . %, . %) in the period three to months later (table , fig. ). in the cross-sectional analyses, the association between severe depressive symptoms and contemporaneous injection equipment sharing (rd = . %, % ci . %, .
%) was stronger than the estimated longitudinal effect, while the association with condomless sex was attenuated (rd = − . %, % ci − . %, . %). in analyses using three levels of depressive symptoms, there were small decreases in the risk of condomless sex as depressive symptoms increased, although all confidence intervals overlapped substantially ( table , fig. ). for injection equipment sharing, patterns of risk corresponding to the three levels of depressive symptoms differed between the longitudinal effect and the cross-sectional association. in longitudinal analyses, we observed a u-shaped relationship in which the risk of injection equipment sharing in the period three to months later was . % ( % ci . %, . %) among those with no depressive symptoms, . % ( % ci . %, . %) among those with mild symptoms, and . % ( % ci . %, . %) among those with severe symptoms. in contrast, in the cross-sectional analysis, we observed a monotonic increasing relationship in which those with no depressive symptoms had the lowest risk of . % ( % ci . %, . %) while those with mild symptoms had a risk of . % ( % ci . %, . %) and those with severe symptoms had a risk of . % ( % ci . %, . %). we did not find appreciable differences in sensitivity analyses that varied censoring time for participants who were incarcerated or deceased (supplemental fig. ). using longitudinal data and methods for causal inference, we found that severe depressive symptoms increased the risk of sharing injection equipment but not the risk of condomless sex among pwid. to overcome past methodological issues, we used marginal structural models to capture the episodic nature of depression, enforce temporal ordering of depression and transmission risk behaviors, and control time-varying confounding in the analysis. by focusing on pwid living with hiv in vietnam, a population at high risk of ongoing hiv transmission, we aimed to better understand depression as an underlying cause of behaviors associated with transmission. in our main analysis of injection equipment sharing in the period three to months after assessment of depression, we found a rd of . % ( % ci − . %, . %), comparing participants with severe depressive symptoms to those with mild or no depressive symptoms. this longitudinal effect was only slightly weaker than the corresponding cross-sectional association (rd = . %, % ci . %, . %) found in the analysis that did not enforce temporality. the % ci of the longitudinal effect (− . %, . %) shows that a risk difference ranging from a . percentage point decrease, a small negative association, to a . percentage point increase, a substantial positive association, is compatible with the data. given that the overall risk of injection equipment sharing was % across follow-up visits, the point estimate of a . % point increase is substantively meaningful. previous research has suggested a possible non-linear relationship between the severity of depressive symptoms and occurrence of sexual risk behaviors, although this literature has focused on msm, not pwid, and findings have been mixed. some studies have found that mild depressive symptoms are associated with higher levels of sexual risk behavior but decreasing risk with severe depressive symptoms [ , ] ; others have observed increasing risk with increasing severity of depressive symptoms [ ] [ ] [ ] . 
in contrast, our analysis of condomless sex according to three levels of depressive symptoms suggested slight decreases in condomless sex with increasing severity of depressive symptoms, consistent with our main analysis. participants with depressive symptoms, regardless of severity, may be experiencing fatigue, social isolation, and loss of interest in sex, thereby reducing the risk of engaging in this behavior [ ]. although all participants reported sex in the months prior to baseline (due to trial eligibility criteria), a loss of interest in sex over months of follow-up, particularly among participants with depressive symptoms, may have contributed to our findings. in contrast to condomless sex, we observed possible nonlinearities in the relationship between depressive symptom severity and risk of sharing injection equipment, which have not been observed previously. prior studies have found an increasing risk of injecting risk behavior with increasing depressive symptom severity [ ] or have not differentiated between mild and severe symptoms [ , , ]. we found monotonically increasing risk with increasing depressive symptoms in our cross-sectional analysis, and a u-shaped risk in our longitudinal analysis, where those with mild depressive symptoms had the lowest risk. interestingly, the u-shaped relationship we observed for longitudinal injecting risk is the inverse of some previous findings on sexual risk among msm (where those with mild depressive symptoms had the highest risk) [ ]. this may be due to mild depressive symptoms manifesting differently for injecting behavior compared to sexual behavior, and to inherent differences between pwid and msm populations. depressive symptoms could lead to cognitive distortions, maladaptive coping, and loss of risk aversion [ ] [ ] [ ], and such symptoms may need to become severe in order to be expressed behaviorally as increased frequency of injection drug use (to treat severe symptoms) and, consequently, greater sharing of equipment. although various relationships between depression and hiv transmission risk behaviors have been studied previously, the unique contributions of this study are its focus on pwid living with hiv, a population for whom there are limited data on depression and risk behaviors, and its methodological rigor in inferring causality rather than correlation. our modeling approach controlled time-varying confounding and incorporated the episodic nature of depressive symptoms by using longitudinal data from five study visits over years. given that the longitudinal effect enforced temporal ordering of depressive symptoms prior to risk behaviors, we believe that it more closely reflects the causal effect than does the cross-sectional association. however, it is important to consider the trade-off between temporal ordering and etiologic relevance in the context of data limitations particular to this study. separating the measurement of depressive symptoms and risk behaviors by months (with a -month "blackout period" in between) was necessitated by the parent trial's data structure. this incomplete interval coverage could have attenuated effect estimates relative to what they might have been if the entire interval were included (that is, if depressive symptoms were more likely to influence risk behaviors in the first months of the follow-up interval). a shorter time interval with more complete data coverage may allow better capture of the effect of episodic depressive symptoms on subsequent risk behavior.
inferring causality relies on several key assumptions, which must be evaluated carefully in light of the limitations of this observational study [ , ]. the assumption of no unmeasured confounding holds that there are no systematic differences between participants with and without depression beyond any differences in variables controlled for in the analysis. although we controlled for a variety of confounders, it is possible that unmeasured confounding biased estimates of the effect of depression on risk behaviors. we also assumed positivity (i.e., participants with and without depressive symptoms were in all confounder-defined subsets of the population) and that models were correctly specified without measurement error for covariates. importantly, this study's ascertainment of depression relied on ces-d score categories, and the ces-d is not diagnostic of clinical depression. however, we used a conservative cut-point for severe depressive symptoms with high reliability and validity [ , ]. there may also have been under-ascertainment of risk behaviors due to social desirability and recall bias. however, participants reported high levels of drug use and had been recruited by former drug users (aware of their injection drug use), indicating a willingness to disclose sensitive behaviors. finally, the consistency assumption holds that there is no meaningful variability in treatment relevant to its effect on the outcome. here, we did not model a specific treatment for depression, and results should only be interpreted as the hypothetical effect of eliminating severe depressive symptoms without specifying the treatment used for elimination. our conclusions are specific to this study population, which was not randomly sampled and may not be representative of all pwid living with hiv. while men who inject drugs drive the hiv epidemic in vietnam, our findings may not be applicable to other groups, such as women or pwid in other regions. however, our findings may be broadly generalizable to other asian and european countries where the hiv epidemic is concentrated among similar groups. we also note that the sample size of this hard-to-reach population was relatively small, which limited our statistical power to detect small differences in risk between depression groups. importantly, the risk behavior outcome in our study does not allow direct prediction of forward hiv transmission risk, as we did not take into account viral suppression status, the frequency of risk acts, or partner susceptibility to hiv. these determinants of transmission will be incorporated into a future mathematical modeling analysis that will explicitly estimate forward transmission events from this study population. we found that severe depressive symptoms may perpetuate the risk of sharing injection equipment among pwid living with hiv in vietnam. during the study period, there was very limited access to mental health services for people living with depression in vietnam [ ]. however, in recent years, mental health services have become a national health priority, and there is growing attention and funding for increasing local services and availability of depression treatment [ , ]. screening and treating depressive symptoms among pwid presents an opportunity not only to improve mental health and drug abuse outcomes but also to reduce behaviors associated with hiv transmission risk. funding: doctoral training support for sara n.
levintow was provided by nida (r da ), niaid (t ai - ), and viiv healthcare (pre-doctoral fellowship). the parent trial for this study was funded by nida (r da - ). the funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. this content is solely the responsibility of the authors and does not necessarily represent the official views of the national institutes of health. conflicts of interest: none. ethical approval: this research was approved by the ethical review committees at the thai nguyen center for preventive medicine, the johns hopkins bloomberg school of public health, and the university of north carolina at chapel hill gillings school of global public health. all procedures performed were in accordance with the helsinki declaration and its later amendments or comparable ethical standards. informed consent: written informed consent was obtained from all participants.

global epidemiology of injecting drug use and hiv among people who inject drugs: a systematic review
hiv prevention, treatment, and care services for people who inject drugs: a systematic review of global, regional, and national coverage
the global hiv epidemics among people who inject drugs
the hiv epidemic in eastern europe and central asia
drug use as a driver of hiv risks
estimating per-act hiv transmission risk
a probability model for estimating the force of transmission of hiv infection and its application
scaling up hiv prevention efforts targeting people who inject drugs in central asia: a review of key challenges and ways forward
global, regional, and country-level coverage of interventions to prevent and manage hiv and hepatitis c among people who inject drugs: a systematic review
the perfect storm: incarceration and the high-risk environment perpetuating transmission of hiv, hepatitis c virus, and tuberculosis in eastern europe and central asia
prevalence of depressive symptoms and associated factors among people who inject drugs in china
factors associated with symptoms of depression among injection drug users receiving antiretroviral treatment in indonesia
depression and clinical progression in hiv-infected drug users treated with highly active antiretroviral therapy
frequency of and risk factors for depression among participants in the swiss hiv cohort study (shcs)
prevalence and predictors of depressive symptoms among hiv-positive men who inject drugs in vietnam
longitudinal predictors of depressive symptoms among low income injection drug users
depression as an antecedent of frequency of intravenous drug use in an urban, nontreatment sample
depression severity and drug injection hiv risk behaviors
interrelation between psychiatric disorders and the prevention and treatment of hiv infection
psychiatric disorders and drug use among human immunodeficiency virus-infected adults in the united states
role of depression, stress, and trauma in hiv disease progression
depression in hiv infected patients: a review
psychiatric illness and virologic response in patients initiating highly active antiretroviral therapy
mortality under plausible interventions on antiretroviral treatment and depression in hiv-infected women: an application of the parametric g-formula
risk factors for hiv infection among men who have sex with men
depression, compulsive sexual behavior, and sexual risk-taking among urban young gay and bisexual men: the p cohort study
depression and oral ftc/tdf pre-exposure prophylaxis (prep) among men and transgender women who have sex with men (msm/tgw)
depression, substance use and hiv risk in a probability sample of men who have sex with men
a pilot study examining depressive symptoms, internet use, and sexual risk behaviour among asian men who have sex with men
mortality and hiv transmission among male vietnamese injection drug users
regional differences between people who inject drugs in an hiv prevention trial integrating treatment and prevention (hptn ): a baseline analysis
are negative affective states associated with hiv sexual risk behaviors? a meta-analytic review. health psychol
correlates of depression among hiv-positive women and men who inject drugs
depression and sexual risk behaviours among people who inject drugs: a gender-based analysis
people who inject drugs and have mood disorders: a brief assessment of health risk behaviors
moderate levels of depression predict sexual transmission risk in hiv-infected msm: a longitudinal analysis of data from six sites involved in a prevention for positives study
psychiatric correlates of injection risk behavior among young people who inject drugs
association of depression, anxiety, and suicidal ideation with high-risk behaviors among men who inject drugs in delhi
intimate relationships and patterns of drug and sexual risk behaviors among people who inject drugs in kazakhstan: a latent class analysis
associations of depression and anxiety symptoms with sexual behaviour in women and heterosexual men attending sexual health clinics: a cross-sectional study
the control of confounding by intermediate variables
measuring depression over time or not? lack of unidimensionality and longitudinal measurement invariance in four common rating scales of depression
marginal structural models and causal inference in epidemiology
effect of highly active antiretroviral therapy on time to acquired immunodeficiency syndrome or death using marginal structural models
marginal structural models for analyzing causal effects of time-dependent treatments: an application in perinatal epidemiology
efficacy of a multi-level intervention to reduce injecting and sexual risk behaviors among hiv-infected people who inject drugs in vietnam: a four-arm randomized controlled trial
ministry of health of vietnam. results from the hiv/sti integrated biological and behavioral surveillance (ibbs) in vietnam, round ii
socialist republic of viet nam. vietnam aids response progress report, following up the political declaration on hiv/aids, reporting period
thai nguyen provincial aids center and the division of social evils control and prevention, department of labor
a self-report depression scale for research in the general population
screening value of the center for epidemiologic studies-depression scale among people living with hiv/aids in ho chi minh city, vietnam: a validation study
changes in depressive symptoms and correlates in hiv people at an hoa clinic in ho chi minh city vietnam
constructing inverse probability weights for marginal structural models
estimating causal effects from epidemiological data
the r package geepack for generalized estimating equations
estimating equations for association structures
yet another package for generalized estimating equations. r-news
multiple imputation for nonresponse in surveys
mice: multivariate imputation by chained equations in r
scientists rise up against statistical significance
r: a language and environment for statistical computing
depressive symptoms, social support, and personal health behaviors in young men and women
sex, drugs and escape: a psychological model of hiv-risk sexual behaviours
co-occurrence of treatment nonadherence and continued hiv transmission risk behaviors: implications for positive prevention interventions
testing a social-cognitive model of hiv transmission risk behaviors in hiv-infected msm with and without depression
mental health in vietnam: burden of disease and availability of services
barriers and facilitators to the integration of depression services in primary care in vietnam: a mixed methods study

key: cord- -t ydu l authors: shi, peijun title: hazards, disasters, and risks date: - - journal: disaster risk science doi: . / - - - - _ sha: doc_id: cord_uid: t ydu l

in this chapter, we will elaborate on three basic terms in the field of disaster risk science: hazards, disasters, and risks. we will also discuss the classification, indexes, temporal and spatial patterns, and some other fundamental scientific problems that are related to these three terms. atmospheric hazard: tropical cyclone, tornado, hail, snow, lightning and thunderstorm, long-term climatic change, and short-term climatic change. biophysical hazard: wildfire. space hazard: geomagnetic storm and extra impact events. the hazard groups proposed by joel c. gill et al. are almost equivalent to the hazard families of the icsu-irdr classification except for two differences. one difference is that the meteorological and climatological families of icsu-irdr were combined into a single atmospheric group in gill's classification. the other difference is that the hazard group of shallow earth processes was added in gill's classification in order to emphasize the hazardous impacts of shallow earth changes (table . ). the geophysical entries of the gill and malamud ( ) scheme (hazard group, code, definition, and component hazards where applicable) read as follows:
earthquake (geophysical; code eq): the sudden release of stored elastic energy in the earth's lithosphere, caused by its abrupt movement or fracturing along zones of preexisting geological weakness and resulting in the generation of seismic waves. component hazards: ground shaking, ground rupture, and liquefaction.
tsunami (geophysical; code ts): the displacement of a significant volume of water, generating a series of waves with large wavelengths and low amplitudes. as the waves approach shallow water, their amplitude increases through wave shoaling.
volcanic eruption (geophysical; code vo): the subterranean movement of magma and its eruption and ejection from volcanic systems, together with associated tephra, ashes, and gases, under the influence of the confining pressure and superheated steam and gases. component hazards: gas and aerosol emission, ash and tephra ejection, and pyroclastic and lava flows.
in the book regions of risk by hewitt ( ), hazards were divided into the following categories:
natural hazards include four types (meteorological, hydrological, geological and geomorphological, biological and disease hazards).
technological hazards include hazardous materials, destructive processes, and hazardous designs.
social violence hazards include weapons, crime, and organized violence.
compound hazards include fog, dam failure, and gas explosion.
complex disasters include famine, refugees, poisonous floods, nuclear wastes, and explosions of nuclear power plants (table . ). examples, with their compounding factors, include:
famine (drought + poor harvest + food hoarding + poverty)
refugee crisis (famine + war)
toxic floods (tailings dams + toxic waste + flood)
harmful nuclear tests and power plant explosions (nuclear explosion and pollution + atmospheric circulation + rain and atomic dust + migration)
another way to categorize hazards is based on the environment where hazards occur (also called the disaster-formative environment). the classification based on causes emphasizes the origin of hazards, that is, whether the hazards are caused by natural factors, human factors, or the interaction between natural and human factors. in contrast, the classification based on the disaster-formative environment lays stress on the environmental basis of hazards, especially the distinctions among different spheres of the earth, and relatively ignores the causes. actually, different kinds of hazards nowadays contain effects from both natural and human factors to different degrees, and this is one of the important reasons why the un changed the goal of global disaster reduction activities from natural disaster reduction to disaster risk reduction. ( ) classification of hazards by peijun shi. in shi's paper ( ) published in the journal of nanjing university (natural sciences, special issue on natural hazards), hazards were divided into four levels: systems, groups, types, and kinds. this classification highlights not only the occurrence environment but also the causes of hazards (shi ). the first level of this classification is focused on the causes, the second level on the environments, the third level on the types, and the fourth level on the detailed hazards. the hazard system is composed of three systems: nature, human, and environment. the natural hazard system is divided into four groups: atmosphere, lithosphere, hydrosphere, and biosphere; these hazards are mainly caused by natural environmental factors. the human hazard system includes three groups: technology, conflicts, and wars; these hazards are mainly caused by human environmental factors. the environmental hazard system is made up of five groups: global change, environmental pollution, desertification, vegetation degradation, and environmental diseases; these hazards are due to integrated natural and human factors. ( ) classification of hazards by zhang lansheng and liu enzhen. the atlas of natural hazards in china, edited by zhang and liu as a result of the cooperation between beijing normal university and the people's insurance company of china, was published by china science press (beijing) in . based on the atlas, the paper a research on regional distribution of major natural hazards in china by wang et al. ( ) was published, and the classification system of major natural hazards in china consisting of types and subtypes (table . ) was built. the major natural hazards in china can be divided into environments, types, and subtypes based on the differences in disaster-formative environments. atmosphere, including nine natural hazards: drought, typhoon, rainstorm, hailstorm, extreme low temperatures, frost, ice and snow, sandstorm, and dry-hot wind. hydrosphere, including five natural hazards: flood, waterlogging, storm surge, sea wave, and tsunami. lithosphere, including five natural hazards: earthquake, landslide, debris flow, subsidence, and wind-drift sand. biosphere, including six natural hazards: crop diseases, crop pests, forest diseases and pests, rodents, poisonous weeds, and red tide.
( ) intensity classification of single hazard. the intensity classification of a single hazard is based on the measurement specifications and standards for hazards. hazards of different origins and in different environments are measured by different indicators; for example, earthquakes are measured in magnitude, rainstorms in rainfall intensity, typhoons in maximum sustained wind, and floods in flood stage. those hazard measurement specifications and classification standards can be found on the websites of international or national departments of measurement standards. generally speaking, meteorological departments set up the measurement specifications and classification standards for atmospheric hazards; hydrological or water resources and oceanic administrations, for hydrosphere hazards; and geological and seismological departments, for lithosphere hazards. a large number of observations show that there is a negative correlation between hazard intensity and frequency; in other words, the higher the intensity, the lower the occurring frequency and the longer the repeating period. there is a power-function relationship between the hazard intensity and the occurring frequency (chen and shi ). refer to textbooks or monographs on geoscience, life science, and resources and environmental science for the intensity classification of single hazards. ( ) intensity classification of multi-hazards. regional and integrated disaster risk research requires scientists to understand the diversity of hazards at different spatial and temporal scales and to classify the intensities of multi-hazards. because the measurement indicators vary among different hazards and there is no universal indicator, the intensity classification method for single hazards mentioned in the previous section cannot meet the needs of regional and comprehensive studies of the diversity of hazards. based on current data, it is very difficult to synthesize various hazard intensities measured in different indicators. one way to get around this problem is to divide each kind of hazard intensity into relative levels and then calculate the average of the levels weighted by the area that the respective type of hazard covers during a certain period of time. this method can approximately reflect the overall regional hazard intensities in a certain space and period of time. but there is one problem with this method: different hazards with the same level of relative intensity might have different impacts on hazard-affected bodies. therefore, in order to eliminate this effect, another term is added: the weighted average of the loss rate of each hazard in a certain space and time period. referring to the quadrat method in vegetation investigation, we propose to use multiple degree to describe the abundance of hazards in a region; another way to do this is similar to the multiple cropping index calculation in land-use research. based on wang et al.'s paper ( ), in this book we propose to use the multiple degree and covering index of hazards to express the clustering degree and influence of multiple hazards in a region. multiple degree (h_d): the clustering degree of hazards in a certain region.
as a relative value changing with the compared region, it can be expressed as

    h_d = n / N × 100%

where h_d is the multiple degree of hazards in a region (%), n is the number of hazards in the region, and N is the number of hazards in a higher level of region (e.g., world, asia, china). the value of N is set to be (table . ) for the calculation of the county-level multiple degree of natural hazards in china. relative intensity (h_i): the relative destructive or damaging ability of hazards. relative intensity is a relative value and only a quantity of the hazard per se; it does not have an obvious positive correlation with the disaster loss or damage but is the basic reason (condition) for the regional loss. it can be calculated as follows:

    h_i = Σ (i = 1 to m) p_i × s_i

where h_i is the relative intensity (level) of hazards in a region, p_i is the relative intensity of hazard i, s_i is the area ratio of hazard i (ranging from 0 to 1, i.e., 0-100%), and m is the number of hazard types. covering index of hazards (h_c): the percentage of the covering area of hazards in a region. it can be expressed as

    h_c = Σ (i = 1 to m) s_i

where s_i is the percentage of the covering area of a type of hazard in a region and m is the number of hazard types. composite index (h): the sum of the three indexes mentioned above divided by their respective maximum values. the formula is

    h = h_d / max(h_d) + h_i / max(h_i) + h_c / max(h_c)

where h_d is the hazard multiple degree, h_i is the relative intensity, h_c is the covering index of hazards in a region, and max() is the maximum value of the respective index over all compared regions.
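as an illustration of how the four indexes combine, the following python sketch computes them for a single hypothetical region. all hazard names, intensity levels, area ratios, and region-wide maxima below are invented for the example and are not taken from the source:

    # illustrative sketch (hypothetical numbers, not from wang et al.):
    # computing multiple degree, relative intensity, covering index,
    # and composite index for one region.

    # hazards observed in the region: relative intensity level p_i and
    # area ratio s_i (fraction of the region covered, 0-1).
    hazards = {
        "drought":    {"p": 3.0, "s": 0.60},
        "flood":      {"p": 4.0, "s": 0.25},
        "earthquake": {"p": 2.0, "s": 0.10},
    }

    n_region = len(hazards)      # hazards present in the region
    n_total = 30                 # hazards recognized at the higher level (assumed)

    h_d = n_region / n_total * 100                          # multiple degree (%)
    h_i = sum(h["p"] * h["s"] for h in hazards.values())    # relative intensity
    h_c = sum(h["s"] for h in hazards.values())             # covering index

    # composite index: each index divided by its maximum over all regions
    # (maxima assumed here purely for illustration).
    max_h_d, max_h_i, max_h_c = 40.0, 4.0, 1.2
    h = h_d / max_h_d + h_i / max_h_i + h_c / max_h_c

    print(f"h_d={h_d:.1f}%  h_i={h_i:.2f}  h_c={h_c:.2f}  composite h={h:.2f}")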
we will use the calculated results in wang et al.'s paper ( ) to demonstrate the practical application of the four indexes: multiple degree, relative intensity, covering index, and composite index of hazards. multiple degree of natural hazards: in fig. . , the maximum value of the natural hazard multiple degree in china is about eight times as large as the minimum value; the value ranges from below . to above . . this large variation shows that there is an obvious spatial clustering feature of natural hazards in china. generally speaking, the high values are centered in north china and decrease toward the northeast, northwest, and southeast. ninety percent of the districts and counties with h_d values greater than % are located in the middle latitude belt ( ° - °n). in southwest china, where the h_d values are relatively low, the h_d value increases in some topography-transition areas. thus, it can be seen that natural hazards cluster in natural environment transition zones, such as the middle latitude belt, sea-land transition zones, topography-transition areas, and the semiarid farming-pastoral ecotone. in the transition regions of several natural environments, there exist continuous areas with high h_d values. north china is right in such a location and thus becomes the most concentrated area of natural hazards in china, and an important part of the pacific rim and mid-latitude multiple hazard belt. therefore, regional natural hazard factors are of important value for gauging the degree of regional natural environmental change. covering index of natural hazards: figure . shows that there is a large variation in the covering index of natural hazards in china, ranging from less than . to more than . and indicating obvious regional differences. on the whole, the trapezoid region with qiqihar, harbin, tianshui, and hangzhou as its four vertexes has the highest h_c values (> . ) in the country. in this high-value region, the northeast china plain and the north china plain have values usually greater than . . the regions with h_c values greater than display a lambda-shaped layout; that is, one line is qiqihar-tongliao-beijing-taiyuan-baoji-tianshui, and the other line stretches from southern hebei province to hangzhou along the grand canal. the low-value regions are centered in the northern tibetan plateau, from which the h_c value increases outwards. in the regions south of the yangtze river, there are two high-value belts: the southeast coastal belt and the southwestern provinces including yunnan, guizhou, and sichuan. there is a positive correlation between the h_c value and the h_d value; it can be seen from figs. . and . that the h_c and h_d values display broadly similar spatial patterns. relative intensity of natural hazards: figure . shows that h_i values are within the range of . - . . regions with an h_i value greater than . are sparsely distributed. one high-intensity area (h_i > . ) stretches from the northeast to the southwest, and another lies on the southeastern side of the first one, in hunan and jiangxi provinces. the relative intensities in the vast north-central tibetan plateau and northwest china are relatively low. the regional differentiation of relative intensity is tightly associated with the regional distribution of several major hazards. first of all, the seismically active belts of china, i.e., the pacific rim belt and the himalayan seismic belt, have correspondingly high intensities; seismic regions, once having an earthquake with a magnitude greater than , usually become small high-intensity centers, such as parts of west china and tangshan. secondly, the high-intensity regions overlap with the regions concentrated with cloudbursts, for example, the coastal typhoon belt, the northern hebei mountains-taihang mountains-dabie mountains cloudburst belt, and the cloudburst belts in western sichuan and western hunan (shi ). thirdly, the frequently flooded areas also correspond to high relative intensity areas; these areas include the liaohe plain, the north china plain, the northern jiangsu plain, and the hubei and hunan plains. finally, areas with frequent debris flows and landslides, mainly in the "second step" east of the tibetan plateau, have high values of relative intensity. therefore, the overall relative intensity of natural hazards is controlled by several major natural hazards. however, these major natural hazards may also interact in the same region, which makes the regional differentiation of the relative intensity of china's natural hazards more complicated and also expands the high-intensity regions. in every high relative intensity area, there is at least one dominant natural hazard. relationships among multiple degree, relative intensity, and covering index: the interaction of the three indexes varies among regions. figure . shows the regional distribution of the composite index of natural hazards in china. north china has the highest values of all three indexes and thus is affected by frequent and catastrophic hazards. coastal areas have the second highest values of the three indexes and are subject to frequent and severe hazards. the third highest value regions include the farming-pastoral ecotone in northern and western sichuan, yunnan, western guizhou, and the southeastern tibetan plateau in the southwest. northern tibet, by contrast, is a low-value region. the above outlines the basic regional differentiation of natural hazards in mainland china (shi ). there are differences in natural hazards between eastern and western china and between southern and northern china.
as for the east-west differentiation, the values of multiple degree, relative intensity, and covering index are higher in the east and lower in the west; the high values in the east are centered in north china, while the low values in the west are centered in northern tibet. as for the north-south differentiation, the vast area within ° to °n in the east has values of all three indexes higher than the areas to its south and north, and within this vast area the highest values occur in the range of ° to °n. however, the north-south differentiation in the west is not obvious, since there are incomplete data records, especially in the border area among tibet, qinghai, and xinjiang, i.e., the hoh xil region; due to inadequate data, this region has the lowest values of all three indexes nationwide. the regional differentiation of natural hazards is closely associated with the environments where hazards develop. the environmental evolution-sensitive zones usually have a high multiple degree, high relative intensity, and high covering index, suffering frequent or severe hazards. however, a small number of ecologically vulnerable areas have low values of multiple degree and intensity; one outstanding example is eastern guizhou. in areas with a harsh environment, such as vast west china, the multiple degree and relative intensity are not necessarily high. therefore, there is no direct relationship between environmental conditions and the impacts of natural hazards. disasters are the direct or indirect results of hazards. disaster impacts include human losses, property losses, resources and environmental destruction, ecological damages, disruption of social order, and threats to the normal functioning of lifelines and production lines. the classification of disasters is closely associated with hazards and disaster-affected bodies. in chinese literatures, "zaihai" is used to refer to both hazards and disasters, whereas in western literatures hazard and disaster are two terms used separately. most research in the west is focused on the classification of hazards, rarely on the classification of disasters; in chinese literatures, by contrast, the classification of disasters takes the place of the classifications of both hazards and disasters. this confusion of hazards with disasters, or the confusion of hazard science with disaster science (e.g., seismology substituting for earthquake catastrophology, and rainstorm meteorology for rainstorm catastrophology), negatively affects the development of disaster risk science. with the development of human society, the types of disaster-affected bodies (exposure) have increased and the distribution of disaster-affected bodies has expanded; at the same time, humans' ability for disaster prevention has also improved. therefore, even the same hazard can induce varying degrees of disasters. when analyzing disasters, people stress the disaster-affected bodies, namely, they focus on the human disaster prevention level, which is referred to as the vulnerability, resilience, and adaptation of human beings to hazards in the western literatures. as mentioned above, western research places more emphasis on the classification of hazards than on that of disasters. in chinese official documents and research literatures, the majority are classifications of disasters based on the causes and scales of disasters. the genetic classification of disasters according to their causes in chinese literatures is basically the same as the classification of hazards in western literatures.
( ) in the book introduction to catastrophology by ma ( ), disasters are divided according to their causes into natural disasters and man-made disasters. natural disasters can be further categorized into purely natural disasters and man-made natural disasters (natural disasters triggered by human activities), while man-made disasters are composed of purely man-made disasters and natural man-made disasters (man-made disasters triggered by natural events). when the management of disasters is taken into account at the same time, disasters can be divided into classes and further into types. in the book, the author also clearly pointed out the administrative departments in charge of each type of disaster (table . ). this classification differs from others in its treatment of the disaster-formative environments, with the inclusion of the ocean sphere instead of the hydrosphere; another difference is that the sources of flood and drought are attributed to the atmosphere in ma's classification (ma ). besides, this classification is basically in accordance with the classifications in the prc disaster reduction report ( ) and major natural disasters and disaster reduction in china ( ) (table . ). similar to the classification of introduction to catastrophology, in the book natural disasters by chen ( ), natural disasters were divided, based on the differences between the internal, external, and gravitational energy of the earth, into seven major categories: earthquakes, tsunamis, volcanoes, meteorological disasters, floods, landslides and debris flows, and spatial disasters. this classification not only reflects the holistic view of disasters but also emphasizes the timescales of disasters and the environmental processes of the earth system. ( ) classification of disasters in chinese state standards. for the dual purposes of comprehensive prevention, reduction, and relief of disasters and of counting the losses and damages caused by natural disasters, experts organized by the state disaster reduction center of the ministry of civil affairs drew up the classification standards of natural disasters in china, in which the definition and code of each disaster are also given. in this classification, natural disasters in china are divided into groups and specific types, comprising specific meteorological and hydrological disasters, seismic and geological disasters, ocean disasters, biological disasters, and eco-environmental disasters (table . ).
meteorological and hydrological disasters (group definition): natural disasters resulting from the abnormal or anomalous quantity, intensity, temporal and spatial distribution, and combination of meteorological and hydrological elements, causing adverse impacts on people's lives and properties, industrial and agricultural production, and the ecological environment.
- drought: a deficiency in precipitation and/or a shortage in river runoff or other kinds of water resources, causing adverse impacts on people's lives, industrial and agricultural production, and the ecological environment.
- flood: an overflow of water from rivers or other water bodies onto land that is usually dry, caused by excessive rainfall, melting snow and ice, levee breach, and storm surge, resulting in life losses, property losses, and disruption of social functioning.
- typhoon: a tropical cyclone that develops over a wide area of tropical or subtropical oceans, accompanied by heavy winds, rainstorm, storm surge, and huge waves, causing damages to human lives and properties.
- rainstorm: a rainstorm happens when the precipitation rate is more than mm per hour, or more than mm for h or mm for h, causing damages to human lives and properties.
- hail: a type of solid precipitation formed in thunderstorm clouds and controlled by strong convective weather, causing damages to human lives and properties and to crops and animals.
- thunder disaster: an electric discharge, directly or indirectly striking humans and animals, resulting in damages to human lives and properties.
- low-temperature disaster: the intrusion of a strong cold front or constant low temperatures, causing freezing injury and damages to crops, animals, human beings, and infrastructures, and disrupting normal life and production.
- snow and ice disaster: due to snowfall, a wide area is covered with snow or affected by snowstorm, avalanche, and frozen roads and other infrastructures. it severely disturbs the lives of human beings and animals and causes damages to traffic, power, and communication systems.
- high-temperature disaster: high temperatures cause harm to the health of animals, plants, and human beings and damages to production and the environment.
- sandstorm: a strong wind blows loose sand and dirt from a dry surface that later mix with the air, causing damages to human lives and properties. horizontal visibility is usually less than km.
- fog: a visible mass composed of cloud water droplets and ice crystals suspended in the air or near the earth's surface, causing damages to human lives and properties and especially harming traffic safety. horizontal visibility is usually less than km.
- other meteorological and hydrological disasters: meteorological and hydrological disasters that are not mentioned above.

seismic and geological disasters (group definition): natural disasters resulting from the sudden energy release or violent mass transport in the lithosphere of the earth or from long-term accumulative geological changes, causing damages to human lives and properties and the ecological environment.
- earthquake: the strong shaking of the earth's surface and the accompanying ground rupture, resulting from the sudden release of energy in the earth's crust. it causes damages to human lives, buildings and infrastructures, social functioning, and the eco-environment.
- volcanic eruption: the sudden occurrence of a violent discharge of the interior materials of the earth, causing direct damages to human lives and properties. the erupted material is referred to as lava. other impacts include pyroclastic flow, lava flow, volcanic gases and ashes, and eruption-induced debris flow, landslide, earthquake, and tsunami.
- collapse: the sudden fall of unstable materials occurring at the edge of a steep cliff, causing damages to human lives and properties.
- landslide: a slide of a large mass of dirt and rock down a slope under the action of gravity, causing damages to human lives and properties.
- debris flow: a special water flow that, entraining objects such as fragmented rocks, muds, and branches in its path, rapidly rushes down mountain valleys or slopes. it results from heavy rains, reservoir or pond breach, or a sudden melting of snow and ice, causing damages to human lives and properties.
- surface collapse: a surface depression due to abandoned mines or karst processes, causing damages to human lives and properties.
- ground subsidence: a large-area land subsidence due to excessive extraction of groundwater or gas and oil, causing damages to human lives and properties. it occurs in unconsolidated or semi-consolidated soil areas.
- ground fracture: a linear fissure on the ground surface cracking through the rocks or soils, causing damages to human lives and properties.
- other geological disasters: geological disasters that are not mentioned above.

ocean disasters (group definition): disasters resulting from the abnormal or drastic change of the ocean environment and occurring on the sea or coast.
- storm surge: a coastal flood caused by non-periodic abnormal rising of water over part of the sea that results from a tropical cyclone, extratropical cyclone, or cold front, causing damages to human lives and properties along the coast.
- sea wave: sea waves with wave heights of more than meters, causing damages to ships, offshore oil drilling facilities, fishery, aquaculture, harbors and ports, seawalls, or other ocean and coastal engineering.
- sea ice: it blocks channels and causes damages to ships, offshore facilities, and coastal engineering.
- tsunami: sea waves with wavelengths up to hundreds of kilometers, induced by seafloor earthquakes, volcanic eruptions, underwater landslides, and subsidence, producing a sudden upward displacement of seawater and forming a "water wall" on the coast, devouring farmlands and villages and causing damages to human lives and properties.
- red tide: a sudden increase or high concentration of aquatic planktons and microorganisms changing the water body color to red or brown. it disrupts the normal aquatic ecology and causes damages to human lives and properties and the eco-environment. see also the red tide disaster under the biological disasters.
- other ocean disasters: ocean disasters that are not mentioned above.

biological disasters (group definition): natural disasters in the forest or grassland resulting from the activities of living beings, lightning, or spontaneous combustion, causing damages to crops, woods, cultivated animals, and related facilities.
- plant diseases and pests: an outbreak of pathogenic microorganisms and pests, harming farming and forestry.
- pandemic disease: an epidemic of infectious disease caused by microorganisms or parasites that rapidly spreads through human or animal populations, usually resulting in high morbidity or mortality.
it causes great damages to animal husbandry and harm to human health and life safety.
- rodents: an outbreak of rodent-related disasters, causing damages to plantation, animal husbandry, forestry, and properties.
- weeds: weeds cause severe damages to plantation, animal husbandry, forestry, and human health.
- red tide: a sudden increase or high concentration of aquatic planktons and microorganisms changing the water body color to red or brown. it disrupts the normal aquatic ecology and causes damages to human lives and properties and the eco-environment.
- forest/grassland fire: a fire in a forest or grassland caused by lightning, spontaneous combustion, or human beings under combustible conditions. it causes damages to human lives and properties and the eco-environment.
- other biological disasters: biological disasters that are not mentioned above.

eco-environmental disasters (group definition): natural disasters induced by damage to ecosystems or ecological imbalance, bringing negative impacts on the harmony between human beings and nature and on the living environment of human beings.

from the comparison between the classification of the twelfth five-year special plan and that of the state standards, it can be seen that they share the same five big groups of natural disasters, but the latter has more specific types than the former. besides, an emergency incident is defined as "a natural disaster, accidental disaster, public health incident or social safety incident, which takes place by accident, has caused or might cause serious social damage and needs the adoption of emergency response measures" in the emergency response law of the people's republic of china ( ). there is no universal standard for the classification of disaster scale. although there are different standards in different fields, the major factor considered is the scale of the disasters induced by the hazardous event. generally, the classification indicators include the number of casualties, the amount of property loss, the disaster-affected area, and the hazard intensity. ( ) indicator system of unisdr. in the sendai framework for disaster risk reduction - , there are seven disaster reduction indicators, four of which are related to the measuring of disasters, namely disaster mortality, affected people, direct economic loss, and damage to critical infrastructure and disruption of basic services. disaster mortality: the number of people killed or missing from a hazardous event. the death toll refers to the number of people who died during or after the event, while the missing toll refers only to the total number of missing people during the event. besides counting the total number of dead and missing people, it is also important to calculate the number of killed and missing people per , people; thus, the effect of the population base can be eliminated in temporal and spatial comparisons of mortality. affected people: the total population affected directly or indirectly by disasters. directly affected people are those whose health was affected, such as injured and sick people; those evacuated, displaced, or relocated; and those who suffered disaster-induced direct damages to livelihoods, infrastructure, social culture, environment, and properties. at the same time, disaster statistics also need to include people whose houses were destroyed or collapsed and people who receive food aid. the indirectly affected population are those who suffered from the additive effects of disasters, namely people affected by disaster-induced disruption or modification of the economy, critical facilities, basic services, business, work, society, and health.
in practice, due to the difficulty in counting the indirectly affected population, only the directly affected population are included in disaster statistics. likewise, it is also worth calculating the number of affected people per , people. in addition to counting the killed and missing people and affected people, it is also common to specify their ages, genders, residence addresses, and disabilities. direct economic loss: disaster-induced loss of materials or properties, such as houses, factories, and infrastructures. usually, after the occurrence of a disaster, it is advised to assess the property loss as soon as possible to facilitate cost estimation for disaster recovery and insurance claims processing. it is also recommended to calculate the percentage of direct economic loss accounting for the global or national gross domestic product (gdp). direct economic loss can be further divided into agriculture loss and losses of industrial and commercial facilities, houses, and critical infrastructure damaged or destroyed by disasters.
- direct agriculture loss: crop and livestock losses, also including the losses of poultry, fishery, and forestry.
- industrial facilities damaged or destroyed: the loss of manufacturing and industrial facilities damaged or destroyed by hazardous events.
- commercial facilities damaged or destroyed: the loss of commercial facilities (including storage, warehouses, cargo terminals, etc.) that are damaged or destroyed by hazardous events.
- houses damaged: the loss of houses slightly affected by hazardous events and subject to no structural or architectural damages. after repair or cleanup, these damaged houses are still habitable.
- houses destroyed: the loss of houses that collapsed or were burnt, washed away, or severely damaged and are no longer suitable for long-term habitation.
- critical infrastructure damaged or destroyed: the loss of educational and health facilities and roads damaged or destroyed by hazardous events.
- educational facilities damaged or destroyed: the number of educational facilities damaged or destroyed by hazardous events. educational facilities include children's playrooms, kindergartens, elementary schools, high schools (junior and senior), vocational schools, colleges, universities, training centers, adult education schools, military schools, and prison schools.
- health facilities damaged or destroyed: the number of health facilities damaged or destroyed by hazardous events. health facilities include health centers, clinics, local or regional hospitals, outpatient centers, and facilities that provide basic health services.
- roads damaged or destroyed: the length of road networks in kilometers that are damaged or destroyed by hazardous events.
- infrastructure damaged or destroyed: the loss of infrastructures other than the critical infrastructures, such as railways, ports, and airports.
- railways damaged or destroyed: the length of railway networks in kilometers that are damaged or destroyed by hazardous events.
- ports damaged or destroyed: the number of ports that are damaged or destroyed by hazardous events.
- airports damaged or destroyed: the number of airports that are damaged or destroyed by hazardous events.
basic services: the disruption of public services, or time lost due to low-quality services, caused by hazardous events.
basic services include health facilities, educational facilities, the transportation system (including train and bus terminals), the ict system, water supply, solid waste management, the power supply system, emergency response, etc. the health facilities, educational facilities, and transportation system are mentioned above in the critical infrastructure loss and infrastructure loss sections. the ict system refers to communications and the associated equipment network, including radio and tv stations, post offices, public information offices, the internet, and landline and mobile telephones. water supply includes drinking water supply and sewerage systems. the drinking water supply system includes the drainage system, water processing facilities, water transporting channels (channels and aqueducts) and canals, and water tanks or towers. the sewerage system includes public sanitary facilities, the sewerage treatment system, and the collection and treatment of solid wastes from public sanitation. solid waste management refers to the collection and treatment of solid wastes that are not from public sanitation. the power/energy system includes power facilities, electrical substations, power control centers, and other power services. emergency response includes disaster management offices, fire departments, police stations, the military, and emergency control centers. ( ) indicator system of the statistical system of damages and losses of large-scale natural disasters in china. the ministry of civil affairs and the national bureau of statistics of china jointly introduced the regulation statistical system of damages and losses of large-scale natural disasters in , which brought the comprehensive assessment of natural disaster loss into the regulation system (shi and yuan ). this statistical system explains the purpose and meaning of statistics of large-scale disasters and defines the statistical scope and major indicators. other contents described in this regulation include the submission procedure, the forms of organization and data collection, the loss statistical report forms ( of which is the loss summary table), the basic report, and indicators. some examples of these indicators are affected people, houses damaged and destroyed, household property loss, agriculture loss, industry loss, service loss, infrastructure loss, loss of the public service system, resources and environmental loss, and so on (table . ). figure . shows the changes in the percentage of direct economic loss accounting for gdp and in human mortality caused by disasters in china ( - ; wenchuan earthquake data are not included). the overall decreasing trends of the two items demonstrate a good result of comprehensive disaster reduction. compared to the disaster indicators in the sendai framework for disaster risk reduction - , which incorporates both human-made and natural disasters, the statistical system can only be applied to natural disasters. in contrast to the emphasis of the latter on comprehensiveness, the former only highlights the key points.
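the normalizations recommended above (mortality per , people, and direct economic loss as a share of gdp) amount to simple rate calculations. the following python sketch uses hypothetical figures chosen purely for illustration:

    # illustrative sketch with hypothetical figures: normalizing disaster
    # indicators so that regions and years can be compared.
    deaths_and_missing = 1_250        # people killed or missing (assumed)
    population = 8_400_000            # population of the affected region (assumed)
    direct_economic_loss = 2.1e9      # in local currency units (assumed)
    gdp = 310e9                       # regional gdp, same units (assumed)

    mortality_per_100k = deaths_and_missing / population * 100_000
    loss_share_of_gdp = direct_economic_loss / gdp * 100

    print(f"mortality: {mortality_per_100k:.1f} per 100,000")
    print(f"direct economic loss: {loss_share_of_gdp:.2f}% of gdp")

dividing by the population base (or by gdp) is what allows temporal and spatial comparison across regions of very different sizes, as noted above.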
another difference between these two is that the latter includes the resources and environmental damages caused by natural disasters, while the former stresses the effectiveness and quality losses of infrastructure and services caused by disasters. the loss statistical report forms of the chinese system (table . ) include: reports of rural residential houses damaged and destroyed; urban residential houses damaged and destroyed; non-residential houses damaged and destroyed; household property loss; agriculture loss; industry loss; service loss; infrastructure loss (transportation, communications, energy, water conservancy, municipal services, living facilities in rural areas, and geological hazard prevention); public service loss (educational, technology, health, culture, media, sports, social security and service, social management, and cultural heritage systems); resources and environmental loss; and basic indicators. therefore, there are similarities in the disaster indicators of these two regulations, and there are also differences due to social and cultural differences. even though some indicators share the same name in the two systems, the actual meanings might be different; in practice, people need to be cautious in choosing the right indicator(s). at present, the classification of disaster grade mainly adopts the standardized division method of the risk factors of each disaster, while there is no standard division for multi-hazard classification. a qualitative approach is usually used to classify disaster intensity levels, that is, the use of continuous quantitative or semiquantitative indicators; for example, applied multi-risk mapping of natural hazards for impact assessment (armonia) categorizes a disaster into a high, medium, or low level according to its intensity. another example is the hazard score proposed by odeh engineers inc. ( ), which takes into account the level, the frequency, and the percentage of the affected area in the total research area; a higher score means the hazard has a higher intensity. the world natural disaster hotspots identified by the world bank are based on . ° × . ° grid cells for risk assessment. in each grid cell, the hazard indexes of all types of hazards that occurred are summed to give a score for the determination of hotspots; the hazard index of each type of hazard is established according to the corresponding data. the term very large-scale disaster emerged at the beginning of the twenty-first century. at the end of the twentieth century, a series of disasters happened worldwide and caused great impacts on human society and the economy; for example, hurricane andrew, which occurred in the usa in , claimed lives and caused a -billion-dollar loss. definition of very large-scale disaster. the chinese word "juzai" appeared in in china for the first time and was used, somewhat misleadingly, as a translation of the term "catastrophic disaster" in the western literatures.
the appearance of "juzai" in chinese media and academia is closely related to the founding and explanation of catastrophic disaster insurance funds. according to statistical data from cnki.com.cn, as of the end of december , there were up to publications that include "juzai" in their titles, and the number of papers increased annually, with the peak of publications in the year . more than half of these papers are related to catastrophic disaster insurance. due to the frequent occurrence of very large-scale disasters in recent years, new words such as "juzai prevention," "juzai relief," and "juzai assessment" are becoming more and more widely used in scientific publications. in the chinese academic literatures, it was the author of this book who first introduced the word "dazai" for "large-scale disaster" and "juzai" for "very large-scale disaster" in the western literatures, after attending the high level advisory board seminar on the financial management of large-scale catastrophes held by oecd in paris in july (shi et al. ). although a lot of work has been done on the definition and classification of very large-scale disasters, there are no well-recognized definition and classification standards for very large-scale disasters in the fields of academia or finance; different scholars have their own angles. in the western literatures, the following definitions have great influence. in the book large-scale disasters - lessons learned, published by oecd in , the terms large-scale disasters (or megadisasters) and very large-scale disasters were used, but a specific quantitative criterion was not provided. in oecd's opinion, very large-scale disasters can cause a great number of casualties, property losses, and widespread infrastructure damage; the impacts are so great that the governments of the affected area and neighboring regions become unable to cope, and even public panic occurs. oecd also emphasizes the importance of cooperation and assistance among the member countries in response to very large-scale disasters (oecd ). in the book large-scale disaster - prediction, control, reduction by mohamed gad-el-hak ( ), disasters are divided into large-scale and very large-scale disasters based upon the disaster scope and death toll (fig. . ); a very large-scale disaster is defined as a disaster with a death toll of more than , or an affected area of over km². the definition of catastrophic disaster is usually based on the scale of the insured property losses by experts on insurance and financial management and development. the insurance services office (iso) of the usa defines a catastrophic disaster as an event that causes insured property losses of million dollars or more and affects a significant number of property/casualty policyholders and insurers. swiss re uses losses of more than . million us dollars as a standard. from the amount of property losses, it can be seen that the scale of a catastrophic disaster cannot reach that of a large-scale disaster or megadisaster, let alone a very large-scale disaster. this also shows that the term "juzai" mentioned in the chinese literatures in the late s was at the scale of a catastrophic disaster and attracted attention only from experts on insurance and financial management and development. therefore, before the use of large-scale disasters or megadisasters and very large-scale disasters in the western literatures in the early twenty-first century, the term "juzai" in the chinese literatures only referred to a catastrophic disaster.
from the angle of geoscientists, very large-scale disasters are usually defined according to the hazard intensity, casualties, property losses, and affected scope. a very large-scale disaster in ma's opinion must reach two of the following criteria: over , deaths; direct economic losses of more than billion chinese yuan of ; economic losses of more than the average annual fiscal revenue of the previous three years of a chinese province; a drought disaster rate of more than % or a flood disaster rate of more than %; crop losses of more than % of the average annual crop production of the previous three years of a chinese province; more than , houses collapsed; or a livestock death toll of more than million (ma et al. ). shi et al. define a very large-scale disaster as a great disaster caused by a -year hazard (e.g., a . -magnitude or stronger earthquake) that results in a great number of casualties and large and widespread property losses (shi et al. ). also in shi's definition, the impacts of a very large-scale disaster are so great that the affected area is unable to respond by itself and has to resort to outside help (table . ). according to the classification standard in table . , the very large-scale disasters caused by natural hazards worldwide between and are listed in table . . from table . , we can see that one of the characteristics of very large-scale disasters is the large hazard intensity. a very large-scale disaster can be a disaster chain composed of a very large hazard and its induced secondary disasters; it can also be a superposition of multiple types of disasters triggered by multiple hazards in a specific region during a specific period of time. besides, very large-scale disasters usually cause a great number of deaths and injuries, a huge amount of property losses, severe impacts on the economy, society, and natural environment, and a large disaster area. the lower levels of the classification standard (table . ) read as follows:
- large-scale: . - . (earthquake magnitude) or / a - / a (occurrence frequency); direct economic loss . - . ; death toll , - , .
- medium-scale: . - . (earthquake magnitude) or / a - / a; direct economic loss . - . ; death toll - .
- small-scale: below . (earthquake magnitude) or below / a; direct economic loss below . .
notes: ( ) for each disaster level, at least two of the four criteria must be met. ( ) the death toll includes both the people dead and the people missing over month. ( ) direct economic loss is the value of the actual disaster-caused property loss of the year. ( ) the affected area is the area where there are casualties, property losses, or damages to ecosystems.
emergency aid and reconstruction during or after the occurrence of very large-scale disasters usually need help from a larger region or the whole country; in some cases, even international aid is indispensable. all the very large-scale disasters mentioned so far are caused by sudden hazards. the indicators and classification standards for disasters caused by the accumulation of gradual hazards should be different (zhang et al. ); however, there are few discussions about the classification standards of gradually generated very large-scale disasters. drought is one of the major natural disasters in both china and the world. since , a number of severe droughts causing great numbers of casualties and huge property losses have happened in china; for example, more than tens of thousands of people were killed in the three-year great drought from to . based on the case of drought, we will discuss the classification standard of gradual very large-scale disasters below. we cannot use hazard intensity to measure or to classify very large-scale droughts.
this is because the forming process of a drought is very complicated. a drought hazard could be meteorological or hydrological; it can also be a soil drought or a socioeconomic drought. the indicators and measurement criteria vary among the different types of droughts, and the data and study methods are also different. what is more, there is no linear relationship between drought intensity and drought losses, and there is no definite relationship between drought hazards and the formation of drought disasters either. the impacts of a very large-scale drought disaster can be represented in crop losses and the population in need of aid. drought could result in a bad harvest or total crop failure and in water shortages for both human beings and livestock; industrial production, urban water supply, and the ecological environment could also be affected to varying degrees if a drought lasts for a long time. in the statistical system of damages and losses of natural disasters by the prc ministry of civil affairs ( ), the following items are included in the statistics of droughts: affected population, population affected by water shortage, number of livestock affected by water shortage, affected crop area, crop disaster area, total crop failure area, affected grassland area, and population in need of food and water aid. the inclusion of the population affected by water shortage and the population in need of aid in this statistical system demonstrates the "people-oriented" disaster relief philosophy. in the state-level contingency plan for natural disaster relief by the general office of the state council of the prc, it is mentioned that when the number of people in need of food and water aid from governments accounts for a certain percentage of the agricultural population or reaches a designated magnitude, the state will initiate an emergency response of the corresponding level (table . ). based on the severe droughts in china in table . , five criteria are used to define a very large-scale drought disaster: crop disaster ratio, crop disaster area, disaster population, ratio of population in need of aid, and direct economic loss (table . ). ( ) indicator explanation. affected crop area is the crop area that has a reduction of more than % of production. crop disaster area is the crop area that has a reduction of more than % of production. crop disaster ratio is the ratio of the crop disaster area over the affected crop area. population affected is the number of people who suffer losses caused by natural disasters (including non-permanent residents). disaster population is the population affected by the crop disaster; in the table, it is estimated from the crop disaster area and the cultivated area per capita in the disaster province. population in need of aid is the number of people who are directly affected by natural disasters and are in need of food and water supply or medical treatment from the government (including non-permanent residents). the ratio of population in need of aid is the ratio of the population in need of aid to the population affected. direct economic loss is the depreciated value of the disaster-bearing bodies or the value of the disaster-bearing bodies forfeited; in the table, it is the value of the actual property damages of the year when the disaster happened. ( ) the disaster population in events - was estimated from the crop disaster area and the cultivated area per capita in the disaster province.
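the "at least two of the four criteria" logic used in the classification standard above can be expressed as a simple decision rule. in the python sketch below, every numeric threshold is a hypothetical placeholder, since the actual values belong to the source table and are not preserved in this text:

    # illustrative decision rule for disaster scale, following the
    # "at least two of the four criteria" logic of the classification
    # standard above. all thresholds are hypothetical placeholders.
    def meets_very_large_scale(death_toll, econ_loss, affected_area_km2, return_period_yr):
        criteria = [
            death_toll >= 10_000,          # placeholder threshold
            econ_loss >= 1e10,             # placeholder threshold
            affected_area_km2 >= 100_000,  # placeholder threshold
            return_period_yr >= 100,       # placeholder threshold
        ]
        return sum(criteria) >= 2          # at least two of the four criteria

    print(meets_very_large_scale(12_000, 5e9, 150_000, 50))  # True (two criteria met)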
the global risks of highest concern identified by the davos world economic forum are fiscal crises in key economies, structurally high unemployment/underemployment, water crises, severe income disparity, failure of climate change mitigation and adaptation, greater incidence of extreme weather events (e.g., floods, storms, fires), global governance failure, food crises, failure of a major financial mechanism/institution, and profound political and social instability. it can be seen from the above that, besides the continuing focus on traditional risks, we need to accelerate the study of responses to a series of non-traditional risks. the five categories of risks were not changed in the global risk report , but the number of specific risks was decreased from to (global risk ). in this report, a global risk is an uncertain event or condition that, if it occurs, can cause significant negative impact for several countries or industries within the next years; a global trend is a long-term pattern that is currently taking place and that could contribute to amplifying global risks and/or altering the relationship between them (table . ). the global risks landscape was proposed in the global risk report . from the landscape, it can be seen that the risks with the highest impact and likelihood are failure of climate change mitigation and adaptation, water crises, large-scale involuntary migration, fiscal crises, interstate conflict, profound social instability, cyber attacks, and unemployment or underemployment (global risk ). in the global risks interconnections map , the most strongly connected risks are failure of climate change mitigation and adaptation, profound social instability, large-scale involuntary migration, and unemployment or underemployment (global risk ). the davos world economic forum reports involve a wide range of global risks covering the fields of economy, politics, culture, society, and ecology, which correspond, respectively, to the economic development, political development, cultural development, social development, and ecological development proposed by the chinese government. thus, it can be seen that the risk classification of the world economic forum emphasizes the combination with practice. the risk taxonomy of the irgc takes the perspective of hazards, similar to the disaster classification in sect. . . this classification stresses the causes of risks and is thus less combined with practice; however, it pays attention to emerging risks and slow-developing catastrophic risks, including the governance of very large-scale disaster risks, and at the same time it provides a framework for systematic risk assessment and governance. in china, the classification of risks is tightly associated with the security and disaster classifications. for example, the overall national security concept proposed by the chinese government is in a one-to-one correspondence with the global risks in the world economic forum report: political security, homeland security, and military security correspond to geopolitical risks; economic and resource security to economic risks; cultural and societal security to societal risks; technology, information, and nuclear security to technological risks; and ecological security to environmental risks (xi ).
another example is the four public securities proposed by the chinese government, which correspond to five of the six risk categories of the irgc: natural disasters correspond to the natural forces of the latter, accidental disasters to physical risks, public health accidents to chemical and biological risks, and social security incidents to social-communicative hazards. complex hazards are usually related to the four public securities proposed by the chinese government, and also to integrated disasters. the classification system of risks is built upon the hazard and disaster classifications in china. for example, if hazards are divided into natural, man-made, and environmental ones, risks can be classified into the corresponding three types; in the same way, risks can also be divided into the four categories of natural, accidental, public health, and social security risks, based on the four-type classification of hazards. the natural disaster risk level is usually expressed in exceedance probability or return period, in the same way as the intensity level of natural hazards. for example, meteorological, hydrological, and ocean disaster risks can be divided into the -year level (small-scale disaster), -year level (medium-scale disaster), -year level (large-scale disaster), and -year level (very large-scale disaster). the earthquake disaster risk level is usually expressed in earthquake magnitude; for example, a magnitude . or above earthquake poses a very large-scale disaster risk, . - . a large-scale risk, . - . a medium-scale risk, and . or below a small-scale disaster risk. the natural disaster risk level does not depend only on the natural hazard intensity but also on the vulnerability and exposure of the hazard-bearing bodies. in practice, the classification of natural disaster risk levels is even more complicated and thus usually resorts to relative levels, such as the first-level risk, the second-level risk, the third-level risk, the fourth-level risk, and the fifth-level risk; the larger the number, the higher the risk level. in the atlas of natural disaster risk of china by peijun shi (chinese-english bilingual version, shi ) and the world atlas of natural disaster risk by peijun shi and roger kasperson (shi et al. ), the temporal and spatial patterns of natural disaster risks of china and the world are displayed by using indicators including risks, risk grades, and risk levels (qin et al. ; shi , ). it is more difficult to classify man-made and environmental risk levels by using quantitative criteria. a common way is to use relative levels, or to use the trends and changes of man-made and environmental risks to describe their levels; the global risk trends in the davos world economic forum risk report are an example of this way of reflecting global risk levels. in detail, the trends increasing global risk levels are: aging population, changing landscape of international governance, climate change, environmental degradation, growing middle class in emerging economies, increasing national sentiment, increasing polarization of societies, rise of chronic diseases, rise of cyber dependency, rising geographic mobility, rising income and wealth disparity, shifts in power, and urbanization (wef ). the top three most likely global risks in in each region are reported in the global risk report of the wef (wef ). in north america, the top three are cyber attacks, extreme weather events, and data fraud or theft.
in latin america and the caribbean, the top three are failure of national governance, profound social instability, and unemployment/underemployment. in europe, they are large-scale involuntary migration, unemployment/underemployment, and fiscal crises. in the middle east and north africa, they are water crises, unemployment/underemployment, failure of national governance, and profound social instability. in sub-saharan africa, they are failure of national governance, unemployment/underemployment, and failure of critical infrastructure. in central asia (including russia), they are energy price shock, interstate conflict, and failure of national governance. in east asia and the pacific, they are natural catastrophes, extreme weather events, and failure of national governance. in south asia, the top three are water crises, unemployment/underemployment, and extreme weather events.

the exceedance probability mentioned previously, a concept usually used in the study of natural disaster risks, refers to the likelihood of the intensity or motion parameters of an earthquake, the flood level, or the maximum wind speed at the center of a typhoon exceeding a designated value or values in a specific location and during a certain period of time. in other words, it is the probability of the required value exceeding the given value, and it can be mathematically expressed as

    p_exceed = p(u > u_limit)

where p_exceed is the likelihood of the required value (u) of a data series exceeding the limit value (u_limit). for example, suppose a set of data x (x_1, x_2, …, x_n) has n raw data points that are arranged from the lowest to the highest. the exceedance probability of data point x_i is

    p = (n − i + 1) / n × 100%

the following takes the earthquake as an example for the calculation of exceedance probability. within t years, the probability of earthquake occurrence n times in a region is p(n) = f(n), where f(·) denotes the probability distribution of the number of events. in the same way, within t years, the likelihood of no earthquake happening in this region is p(0); the likelihood of at least one earthquake within t years, or the exceedance probability, is f(t) = 1 − p(0), and the corresponding probability density is the derivative of f(t). the poisson distribution is widely used in earthquake studies. within t years, the probability p(n) of n earthquakes occurring in a region can be expressed in the poisson form as

    p(n) = (vt)^n e^(−vt) / n!

then, within t years, the likelihood of no earthquake happening in this region is

    p(0) = (vt)^0 e^(−vt) / 0! = e^(−vt)

so the likelihood of at least one earthquake happening, or the exceedance probability within t years, is

    f(t) = 1 − p(0) = 1 − e^(−vt)

and the corresponding probability density is

    f'(t) = v e^(−vt)

the variable v mentioned above is the annually averaged occurrence probability of earthquakes in a region, which has an inverse relationship with the return period T, i.e., v = 1/T. from here, we can see that the relationship between the return period T and the exceedance probability f(t) can be expressed as

    T = −t / ln(1 − f(t))

based on the equation above, we can calculate the return periods of different exceedance probabilities for a period of time. for example, the exceedance probability of % for years is equivalent to a -year disaster, % means a -year disaster, and - % means a - -year disaster.
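a worked numerical sketch of these poisson relationships follows; the pairing of a 475-year return period with a roughly 10% exceedance probability in 50 years used below is a common convention in seismic design, and the concrete values are chosen only for illustration:

    import math

    # worked sketch of the poisson exceedance-probability relationships
    # reconstructed above, with illustrative numbers.
    def exceedance_probability(t_years, return_period):
        """probability of at least one event in t_years:
        f(t) = 1 - exp(-v t), with annual occurrence rate v = 1 / return_period."""
        v = 1.0 / return_period
        return 1.0 - math.exp(-v * t_years)

    def return_period_from_exceedance(t_years, f):
        """invert f(t) = 1 - exp(-t / T) to get T = -t / ln(1 - f)."""
        return -t_years / math.log(1.0 - f)

    print(f"{exceedance_probability(50, 475):.2%}")          # ~10% in 50 years for a 475-year event
    print(f"{return_period_from_exceedance(50, 0.10):.0f}")  # ~475 years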
in short, disaster risk science is a discipline studying the mechanics, processes, and dynamics of the interactions among hazards, disasters, and risks, as well as disaster risk prevention and reduction. the relationships among hazards, disasters, and risks are shown in fig. .

[table: examples of global risk categories, including environmental risks (e.g., greater incidence of man-made environmental catastrophes such as oil spills, failure of climate change mitigation and adaptation) and geopolitical risks (e.g., global governance failure, political collapse of a nation of geopolitical importance, increasing corruption, major escalation in organized crime and illicit trade, large-scale terrorist attacks, deployment of weapons of mass destruction, violent interstate conflict with regional consequences, escalation of economic and resource nationalization, sabotage), together with man-made hazard types such as human violence (criminal activities), humiliation and mobbing, consumer products (chemical, physical, etc.), technologies (physical, chemical), large constructions (buildings, dams, highways, and bridges), and critical infrastructures (physical, economic, social-organizational, and communicative)]

references
severe acute respiratory syndrome (sars)
china international decadal commission for disaster reduction
china's extreme weather events and disaster risk management and adaptation assessment report
gb/t . natural disaster classification and code
notice on the emergency plan for the issuance of natural disaster relief in china. beijing: ministry of civil affairs, people's republic of china
reviewing and visualizing the interactions of natural hazards
regions of risk: a geographical introduction to disasters
risk governance: towards an integrative approach. white paper no. , author o. renn with an annex by a
research on regional distribution of major natural hazards in china
remarks on national security
the atlas of natural hazards in china
large-scale disaster prediction, control, and mitigation
notice on statistical regulations of natural disasters. ministry of civil affairs, people's republic of china
notice of "twelfth five-year" special plan on disaster prevention and reduction
large-scale disasters - lessons learned
theory and practices on disaster science
world atlas of natural disaster risk
on integrated disaster risk governance: seeking for adaptive strategies for global change
on the classification standards of catastrophe and catastrophe insurance: the perspective from wenchuan earthquake and southern freezing rain and snowstorm disaster, china [c]. international integrated disaster prevention and mitigation and sustainable development forum
integrated governance of natural disaster risk
integrated assessment of large scale natural disasters in china
theory on disaster science and disaster dynamics
china atlas of natural disaster risk
integrated risk governance: ihdp comprehensive risk prevention science program and comprehensive catastrophe risk prevention research
integrated risk governance: ihdp integrated scientific plan and integrated catastrophe risk governance research
death toll exceeded in europe during the summer of
the law on response to emergencies. bulletin of the standing committee of the national people's congress of the people's republic of china
terminology on disaster risk reduction
disaster risk reduction for sustainable development. guidelines for mainstreaming disaster risk assessment in development. a publication of the united nations' international for disaster reduction
integrated research on disaster risk
peril classification and hazard glossary. united nations office for disaster risk reduction
study on definition and division criteria of a large-scale disaster: analysis of typical disasters in the world in recent years
global risks report
global risks
china's major natural disasters and mitigation measures (overview)
introduction to natural disasters. hunan people's press (in chinese)

risk is the probability of disaster loss in a future period of time in a region, or the future disaster. essentially, risk is the probability of occurrence of a future hazardous event and its impacts (loss and/or damage). unisdr ( ) defines risk as the probability of harmful consequences resulting from interactions between natural or human-induced hazards and vulnerable conditions. two aspects that need special attention are the influence of social factors on risk and the estimation of hazard intensity and distribution. disaster risk usually refers to natural disaster or environmental risk that is associated with natural factors. the wide attention that disaster risk receives is related to disaster (especially catastrophic disaster) insurance and to the risk governance of emerging risks and very large-scale disasters. the international risk governance council, founded in in geneva, switzerland, paid close attention to the governance of emerging risks and slow-developing catastrophic risks, and also established the transition from risk management to risk governance. in , the chinese national committee for the international human dimensions program on global environmental change (cnc-ihdp) proposed to ihdp to undertake integrated risk governance (irg) research against the background of global environmental change. this international scientific program proposal was approved by the scientific committee of ihdp and launched in (shi et al. ).

key: cord- - cjlolp authors: cotton‐barratt, owen; daniel, max; sandberg, anders title: defence in depth against human extinction: prevention, response, resilience, and why they all matter date: - - journal: glob policy doi: . / - . sha: doc_id: cord_uid: cjlolp

we look at classifying extinction risks in three different ways, which affect how we can intervene to reduce risk. first, how does a risk start causing damage? second, how does it reach the scale of a global catastrophe? third, how does it reach everyone? in each of these three phases there is a defence layer that blocks most risks: first, we can prevent catastrophes from occurring. second, we can respond to catastrophes before they reach a global scale. third, humanity is resilient against extinction even in the face of global catastrophes. the largest probability of extinction is posed when all of these defences are weak, that is, by risks we are unlikely to prevent, unlikely to successfully respond to, and unlikely to be resilient against. we find that it is usually best to invest significantly in strengthening all three defence layers. we also suggest ways to do so, tailored to the classes of risk we identify. lastly, we discuss the importance of underlying risk factors: events or structural conditions that may weaken the defence layers even without posing a risk of immediate extinction themselves.

• future research should identify synergies between reducing extinction and other risks. for example, research on climate change adaptation and mitigation should assess how we can best preserve our ability to prevent, respond to, and be resilient against extinction risks.
our framework for discussing extinction risks

human extinction would be a tragedy. for many moral views it would be far worse than merely the deaths entailed, because it would curtail our potential by wiping out all future generations and all value they could have produced (bostrom, ; parfit, ; rees, , ). human extinction is also possible, even this century. both the total risk of extinction by and the probabilities of specific potential causes have been estimated using a variety of methods, including trend extrapolation, mathematical modelling, and expert elicitation; see rowe and beard ( ) for a review, as well as tonn and stiefel ( ) for methodological recommendations. for example, pamlin and armstrong ( ) give probabilities between . % and % for different scenarios that could eventually cause irreversible civilisational collapse. to guide research and policymaking in these areas, it may be important to understand what kind of processes could lead to our premature extinction. people have considered and studied possibilities such as asteroid impacts (matheny, ), nuclear war (turco et al., ), and engineered pandemics (millett and snyder-beattie, ). in this article we will consider three different ways of classifying such risks. the motivating question behind the classifications we present is 'how might this affect policy towards these risks?' we proceed by identifying three phases in an extinction process at which people may intervene. for each phase, we ask how people could stop the process, because the different failure modes may be best addressed in different ways. for this reason we do not try to classify risks by the kind of natural process they represent, or by which life support system they undermine (unlike e.g. avin et al., ).

an event causing human extinction would be unprecedented, so it is likely to have some feature or combination of features that is without precedent in human history. now, we see events with some unprecedented property all of the time, whether they are natural, accidental, or deliberate, and many of these will be bad for people. however, a large majority of those pose essentially zero risk of causing our extinction. why is it that some damaging processes pose risks of extinction, but many do not? by understanding the key differences we may be better placed to identify new risks and to form risk management strategies that attack their causes as well as other factors behind their destructive potential. we suggest that much of the difference can usefully be explained by three broad defence layers (figure ):

1. first layer: prevention. processes, natural or human, which help people are liable to be recognised and scaled up (barring defeaters such as coordination problems). in contrast, processes which harm people tend to be avoided and dissuaded. in order to be bad for significant numbers of people, a process must either require minimal assistance from people, or otherwise bypass this avoidance mechanism.
2. second layer: response. if a process is recognised to be causing great harm (and perhaps pose a risk of extinction), people may cooperate to reduce or mitigate its impact. in order to cause large global damage, it must impede this response, or have enough momentum that there is nothing people can do.
3. third layer: resilience. people are scattered widely over the planet. some are isolated from external contact for months at a time, or have several years' worth of stored food.
even if a process manages to kill most of humanity, a surviving few might be able to rebuild. in order to cause human extinction, a catastrophe must kill everybody, or prevent a long-term recovery.

the boundaries between these different types of risk-reducing activity aren't crisp, and one activity may help at multiple stages. but it seems that often activities will help primarily at one stage. we characterise prevention as reducing the likelihood that catastrophe strikes at all; it is necessarily done in advance. we characterise response as reducing the likelihood that a catastrophe becomes a severe global catastrophe (at the level which might threaten the future of civilisation). this includes reducing the impact of the catastrophe after it is causing obvious and significant damage, but the response layer might also be bolstered by mitigation work which is done in advance. finally, we characterise resilience as reducing the likelihood that a severe global catastrophe eventually causes human extinction. successfully avoiding extinction could happen at each of these defence layers. in the rest of the article we explore two consequences of this.

first, we can classify damaging processes by the way in which we could stop them at the defence layers. in section , we'll look at a classification of risks by their origin: understanding different ways in which we could succeed at the prevention layer. in section , we'll look at the features which may allow us to block them at the response layer. in section , we'll classify risks by the way in which we could stop them from finishing everybody. we conclude each section with policy implications. each risk will thus belong to three classes, one per defence layer. for example, consider a terrorist group releasing an engineered virus that grows into a pandemic and eventually kills everyone. in our classification, we'll call this prospect a malicious risk with respect to its origin; a cascading risk with respect to its scaling mechanism of becoming a global catastrophe; and a vector risk in the last phase, which we've called endgame. we'll present more examples at the end of section and in table .
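to make the three-way classification concrete, the sketch below shows one way the triple of labels could be represented in code. this is our illustration, not part of the article: the enum values simply mirror the categories introduced in the following sections, and the two example risks are the engineered-pandemic and asteroid scenarios discussed in the text.

```python
from dataclasses import dataclass
from enum import Enum

# categories mirror the classification: origin (prevention layer),
# scaling mechanism (response layer), and endgame (resilience layer)
Origin = Enum("Origin", "NATURAL UNSEEN ACCIDENT MALICIOUS LATENT COMMONS CONFLICT")
Scaling = Enum("Scaling", "LARGE LEVERAGE CASCADING")
Endgame = Enum("Endgame", "UBIQUITY VECTOR AGENCY HABITAT CAPABILITY")

@dataclass
class ExtinctionRisk:
    name: str
    origin: Origin    # how it gets past prevention
    scaling: Scaling  # how it gets past response
    endgame: Endgame  # how it gets past resilience

# the engineered-pandemic example from the text
engineered_pandemic = ExtinctionRisk(
    "engineered pandemic released by a terrorist group",
    Origin.MALICIOUS, Scaling.CASCADING, Endgame.VECTOR,
)

# the asteroid-strike example from the text
asteroid_strike = ExtinctionRisk(
    "large asteroid strike causing an impact winter",
    Origin.NATURAL, Scaling.LARGE, Endgame.HABITAT,
)
```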
second, we present implications of our framework distinguishing three layers. in section , we discuss how to allocate resources between the three defence layers, concluding that in most cases all of prevention, response, and resilience should receive substantial funding and attention. in section , we highlight that risk management, in addition to monitoring specific hazards, must protect its defence layers by fostering favourable structural conditions such as good global governance.

avin et al. ( ) have recently presented a classification of risks to the lives of a significant proportion of the human population. they classify such risks based on 'critical systems affected, global spread mechanism, and prevention and mitigation failure'. our framework differs from theirs in two major ways. first, with extinction risks we focus on a narrower type of risk. this allows us, in section , to discuss what might stop global catastrophes from causing extinction, a question specific to extinction risks. second, even where the classifications cover the same temporal phase of a global catastrophe, they are motivated by different questions. avin et al. attempt a comprehensive survey of the natural, technological, and social systems that may be affected by a disaster, for example listing critical systems in their second section. by contrast, we ask why a risk might break through a defence layer, and look for answers that abstract away from the specific system affected. for instance, in section , we'll distinguish between unforeseen, expected but unintended, and intended harms. we believe the two classifications complement each other well. avin and colleagues' ( ) discussion of prevention and response failures is congenial to our section on underlying risk factors. their extensive catalogues of critical systems, spread mechanisms and prevention failures highlight the wide range of relevant scientific disciplines and stakeholders, and can help identify fault points relevant to particularly many risks. conversely, we hope that our coarser typology can guide the search for additional critical systems and spread mechanisms. we believe that our classification also usefully highlights different ways of protecting the same systems. for example, the risks from natural and engineered pandemics might best be reduced by different policy levers even if both affected the same critical systems and spread by the same mechanisms. lastly, our classification can help identify risk management strategies that would reduce whole clusters of risks. for example, restricting access to dangerous information may prevent many risks from malicious groups, irrespective of the critical system that would be targeted. our classification also overlaps with the one by liu et al. ( ), for example when they distinguish intended from other vulnerabilities or emphasise the importance of resilience. while the classifications otherwise differ, we believe ours contributes to their goal to dig 'beyond hazards' and surface a variety of intervention points.

both the risks discussed by avin et al. ( ) and extinction risks by definition involve risks of a massive loss of lives. this sets them apart from other risks where the adverse outcome would also have global scale but could be limited to less severe damage such as economic losses. such risks are being studied by a growing literature on 'global systemic risk' (centeno et al., ). rather than reviewing that literature here, we'll point out throughout the article where we believe it contains useful lessons for the study of extinction risks. finally, it's worth keeping in mind that extinction is not the only outcome that would permanently curtail humanity's potential; see bostrom ( ) for other ways in which this could happen. a classification of these other existential risks is beyond the scope of this article, as is a more comprehensive survey of the large literature on global risks (e.g. baum and barrett, ; baum and handoh, ; bostrom and cirkovic, ; posner, ).

avoiding catastrophe altogether is the most desirable outcome. the origin of a risk determines how it passes through the prevention layer, and hence the kind of steps society can take to strengthen prevention (figure ). the simplest explanation for a risk to bypass our background prevention of harm-creating activities is if the origin is outside of human control: a natural risk. examples include a large enough asteroid striking the earth, or a naturally occurring but particularly deadly pandemic. we sometimes can take steps to avoid natural risks. for example, we may be able to develop methods for deflecting asteroids. preventing natural risks generally requires proactive understanding and perhaps detection, for instance scanning for asteroids on earth-intersecting orbits.
such risks share important properties with anthropogenic risks, as any explanation for how they might materialise must include an explanation of why the human-controlled prevention layer failed. all non-natural risks are in some sense anthropogenic, but we can classify them further. some may have a localised origin, needing relatively small numbers of people to trigger them. others require large-scale and widespread activity. in each case there are at least a couple of ways that a risk could get through the prevention layer. note that there is a spectrum in terms of the number of people who are needed to produce different risks, so the division between 'few people' and 'many people' is not crisp. we might think of the boundary as being around one hundred thousand or one million people, and things close to this boundary will have properties of both classes. however, it appears to us that for many of the plausible risks the number required is either much smaller (e.g., an individual or a cohesive group of people such as a company or military unit) or much larger than this (e.g., the population of a major power or even the whole world), so the qualitative distinction between 'few people' and 'many people' (and the different implications of these for responding) seems to us a useful one. also potentially relevant are the knowledge and intentions of the people conducting the risky activity: they may be more or less aware of the potential harm, and may or may not intend it.

anthropogenic risks from small groups

the case of a risk where relatively few people are involved in triggering it and they are unaware of the potential harm is an unseen risk. this is likely to involve a new kind of activity; it is most plausible with the development of unprecedented technologies (gpp, ), such as perhaps advanced artificial intelligence (bostrom, ), nanotechnology (auplat, , ; umbrello and baum, ), or high-energy physics experiments (ord et al., ). the case of a localised unintentional trigger which was foreseen as a possibility (and the dynamics somewhat understood) is an accident risk. this could include a nuclear war starting because of a fault in a system or human error, or the escape of an engineered pathogen from an experiment despite safety precautions. if the harm was known and intended, we have a malicious risk. this is a scenario where a small group of people wants to do widespread damage; see torres ( , b) for a typology and examples. malicious risks tend to be extreme forms of terrorism, where there is a threat which could cause global damage.

turning to scenarios where many people are involved, we ask why so many would pursue an activity which causes global damage. perhaps they do not know about the damage. this is a latent risk. for them to remain ignorant for long enough, it is likely that the damage is caused in an indirect or delayed manner. we have seen latent risks realised before, but not ones that threatened extinction. for example, asbestos was used in a widespread manner before it was realised that it caused health problems. and it was many decades after we scaled up the burning of fossil fuels that we realised this contributed to climate change. if our climate turns out to be more sensitive than expected (nordhaus, ; wagner and weitzman, ; weitzman, ), and continued fossil fuel use triggers a truly catastrophic shift in climate, then this could be a latent risk today. in some cases people may be aware of the damage and engage in the activity anyway.
this failure to internalise negative externalities is typified by 'tragedy of the commons' scenarios, so we can call this a commons risk. for example, failure to act together to tackle global warming may be a commons risk (but lack of understanding of the dynamics causes a blur with latent risk). in general, commons risks require some coordination failure. they are therefore more likely if features of the risk inhibit coordination; see for example barrett ( ) and sandler ( ) for a game-theoretic analysis of such features. finally, there are cases where a large number of people engage in an activity to cause deliberate harm: conflict risk. this could include wars and genocides. wars share some features with commons risks: there are solutions which are better for everybody but are not reached. in most conflicts, actors are intentionally causing harm, but only as an instrumental goal.

in the above we classify risks according to who creates the risk and their state of knowledge. we have done this because if we want to prevent risk it will often be most effective to go to the source. but we could also ask who is in a position to take actions to avoid the risk. in many cases those creating it have the most leverage, but in principle almost any actor could take steps to reduce the occurrence rate. if risk prevention is underprovided, this is likely to be a tragedy of the commons scenario, and to share characteristics with commons risk. from a moral and legal standpoint, intentionality often matters. the possibility of being found culpable is an important incentive for avoiding risk-causing activities and part of risk management in most societies. if creating or hiding potential catastrophic risks is made more blameworthy, prevention will likely be more effective. unfortunately, culpability also often motivates concealment that can create or aggravate risk; see chernov and sornette ( ) for case studies of how this misincentive can weaken prevention and response. this shows the importance of making accountability effectively enforceable.

• to be able to prevent natural risks, we need research aimed at identifying potential hazards, understanding their dynamics, and eventually developing ways to reduce their rate of occurrence.
• to avoid unseen and latent risks, we can promote norms such as appropriate risk management principles at institutions that engage in plausibly risky activities; note that there is an extensive literature on rivalling risk management principles (e.g. foster et al., ; o'riordan and cameron, ; sandin, ; sunstein, ; wiener, ), especially in the face of catastrophic risks (baum, ; bostrom, ; buchholz and schymura, ; sunstein, , ; tonn, ; tonn and stiefel, ); advocating for any particular principle is beyond the scope of this article. see also jebari ( ) for a discussion of how heuristics from engineering safety may help prevent unseen, latent, and accident risks. regular horizon scanning may identify previously unknown risks, enabling us to develop targeted prevention measures. organisations must be set up in such a way that warnings of newly discovered risks reach decision-makers (see clarke and eddy, , for case studies where this failed).
• accidents may be prevented by general safety norms that also help reduce unseen risk. in addition, building on our understanding of specific accident scenarios, we can design failsafe systems or follow operational routines that minimise accident risk.
in some cases, we may want to eschew an accident-prone technology altogether in favour of safer alternatives. accident prevention may benefit from research on high reliability organisations (roberts and bea, ) and lessons learnt from historical accidents. where effective prevention measures have been identified, it may be beneficial to codify them through norms and law at the national and international levels. alternatively, if we can internalise the expected damages of accidents through mechanisms such as insurance, we can leverage market incentives.
• solving the coordination problems at the heart of commons and conflict risks is sometimes possible by fostering national or international cooperation, be it through building dedicated institutions or through establishing beneficial customs. one idea is to give a stronger political voice to future generations (jones et al., ; tonn, , ).
• lastly, we can prevent malicious risks by combating extremism. technical (trask, ) as well as institutional (lewis, ) innovations may help with governance challenges in this area, a survey of which is beyond the scope of this article.
• note that our classification by origin is aimed at identifying policies that would, if successfully implemented, reduce a broad class of risks. developing policy solutions is, however, just one step toward effective prevention. we must then also actually implement them, which may not happen due to, for example, free-riding incentives. our classification does not speak to this implementation step. avin et al. ( ) congenially address just this challenge in their classification of prevention and mitigation failures.

classification by scaling mechanism: types of response failure

for a catastrophe to become a global catastrophe, it must eventually have large effects despite our response aimed at stopping it. to understand how this can happen, it's useful to look at the time when we could first react. effects must then either already be large or scale up by a large factor afterwards (figure ). if the initial effects are large, we will simply say that the risk is large. if not, we can look at the scaling process. if massive scaling happens in a small number of steps, we say there is leverage in play. if scaling in all steps is moderate, there must be quite a lot of such steps; in this case we say that the risk is cascading. paradigm examples of catastrophes of an immediately global scale are large sudden-onset natural disasters such as asteroid strikes. since we cannot respond to them at a smaller-scale stage, mitigation measures we can take in advance (part of the second defence layer, as they would reduce damage after it has started) and the other defence layers of prevention and resilience are particularly important to reduce such risks. prevention and mitigation may benefit from detecting a threat, say an asteroid, early, but in our classification this is different from responding after there has been some actual small-scale damage. leverage points for rapid one-step scaling can be located in natural systems, for example if the extinction of a key species caused an ecosystem to collapse. however, it seems to us that leverage points are more common in technological or social systems that were designed to concentrate power or control. risks of both natural and anthropogenic origin may interact with such systems. for instance, a tsunami triggered the disaster at the fukushima daiichi nuclear power plant.
anthropogenic examples include nuclear war (possible to trigger by a few individuals linked to a larger chain of command and control) or attacks on weak points in key global infrastructure. responding to leverage risks is challenging because there are only few opportunities to intervene. on the other hand, blocking even one step of leveraged growth would be highly impactful. this suggests that response measures may be worthwhile if they can be targeted at the leverage points.

with the major exception of escalating conflicts, cascading risks normally cascade in a way which does not rely on humans deciding to further the effects. a typical example is the self-propagating growth of an epidemic. as automation becomes more widespread, there will be larger systems without humans in the loop, and thus perhaps more opportunities for different kinds of cascading risk. since cascading risks are those which have a substantial amount of growing effects after we're able to interact with them, it seems likely that they will typically give us more opportunities to respond, and that response will therefore be an important component of risk reduction. for risks which cascade exponentially (such as epidemics), an earlier response may be much more effective than a later one. reducing the rate of propagation is also effective if there exist other interventions that can eventually stop or revert the damage. however, there are a few secondary risk-enabling properties that can weaken the response layer and therefore help damage cascade to a global catastrophe which we could have stopped. for example, a cascading risk may:

• impede cooperation: by preventing a coordinated response, the likelihood of a global catastrophe is increased. cooperation is harder when communication is limited, when it is hard to observe defection, or when there is decreased trust.
• not obviously present a risk: the longer a cascading risk is under-recognised, the more it can develop before any real response. for example, long-incubation pathogens can spread further before their hazard becomes apparent.
• be on extreme timescales: if the risk presents and cascades very fast, there is little opportunity for any response. johnson et al. ( ) analyse such 'ultrafast' events, using rapid changes in stock prices driven by trading algorithms as an example (braun et al., , however, find that most of these 'mini flash crashes' are dominated by a single large order rather than being the result of a cascade). note, however, that which timescales count as relevantly 'fast' depends on our response capabilities; technological and institutional progress may result in faster-cascading threats but also in opportunities to respond faster. on the other hand, people may be bad at addressing problems that won't manifest for generations, as is the case for some impacts of global warming.
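the claim that earlier responses to exponentially cascading risks are much more effective can be made concrete with a toy model. the sketch below is our illustration, not a model from the article: it assumes damage grows exponentially until a response halts it, and the growth rate and response times are arbitrary placeholder values.

```python
import math

def damage_when_stopped(growth_rate, response_day, initial=1.0):
    """Toy model: damage grows exponentially until a response halts it.
    Returns the accumulated damage at the moment of response."""
    return initial * math.exp(growth_rate * response_day)

# placeholder values: damage doubles roughly every 3.5 days (rate 0.2/day)
for day in (10, 20, 30):
    print(f"response on day {day}: damage x{damage_when_stopped(0.2, day):.1f}")

# response on day 10: damage x7.4
# response on day 20: damage x54.6
# response on day 30: damage x403.4
```

the roughly sevenfold penalty for each ten-day delay is an artefact of the chosen rate, but the qualitative point, that delay costs multiply rather than add, holds for any exponential cascade.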
policy implications for responding to extinction risk
• by their nature, we cannot respond to large risks before they become a global catastrophe. of particular importance for such risks are therefore: mitigation that can be done in advance, and the defence layers of prevention and resilience.
• leverage risks provide us with the opportunity of a leveraged response: we can identify leverage points in advance and target our responses at them.
• while the details of responses to cascading risks must be tailored to each specific case, we can highlight three general recommendations. first, detect damage early, when a catastrophe is still easy to contain. second, reduce the time lag between detection and response, for example by continuously maintaining response capabilities and having rapidly executable contingency plans in place. third, ensure that planned responses won't be stymied by the cascading process itself; for example, don't store contingency plans for how to respond to a power outage on computers.

for a global catastrophe to cause human extinction, it must in the end stop the continued survival of the species. this could be direct: killing everyone; or indirect: removing our ability to continue flourishing over a longer period (figure ). in order to kill everyone, the catastrophe must reach everyone. we can further classify direct risks by how they reach everyone. the simplest way this could happen is if it is everywhere that people are or could plausibly be: a ubiquity risk. if the entire planet is struck by a deadly gamma ray burst, or enough of a deadly toxin is dispersed through the atmosphere, this could plausibly kill everyone. if it doesn't reach everywhere people might be, a direct risk must at least reach everywhere that people in fact are. this might occur when people have carried it along with them: a vector risk. this includes risk from pandemics (if they are sufficiently deadly and have a long enough incubation period that they are spread everywhere), or perhaps risks which are spread by memes (dawkins, ), or which come from some technological artefacts which we carry everywhere. note that to directly cause extinction, a vector would need to impact hard-to-reach populations including 'disaster shelters, people working on submarines, and isolated peoples' (beckstead, a, p. ). if not ubiquitous and not carried with the people, we would have to be extraordinarily unlucky for it to reach everyone by chance. setting this aside as too unlikely, we are left with agency risk: deliberate actors trying to reach everybody. the actors could be humans or nonhuman intelligence (perhaps machine intelligence or even aliens). agency risk probably means someone deliberately trying to ensure nobody survives, which may make it easier to get through the resilience layer by allowing anticipation of and response to possible survival plans. in principle, agency risk includes cases where someone is deliberately trying to reach everyone, and only by accident does so in a way that kills them.

if the risk threatens extinction without killing everyone, it must reduce our long-term ability to survive as a species. this could include a very broad range of effects, but we can break them up according to the kind of ability impeded. habitat risks make long-term survival impossible by altering or destroying the environment we live in so that it cannot easily support human life. for example, a large enough asteroid impact might throw up dust which could prevent us from growing food for many years; if this lasted long enough, it could lead to human extinction. alternatively, an environmental change which lowered the average number of viable offspring to below replacement rates could pose a habitat risk. capability risks knock us back in a way that permanently removes an important societal capability, leading in the long run to extinction. one example might be moving to a social structure which precluded the ability to adapt to new circumstances. we are gesturing towards a distinction between habitat risks and capability risks, rather than drawing a sharp line.
habitat risks work through damage to an external environment, where capability risks work through damage to more internal social systems (or even biological or psychological factors). capability risks are also even less direct than habitat risks, perhaps taking hundreds or thousands of years to lead to extinction. indeed, there is not a clear line between capability risks and events which damage our capabilities but are not extinction risks (cf. section ). nonetheless, when considering risks of human extinction it may be important to account for events which could cause the loss of fragile but important capabilities. an important type of capability risk may be civilisational collapse. it is possible that killing enough people and destroying enough infrastructure could lead to a collapse of civilisation without causing immediate extinction. if this happens, it is then plausible that civilisation might never recover, or recover in a less robust form, and be wiped out by some subsequent risk. it is an open and important question how likely this permanent loss of capability is (beckstead, b). if it is likely, the resilience layer may be particularly important to reinforce, perhaps along the lines proposed by maher and baum ( ). on the other hand, if even large amounts of destruction have only small effects on the chances of eventual extinction, it becomes more important to focus on risks which can otherwise get past the resilience layer.

we finally illustrate our completed classification scheme by applying it to examples, which we summarise in table . throughout the text, we've repeatedly referred to an asteroid strike that might cause extinction due to an ensuing impact winter. we've called this a natural risk regarding its origin; a large risk regarding scale, with no opportunity to intervene between the asteroid impact and its damage affecting the whole globe; and, if we assume that humanity dies out because climatic changes remove the ability to grow crops, a habitat risk in the endgame phase. our next pair of examples illustrates that risks with the same salient central mechanism, in this case nuclear war, may well differ during other phases. consider first a nuclear war precipitated by a malfunctioning early warning system, that is, a nuclear power launching what turns out to be a first strike because it falsely believed that its own nuclear destruction was imminent. suppose further that this causes a nuclear winter, leading to human extinction. this would be an accident that scales via leverage, and finally manifests as a habitat risk. contrast this with the intentional use of nuclear weapons in an escalating conventional war, and assume further that this either doesn't cause a nuclear winter or that some humans are able to survive despite adverse climatic conditions. instead, humanity never recovers from widespread destruction, and is eventually wiped out by some other catastrophe that could have easily been avoided by a technologically advanced civilisation. this second scenario would be a conflict that again scaled via the leverage associated with nuclear weapons, but then finished off humanity by removing a crucial capability rather than via damage to its habitat. we close by applying our classification to a more speculative risk we might face this century. some scholars (e.g.
bostrom, ) have warned that progress in artificial intelligence (ai) could at some point allow unforeseen rapid self-improvement in some ai system, perhaps one that uses machine learning and can autonomously acquire additional training data via sensors or simulation. the concern is that this could result in a powerful ai agent that deliberately wipes out humanity to pre-empt interference with its objectives (see omohundro, , for an argument why such pre-emption might be plausible). to the extent that we currently don't know of any machine learning algorithms that could exhibit such behaviour, this would be an unseen risk; the scaling would be via leverage if we assume a discrete algorithmic improvement as trigger, or alternatively the risk could be rapidly cascading; in the endgame, this scenario would present an agency risk.

• to guard against what today would be ubiquity risks, we may in the future be able to establish human settlements on other planets (armstrong and sandberg, ).
• vector risks may not reach people in isolated and self-sufficient communities. establishing disaster shelters may hence be an attractive option. self-sufficient shelters can also reduce habitat risk. jebari ( ) discusses how to maximise the resilience benefits from shelters, while beckstead ( a) has argued that their marginal effect would be limited due to the presence of isolated peoples, submarine crews, and existing shelters.
• resilience against vector and agency risks may be increased by late-stage response measures that work even in the event of widespread damage to infrastructure and the breakdown of social structure. an example might be the 'isolated, self-sufficient, and continuously manned underground refuges' suggested by jebari ( , p. ).

in this section we will use our guiding idea of three defence layers to present a way of calculating the extinction probability posed by a given risk. we'll draw three high-level conclusions: first, the most severe risks are those which have a high probability of breaking through all three defence layers. second, when allocating resources between the defence layers, rather than comparing absolute changes in these probabilities we should assess how often we can halve the probability of a risk getting through each layer. third, it's best to distribute a sufficiently large budget across all three defence layers.

we are interested in the probability p that a given risk r will cause human extinction in a specific timeframe, say by . whichever three classes r belongs to, in order to cause extinction it needs to get past all three defence layers; its associated extinction probability p is therefore equal to the product of three factors:
1. the probability c of r getting past the first barrier and causing a catastrophe;
2. the conditional probability g that r gets past the second barrier to cause a global catastrophe, given that it has passed the first barrier; and
3. the conditional probability e that r gets past the third barrier to cause human extinction, given that it has passed the second barrier.
in short: p = c·g·e. each of c, g, and e can get extremely small for some risks. but the extinction probability p will be highest when all three terms are non-negligible. hence we get our (somewhat obvious) first conclusion that the most concerning risks are those which can plausibly get past all three defence layers. however, most concerning doesn't necessarily translate into most valuable to act on. suppose we'd like to invest additional resources into reducing risk r.
we could use them to strengthen any of the three defences, which would make it less likely that r passes that defence. we should then compare relative rather than absolute changes to these probabilities, which is our second conclusion. that is, to minimise the extinction probability p we should ask which of c, g, and e we can halve most often. this is because the same relative change of each probability will have the same effect on the extinction probability p: halving any one of c, g, or e will halve p. by contrast, the effect of the same absolute change will vary depending on the other two probabilities; for instance, reducing c by some absolute amount Δ reduces p by Δ·g·e. in particular, a given absolute change will be more valuable if the other two probabilities are large. when one of c, g, or e is close to %, it may be much harder to reduce it to % than it would be to halve a smaller probability. the principle of comparing how often we can halve c, g, and e then implies that we're better off reducing probabilities not close to %. for example, consider a large asteroid striking the earth. we could take steps to avoid it (for example by scanning and deflecting), and we could take steps to increase our resilience (for example by securing food production). but if a large asteroid does cause a catastrophe, it seems very likely to cause a global catastrophe, and it is unclear that there is much to be done in reducing the risk at the scaling stage. in other words, the probability g is close to one and prohibitively hard to substantially reduce. we therefore shouldn't invest resources into futile responses, but instead use them to strengthen both prevention and resilience.

what if each defence layer has a decent chance of stopping a risk? we'll then be best off allocating a non-zero chunk of funding to all three of them: a strategy of defence in depth, our third conclusion. the reason is just the familiar phenomenon of diminishing marginal returns on resources. it may initially be best to strengthen a particular layer, but once we've taken the low-hanging fruit there, investing in another layer (or in reducing another risk) will become equally cost-effective. of course, our budget might be exhausted earlier. defending in depth therefore tends to be optimal if and only if we can spend relatively much in total.
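the arithmetic behind the second conclusion is easy to check numerically. the following sketch is our illustration with made-up probabilities, not figures from the article; it compares the effect on p = c·g·e of halving one layer's breach probability versus applying the same absolute reduction to different layers.

```python
def extinction_probability(c, g, e):
    """p = c * g * e: probability of causing a catastrophe (c), of it going
    global given a catastrophe (g), and of extinction given a global one (e)."""
    return c * g * e

# made-up numbers for one hypothetical risk
c, g, e = 0.10, 0.90, 0.05
p = extinction_probability(c, g, e)  # 0.0045

# halving any one factor halves p, regardless of which one it is
print(extinction_probability(c / 2, g, e) / p)  # 0.5
print(extinction_probability(c, g / 2, e) / p)  # 0.5
print(extinction_probability(c, g, e / 2) / p)  # 0.5

# the same absolute reduction (0.04) matters more where the other
# two probabilities are large
print(p - extinction_probability(c - 0.04, g, e))  # 0.04 * g * e = 0.0018
print(p - extinction_probability(c, g, e - 0.04))  # 0.04 * c * g = 0.0036
```

this also shows why halving is the right comparison: relative changes act symmetrically on the product, while absolute changes do not.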
we close by discussing some limitations of our analysis. first, we remain silent on the optimal allocation of resources between different risks (rather than between different layers for a fixed risk or basket of risks); indeed, as we'll argue in section , comprehensively answering the question of how to optimally allocate resources intended for extinction risk reduction requires us to look beyond even the full set of extinction risks. we do hope that our work could prove foundational for further research that investigates the allocation both between risks and between defence layers simultaneously. indeed, it would be straightforward to consider several risks $p_i = c_i \cdot g_i \cdot e_i$, $i = 1, \ldots, n$; assuming specific functional forms for how the probabilities $c_i$, $g_i$, and $e_i$ change in response to invested resources could then yield valuable insights. second, we have not considered interactions between different defence layers or different risks (graham et al., ; baum, ; baum and barrett, ; martin and pindyck, ). these can present both as trade-offs and synergies. for example, traffic restrictions in response to a pandemic might slow down research on a treatment that would render the disease non-fatal, thus harming the resilience layer; on the other hand, they may inadvertently help with preventing malicious risk or being resilient against agency risk.

• the most important extinction risks to act on are those that have a non-negligible chance of breaking through all three defence layers: risks where we have a realistic chance of failing to prevent, a realistic chance of failing to successfully respond to, and a realistic chance of failing to be resilient against.
• due to diminishing marginal returns, when budgets are high enough it will often be best to maintain a portfolio of significant investment into each of prevention, response, and resilience.

in sections - we have considered ways of classifying threats that may cause human extinction and the pathways through which they may do so. our classification was based on the three defence layers of prevention, response, and resilience. giving centre stage to the defence layers provides the following useful lens for extinction risk management. if our main goal is to reduce the likelihood of extinction, we can equivalently express this by saying that we should aim to strengthen the defence layers. indeed, extinction can only become less likely if at least one particular extinction risk is made less likely; in turn this requires that it has a smaller chance of making it past at least one of the defence layers. this is significant because there is a spectrum of ways to improve our defences, depending on how narrowly our measures are tailored to specific risks. at one extreme, we can increase our capacity to prevent, respond to, or be resilient against one risk; for example, we can research methods to deflect asteroids. in between are measures to defend against a particular class of risk, as we've highlighted in our policy recommendations. at the other extreme is the reduction of underlying risk factors that weaken our capacity to defend against many classes of risks. risk factors need not be associated with any potential proximate cause of extinction. for example, consider regional wars; even when they don't escalate to a global catastrophe, they could hinder global cooperation and thus impede many defences.

global catastrophes constitute one important type of risk factor. we already discussed the possibility of them making earth uninhabitable or removing a capability that would be crucial for long-term survival. but even if they do neither of these, they can severely damage our defence layers. in particular, getting hit by a global catastrophe followed in short succession by another might be enough to cause extinction when neither alone would have done so. there are significant historic examples of such compound risks below the extinction level. for instance, the deadliest accident in aviation history occurred when two planes collided on an airport runway; this was only possible because a previous terrorist attack on another airport had caused congestion due to rerouted planes, which disabled the prevention measure of using separate routes for taxiing and takeoff (weick, ). when considering catastrophes we should therefore pay particular attention to negative impacts they may have on the defence layers. our capacity to defend also depends on various structural properties that can change in gradual ways even in the absence of particularly conspicuous events.
for example, the resilience layer may be weakened by continuous increases in specialisation and global interdependence. this can be compared with the model of synchronous failure suggested by homer-dixon et al. ( ). they describe how the slow accumulation of multiple simultaneous stresses makes a system vulnerable to a cascading failure. it is beyond the scope of this article to attempt a complete survey of risk factors; we merely emphasise that they should be considered. we do hope that our classifications in sections - may be helpful in identifying risk factors. for example, thinking about preventing conflict and commons risks may point us to global governance, while having identified vector and agency risks may highlight the importance of interdependence (even though, upon further scrutiny, these risk factors turn out to be relevant for many other classes of risk as well). we conclude that the allocation of resources between layers defending against specific risks, which we investigated in section , is not necessarily the most central task of extinction risk management. it is an open and important question whether reducing specific risks, clusters of risks, or underlying risk factors is most effective on the margin.

the study and management of extinction risks are challenging for several reasons. cognitive biases make it hard to appreciate the scale and probability of human extinction (wiener, ; yudkowsky, ). most potential people affected are in future generations, whose interests aren't well represented in our political systems. hazards can arise and scale in many different ways, requiring a variety of disciplines and stakeholders to understand and stop them. and since there is no precedent for human extinction, we struggle with a lack of data. faced with such difficult terrain, we have considered the problem from a reasonably high level of abstraction; we hope thereby to focus attention on the most crucial aspects. if this work is useful, it will be as a foundation for future work or decisions. in some cases our classification might provoke thoughts that are helpful directly for decision-makers who engage with specific risks. however, we anticipate that our work will be most useful in informing the design of systems for analysing and prioritising between several extinction risks, or in informing the direction of future research.

data sharing is not applicable to this article as no new data were created or analysed.

we are particularly indebted to toby ord for several very helpful comments and conversations. we also thank scott janzwood, sebastian farquhar, martina kunz, huw price, seán ó héigeartaigh, shahar avin, the audience at a seminar at cambridge's centre for the study of existential risk (cser), and two anonymous reviewers for helpful comments on earlier drafts of this article. we're also grateful to eva-maria nag for comments on our policy suggestions. the contributions of owen cotton-barratt and anders sandberg to this article are part of a project that has received funding from the european research council (erc) under the european union's horizon research and innovation programme (grant agreement no ).

notes
1. in the terminology of the united nations office for disaster risk reduction (undrr, ), response denotes the provision of emergency services and public assistance during and immediately after a disaster. in our usage, we include any steps which may prevent a catastrophe scaling to a global catastrophe.
this could include work traditionally referred to as mitigation.
2. the concept of resilience, originally coined in ecology (holling, ), is today widely used in the analysis of risks of many types (e.g. folke et al., ). in undrr ( ) terminology, resilience refers to '[t]he ability of a system, community or society exposed to hazards to resist, absorb, accommodate, adapt to, transform and recover from the effects of a hazard in a timely and efficient manner, including through the preservation and restoration of its essential basic structures and functions through risk management.' in this article, we usually use resilience to specifically denote the ability of humanity as a whole to recover from a global catastrophe in a way that enables its long-term survival. this ability may in turn depend on the resilience of many smaller natural, technical, and socio-ecological systems.
3. strictly, knowledge and intentionality are two separate dimensions; however, it is essentially impossible to intend the harm without being aware of the possibility, so we treat it as a spectrum with ignorance at one end, intent at the other end, and knowledge without intent in the middle. again, there is some blur between these: there are degrees of awareness about a risk, and an intention of harm may be more or less central to an action.
4. there are degrees of lack of foresight of the risk. cases where the people performing the activity are substantially unaware of the risks have many of the relevant features of this category, even if they have suspicions about the risks, or other people are aware of the risks.
5. they may not intend for that damage to cause human extinction; for the purposes of acting on this classification it's more useful to know whether they were trying to cause harm.
6. we thank an anonymous reviewer for suggesting the policy responses of avoiding dangerous technologies and mandating insurance.
7. global coordination more broadly may, however, be a double-edged tool, since increased interdependency, if not well managed, can also increase the chance of systemic risks (goldin & mariathasan, ).
8. we thank an anonymous reviewer for suggesting both the third general recommendation and the example.
9. what about a risk that directly kills, say, . % of people? technically this poses only an indirect risk, since to cause extinction it needs to remove the capability of the survivors to recover. however, if the proportion threatened is high enough then we can reason that it must also have a way of reaching essentially everyone, so the analysis of direct risks will also be relevant.
10. some scholars have argued that humanity expanding into space would increase other risks; see for example an interview (deudney, n.d.) and an upcoming book (deudney, forthcoming) by political scientist daniel deudney, and torres ( a). assessing the overall desirability of space colonisation is beyond the scope of this article.
discussing voluntary frameworks and options classifying global catastrophic risks collective action to avoid catastrophe: when countries succeed, when they fail, and why risk and resilience for unknown, unquantifiable, systemic, and unlikely/catastrophic threats risk-risk tradeoff analysis of nuclear explosives for asteroid deflection', risk analysis towards an integrated assessment of global catastrophic risk global catastrophes: the most extreme risks integrating the planetary boundaries and global catastrophic risk paradigms how much could refuges help us recover from a global catastrophe? the long-term significance of reducing global catastrophic risks', the givewell blog existential risk prevention as global priority superintelligence: paths, dangers, strategies global catastrophic risks impact and recovery process of mini flash crashes: an empirical study expected utility theory and the tyranny of catastrophic risks the emergence of global systemic risk man-made catastrophes and risk information concealment: case studies of major disasters and human fallibility warnings: finding cassandras to stop catastrophes the selfish gene an interview with daniel deudney forthcoming) dark skies: space expansionism, planetary geopolitics, and the ends of humanity resilience thinking: integrating resilience, adaptability and transformability science and the precautionary principle the butterfly defect: how globalization creates systemic risks, and what to do about it policy brief: unprecedented technological risks resilience and stability of ecological systems synchronous failure: the emerging causal architecture of global crisis existential risks: exploring a robust risk reduction strategy financial black swans driven by ultrafast machine ecology representation of future generations in united kingdom policy-making horsepox synthesis: a case of the unilateralist's curse? governing boring apocalypses: a new typology of existential vulnerabilities and exposures for existential risk research adaptation to and recovery from global catastrophe', sustainability averting catastrophes: the strange economics of scylla and charybdis reducing the risk of human extinction existential risk and cost-effective biosecurity the economics of tail events with an application to climate change the basic ai drives probing the improbable: methodological challenges for risks with low probabilities and high stakes interpreting the precautionary principle global challenges: risks that threaten human civilization reasons and persons catastrophe: risk and response our final hour: a scientist's warning: how terror, error, and environmental disaster threaten humankind's future in this century -on earth and beyond on the future: prospects for humanity must accidents happen? lessons from high-reliability organizations probabilities, methodologies and the evidence base in existential risk assessments. 
dimensions of the precautionary principle
strategic aspects of difficult global challenges
laws of fear: beyond the precautionary principle
the catastrophic harm precautionary principle (issues in legal scholarship)
worst-case scenarios
the court of generations: a proposed amendment to the us constitution
obligations to future generations and acceptable risks of human extinction
philosophical, institutional, and decision making frameworks for meeting obligations to future generations
evaluating methods for estimating existential risks
human extinction risk and uncertainty: assessing conditions for action
agential risks: a comprehensive introduction
space colonization and suffering risks: reassessing the "maxipok rule"
agential risks and information hazards: an unavoidable but dangerous topic? (futures)
safe crime prediction: homomorphic encryption and deep learning for more effective
nuclear winter: global consequences of multiple nuclear explosions
evaluating future nanotechnology: the net societal impacts of atomically precise manufacturing
report of the open-ended intergovernmental expert working group on indicators and terminology relating to disaster risk reduction
climate shock: the economic consequences of a hotter planet
the vulnerable system: an analysis of the tenerife air disaster
on modeling and interpreting the economics of catastrophic climate change
the rhetoric of precaution
the tragedy of the uncommons: on the politics of apocalypse
cognitive biases potentially affecting judgment of global risks
owen cotton-barratt is a mathematician at the future of humanity institute, university of oxford. his research concerns high-stakes decision-making in cases of deep uncertainty, including normative uncertainty, future technological developments, unprecedented accidents, and untested social responses. max daniel is a senior research scholar at the future of humanity institute, university of oxford. his research interests include existential risks, the governance of risks from transformative artificial intelligence, and foundational questions regarding our obligations and abilities to help future generations. anders sandberg is a senior research fellow at the future of humanity institute, university of oxford. his research deals with the management of low-probability high-impact risks, societal and ethical issues surrounding human enhancement, estimating the capabilities of future technologies, and very long-range futures.
key: cord- - ge i s authors: andrews, jack l.; foulkes, lucy e.; bone, jessica k.; blakemore, sarah-jayne title: amplified concern for social risk in adolescence: development and validation of a new measure date: - - journal: brain sci doi: . /brainsci sha: doc_id: cord_uid: ge i s
in adolescence, there is a heightened propensity to take health risks such as smoking, drinking or driving too fast. another facet of risk taking, social risk, has largely been neglected. a social risk can be defined as any decision or action that could lead to an individual being excluded by their peers, such as appearing different to one's friends. in the current study, we developed and validated a measure of concern for health and social risk for use in individuals of years and over (n = ). concerns for both health and social risk declined with age, challenging the commonly held stereotype that adolescents are less worried about engaging in risk behaviours compared with adults.
the rate of decline was steeper for social versus health risk behaviours, suggesting that adolescence is a period of heightened concern for social risk. we validated our measure against measures of rejection sensitivity, depression and risk-taking behaviour. greater concern for social risk was associated with increased sensitivity to rejection and greater depressed mood, and this association was stronger for adolescents compared with adults. we conclude that social risks should be incorporated into future models of risk-taking behaviour, especially when they are pitted against health risks. adolescence is a sensitive period of development, characterised by significant changes in both the biological and social environment. in particular, adolescence is a time of social reorientation, greater susceptibility to peer influence and heightened sensitivity to social rejection [ ] . adolescents are also stereotyped as risk takers, which is likely due to evidence that risk behaviours, such as binge drinking, risky driving and smoking, are heightened during this period of life [ , ] . this commonly held perspective, that adolescence is a period of heightened risk taking, conceals a more nuanced reality. social context significantly affects adolescents' engagement in health risk behaviours. for example, evidence from car accidents shows that, for young drivers, the risk of engaging in a fatal car accident increases with the number of passengers in the car [ ] . this is reflected in the experimental literature, with one study finding that, when playing alone, adolescents and adults take a similar number of risks on an incentivised computerised driving task (the stop light task). however, when adolescents played the same driving game in the presence of friends, they took significantly more risks, which was not the case for adults [ ] . adolescents are also more likely to smoke, binge drink and take illicit substances with their peers, compared to when alone [ ] . however, not all adolescents take risks, and recent work has led to the suggestion that adolescence is in fact a time of increased sensitivity to risk, characterised by wide variation in risk seeking and risk aversion. according to the social risk hypothesis of depressed mood (discussed further below), depression may sit at the extreme end of concern for social risk, manifesting when environmental cues potentially signal that one's social burden is significantly greater than one's social value. however, few studies have directly investigated whether adolescence is a period of heightened concern for social risk, and the extent to which concern for social risk predicts depressive symptomatology. current questionnaire measures of risk-taking behaviour do not uniformly include social risks as a risk-taking domain, and instead focus on the domains of health (e.g., taking illicit substances), financial (e.g., gambling) or legal (e.g., stealing) risk. one adult risk-taking questionnaire, the domain-specific risk-taking questionnaire (dospert), includes a social risk subscale, but this includes items that are not applicable to adolescent populations. for example, the social-risk items in this measure include 'approaching your boss to ask for a raise' and 'taking a job that you enjoy over one that is prestigious but less enjoyable' [ ] . another issue with current questionnaire measures of risk taking is the conflation between health and social risk. many health risks carry with them some degree of social risk, e.g., smoking may carry with it both health and social risk considerations.
further, it is unclear whether concerns about social risk are independent of concerns for other risk domains, such as health risk behaviours, and thus whether an individual's propensity to take risks is uniform across risk domains. given these issues, we developed and validated a measure of concern for health and social risk, which is suitable for both adolescents and adults. in this measure, we conceptualised a social risk as any behaviour that marks individuals as being different from their peers; for example, openly endorsing music that friends do not like, or befriending an unpopular peer. we attempted to isolate the social-risk items by including social risks that involve little or no obvious health risk. we conceptualised a health risk as risks to one's physical wellbeing, such as crossing a street on a red light. we included health risk behaviours that have as little conflation with social risk as possible. we had four primary hypotheses. we first hypothesised that concern for social risk would be distinct from health risk concerns. in order to establish this, we developed a measure using exploratory and confirmatory factor analysis (efa; cfa) to assess whether health and social risk domains are distinct constructs. second, and in order to validate our measure, we hypothesised that higher concern for social risk would be associated with greater sensitivity to rejection and lower mood. we hypothesised that this relationship would be stronger for adolescents compared with adults. third, we hypothesised that greater concern for each risk domain would be positively related to risk perception and negatively related to engagement in that risk domain. finally, we hypothesised that concern for social risk would decrease with age from early adolescence to late adulthood, relative to concern for health risk. sample (exploratory factor analysis: efa; adults). participants (n = ) were recruited from two sources: the university participant pool (n = ) and prolific, an online participant recruitment and data collection platform (n = ). participants ( females, males, one did not disclose gender) were aged - years (mean = . , sd = . ). sample (confirmatory factor analysis: cfa; adults). participants (n = ) were recruited via prolific. participants ( females, males, two did not disclose gender) were aged - years (mean = . , sd = . ). sample (confirmatory factor analysis: cfa; adolescents). participants (n = ) were recruited from schools in the greater london area, as part of ongoing research projects in our lab. participants ( females, males, four did not disclose gender) were aged - years (mean = . , sd = . ). all participants were from the united kingdom and all completed the questionnaires online. ethical approval was obtained from the university ethics board ( / ; / ). participants were paid at a rate of approximately £ per hour for their time. we developed a questionnaire measure in order to assess the degree to which adolescents and adults are concerned about engaging in health and social risk behaviours. given that many social risks also incur health risks, we developed items with as little conflation between the two as possible. we developed a list of social-risk items, e.g., "spend time with someone your friends don't like", and health risk items, e.g., "cross a main road when the crossing light is red".
a panel of five researchers with expertise in adolescent social development reviewed an initial list of items and provided feedback on the content and suitability for individuals aged and above, with the aim of making sure each item was distinct from the opposing type of risk. following this, a total list of items was included in the scale validation: eight health and eight social (see table ). in the version of the questionnaire given to participants, individuals were asked: "for each statement please rate how worried you would feel doing this behaviour. (if you have never done it, imagine how you would feel)." answers were given on a sliding scale from "not worried at all ( )" to "very worried ( )". the questionnaire was administered online and the numbers ( - ) were visible along a slider (see supplementary materials for the final questionnaire). all participants completed a number of additional measures in order to assess the construct validity of the hsrq. all participants included in the adult cfa completed each additional measure (n = ). however, due to time constraints imposed by testing sessions, a subset of the participants in the adolescent cfa completed the rejection sensitivity (c-rsq; n = ) and depressed mood (mfq; n = ) measures only. adults: adult rejection sensitivity questionnaire (a-rsq). the a-rsq is a validated measure of sensitivity to actual or perceived rejection [ ] . individuals were presented with nine scenarios, such as "you approach a close friend to talk after doing or saying something that seriously upset him/her", and were asked to rate their rejection concern and level of acceptance expectancy. scores are computed by reversing the level of acceptance expectancy and multiplying this by the level of rejection concern. scores across the nine items are then averaged to create a total rejection sensitivity score; higher scores indicate higher rejection sensitivity. we hypothesised that higher scores on the social subscale of the hsrq would be positively associated with higher scores on the a-rsq. adolescents: children's rejection sensitivity questionnaire (c-rsq). participants completed the anxious expectations subscale of the children's rejection sensitivity questionnaire, which is a valid measure of rejection sensitivity in children [ ] . participants were presented with six scenarios and were asked to report on a scale of - their expected likelihood of the outcome of the scenario and how nervous they would be given the content of the scenario. their expected likelihood was multiplied by their nervous expectation for each scenario and then a mean score was derived across all items. higher scores relate to greater rejection sensitivity. we hypothesised that higher scores on the social subscale of the hsrq would be positively associated with higher scores on the c-rsq. adults: personal health questionnaire depression scale (phq- ). the phq- is a validated eight-item measure of depression [ ] . participants were asked how often over the past two weeks they had experienced eight different symptoms, such as "how often were you bothered by feeling down, depressed, or hopeless?" participants were asked to report on a -point scale ( = "not at all" [ . . . ] = "nearly every day"). we hypothesised that higher scores on the social subscale of the hsrq would be positively associated with higher scores on the phq- . adolescents: mood and feelings questionnaire (mfq). the mfq [ ] is a depression screening tool for individuals aged to years old. it is a validated measure of depression in children and young people [ ] .
individuals were presented with statements, such as "i felt miserable or unhappy", referring to the past two weeks. responses were scored on a -point scale ( = "not true", = "somewhat true", = "true"). we hypothesised that higher scores on the social subscale of the hsrq would be positively associated with higher scores on the mfq. adults: domain-specific risk-taking (dospert) scale. participants completed the health and social risk subscales of the item dospert scale, a validated risk-taking measure for adults [ ] . individuals were asked to report on a -point scale their likelihood of engaging in each activity or behaviour, such as "speaking your mind about an unpopular issue in a meeting at work" ( = "very unlikely" to = "very likely"), and their assessment of how risky each situation or behaviour was ( = "not at all risky" to = "extremely risky"). we hypothesised that higher scores on the social subscale of the hsrq would be negatively associated with the social risk engagement subscale of the dospert and positively associated with the social risk perception subscale of the dospert, with the same being true for the health risk subscales. adolescents. note that adolescents did not complete a social risk-taking measure because the items from the dospert are not appropriate for this age group (e.g., "approaching your boss to ask for a raise") and there is no existing social risk-taking measure for adolescents. all data were analysed primarily using the lavaan (version . - ), psych (version . . . ) and semtools (version . - ) packages in r (version . ; r core team, ). we first conducted an exploratory factor analysis (efa) using oblique (oblimin) rotation on the initial items relating to health and social risks (eight health, eight social) on a sample of adults. we determined the suitability of our sample size and data for efa based on the kaiser-meyer-olkin (kmo) index (> . ) and bartlett's test (< . ) [ ]. we determined the number of factors to retain based on examination of the scree plot, retention of factors with eigenvalues of or greater, and factors with at least three items. items with factor loadings of < . were removed. following factor and item reduction based on the above criteria, we subjected the same data to a confirmatory factor analysis (cfa) to assess the strength of the proposed factor structure. we then used cfa to assess the strength of this factor structure in two new samples: one adult group (aged - ; n = ) and one adolescent group (aged - , n = ). in line with the recommendations outlined by [ ] , our primary measure of model fit was the root mean squared error of approximation (rmsea). an rmsea of around < . indicates reasonable fit [ ] . we also assessed the model fit with the standardised root mean square residual (srmr; < . reasonable fit), comparative fit index (cfi; > . reasonable fit), and the tucker-lewis index (tli; > . reasonable fit). we computed measures of internal consistency using cronbach's alpha and mcdonald's omega. we further tested the fit of each two-factor cfa using aic, by comparing a one-factor solution (where all items load onto one higher-order risk factor) with the two-factor solution (health and social risk). a lower aic represents a better fit to the data. to assess convergent and divergent validity, we assessed the relationship between the new hsrq, rejection sensitivity [ , ] and depressed mood [ , ] across both cfa samples using pearson r correlations.
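the factor-analytic workflow just described can be summarised in a short r sketch using the packages the authors name (psych, lavaan, semtools). this is a minimal illustration under assumed inputs, not the authors' script: the data frame `items` and the column names h1-h8 and s1-s8 are hypothetical placeholders.

```r
# Minimal sketch of the EFA/CFA workflow described above; 'items' and the
# h1-h8 / s1-s8 column names are hypothetical placeholders.
library(psych)
library(lavaan)
library(semTools)

# Suitability of the data for EFA
KMO(items)                                      # Kaiser-Meyer-Olkin index
cortest.bartlett(cor(items), n = nrow(items))   # Bartlett's test of sphericity

# EFA with oblique (oblimin) rotation; inspect eigenvalues/scree to retain factors
efa_fit <- fa(items, nfactors = 2, rotate = "oblimin")
print(efa_fit$loadings, cutoff = 0.4)           # flag items loading below .4

# CFA of the retained two-factor (health, social) structure
model2 <- "
  health =~ h1 + h2 + h3 + h4 + h5 + h6 + h7 + h8
  social =~ s1 + s2 + s3 + s4 + s5 + s6 + s7 + s8
"
cfa2 <- cfa(model2, data = items)
fitMeasures(cfa2, c("rmsea", "srmr", "cfi", "tli"))

# Internal consistency (alpha, omega) and one- vs two-factor AIC comparison
reliability(cfa2)
model1 <- paste("risk =~", paste(names(items), collapse = " + "))
cfa1 <- cfa(model1, data = items)
AIC(cfa1, cfa2)                                 # lower AIC = better fit
```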
we then compared the strength of the relationship between the adolescent and adult samples with a z statistic. one additional risk-taking questionnaire, the dospert [ ] , was used to relate the hsrq to risk perception of, and engagement in, health and social risks, in the adult sample only. in order to establish the test-retest reliability of the hsrq, we invited participants from the adult cfa sample to complete the questionnaire a second time - days after the first completion. we used pearson r correlations to establish the relationship between these individuals' scores at time points and . using all the data collected (n = ), we computed a mean score for the validated health and social subscales. we determined the relationship between age and the two subscales of the hsrq using multiple linear regression. we included age, gender and risk domain (health, social) in the model, as well as an age*risk domain interaction, to predict risk concern. we used aic to compare between linear, quadratic and cubic models, with a lower aic representing a better fit. analyses showed that the sample size (n = ) was suitable for conducting factor analysis (kmo = . , bartlett's test < . ). factor loadings of each item are presented in table . three factors showed eigenvalues above our threshold of : . , . , . , respectively. a fourth factor with an eigenvalue of . was removed. the third factor (eigenvalue . ) only consisted of two items and so was removed. this resulted in a two-factor, -item solution. the two factors contained items pertaining to health risks ( items) and social risks ( items). we tested the strength of this two-factor solution on the same sample with cfa. the two-factor solution fit the data well (rmsea = . ( . - . ), srmr = . , cfi = . , and tli = . ). we conducted a cfa on a new sample of adults. the sample size was deemed appropriate for testing a model comprising parameters ( factor loadings, error variances and factor correlations). the model approximates to a : subject-to-parameter ratio, above the recommended : [ ] . the two-factor structure adequately fit the data according to our primary fit index; rmsea = . ( . - . ). other model fit indices were good (srmr = . ) or fell just below the suggested cut-off (cfi = . and tli = . ). factor loadings of each item (see table ) were medium to high ( . - . ) except for one item (loading of . ). although this item loading was low, we decided to retain it in order to maintain consistency with the factor structure in the adolescent sample and given its good loading in the adult efa and the adolescent cfa sample. there was a positive correlation between the health and social subscales of the hsrq (r( ) = . , p < . ). measures of internal consistency were good (see table ). an additional cfa to assess a one-factor structure did not achieve good model fit (rmsea = . ( . - . ), srmr = . , cfi = . , and tli = . ), indicating that concern about risk taking is not a unitary construct and is instead domain specific (health, social). the aic of the two-factor model ( . ) was lower than the aic of the one-factor model ( . ), suggesting that the two-factor model provided a better fit. to measure the test-retest reliability of the hsrq, adult participants were invited to complete the questionnaire a second time, - days later; participants responded. pearson r correlations between the two time points indicated good test-retest reliability (social risk subscale: r( ) = . , p < . ; health subscale: r( ) = . , p < . ).
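the "z statistic" used to compare correlation strength across the independent adolescent and adult samples is not specified further in the text; the standard choice, assumed in the sketch below, is fisher's r-to-z transformation. the input values shown are hypothetical.

```r
# Assumed implementation of the between-sample correlation comparison via
# Fisher's r-to-z transformation; r and n values below are hypothetical.
compare_correlations <- function(r1, n1, r2, n2) {
  z1 <- atanh(r1)                         # Fisher transform of each r
  z2 <- atanh(r2)
  se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3)) # standard error of the difference
  z  <- (z1 - z2) / se
  p  <- 2 * pnorm(-abs(z))                # two-tailed p-value
  c(z = z, p = p)
}

# Hypothetical example: adolescent vs adult correlation
compare_correlations(r1 = 0.55, n1 = 180, r2 = 0.30, n2 = 250)
```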
to assess convergent and divergent validity, participants also completed measures of rejection sensitivity (a-rsq), depressed mood (phq- ) and risk taking (dospert). association with rejection sensitivity. the social risk subscale positively correlated with rejection sensitivity (r( ) = . , p < . ), such that individuals who scored high on concern for social risk also scored high in rejection sensitivity (see figure , panel b). the health risk subscale did not significantly correlate with rejection sensitivity (r( ) = − . , p = . ). association with depressed mood. the social risk subscale positively correlated with depressed mood (r( ) = . , p = . ), such that individuals who scored high on concern for social risk also scored high in depressed mood (see figure , panel d). the health risk subscale did not significantly correlate with depressed mood (r( ) = − . , p = . ). association with risk taking. the social risk subscale of the hsrq negatively correlated with the likelihood of engaging in social risks subscale of the dospert (r( ) = − . , p < . ) and was positively correlated with the perception of social risks subscale of the dospert (r( ) = . , p < . ). in other words, individuals who scored high on concern for social risk on the hsrq were less likely to engage in social risk behaviours and more likely to rate social risk behaviours as risky. the health risk subscale of the hsrq was negatively correlated with the likelihood of engaging in health risks subscale of the dospert (r( ) = − . , p < . ) and was positively correlated with the perception of health risks subscale of the dospert (r( ) = . , p < . ). thus, individuals who scored high on concern for health risks were less likely to engage in health risk behaviours and more likely to rate health risk behaviours as risky. we conducted a cfa on a new sample of adolescents. the sample size was deemed appropriate for testing a model comprising parameters ( factor loadings, error variances and factor correlations). the model approximates to a : subject-to-parameter ratio, above the recommended : [ ] . the two-factor structure fit the data well (rmsea = . ( . - . ), srmr = . , cfi = . , and tli = . ). factor loadings of each item were medium to high ( . - . ) (see table ). there was a positive correlation between the health and social subscales of the hsrq (r( ) = . , p < . ). measures of internal consistency were good (see table ). an additional cfa to assess a one-factor structure did not achieve good model fit (rmsea = . ( . - . ), srmr = . , cfi = . , and tli = . ), indicating that concern about risk taking is not a unitary construct across domains, and is instead domain specific (health, social), as in the adult sample. the aic of the two-factor model ( . ) was lower than the aic of the one-factor model ( . ), suggesting that the two-factor model provides a better fit. validation. to assess convergent and divergent validity, a subset of the adolescent participants completed measures of rejection sensitivity (c-rsq; n = ) and depressed mood (mfq; n = ). association with rejection sensitivity. the social risk subscale positively correlated with rejection sensitivity (r( ) = . , p < . ), such that individuals who scored high on concern for social risk also scored high in rejection sensitivity (see figure , panel a). the health risk subscale did not significantly correlate with rejection sensitivity (r( ) = − . , p = . ). association with depressed mood. the social risk subscale positively correlated with depressed mood (r( ) = .
, p < . ), such that individuals who scored high on concern for social risk also scored high in depressed mood (see figure , panel c). the health risk subscale did not significantly correlate with depressed mood (r( ) = − . , p = . ). we compared the strength of the correlations between concern for social risk, rejection sensitivity and depression between the adolescent cfa and adult cfa samples. the correlations between concern for social risk and both rejection sensitivity and depression were stronger for adolescents than for adults (rejection sensitivity: z = . , p < . ; depression: z = . , p = . ). we conducted a multiple regression to assess the relationship between the hsrq and age, using data collected across all participants (n = ; aged - ). the outcome was risk concern (i.e., the mean score of the health and social subscales) and the predictor variables were age, gender, risk domain (health, social), and an age by risk domain interaction. the overall regression model was significant (r = . , f( , ) = . , p < . ; see table for estimates). there was a significant main effect of age (β = − . ; % ci: − . - . ; p < . ) and of risk domain (β = − . ; % ci: − . - . ; p < . ), and a significant interaction between age and risk domain (β = − . ; % ci: − . - . ; p < . ). there was no significant main effect of gender (β = . ; % ci: − . - . ; p = . ). to explore the interaction between age and risk domain, we plotted the relationship (figure ) and used simple slope analyses. the slopes for both risk domains were significant (social: β = − . , p < . ; health: β = − . , p < . ). there was a significant difference between the gradients of these slopes (t( ) = . , p = . ), driven by a steeper decline across age in concern for social risk compared to concern for health risk. this linear model (aic: . ) outperformed a quadratic model (aic: . ) and a cubic model (aic: . ). [figure : relationship between concern for social risk and rejection sensitivity for adolescents (r( ) = . , p < . ; panel a) and adults (r( ) = . , p < . ; panel b), and between risk concern and depression for adolescents (r( ) = . , p < . ; panel c) and adults (r( ) = . , p = . ; panel d).]
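the age analysis reported above translates directly into a few lines of r. the sketch below is an illustration under assumed structure, not the authors' code: it presumes a long-format data frame `hsrq_long` (one row per participant per risk domain) with hypothetical variable names.

```r
# Sketch of the age x risk-domain analysis described above, assuming a
# long-format data frame (one row per participant per domain); all object
# and variable names are hypothetical.
fit_linear <- lm(concern ~ age * domain + gender, data = hsrq_long)
summary(fit_linear)   # main effects of age and domain, plus age:domain interaction

# Simple slopes: one straightforward approach is refitting within each domain
lm(concern ~ age + gender, data = subset(hsrq_long, domain == "social"))
lm(concern ~ age + gender, data = subset(hsrq_long, domain == "health"))

# Compare linear, quadratic and cubic age trends by AIC (lower = better fit)
fit_quad  <- lm(concern ~ poly(age, 2) * domain + gender, data = hsrq_long)
fit_cubic <- lm(concern ~ poly(age, 3) * domain + gender, data = hsrq_long)
AIC(fit_linear, fit_quad, fit_cubic)
```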
[table : regression estimates. note: * = an interaction term; β = beta coefficient; se = standard error; t = t statistic (the β divided by the se); p = significance.] [figure : relationship between age and concern for health risk (slope: β = − . , p < . ) and social risk (slope: β = − . , p < . ); there was a significant difference between the gradients of these slopes (t( ) = . , p = . ), driven by a steeper decline across age in concern for social risk than for concern for health risk.] in this study, we developed a questionnaire measure of concern for health and social risk behaviours for use in adolescents and adults. our results showed that concerns related to engaging in social risks are distinct from concerns related to engaging in health risks. overall, we found that people reported greater concern for health risk compared with social risk. we investigated age differences in concern for health and social risk, and found that concern for both health and social risk decreased with age, from adolescence through adulthood. however, concern for social risk decreased to a greater extent than concern for health risk. this suggests that, relative to adults, adolescents are more concerned about social risks than health risks. this heightened concern for social risk in adolescence has implications for understanding why adolescents engage in health and legal risks.
one hypothesis is that adolescents are motivated to avoid what they consider to be a greater immediate risk, the social risk of being rejected or excluded by their peers [ ] . avoiding social risks can be considered an important goal during adolescence, a period when social status and friendships provide psychological and physical health benefits [ , ] . the association between our new measure, the health and social risk questionnaire (hsrq), rejection sensitivity and depression indicates the potential relevance of social risk for understanding adolescent behaviour and mental health. individuals who report greater concern for social risk were more likely to report greater sensitivity to rejection (adolescents: c-rsq; adults: a-rsq). social rejection is an unpleasant feeling and therefore it makes sense that individuals with a heightened degree of sensitivity to the negative effects of social rejection would be more concerned with engaging in situations that could lead to, or indicate a possibility of, social rejection. within the adult sample, individuals who scored high on concern for social risk were less likely to engage in socially risky behaviours and were more likely to rate social risk behaviours as risky. this finding indicates that higher concern for social risk is related to an increase in rejection sensitivity and an increase in socially risk-averse behaviour. concern for social risk was also related to depressive symptomatology (adolescents: mfq; adults: phq- ), such that individuals with greater concern for social risk were more likely to report higher levels of depressive symptoms. this finding supports the predictions made by the social risk hypothesis of depression [ ] . this hypothesis proposes that, when cues in the environment signal that one's social burden is significantly greater than one's social value, depression manifests as an adaptive mechanism to remove the individual from social situations which might confer further risk of social rejection. we showed that concern for social risk was more strongly associated with rejection sensitivity in adolescents ( - years) than in adults ( + years). during adolescence, individuals are particularly sensitive to social evaluative concerns [ ] , and peer perceptions influence adolescents' social and personal worth [ ] . adolescents are also hypersensitive to social rejection relative to adults [ ] . this fits with our finding that concerns for social risk are more tightly linked to rejection sensitivity among adolescents, relative to adults. in addition, and as previously discussed, adolescents with good quality friendships and higher social status have more favourable psychological and physical outcomes later in life. thus, it is potentially beneficial and adaptive for adolescents to try to avoid the risk of social rejection [ , ] . additionally, the association between concern for social risk and depressive symptoms was stronger in adolescents than adults. this suggests that the social environment may be particularly salient for mental health during this developmental period [ , ] . this is important because the incidence of many mental health problems, including depression, increases significantly during adolescence [ ] . our findings have a number of implications. at the theoretical level, the way in which risk behaviours have been traditionally conceptualised has focused heavily on the health, financial, legal and recreational domains.
our results suggest that social risk should be incorporated into our understanding of risk-taking behaviour. for some individuals, taking a social risk, and placing themselves at risk of social rejection, is a real and 'risky' decision. at the practical level, interventions aimed at reducing health and legal risk behaviours should recognise the importance of concerns surrounding social risks. one promising approach is to focus on peer-led interventions, which work to influence social norms surrounding unhealthy or illegal behaviours [ ] . this approach encourages healthy behaviours by reducing the social risk of being ostracised by peers. interventions using a peer-led approach have shown positive results for unhealthy behaviours such as bullying [ ] and smoking [ ] . the hsrq is a valid measure for individuals aged +. however, this measure has not been validated for children below this age, and very little is known about social risk in this younger age group. future work should explore the extent to which the current items and factor structure are valid for use in children below this age. additionally, we did not test the relationship between our measure of concern for social risk and engagement in real-life social risks in the adolescent sample ( - years) because of a lack of appropriately validated scales for this age group. this is a limitation when making comparisons with the adult sample ( +), and future work should explore the relationship between our concern for social risk measure and engagement in real-life social risks among adolescents. further, our sample was collected from the united kingdom and therefore this measure should be cross-culturally validated for use in other socio-cultural environments. in addition, the hsrq is based on self-report, and an important line of subsequent work is to relate responses on this questionnaire measure to a task-based assessment of social risk. finally, the present study was not designed to investigate the degree to which individuals weigh up the health vs. social consequences of a given 'risky' decision. therefore, an important outstanding question is the degree to which individual variation in concern for health and social risk impacts involvement in 'risky' behaviours, especially when individuals are presented with risks that often carry both social and health consequences, such as smoking or dangerous driving. in the current study, we developed a self-report measure of concern for health and social risk for use with adolescents and adults. we found that heightened concern for social risk was related to increased sensitivity to rejection and depression, with this relationship being stronger for adolescents compared to adults. this supports the body of evidence that adolescence is a period of heightened sensitivity to the social environment. in addition, both concern for health and social risk decreased with age, but the rate of decrease was steeper for social versus health risk, suggesting that adolescence is a period of amplified concern for social risk. practically, these findings have potential implications for policy. within an educational context, an understanding of social risk may offer insight into why adolescents are more or less motivated to engage with school work. for example, if individuals who try hard at school are perceived as unpopular or uncool, then being openly motivated in the classroom could be a social risk [ ] .
within a legal context, concerns surrounding social risk may be a factor in adolescents' decisions to engage in criminal behaviour, particularly in peer contexts when opting out of a group behaviour could risk being excluded from the group. together, these findings highlight the importance of social risk in adolescent behaviour and suggest that interventions to reduce risk-taking behaviours in this age group should consider the role of social risk. the following are available online at http://www.mdpi.com/ - / / / /s , table s : health and social risk questionnaire (hsrq).
is adolescence a sensitive period for sociocultural processing?
the relationship between early age of onset of initial substance use and engaging in multiple health risk behaviors among young adolescents
clustering of health-compromising behavior and delinquency in adolescents and adults in the dutch population
carrying passengers as a risk factor for crashes fatal to - and -year-old drivers
peer influence on risk taking, risk preference, and risky decision making in adolescence and adulthood: an experimental study
is it all in the reward? peers influence risk-taking behaviour in young adulthood
neural correlates of expected risks and returns in risky choice across development
age-related differences in social influence on risk perception depend on the direction of influence
social influence on risk perception during adolescence
social brain development and the affective consequences of ostracism in adolescence
the teenage brain: sensitivity to social evaluation
risk-taking and social exclusion in adolescence: neural mechanisms underlying peer influences on decision-making
avoiding social risk in adolescence
peer status in school and adult disease risk: a -year follow-up study of disease-specific morbidity in a stockholm cohort
goodyer, i.; the nspn consortium: adolescent friendships predict later resilient functioning across psychosocial domains in a healthy community cohort
peripheral ingroup membership status and public negativity toward outgroups
how intragroup dynamics affect behavior in intergroup conflict: the role of group norms, prototypicality, and need to belong
the role of peer rejection in adolescent depression
peer relationships in adolescence
the social risk hypothesis of depressed mood: evolutionary, psychosocial, and neurobiological perspectives
darwinian models of depression: a review of evolutionary accounts of mood and mood disorders
a domain-specific risk-taking (dospert) scale for adult populations
rejection sensitivity and disruption of attention by social threat cues
rejection sensitivity and children's interpersonal difficulties
the phq- as a measure of current depression in the general population
development of a short questionnaire for use in epidemiological studies of depression in children and adolescents: factor composition and structure across development
criterion validity of the mood and feelings questionnaire for depressive episodes in clinic and non-clinic subjects
statistical methods for health care research
conceptions and perceived influence of peer groups: interviews with preadolescents and adolescents
social support and mental health in late adolescence are correlated for genetic, as well as environmental
lifetime prevalence and age-of-onset distributions of mental disorders in the world health organization's world mental health survey initiative
peer influence in adolescence: public-health implications for covid-
changing climates of conflict: a social network experiment in schools
an informal school-based peer-led intervention for smoking prevention in adolescence (assist): a cluster randomised trial
role theory of schools and adolescent health
the authors declare no conflicts of interest.
key: cord- -g gesvzt authors: heitzer, andrew m.; piercy, jamie c.; peters, brittany n.; mattes, allyssa m.; klarr, judith m.; batton, beau; ofen, noa; raz, sarah title: cumulative antenatal risk and kindergarten readiness in preterm-born preschoolers date: - - journal: j abnorm child psychol doi: . /s - - - sha: doc_id: cord_uid: g gesvzt
a suboptimal intrauterine environment is thought to increase the probability of deviation from the typical neurodevelopmental trajectory, potentially contributing to the etiology of learning disorders. yet the cumulative influence of individual antenatal risk factors on emergent learning skills has not been sufficiently examined. we sought to determine whether antenatal complications, in aggregate, are a source of variability in preschoolers' kindergarten readiness, and whether specific classes of antenatal risk play a prominent role. we recruited preschoolers ( girls; ages - years), born ≤ / weeks' gestation, and reviewed their hospitalization records. kindergarten readiness skills were assessed with standardized intellectual, oral-language, prewriting, and prenumeracy tasks. cumulative antenatal risk was operationalized as the sum of complications identified out of nine common risks. these were also grouped into four classes in follow-up analyses: complications associated with intra-amniotic infection, placental insufficiency, endocrine dysfunction, and uteroplacental bleeding.
linear mixed model analyses, adjusting for sociodemographic and medical background characteristics (socioeconomic status, sex, gestational age, and sum of perinatal complications), revealed an inverse relationship between the sum of antenatal complications and performance in three domains: intelligence, language, and prenumeracy (p = . , . , . , respectively). each of the four classes of antenatal risk accounted for little variance, yet together they explained . %, . %, and . % of the variance in the cognitive, literacy, and numeracy readiness domains, respectively. we conclude that an increase in the co-occurrence of antenatal complications is moderately linked to poorer kindergarten readiness skills even after statistical adjustment for perinatal risk. electronic supplementary material: the online version of this article ( . /s - - - ) contains supplementary material, which is available to authorized users. comparisons between preterm-born preschoolers and their term-born peers yielded poorer scores in the former group, regardless of gestational age. group differences have been documented in expressive and receptive language abilities as well as in visuomotor or graphomotor (preprinting) skills in both very preterm (foster-cohen et al. ; caravale et al. ; torrioli et al. ) and late-preterm (baron et al. ; baron et al. ) cohorts. establishing the nature of early biomedical risk factors that forecast deficits in critical academic precursor skills is essential for identification of preschoolers at risk for deviation from the typical neurodevelopmental trajectory. findings from a recent quantitative integration of the literature suggest that both preterm birth and adverse antenatal factors are important antecedents of intellectual disability diagnosed between and years of age (huang et al. ). yet few preschool outcome studies included within-group examination of the links between complications associated with preterm birth and performance on neuropsychological tasks that tap domain-specific, literacy or numeracy, precursor skills. foster-cohen et al. ( ), focusing on the impact of peri- and neonatal, but not antenatal, complications, reported no significant associations between several early risk factors (including the sum of perinatal complications) and language delay within a cohort of four-year-old preschoolers born very preterm. in a similar sample of preschoolers born < weeks' gestation, torrioli et al. ( ) were able to document a significant relationship between a single major antenatal complication, intrauterine growth restriction, and intelligence (but not visual-motor integration). consistent with the prevailing fetal programming framework (barker ), conditions or risks originating in utero have the unique capacity to modify long-term physical health and behavioral outcome. though the exact mechanisms leading to disease or cognitive-behavioral deficits have yet to be specified, it has been suggested that fetal adaptation to environmental stress may involve vascular, metabolic or endocrine changes that permanently alter bodily structure or function (hocher et al. ) . antenatal perturbations are likely transmitted to the infant through their effects on placental function. the latter, in turn, adversely influences fetal and postnatal brain development and cognitive-behavioral outcome (zeltser and leibel ; buss et al. ; davis and sandman ).
within the high-risk group of preterm-born children, both the variability in the base-rates of various antenatal complications associated with prematurity and the sheer number of medical risk factors that require consideration often impede exploration of developmental outcome effects of early biological adversity. additionally, the developmental impact of discrete medical complications is probably cumulative (shalev et al. ) . cumulative risk indices may show increased stability across developmental periods, accounting for more outcome variability between individuals than specific complications (wade et al. ) . aggregate scores that reflect cumulative risk associated with a distinct early risk epoch may be more sensitive than discrete complications, which are likely linked to small effects that are difficult to detect (whitehouse et al. ) . although the influence of cumulative perinatal risk on developmental outcome of preterm children has received some consideration (e.g., carmody et al. ; foster-cohen et al. ) , the effects of cumulative antenatal risk in this vulnerable group remain essentially unexplored. hence, our chief objective was to examine the combined contribution of common antenatal complications to explaining individual differences in academic (kindergarten) readiness within a preterm-born cohort. in addition to gauging intellectual abilities, we evaluated language skills as precursors of reading attainment, visual-motor integration skills as antecedents of writing-skill development, and early number concepts as the preschool forerunners of math achievement. we predicted that degree of antenatal risk in preterm-born preschoolers would be inversely related to both general cognitive, and domain-specific, neuropsychological skills that provide the basis for scholastic achievement. preterm-born children (< weeks' gestation) were recruited for the study between and years of age and evaluated between may -july . the children were born at william beaumont hospital (wbh), royal oak, mi, in - , and treated in the neonatal intensive care unit (nicu). at wbh-nicu, resuscitation is attempted for all infants with an estimated gestational age ≥ / weeks (batton et al. ). children with congenital anomalies, or who required mechanical ventilation after discharge, were excluded from the retrospectively identified nicu cohort matching our inclusion criteria. the families of . % of cases could still be reached for a recruitment attempt based on contact information provided at the time of birth (see online resource ). of these, families of . % of cases were not interested, for multiple reasons. in accord with wbh human investigation committee guidelines, nonparticipating families were not specifically queried, yet amongst common reasons spontaneously provided by families for nonparticipation were being "too busy", residing too far away or not wanting to travel from suburban areas to the city. families of additional cases ( . % of those successfully contacted) "no-showed" for the scheduled assessment twice and were not rescheduled. the available participants constituted . % of the pool of cases whose families could be contacted between and years of age. of the evaluated cases, ten were excluded from data analyses, five with suspected antenatal substance exposure and five with missing background medical information pertinent to this investigation. altogether cases were available for this study (see table for sociodemographic characteristics).
as the table shows, our sample included participants from a very broad socioeconomic range. nonetheless, the relatively high mean ses ( . of possible points; sd = . ) reflects the composition of the catchment area of wbh. this region includes primarily middle social strata, thereby minimizing the possibility of confounding the effects of antenatal risk with other adverse environmental factors associated with socioeconomic status. correspondingly, about % of nicu admissions were covered by private medical insurance and only % by medicaid. additionally, . % of the mothers and . % of the fathers attained a college degree. the final sample included ( . %) boys and ( . %) participants of african-american descent. the proportion of males to females in our sample ( . %: . %) was not significantly different from the proportion observed in the remaining portion of the total relevant nicu cohort from which we attempted to recruit ( . %: . %; χ [ ] = . , p = . ). the proportion of singletons to multiples in our sample ( . %: . %) was also not significantly different from the proportion observed in the remainder of the total cohort ( . %: . %; χ [ ] = . , p = . ). information about racial distribution was available for two thirds of the total relevant cohort. as noted above, our sample included . % african americans, a somewhat smaller proportion than that observed in the remainder of the relevant nicu cohort ( . %;χ [ ] = . , p = . ). the mean gestational age ( . ± . weeks) and birth weight ( , ± g) in our sample approximated the mean gestational age ( . ± . weeks) and birth weight ( , . ± g) available for the total relevant cohort. likewise, the length of stay was also similar for our sample ( . ± . days) and the total cohort ( . ± . days), and the proportion of children requiring ventilation in our sample was not statistically different from that observed in the remaining portion of the total cohort ( . % vs. . %; χ [ ] = . , p = . ). according to parental report, none of the children had sustained a severe head injury with loss of consciousness. postnatal seizure history was reported for cases, yet only three required anticonvulsant medications. whereas the intellectual abilities of two of these three children fell well within the average range, the remaining case manifested severe intellectual deficits. as table shows, ninety participants were singletons and seventy were products of multiple pregnancy. sixty-four multiples were co-members of twinships or triplets and therefore shared antenatal risk. descriptive statistics regarding pregnancy and perinatal risk in our sample were based on data obtained retrospectively from hospital records ( table ). additional information regarding intervention procedures is provided in online resource . gestational age in our sample ranged from / - / weeks and was determined by maternal dates and confirmed by early prenatal ultrasound in > % of cases. three children with cp diagnosis (spastic diplegia) were included in the sample. additionally, there were three cases with perinatal diagnoses of grade iii intraventricular hemorrhage, one with grade iv, one with periventricular leukomalacia (pvl), one with both pvl and subsequent cerebral palsy (spastic diplegia), and one with grade iv hemorrhage coupled with hydrocephalus (requiring reservoir placement). adding the aforementioned single seizure case with severe intellectual impairment, the "significant brain injury" subgroup included cases. 
importantly, statistical analyses were performed both with and without these cases.
general assessment considerations. evaluations were conducted between and by clinical psychology graduate students who had been extensively trained in developmental neuropsychological assessment. they were kept uninformed about participants' medical background data, with the single exception of being aware that the children were nicu graduates. all testing and other data collection procedures were conducted in compliance with ethical standards of the helsinki declaration and its later amendments.
[table footnotes (partial; earlier entries lost): … kramer et al. ( ). i: summary score for the nine above-listed antenatal complications with sample frequency > %. j: includes any atypical presentations (breech, transverse lie, footling, etc.). k: arterial cord ph examined for participants; arterial ph below . was recorded (n = ); when only venous cord ph was available (n = ), venous ph below . was recorded (n = ); for four cases, initial capillary blood below . was recorded (n = ), whereas acid-base information was unavailable for five cases. l: as determined by obstetrician; > % of cases were corroborated by antenatal ultrasound; distribution: cases ≤ / weeks ( . %) + cases ≤ / weeks ( . %) + cases < / weeks ( . %) + cases ≤ / weeks ( . %). m: initial hematocrit < %. n: established by positive blood culture. o: chronic lung disease: supplemental oxygen required at weeks gestation or discharge for infants < weeks gestation; no cases were observed in this sample with gestational age ≥ weeks. p: based on a chest roentgenogram and clinical evaluation. q: peak bilirubin ≥ mg/dl. r: diagnosed at least once during nicu stay. s: documented on the basis of cranial ultrasound (mild = bleed grade & ; severe = grade & , using grading criteria by volpe); routine cranial ultrasound was given to all infants with gestational age ≤ weeks, and when clinically indicated to infants with gestational age > weeks; there were twenty cases ( . %) with mild brain bleeds (sixteen grade i and four grade ii) and seven ( . %) with severe intracranial pathology, including three grade iii intraventricular hemorrhage, two grade iv (one requiring shunt), and two cases with periventricular leukomalacia. t: infants discharged on the ventilator were not included in the current study. u: diagnosed by clinical manifestations and echocardiographic information. v: summary score for the nine above-listed perinatal complications with sample frequency > %. w: the rate of severe retinopathy of prematurity (> stage ) in the total sample was . %, below our inclusion cutoff (one case with stage iii, four cases with stage iii+, and two cases with stage iv), and there were no cases with grade iv rop after exclusion of eleven cases with significant neurological background; all cases with stage iii+ and stage iv had received laser treatment.]
general cognitive skills. intellectual functioning (prorated full scale iq; fsiq) was assessed with a wechsler preschool intelligence scale (wechsler ).
prereading skills. language skills were assessed using the core language (cl) score from the clinical evaluation of language fundamentals-preschool (celf-p ; wiig et al. ). the cl stability coefficient for ages : - : is high (r = . ), whereas internal consistency and split-half reliability are excellent (r xx = . and . , respectively). the cl scale is comprised of three subtests: sentence structure, word structure, and expressive vocabulary.
prenumeracy skills. quantitative knowledge and reasoning were measured using the applied problems (ap) subtest from the woodcock johnson (wj-iii) tests of achievement (woodcock et al. ).
at the prekindergarten level, ap assesses emergent counting, addition, and subtraction skills. the split-half reliability for ages and is excellent (r = . and . , respectively), and test-retest correlations range from r = . -. for ages - (woodcock et al. ). quantitative concepts (qc), another wj-iii task that may be administered at three years of age, does not possess an adequate floor (alfonso and flanagan ) and was therefore not included.

preprinting skills: eye-hand coordination skills were assessed using the visual-motor integration (vmi) subtest from the peabody developmental motor scales (pdms- ; folio and fewell ). the vmi includes items that require reaching and grasping, building with blocks, or copying designs. internal consistency reliability for - months of age is excellent (r = . ), whereas the overall test-retest (r = . ) and inter-scorer (r = . ) reliability coefficients are also exceptionally high (folio and fewell ).

descriptive information for performance measures: the mean fsiq, cl, and vmi scores (±sd) of our participants fell well within the average range on three of the four readiness measures ( . ± . , . ± . , and . ± . , respectively), whereas the mean ap score fell above average ( . ± . ). these favorable results are likely attributable to the preponderantly middle-class background of our sample. notably, participants' scores spanned a broad range (with similar ranges and sds for ap and cl), and it was our goal to explore the contribution of antenatal risk to explaining this variability. all standard scores were corrected for prematurity. recalculating the preterm preschooler's age at testing as time elapsed since the expected date of delivery allows one to derive standard scores based on age-reference norms of typical children who are similar in biological maturity. additional descriptive data for the total sample and for the subsample without significant brain injury may be found in online resource .

cumulative antenatal risk: consistent with the fetal programming hypothesis, our chief variable of interest was cumulative antenatal risk, operationalized as an index comprised of equally weighted complications. we included nine major complications with sample frequency > % (see table ) and with well-documented relationships to unfavorable neonatal outcome or neurobehavioral sequelae. hence, for each participant we summed the identified complications out of the nine relatively common antenatal risks, with the following sample distribution: ( %), ( . %), ( . %), ( . %) and ( . %). no cases were observed with ≥ complications. the correlations between pairs of antenatal complications were moderate at best, suggesting that the information provided by discrete complications was not redundant. the strongest relationship was observed between a diagnosis of histological chorioamnionitis and a pprom latency duration (time between membrane rupture and birth) > h (r = . ; p < . ). a sketch of this scoring scheme follows below.

cumulative perinatal risk: a cumulative perinatal risk score was also computed and was included as a covariate. nine major perinatal complications with sample frequency > % provided the basis for the perinatal summary score (see table ). abnormal presentation was not included because this complication is thought to exert little influence on long-term outcome (eide et al. ) and because its potentially unfavorable outcome effects are believed to be the result of confounding with the effects of prematurity or gestational age (ismail et al. ).
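the cumulative risk indexes described above are equally weighted sums of binary complication indicators. the following is a minimal sketch of that scoring scheme; the column names are illustrative stand-ins, and the nine antenatal complications follow the composite defined in the table notes.

```python
# minimal sketch: sum equally weighted binary (present = 1 / absent = 0)
# complication indicators into a cumulative antenatal risk score.
# column names are hypothetical labels for the nine complications.
import pandas as pd

antenatal_cols = [
    "abruption", "previa", "chorioamnionitis", "diabetes",
    "hypertension", "hellp", "hypothyroidism", "pprom_long_latency", "iugr",
]

def cumulative_risk(df: pd.DataFrame, cols: list[str]) -> pd.Series:
    """sum equally weighted binary complication indicators per participant."""
    return df[cols].sum(axis=1)

# example: two hypothetical participants, one with no complications
df = pd.DataFrame([[0] * 9, [1, 0, 1, 0, 0, 0, 0, 1, 0]], columns=antenatal_cols)
df["antenatal_risk"] = cumulative_risk(df, antenatal_cols)
print(df["antenatal_risk"].tolist())  # [0, 3]
```

the same scheme applies to the perinatal composite, substituting the nine perinatal complication columns.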
the latter variable was taken into account as a separate risk factor in this study. background medical information was obtained retrospectively from the mother's labor and delivery hospitalization as well as the nicu records. intra-class correlations (icc- ; shrout and fleiss ) for the antenatal and the perinatal complications composite scores were excellent, owing in part to the ease and efficiency of searching electronic records. based on two independent and trained medical records reviewers, the icc equaled or approached unity for ten cases (icc [ ] = . and . for the antenatal and perinatal composites, respectively). construct validity of the two summary scores was established in the singletons subsample (n = ) by examining the associations between the cumulative antenatal risk score and birth weight, an index of fetal growth and well-being, and between the cumulative perinatal risk score and both length of nicu stay and days on the ventilator, two indexes of need for medical intervention.

in addition to the singletons and seven single members of twin pairs, our sample included sets of co-twins and three sets of co-triplets. to capture the correlation between participants within sets of multiples, we used spss mixed (maximum likelihood) to fit separate linear mixed-effects models for each outcome variable, with multiplicity (cases from the same birth mother) as a random effect. thus, the individual multiples nested within each set were conceptualized as replications and were given a random block number that was unique but identical for set members. in contrast, singleton children, or multiples without an evaluated co-multiple, had no replicates and were considered a random block of size . this model enabled both co-multiples and singletons to be used in the same analysis without either violating independence assumptions or, alternatively, discarding important information by including only a single member of each set. the variable of interest, antenatal risk (the sum of antenatal complications), was entered together with the sociodemographic covariates, sex and ses (hollingshead ), into the model. the hollingshead index is a composite score reflecting family social status; the score is based on four factors: parental marital status, employment status, educational attainment and occupational prestige. two additional covariates were entered to capture the influence of perinatal risk: gestational age and the perinatal complications summary score (see table ). birth weight was highly correlated with gestational age (r = . ; p < . ) and was therefore not included. hence, the effect of antenatal risk on readiness was statistically adjusted for the influence of four covariates, with all five predictors entered simultaneously into the model (a sketch of this specification follows below). information about bivariate correlations among the five predictors may be found in online resource . one should note here that a fifth covariate, iq test edition, was used in analyses of cognitive outcome. the predicted variables were the cognitive (prorated fsiq) and pre-academic (celf-p cl score, wj-iii ap subtest score, and pdms- vmi score) measures of kindergarten readiness, corrected for degree of prematurity. information about the correlations between the five predictors and the four outcome measures is presented in online resource . three ( . %) of the participants were unable to complete any of the tasks included in this evaluation (two cases with cp and a single case without significant neurological findings who had required > . months on the respirator at birth).
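as a hedged sketch of the mixed-model specification described above (the authors used spss mixed; python/statsmodels is substituted here as one equivalent), the snippet below fits one outcome with a random intercept per birth-mother set, so that co-multiples share a group id while singletons form groups of size one. all variable names are illustrative.

```python
# hedged python/statsmodels analogue of the spss mixed analysis described
# above; not the authors' code. assumes a dataframe with one row per child,
# a "set_id" column (shared by co-multiples, unique for singletons), the
# outcome scores, and the five predictors under illustrative names.
import pandas as pd
import statsmodels.formula.api as smf

def fit_readiness_model(df: pd.DataFrame, outcome: str):
    """fit one outcome on antenatal risk plus the four covariates by maximum
    likelihood (reml=False), with a random intercept per birth-mother set."""
    formula = f"{outcome} ~ antenatal_risk + sex + ses + gest_age + perinatal_risk"
    model = smf.mixedlm(formula, data=df, groups=df["set_id"])
    return model.fit(reml=False)

# usage: result = fit_readiness_model(df, "fsiq"); print(result.summary())
```

because singletons contribute groups of size one, the random-intercept structure leaves their rows effectively independent while absorbing the within-set correlation of twins and triplets, which is the design rationale stated above.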
of the children without significant brain injury who completed at least one task, a minority of cases failed to complete the tasks needed to obtain a score on one or more of the four preacademic performance indices (n = [ . %], [ . %], [ . %], and [ . %] cases for fsiq, cl, ap, and vmi, respectively). due to the children's young age, it was difficult, if not impossible, to ascertain the causes of failure to attempt or follow instructions for specific subtests. examination of the correlates of the sum of incomplete subtests per participant revealed no significant associations with sociodemographic variables (r[ ] = −. and −. for ses and sex; p = . and . , respectively) or with preschool attendance (r[ ] = . ; p = . ). in contrast, a significant inverse relationship between the number of incomplete subtests and the fsiq was observed (r[ ] = −. , p < . ). to avoid bias resulting from the potential influence of the missing subtest scores, we applied multiple imputation to replace these performance data (a sketch of this step follows below). the results of data analyses are reported both before and after exclusion of cases with significant brain injury.

prior to data analyses, interactions between sex and the remaining predictors were examined for all outcome measures. as none of the interactions were significant (all p values > . ), the reduced models were used. follow-up analyses were conducted to explore whether any observed associations between antenatal risk and kindergarten readiness were attributable to a particular class of complications. hence we grouped the nine antenatal complications comprising the summary score (table ) into four categories. these four groupings were based on shared etiological pathways, or on shared antepartum symptoms coupled with at least some shared antecedents: specifically, complications associated with (1) intra-amniotic infection and inflammation (histologic chorioamnionitis and membranes ruptured > h; e.g., stock et al. ); (2) placental insufficiency (hypertension in pregnancy, hellp syndrome, and iugr; e.g., stepan et al. ); (3) maternal endocrine dysfunction (hypothyroidism and diabetes; e.g., haddow et al. ); or (4) uteroplacental bleeding (abruption and previa; e.g., berhan ; getahun et al. ). category scores were then assigned to participants based on the sum of complications accumulated within each class of antenatal risk. the data were then reanalyzed using the same sociodemographic and medical variables as covariates; however, we substituted the four sums of risk scores for the single composite sum of nine antenatal complications.

table shows analyses of performance for the total sample and for the subsample without significant brain injury cases. as shown, the antenatal complications score was inversely related to the fsiq, both before (p = . ) and after (p = . ) exclusion of brain injury cases. analyses of preacademic performance revealed similar associations between antenatal risk and both the cl and ap scores, before (p = . and p = . , respectively) and after (p = . and p = . , respectively) excluding brain injury cases. no significant relationships were observed between cumulative antenatal risk and the vmi, yet the vmi was linked to gestational age (p = . and . ).
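the multiple-imputation step mentioned above is sketched below under stated assumptions: the authors' exact imputation model is not specified in this excerpt, so scikit-learn's iterative imputer over the four outcome scores is offered as one plausible stand-in, with several seeded runs pooled by averaging in the spirit of multiple imputation.

```python
# hedged illustration of multiple imputation for missing subtest scores;
# the authors' actual procedure may differ. scores are hypothetical.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

scores = np.array([
    [98.0, 101.0, np.nan, 95.0],   # fsiq, cl, ap, vmi for one child (ap missing)
    [105.0, np.nan, 99.0, 102.0],
    [92.0, 96.0, 94.0, 90.0],
])

# several imputations with different seeds, pooled by averaging
imputations = [
    IterativeImputer(random_state=seed, sample_posterior=True).fit_transform(scores)
    for seed in range(5)
]
pooled = np.mean(imputations, axis=0)
print(pooled.round(1))
```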
(figure: added-variable plots depicting relationships between residualized antenatal risk scores and outcomes.)

notes to table : of the remaining cases, two failed to obtain a score on the fsiq (one with brain injury), nine on the cl ( with brain injury), seventeen on the ap (three with brain injury), and six on the vmi; f) of the "non-neurological" cases completing the tasks required for a score on at least one of the four outcome measures, one (. %) could not obtain a score on the fsiq, six ( . %) on the cl, fourteen ( . %) on the ap, and six ( . %) on the vmi; g) computation of Δr² based on snijders and bosker ( , pp. - ); h) the antenatal complications score is a composite of the presence vs. absence of nine complications: placental abruption, placenta previa, chorioamnionitis, diabetes, hypertension in pregnancy, hellp syndrome (hemolysis, elevated liver enzymes, low platelet count), hypothyroidism, preterm premature rupture of the membranes (pprom) > h, and intrauterine growth restriction; i) due to the paucity of cases with four antenatal complications, analyses were repeated with three and four complications combined into a single category.

table shows the relative contribution of each of the four categories of antenatal risk to the four performance measures; the proportion of variance explained (Δr²) is provided for statistical effects with p < . (a sketch of one way to compute such a measure follows below). as the table reveals, intra-amniotic infection risk was significantly related to the fsiq (p = . ), cl (p = . ), and ap (p = . ), whereas conditions associated with placental insufficiency were significantly related to cl (p = . ), with trends for associations with the fsiq and ap (p = . and . , respectively). maternal endocrine dysfunction was associated with cl (p = . ), and disorders associated with uteroplacental bleeding were significantly related to the fsiq (p = . ).

following statistical adjustment for sociodemographic and perinatal confounds, the sum of nine relatively common antenatal complications remained a significant source of variability in preterm-born preschoolers' cognitive and academic performance. exploration of the relative outcome contribution of the four classes of antenatal risk revealed that complications associated with intra-amniotic infection, placental insufficiency and uteroplacental bleeding accounted for . %, . % and . % of iq variance, respectively, altogether . % of the variability in kindergarten cognitive readiness. similarly, complications associated with maternal endocrine dysfunction, intra-amniotic infection, and placental insufficiency accounted for . %, . % and . % of the variance in language skills, respectively, altogether . % of the variability in literacy readiness. complications associated with intra-amniotic infection risk and placental insufficiency contributed . % and . % of the variance in early number concepts, respectively, a combined share of . % of the variability in numeracy readiness. hence, when considered separately, each of the four risk categories accounts for a small slice of outcome variance in one or more preacademic domains (except visual-motor integration). yet in aggregate they accounted for . - . % of the variance in kindergarten readiness of preschoolers free of major handicaps, consistent with effect sizes of moderate magnitude (cohen ). interestingly, amongst the four categories of antenatal risk examined here, intra-amniotic infection was the most consistent contributor to kindergarten readiness.
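the Δr² entries above follow snijders and bosker's variance-explained approach for multilevel models. the sketch below shows one common variant for random-intercept models, comparing total residual variance (level-1 plus level-2) before and after adding a predictor; the authors' exact computation, per the pages they cite, may differ.

```python
# hedged sketch of a snijders & bosker-style variance-explained measure for
# random-intercept models. "reduced" and "full" are statsmodels MixedLMResults
# objects, e.g., as returned by fit_readiness_model() in the earlier sketch.
def total_variance(result) -> float:
    """level-1 residual variance plus random-intercept variance."""
    return float(result.scale + result.cov_re.iloc[0, 0])

def delta_r2(reduced, full) -> float:
    """proportion of total variance explained by the added predictor(s)."""
    v0, v1 = total_variance(reduced), total_variance(full)
    return (v0 - v1) / v0

# usage: print(delta_r2(reduced_result, full_result))
```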
these findings are compatible with recent evidence that inflammation, including chorioamnionitis, contributes to preterm cns injury and is also an independent risk factor for brain injury in the term infant (yellowhair et al. ). these results are also consistent with reports of a marked increase in the probability of unexplained cerebral palsy in the presence of antenatal inflammation-infection (horvath et al. ). in this context, one should note that a sizeable proportion of the mothers in our sample received antibiotic prophylaxis for prevention of early-onset neonatal infection (see online resource ). in a recent integrative review of the literature, braye et al. ( ) highlighted the observed decrease in the incidence of early-onset infection since the introduction of intrapartum antibiotic prophylaxis, as well as findings of randomized controlled studies documenting its effectiveness. at the same time, however, braye and colleagues emphasized that the longer-term health implications of prophylaxis for early-onset infection are unknown. it is difficult, therefore, if not impossible, to draw conclusions regarding the potential implications of antibiotic prophylaxis for kindergarten readiness until such data become available. nonetheless, our findings revealed that each of the four classes of antenatal risk studied here contributed significantly to explaining performance variance in one or more kindergarten readiness domains.

the putative influence of each antenatal risk category on brain maturation trajectory and cognitive-behavioral development may be conceptualized within the broad framework of fetal programming of disease (buss et al. ; zeltser and leibel ; myatt ; andersen et al. ; miller et al. ; godfrey ). however, the biological mechanisms mediating the relationship between cumulative antenatal risk and kindergarten readiness require specification. antenatal stressors lead to placental adaptive responses to variations in the maternal-fetal environment (myatt ). these responses, in turn, are followed by fetal adaptations expressed via vascular, metabolic or endocrine changes that permanently modify bodily structure or function (hocher et al. ). the precise nature and sequence of the biological changes mediating deficits in cognitive-behavioral functioning have yet to be elucidated. to accomplish this goal, future investigations should incorporate measures of potential sequential mediators, including indexes of placental size or function, intrauterine cerebral development, and postnatal brain structure or function. our findings are consistent with zeltser and leibel's ( ) observation that dissimilar intrauterine stress factors may nonetheless lead to similar fetal outcomes because they activate related mechanisms of placental adaptation (fowden et al. ) which, in turn, shape the trajectory of fetal brain development (buss et al. ). the thesis that diverse insults converge on similar unfavorable outcomes (a relationship mediated by fetal brain programming) is compatible with the notion that the amalgamation of antenatal complications into a single index, or into several classes of risk, may offer an important tool for the study of individual differences among preterm-born children in the severity of neurocognitive deficits. antenatal complications contributed less than ses to explaining outcome variance, although effect magnitude was typically moderate for both variables.
girls outperformed boys on all measures, consistent with other reports of a female outcome advantage following preterm birth (lauterbach et al. ). similar to foster-cohen et al. ( ), we did not find significant associations between the sum of perinatal complications and kindergarten readiness in this young age group. however, gestational age, often considered a proxy for perinatal risk, was found to be linked to the development of visual-motor integration skills even after we excluded cases with evidence of significant brain injury from the analyses. additionally, there was a weak trend (p < . ; table ) for a relationship between gestational age and global intellectual skills. the absence of the oft-reported significant relationships between the degree of prematurity and the remaining outcome measures (e.g., heuvelman et al. , a recent epidemiological investigation) was somewhat unexpected. it is possible that within a restricted gestational age range, where values ≥ weeks were truncated, the relationships between gestational age and preschool performance are more difficult to demonstrate, whereas the adverse influence of other risk factors (e.g., antenatal complications) becomes more apparent.

notes to table : f) the perinatal complications score is a composite score reflecting the presence vs. absence of nine complications: anemia, bronchopulmonary dysplasia, bacterial infection, hyaline membrane disease, hyperbilirubinemia, hypoglycemia, intracranial pathology, patent ductus arteriosus, and supplemental oxygen requirement following discharge; g) the intra-amniotic infection risk score is a composite of two complications believed to share etiological pathways: preterm premature rupture of membranes (pprom) > h and histological chorioamnionitis; h) the placental insufficiency score is a composite of three complications thought to share etiological pathways: maternal hypertension, hellp syndrome (hemolysis, elevated liver enzymes, low platelet count), and intrauterine growth restriction; i) the maternal endocrine dysfunction score is a composite of two complications found to share some etiological pathways: maternal diabetes and hypothyroidism; j) the uteroplacental bleeding risk score is a composite of two complications sharing antenatal symptomatology: placental abruption and placenta previa.

interestingly, the absence of gestational age effects on several performance measures employed here seems consistent with findings from a recent meta-analysis of the long-term cognitive and academic performance of children born with various degrees of prematurity (allotey et al. ). the quantitative integration revealed a minuscule, if any, effect of lower gestational age: children born between and weeks performed almost as poorly as those born < weeks. as noted earlier, our findings of an association between cumulative antenatal risk and kindergarten readiness are consistent with the fetal programming hypothesis. researchers of fetal programming have typically studied the role of the prenatal environment without taking into consideration the perinatal or postnatal environment (grant et al. ). in the current study, however, we statistically adjusted cumulative antenatal risk for perinatal complications and for socioeconomic status in an effort to account for confounding influences occurring after birth. limitations of this study include the retrospective nature of the participant identification and recruitment component.
medical data were also collected retrospectively, yet interrater reliability for information obtained from medical records was excellent. the difficulty a small number of participants had in completing various tasks is not surprising, given the combination of young age and increased biomedical risk in our sample. the percentage of missing outcome data for the children without significant neurological injury completing at least one task ranged from negligible ( . %) for cognitive readiness (fsiq) to moderate ( . %) for numerical readiness (ap). the amount of missing data for three of the four outcome measures was nonetheless small (< . %). although our sample included children with a wide range of socioeconomic circumstances, the mean ses and educational characteristics of the sample revealed a greater representation of middle-class strata, thereby reducing the potentially confounding effects of socioeconomic adversity on kindergarten readiness. nonetheless, because generalizability to lower strata was to some extent traded off for improved internal validity, our findings may have underestimated the statistical effects in samples with greater representation of the lower end of the socioeconomic scale. the cross-sectional nature of this investigation precluded examination of the generalizability of the findings to elementary school readiness and beyond.

in the current investigation antenatal risk accounted for up to . % of the variance in kindergarten readiness. antenatal risk estimation was based on a simple frequency count of common complications. a more sensitive measure of risk could be developed to take into account the severity of discrete complications within each of the four antenatal risk categories examined here. increased sensitivity, in turn, may serve to enhance the magnitude of the statistical effects observed here between antenatal risk and emergent academic skills. as noted above, we statistically adjusted for degree of gestational maturity and for the presence of nine common perinatal risk factors (table ) in order to estimate the unique portion of outcome variance attributable to cumulative antenatal risk. we further examined the data with and without 'neurological' cases, based on both ultrasound evidence obtained in the nicu and subsequent evidence of cerebral palsy. nonetheless, one may argue that the full effect of confounding perinatal risk factors such as chronic lung disease or germinal matrix hemorrhage may become evident only in the longer term, when academic performance demands increase. a longitudinal study of educational attainment is best suited to address this issue. the significance of early academic skills as the foundation of scholastic achievement has been established in multiple follow-up investigations of typical kindergartners into the early school years (e.g., pagani et al. ; romano et al. ; grissmer et al. ). moreover, developmental continuity has been demonstrated between preschool quantitative and oral-language skills and elementary school attainment in math and reading, in a variety of student populations (e.g., davison et al. ; manfra et al. ; manfra et al. ; nguyen et al. ). the existence of robust developmental continuity between pre-academic and academic skill levels supports the notion that the same factors that explain variability in emergent academic skills also account for variability in mastery of both the reading process and basic arithmetic in the early elementary-school years.
based on our findings, it is therefore likely that differences in scholastic achievement among graduates of the nicu are partly explained by cumulative antenatal risk. a follow-up study of preterm-born preschoolers, extending to early school age and beyond, will be needed to support this thesis.

references:
development of preschool and academic skills in children born very preterm
comparative features of comprehensive achievement batteries
cognitive, motor, behavioural and academic performances of children born preterm: a meta-analysis and systematic review involving , children
psychiatric disease in late adolescence and young adulthood. foetal programming by maternal hypothyroidism
maternal thyroid function in early pregnancy and child neurodevelopmental disorders: a danish nationwide case-cohort study
in utero programming of chronic disease
hellp syndrome and the effects on the neonate
visuospatial and verbal fluency relative deficits in "complicated" late-preterm preschool children
cognitive deficit in preschoolers born late-preterm
one hundred consecutive infants born at weeks and resuscitated
predictors of perinatal mortality associated with placenta previa and placental abruption: an experience from a low income country
effectiveness of intrapartum antibiotic prophylaxis for early-onset group b streptococcal infection: an integrative review
fetal programming of brain development: intrauterine stress and susceptibility to psychopathology
cognitive development in low risk preterm infants at - years of life
early risk, attention, and brain activation in adolescents born preterm
a power primer
prenatal psychobiological predictors
associations between preschool language and first grade reading outcomes in bilingual children
neonatal outcomes associated with placental abruption
breech delivery and intelligence: a population-based study of , breech infants
peabody developmental motor scales
high prevalence/low severity language delay in preschool children born very preterm
placental efficiency and adaptation: endocrine regulation
associations of existing diabetes, gestational diabetes, and glycosuria with offspring iq and educational attainment: the avon longitudinal study of parents and children
previous cesarean delivery and risks of placenta previa and placental abruption
the role of the placenta in fetal programming: a review
prenatal programming of postnatal susceptibility to memory impairments: a developmental double jeopardy
fine motor skills and early comprehension of the world: two new readiness indicators
free thyroxine during early pregnancy and risk for gestational diabetes
four-factor index of social status
prenatal, perinatal and neonatal risk factors for intellectual disability: a systemic review and meta-analysis
comparison of vaginal and cesarean section delivery for fetuses in breech presentation
meta-analysis of the association between preterm delivery and intelligence
a new and improved population-based canadian reference for birth weight for gestational age
neonatal hypoxic risk in preterm birth infants: the influence of sex and severity of respiratory distress on cognitive recovery
the association between maternal subclinical hypothyroidism and growth, development, and childhood intelligence: a meta-analysis
hypertensive disorders of pregnancy and risk of neurodevelopmental disorders: a systematic review and meta-analysis protocol
associations between counting ability in preschool and mathematic performance in first grade among a sample of ethnically diverse, low-income children
preschool writing and premathematics predict grade achievement for low-income, ethnically diverse children
the consequences of fetal growth restriction on brain structure and neurodevelopmental outcome
cognitive function after intrauterine growth restriction and very preterm birth
placental adaptive responses and fetal programming
which preschool mathematics competencies are most predictive of fifth grade achievement? (early childhood research quarterly)
school readiness and later achievement: a french canadian replication and extension
neonatal outcome in preterm deliveries before -week gestation: the influence of the mechanism of labor onset
school readiness and later achievement: replication and extension using a nationwide canadian survey
perinatal complications and aging indicators by midlife
intraclass correlations: uses in assessing rater reliability
multilevel analysis: an introduction to basic and advanced multilevel modeling
chorioamnionitis occurring in women with preterm rupture of the fetal membranes is associated with a dynamic increase in mrnas coding cytokines in the maternal circulation
perceptual-motor, visual and cognitive ability in very low birthweight preschool children without neonatal ultrasound abnormalities
intracranial hemorrhage: germinal matrix-intraventricular hemorrhage of the premature infant
cumulative biomedical risk and social cognition in the second year of life: prediction and moderation by responsive parenting
wechsler primary and preschool scale of intelligence™ (the psychological corporation)
the effect of placenta previa on fetal growth and pregnancy outcome, in correlation with placental pathology
prenatal, perinatal, and neonatal risk factors for specific language impairment: a prospective pregnancy cohort study
clinical evaluation of language fundamentals: preschool-
woodcock-johnson tests of achievement
preclinical chorioamnionitis dysregulates cxcl /cxcr signaling throughout the placental-fetal-brain axis
roles of the placenta in fetal brain development

acknowledgements: the authors thank beth kring and tammy swails for their help in data collection. evaluations and testing materials were funded in part by the merrill-palmer skillman institute. none of the authors has a known conflict of interest concerning this manuscript. funding: this work was supported in part by funding from the merrill palmer skillman institute, wayne state university, east ferry, detroit, mi .

key: cord- -s e bwx authors: pulcini, elena title: spectators and victims: between denial and projection date: - - journal: care of the world doi: . / - - - - _ sha: doc_id: cord_uid: s e bwx

this chapter goes into the unproductive metamorphosis of fear, and analyses the defence mechanisms that it generates: namely denial and projection. in the case of global risks, fear provokes self-defensive strategies based on denial (in the face of the nuclear challenge) and self-deception (in the face of global warming); and, in the case of the threat of the other, projective and persecutory strategies based on reactivating the dynamic of the 'scapegoat'. they are two contrasting but specular responses which, at the emotional level, reflect the divarication between (unlimited) individualism and (endogamous) communitarianism.
the first, implosive response converts into an absence of fear, attested to above all by the figure of the global spectator, while the second, explosive response converts into an excess of fear (fear of the other, fear of contamination), fuelled by forms of reinventing community. these responses are defined as irrational since in the first case they inhibit the spectator's capacity to recognize himself as also a potential victim of the threats, thus preventing his mobilization, and in the second case they give rise to dynamics of demonization-dehumanization of the other, which result in a spiral of violence and impede forms of solidarity.

in this case a subjective factor comes into play, linked to the capacity and manner of perceiving the threats. it is telling that sociology and psychology converge on the importance of this aspect, underlining the fact that the very characteristics of the risk have a definite influence on the way in which it is perceived. ulrich beck had already stressed the fact that the often invisible nature of the global risks, the unforeseeability of their effects and the only potential character of the damage which they provoke mean that they are removed from our perception and require the intervention of a reflexive attitude interpreting the new scenarios through a knowledge that is equal to the new challenges. but in reality the problem is more complex still, since rather than an absence, we are faced with processes that distort the perception and assessment of risk, which affect both the emotional and the cognitive spheres, and above all how they interact together. among the approaches sensitive to this problem, the one which seems to dwell on it most is cognitive psychology. starting from the classic studies by chauncey starr and then fischhoff and slovic, and on the basis of the so-called psychometric paradigm, cognitive psychology has built complex cognitive maps aimed at providing as exhaustive a list as possible of the variables that influence the subjective perception of risk. the conclusions that have emerged from this interpretative approach show, for example, that concern in the face of threats (whether they derive from particular activities, substances or technologies) grows in correspondence to certain characteristics, amongst which the involuntary nature of the risks, the impossibility of controlling them, their capacity to cause irreversible damage and their originating from an unknown source. but above all, the results stress the fact that individuals are subject to distorted assessments and judgements in relation to the risks they are exposed to. for example, they tend to overestimate threats publicized by the media even if they are infrequent; to consider dangers dealt with voluntarily as more acceptable compared to those to which we are subjected or which are completely unprecedented or not very familiar; and to feel fear in the face of very vivid events (11 september), while at the same time being quite incapable of a historical memory that links these same events together. while the merit of this approach is that it accepts and recognizes the presence of the subjective aspect and the uncertainty factor in defining the concept of risk, pointing out the presence of non-rational responses, its limits lie, however, in its still strongly assuming the notion of probability.
namely, it ignores what is instead underlined by mary douglas, that is, the social and institutional context and the symbolic-cultural factors that influence the perception of threats, and it reproposes the idea of an essentially individualistic and de-contextualized social actor based on an abstract idea of rationality. finally, and what we are most interested in here, partly deriving from the latter aspect, the limit of this approach lies in its failure to account for the why, the deep reasons that pollute a correct perception and assessment of risks. in this connection, based on the reassessment of the role of emotions that has greatly questioned the hegemonic paradigm of rationality over the last few decades, some authors have underlined that cognitive and emotional factors have to go together in order to recognize the existence of a risk and to weigh up its possible consequences. they have put forward the idea that the information that enters our cognitive system can only have an effective impact on our action if it succeeds in creating images laden with emotion in our psyche. in other words, this means that we can be perfectly aware of particular threats without this involving us emotionally. put differently, only if this converts into the capacity to 'feel', to react emotionally and imagine its possible effects can our knowledge of the risk be effectively said to be knowledge, and therefore produce apt mobilization. now, the problem with regard to global risks seems to be prompted, as günther anders had already perfectly grasped in his diagnosis of fear in the age of technology, by the very imbalance between knowing and feeling. this imbalance is none other than one of the many variants of the psychic split that characterizes the contemporary subject and that anders, as has already been hinted, calls the 'promethean gap'. with this expression, he alludes in general to the detachment between the faculties, first of all between the power to do and the capacity to foresee, which characterizes contemporary homo faber, or rather homo faber who has become homo creator. paradoxically, what corresponds to the immense human power to produce and create permitted by developments in technology is man's inability to imagine its consequences: the faculties have got further and further away from each other so that now they can no longer see each other; as they cannot see each other, they no longer come into contact, they no longer do each other harm. in short: man as such no longer exists, there only exists he who acts or produces on one hand, and he who feels on the other; man as producer and man as feeling, and only these specialized fragments of men have a reality. no more are our imagination and our emotions equal to our unlimited power; at this point man's soul is irreparably 'outdated' with respect to what he produces and his colossal performances. in short, no more can we keep up to date with our promethean productivity and with the world that we ourselves have built: we are about to build a world that we cannot keep up with, and, in order to "catch" it, demands are made that go way beyond our imagination, our emotions and our responsibility.
this 'schizophrenia', which is where the fundamental pathology of our time resides, prompts the paradoxical and ambivalent combination of power and impotence, activity and passivity, knowledge and unawareness that exposes the contemporary prometheus not only to previously inconceivable risks, but also, and above all, to the impossibility of recognizing their destructive potential. this pathological drift appears particularly evident in the risk par excellence of the age of technology, which undermines not only the quality of individuals' lives (like in the case of the possible effects of the biotechnologies), but humankind's very survival on the planet: namely, the risk produced by the creation of the nuclear bomb, which we can recognize as the first effectively global challenge. before the horror of hiroshima and the spectre of humankind's self-destruction, anders says: we really have gained the omnipotence that we had been yearning for so long, with promethean spirit, albeit in a different form to what we hoped for. given that we possess the strength to prepare each other's end, we are the masters of the apocalypse. we are infinity. but the inability of our imagination to be equal to our unlimited power makes the latter mortally dangerous and transforms us into potential victims of what we ourselves have built: we, the men of today, are the first men to dominate the apocalypse, hence we are also the first to be endlessly subject to its threat. we are the first titans, hence we are also the first dwarves or pygmies - or whatever else we care to call ourselves, we beings with our collective deadline - we are no longer mortal as individuals, but as a group, whose existence is exposed to annulment. suffice it to think that it is impossible to see the bomb as simply a means; an impossibility generated by the fact that if someone used the bomb… the means would not be extinguished in the purpose, but, on the contrary, the effect of the presumed "means" would put an end to the purpose. and it would not be one effect, but an unforeseeable chain of effects, in which the end of our life would be but one link among the many. the gap between the power to do and the power to foresee, therefore, gives rise to the paradoxical coexistence of omnipotence and vulnerability, which exposes future humankind and the whole of civilization to the risk of extinction, thereby configuring the apocalyptic scenario of a 'world without man'. but the problem does not stop here. indeed, if men, even when faced with the loss of foresight and projectuality caused by their own action, were capable of recognizing the reality of the danger, a change of direction could be set in motion to restore their control over their future. or, to put it in terms that allow us to return to our theme, if people felt fear in the face of the spectre of self-destruction and the enormity of the risks ahead, they would probably manage to break that promethean spiral of unlimitedness and restore sense and purpose to their action. furthermore, this is the normative premise at the basis of hans jonas's whole line of argument in favour of an ethics of responsibility. he starts from a similar diagnosis to that of anders on the drifts of technological power and the threats, for the whole living world, produced by a 'finally unbound prometheus' to suggest what he defines as a 'heuristics of fear', as the precondition for ethically responsible action.
'[…] it is an anticipated distortion of man,' he says, 'that helps us to detect that in the normative conception of man which is to be preserved from that threat […]. we know the thing at stake only when we know that it is at stake.' this means that only the fear of 'losing the world' can push us to responsibly take on the problem of how to preserve it. i shall come back to the nexus between fear and responsibility later on. but the problem, which anders strongly underlines - showing, unlike jonas, its complex anthropological and psychic roots - is that today we are in the presence of the unavailability of fear; in actual fact it is paradoxically absent, due to the additional and deeper manifestation of the promethean gap which is the imbalance between knowing and feeling. indeed, there is no one who does not know what the bomb is and who does not know its possible, catastrophic consequences, but, anders adds, 'most people indeed only "know" it: in the emptiest of manners'. this asynchrony, anders points out, is something that pertains to human nature as a matter of fact. in general, in itself this is not bad, since it only shows that feeling is slower to transform. however, so to speak, it degenerates into a pathology when the gap between the faculties becomes too wide, as is happening today. as a consequence, it breaks all bonds and communication between them, and reduces contemporary men to the 'most dissociated, most disproportionate in themselves, most inhuman that have ever existed.' therefore, it is here, in the inadequacy of our emotional resources with respect to our productive power, that the anthropological root of our 'blindness to the apocalypse' lies. and this inadequacy, which is true for all the emotions in general, concerns fear first of all. everyone, in however confused a manner and in spite of the minimization strategies implemented by those who produce it, realizes that the bomb is not a pure means whose function ends in the fulfilment of a purpose, but a monstrous 'unicum' that, together with our lives and the lives of future generations, can put an end to all purposes tout court. yet, surprisingly, there is no fear: if today we were to seek out fear (angst),* real fear, in vienna, paris, london, new york - where the expression 'age of anxiety' is very much in use - the booty would be extremely modest. of course, we would find the word 'fear', in swarms even, in whole reams of publications […]. because today fear has become a commodity; and these days everyone is talking about fear. but those talking out of fear these days are very few. if we are to observe our present-day situation, we could even claim that the more fear becomes the subject of talk in the newspapers and mass media, the more it is withdrawn from emotional perception and anaesthetized by the reassuring urgency of routine and day-to-day concerns. the anaesthetizing mechanism also works in a manner directly proportionate to the enormity of the risk and the stake at play. (anders underlines its historical roots, such as trust in progress that prevents man from thinking of an 'end', and above all the configuration at the anthropological level of what he defines as the 'medial man', whose passive and conformist action ends up removing his ability to project himself into the future, together with all sense and purpose.)

* translator's note: anders only uses one term - angst - and does not distinguish between anxiety and fear. since, however, the meaning with which he uses the term angst coincides more with 'fear' in the acceptation put forward by elena pulcini, i have decided to translate it with 'fear' so as to distinguish it from 'anxiety'.

while it may be true that at best we are able to imagine our own death, but not that of tens or thousands of people, and that we may be able to destroy a whole city without batting an eyelid while not managing, however, to imagine the actual, terrible scenario of 'smoke, blood and ruins', it is inevitable that we are totally incapable of perceiving the destruction of all humankind: 'before the thought of the apocalypse, the soul remains inert. the thought remains a word.' even though today the end of humankind has entered the sphere of possibility and even though man himself is responsible for this, the psyche removes the thought of this possibility, thus preventing fear from arising. hence, we are illiterate in fear - 'analphabeten der angst' - and 'if one had to seek a motto for our age, the most appropriate thing to call it would be "the era of the inability to feel fear"'. anders's diagnosis concerning the anaesthetizing of fear and the imbalance between knowing and feeling seems to find a perfect correspondence in that distinctive defence mechanism that freud defined as 'denial of reality'. more complex and subtle than repression (verdrängung), which indicates the operation with which the subject pushes particular representations linked to an instinct into the unconscious, and which for freud becomes a sort of prototype of defence mechanisms, denial (verleugnung) causes the self, despite rationally recognizing a painful and difficult situation, to prevent this from reaching the emotional sphere. in other words, while repression is a defence against internal instinctual demands, denial is a defence against the claims of external reality, which is rationally recognized, but not emotionally felt or participated in. this converts into that distinctive ambivalence of 'knowing and not-knowing' which has recently been underlined at the sociological level. in his recent sociological valuation of the concept of 'denial', stanley cohen stresses this ambivalence, pointing it out as the most interesting side of the concept, and above all the most suited to accounting for a series of phenomena that characterize contemporary reality. explicitly drawing from psychoanalysis, whose worth he acknowledges - if nothing else against the reductive simplifications of cognitive psychology - as more than any other approach having grasped the elusive quality of the concept of denial, cohen offers a definition that first of all takes into account the meaning that is more general and common to the various forms: '[…] people, organizations, governments or whole societies are presented with information that is too disturbing, threatening or anomalous to be fully absorbed or openly acknowledged. the information is therefore somehow repressed, disavowed, pushed aside or reinterpreted. or else the information "registers" well enough, but its implications - cognitive, emotional or moral - are evaded, neutralized or rationalized away.' on the basis of this premise, cohen analyses the many forms of denial.
it can occur in good faith or be deliberate and intentional; it changes in relation to the subjects' different positions, that is, whether they are victims, guilty parties or witnesses; it depends on how the object is evaluated, which can be expressed through a simple refusal to acknowledge the facts, through a different interpretation or through a rationalization that aims to prevent its psychological, political and moral implications. but the most disconcerting and problematic form, since it can affect whole cultures - as is the case today - is what makes the subjects of the denial aware and unaware at the same time, that is, placed on the threshold between consciousness and unconsciousness. here they do have access to the reality, but in such a way as to ignore it, since it is too frightening or painful, or simply too unpleasant to accept. 'we are vaguely aware,' cohen says, 'of choosing not to look at the facts, but not quite conscious of just what it is we are evading. we know, but at the same time we don't know.' for example, much more than the intentional denial which is often implemented by political actors and institutional authorities to cover up regrettable facts and unpopular decisions, this is the frame of mind that most interests us and disturbs us, because it can explain the widespread and paradoxical indifference with which common people react to situations of suffering, atrocities and violence. (cohen notes, against cognitive psychology, that 'the cognitive revolution of the last thirty years has removed all traces of freudian and other motivational theories. if you distort the external world, this means that your faculties of information processing and rational decision making are faulty.' moreover, the splitting of the ego (ichspaltung) is the core of the freudian concept: 'freud,' says cohen, 'was fascinated by the idea that awkward facts of life could be handled by simultaneous acceptance and disavowal. they are too threatening to confront, but impossible to ignore. the compromise solution is to deny and acknowledge them at the same time.') tellingly, the focus of cohen's whole and documented analysis seems to be the figure of the 'passive bystander' who, when faced with other people's suffering (whether this is experienced in a direct manner, like a rape or episode of bullying, or is distant, like genocide or torture), defensively withdraws from all involvement, pretending not to see and not to know, inhibiting emotional reactions, minimizing the event's significance or changing channel if the information is transmitted through mass media images. hence the bystander withdraws from facing up to painful and embarrassing situations and avoids all possible mobilization. therefore, cohen seems, quite rightly, to rediscover denial above all as a reaction of defence in the face of other people's suffering where this assumes such proportions as not to be acceptable by the psyche. as a consequence, he finds it to be the root of the emotional indifference that today seems to be permeating contemporary societies.
nevertheless, as we have seen, anders's reflection allows us to grasp another aspect of denial that sharpens its paradoxical nature, since it concerns the tendency to ignore, wipe out or minimize something that not only concerns other people's destinies, but that threatens our own lives: like in the exemplary case of denying the global challenge par excellence, the nuclear risk. consistent with anders's diagnosis, a few decades ago the psychoanalysis of war had already reflected on the radical changes caused by the nuclear threat with respect to the traditional forms of war conflict, and hence explained, more or less indirectly, the psychic roots of this specific case of denial. while underlining the abstract or phantasmal nature of the danger at the objective level - due to the invisibility and intangibility of nuclear weapons, the distance of the target, as well as the bureaucratic 'normality' of those who hold the actual decision-making power - some authors have singled out the unprecedented nature of the nuclear conflict in its split and autonomization from the individual's instinctual sphere. that is, unlike traditional war, based on mobilizing aggressive instincts, nuclear war (its destructive potential) appears as a mechanical event, or rather, a psychologically unreal event, in which the 'enemy' himself, far from being the object of projective dynamics, becomes an inanimate abstraction with whom all emotional bonds are lost. this sort of 'dehumanization' of war, which affects the relationship with the other and the relationship with oneself to the same extent, thereby producing its 'devitalization', is at the root - together with the enormity of the risk and the impossibility to 'think the unthinkable' - of the denial of the danger, which immunizes individuals from emotional involvement, and, therefore, from true awareness. (as cohen puts it, 'the grey areas between consciousness and unconsciousness are far more significant in explaining ordinary public responses to knowledge about atrocities and suffering'.) it is telling that, in addition to denial, martin wangh spoke of a 'narcissistic withdrawal', alluding to the entropic and self-defensive strategy of individuals reduced to passive and indifferent 'spectators' of events: individuals who, with respect to events, preclude any form of effective reaction and thus inhibit the insurgence of fear at the outset. i will return to the 'spectator phenomenon' shortly. first, however, it is interesting to dwell on one of the - so to speak - more active variants of denial, which consists not only of withdrawal from a reality that is uncomfortable or painful for the psyche, sheltering in a sort of emotional indifference, but of lying to ourselves in order to believe something that does not respond to our rational evaluations, but to our desires. this is self-deception, a defence mechanism that has tellingly been defined as 'the most extreme form of the paradox of irrationality'. without going into the (at times muddled) analytical controversies relating to a concept that is without doubt slippery and problematic, we can, however, try to sum up the characteristics - shared by many authors - which prove fruitful in further extending the picture relating to the metamorphosis of fear in the global age.
self-deception is what pushes individuals to form a belief that contrasts with the information and proof at their disposal, since their desires end up interfering with their vision of reality and cause them to act in a different way from what their rational judgement would suggest. in other words, it consists of believing something because one desires it to be true, hence it converges, despite some differences, with martin wangh speaks of 'dehumanization' and 'devitalization' (meant as the impoverishment of the ability to feel) in 'narcissism in our time: some psychoanalytic re fl ections on its genesis," psychoanalytic quarterly ( ). the allusion is to the text by herman kahn, thinking about the unthinkable (new york: horizon press, ). martin wangh, "the nuclear threat: its impact on psychoanalytic conceptualizations," psychoanalytical inquiry , no. ( ). the expression ( zuschauer-phänomen ) is by martin wangh, "die herrschaft des thanatos," in zur psychoanalyse der nuklearen drohung. vorträge einer tagung der deutchsen gesellschaft für psychotherapie, psychosomatik und tiefenpsychologie , ed. carl nedelmann (göttingen: verlag für medizinische psychologie, ). david pears, "the goals and strategies of self-deception," in the multiple self , ed. elster, ; giovanni jervis, fondamenti di psicologia dinamica (milan: feltrinelli, ) and massimo marraffa, "il problema dell'autoinganno: una guida per il lettore," sistemi intelligenti , no. ( ): - . '[…] self-deception,' davidson says, 'is a problem for philosophical psychology. for in thinking about self-deception, as in thinking about other forms of irrationality, we fi nd ourselves tempted by opposing thoughts.' (donald davidson, "deception and division," in the multiple self , ed. elster, ). ibid., . the dynamic of wishful thinking. like denial, meant in its pure form, so to speak, self-deception implies ichspaltung , no matter what name may be given to what freud identi fi ed as the splitting of the ego. finally, like denial, it is an ambivalent phenomenon since it acts in that threshold between consciousness and unconsciousness which, as cohen stresses in this case too, creates a paradoxical situation of knowing and not-knowing. but while denial appears, as we have seen, effective in explaining the lack of perception and the anaesthetizing of fear in the face of the nuclear threat, selfdeception can prove pertinent in order to understand the complex emotional response that individuals give to the other global risk already brought up above: that is, the twofold environmental risk of global warming and the depletion of the ozone layer, which by no means seems to generate that mobilization of the whole of humankind which it would instead -urgently -require. from this point of view, the recently proposed de fi nition of 'global risks in the making' or 'potentially global' risks, which tends to distinguish them from the global risk par excellence represented by nuclear power, can prove to be extremely useful in explaining the however blurred difference in the subject's reaction and in further enlightening the phenomenology of fear. 
the indefinite nature that without doubt also pertains to the nuclear risk is greatly stressed here, due to the fact that global warming and depletion of the ozone layer have wider margins of uncertainty, created by their inertial nature and the impossibility to measure and foresee their future development, and therefore to calculate with certainty, together with their possible effects, the last deadline for possible countermeasures. their ungraspable and invisible nature, further fuelled by the difficulty of pointing the finger of blame, means that, in spite of the alarming international reports on the climate and reliable scientific forecasts on the devastating future damage, and moreover given increasing mass media coverage, individuals mostly seem to fail to suitably perceive the phenomenon. (in paradoxes of irrationality (in richard a. wollheim and james hopkins, eds., philosophical essays on freud (cambridge: cambridge university press, )), davidson upholds that in wishful thinking desire produces a belief without providing any proof in its favour, so that in this case the belief is evidently irrational. however, he underlines the differences between self-deception and wishful thinking: unlike the second, the first requires the agent's intervention, that is, the agent has to 'do' something to change his way of seeing things; in the second the belief always takes the direction of positive effect, never of negative, while in the first the thought that it triggers can be painful (see "deception and division", ff.). in this connection pears speaks of 'functional insulation', "goals and strategies of self-deception", ; davidson speaks of 'boundaries': '[…] i postulate such a boundary somewhere between any (obviously) conflicting beliefs. such boundaries are not discovered by introspection; they are conceptual aids to the coherent description of genuine irrationalities.' ("deception and division", - ). on self-deception and splitting of the ego, see herbert fingarette, self-deception (london: henley-routledge, ). see cohen, states of denial, ff. it is important to point out that the second problem (the one relating to the risk of ozone layer depletion) nevertheless found some solutions as of the montreal protocol in , made possible due to the fact that they did not require costs or relinquishments in terms of economics or lifestyle. d'andrea, "rischi ambientali globali e aporie della modernità".) instead, the phenomenon is often shrugged off with detached irony towards the excessive catastrophism, with resigned declarations of impotence, or with expressions of enlightened trust in the capacities of technology to repair the situation. in other words, despite being rationally known and recognized, the risk does not produce such emotional involvement as to give rise to effective answers. at most it produces a widespread and generic feeling of anxiety which ends up imploding, sucked in by the much more real worries of everyday life. the causes of this paradoxical situation can be traced first of all to within the same dynamic of fear of which, as i will recall, hobbes's diagnosis had grasped an essential aspect. namely, fear as a necessary and vital passion that allows us to respond to the immediate danger (of death) loses its efficacy when the danger, and the damage it could cause, are shifted to the future, that is, when a time gap inserts itself between the present action (based on destructive passions) and its possible consequences.
thus all certainty and inexorability are taken away from the evil, enabling individuals to imagine it as a remote and avoidable possibility, for which it makes no sense to mobilize themselves immediately. in other words, in this case, fear does not manage to overcome the passions of the present. hobbes's intuition is all the more valid in the case of global risks, whose possible damage is even more remote and does not concern current individuals, but future generations. that is, fear does not have the strength to change present action (and therefore the underlying desires and passions) when the damage that this action can cause is not an evil for ourselves but for 'others': anonymous, generic and distant in time. in short, by weakening fear, the future nature of the damage makes it easy for essentially self-preserving and narcissistic individuals to deceive themselves as to the actual extent of the risk and therefore to minimize or deny the possible consequences. in this case, the aim is not so much for individuals to defend themselves emotionally from events that are too painful to bear (as in the case of nuclear conflict), but to carry on with a manner of acting that allows them to legitimize and satisfy their current desires, preserve their lifestyles and not lose consolidated privileges. to once again recall the pathologies of the global self, we could say that the acquisitive voracity of homo creator, orientated towards unlimited growth, combines with the parasitic bent of a consumer individual anchored solely to the present, to prevent access - through the cunning of self-deception - to a correct perception of the catastrophic effects of climate change, global warming, the greenhouse effect and depletion of the ozone layer. this appears all the more paradoxical where these effects start to be dramatically visible: tropicalization of the climate, desertification, destruction of the ecosystem, lethal viruses and infective diseases are no longer only remote possibilities but the disturbing proof of environmental risks. by now scattered all over the planet, they affect whole geographical areas and populations, damaging the illusion of individuals' and states' immunity more and more. indeed, despite not just abstract information and forecasts, but a more and more invasive state of affairs that is starting to concern them at close quarters, individuals - guaranteed and supported by the instrumental interests of local politics and the global economy - prefer to deceive themselves in order not to pay the costs of relinquishing their current desires, assets and pleasures; they are further eased, in this self-defensive operation, by the morally innocent, innocuous and banally everyday nature of the action that produces the risks. moreover, the absence of a 'productive' fear, inhibited by denial and self-deception, is not belied by the cyclical outbursts of panic and collective hysteria in the face of the sudden appearance of threats (as has always been the case, from chernobyl to sars and bird flu). on the contrary, the absence and the excess of fear are nothing but two sides of the same coin, the two extreme and 'unproductive' manifestations of what i defined as global fear. both denial and self-deception leave individuals in the passive position of spectators of events. thus they are enclosed in the immunitarian circuit of a self-defensive and self-preserving individualism which anaesthetizes fear and is incapable of converting it into effective action, practice or political participation.
alongside the two extroverted pathologies, so to speak, of unlimited individualism, represented by the insatiable voracity of the consumer individual and the omnipotence of homo creator, appears a third, paradoxically introverted configuration: a passive and impotent individual, who helplessly watches the destructive effects of his own action, over which he seems to have lost all capacity for orientation and control. against the loss of objective spaces of protection and security, increasingly eroded by the global diffusion of the risks, he seems to seek shelter, as i have already hinted, in a sort of interior immunity, entrenching himself in the emotional indifference that is just one of the many manifestations of narcissism. in addition, the yearning for immunity becomes more tenacious and obstinate the more it is felt to be ineffective and illusory. thus a new condition is outlined which, to recall the metaphorical figures proposed by hans blumenberg in his shipwreck with spectator, is neither the premodern and 'lucretian' condition of the spectator watching the shipwreck from a safe place, sheltered from the danger, nor the modern and 'pascalian' condition of being the actors of our own lives, 'être embarqués', involved in the things of the world and ready to put ourselves at stake first of all by recognizing the constitutive precariousness of the human condition and accepting the very risk of existence. while modernity had ratified the decline of the spectator figure, and enhanced the moments of practice and action, involvement and commitment; and while late modernity had radicalized his condemnation by emphasizing the need to expose oneself to risk and accept the uncertainty and fluidity of the human condition, the global age seems to be objectively bringing the spectator up-to-date, in a way that nevertheless coincides with a deep and disturbing change with respect to the figure of the lucretian wise man. the erosion of boundaries and disappearance of an 'elsewhere' - redrawing global space, cancelling out the distinction between inside and out - is turning into the loss of free areas from where the shipwreck can be observed. at this point, due to the end of every real guarantee of immunity, deprived of the possibility of a safe harbour where he can feel sheltered from the world's dangers, the global self withdraws into the only space apparently able to protect him from events and threats that he is not able to deal with: namely, the wholly interior space of an emotional indifference, an anaesthetizing of emotions, generated by implementing sophisticated and for the most part unconscious defence mechanisms. in other words, the spectator figure is undergoing a process of interiorization, which replaces the spatial distance from the shipwreck and the contemplative safety of the lucretian subject with the apathetic extraneousness and obstinate blindness of one who refuses to recognize the very risk of the shipwreck, and encloses himself in the entropic space of an inert solitude. moreover, the spectator phenomenon seems to pervade the whole social structure, due to the spectacularization of reality that, as guy debord had already masterfully diagnosed a few decades ago, deeply upsets the very nature of social relations.
by denouncing the erosion of the boundaries between real and virtual and the pervasive power of images (mass media images first of all), and by diagnosing life's 'total colonization' by commodification processes and the indistinctive overlapping of true and false as the effects of the 'society of the spectacle', debord had indeed grasped the spectator figure as the symptom and symbol of a new form of alienation that invades the individual's whole relationship with the world. passiveness and submission to the totalitarianism of images, prioritization of appearance, loss of contact with one's desires and genuine needs, atomism and isolation are among the most evident and disturbing characteristics of the spectator-individual, who thus ends up losing all capacity to be involved and to grasp reality. in short, the emotional indifference in which individuals shelter in order to cancel out the awareness of the risks surrounding them, unconsciously implementing powerful defence mechanisms, seems to be a sort of inevitable outcome of a widespread anthropological condition. or rather, it seems to be the extreme form of a general tendency towards apathy and inertia, produced by a spectacular society that empties reality of its contents and thus deprives individuals of pathos and action. suffice it to think of the de-realizing effects, with respect to the effective drama of events, produced by mass media images (for example the first gulf war), or the narcotizing addiction that they cause to dangers and catastrophes of all kinds (from tsunamis to sars). the images deprive events of the flesh and blood of the experience and neutralize them in the aseptic and equalizing space of the screen. however, the problem today is no longer the subject's passivization and atomization alone, nor his a-pathetical detachment from reality: aspects which, moreover, sociological reflection on narcissism had already underlined some time ago, and to which the most recent and sagacious sociological diagnoses do not fail to draw attention. the problem, as we have seen, regards above all the negation of reality and the possible destructive effects of this denial on the very survival of individuals and the whole of humankind. by withdrawing into the immunitarian space of a self-defensive apathy, the global spectator performs a dangerously illusory operation which precludes the possibility to perceive and understand what the unprecedented risk of the global age is: namely, that he himself is the potential victim of events from which there is no shelter, or rather, from which there is no other possible shelter than active and universal mobilization. while it may be true that the hallmark of global challenges is that they cross boundaries and no perimeter can be drawn around or circumscribe them, it is also true that everyone, in every corner of the planet, is always potentially exposed to their effects, that everyone is always potentially a victim of a shipwreck which, for the first time, could affect and sweep away humankind and all living beings. by anaesthetizing fear, the denial (and self-deception) strategy paradoxically ends up betraying the very same purpose that it had been implemented for: namely, self-preservation. or rather, in order to pursue an entropic and defensive self-preservation that preserves them from all emotional and active involvement, individuals are undermining not only the quality of their lives, but the very preservation of humankind and the world.
see antonio scurati, televisioni di guerra. il conflitto del golfo come evento mediatico e il paradosso dello spettatore totale (verona: ombre corte, ), who observes how the increase in media exposure of the war phenomenon corresponds to a lesser ability, on the part of the spectator, to grasp its reality. as a result, on the part of the citizen there is less possibility to decide and act. in other words, the 'total visibility' offered by the television medium corresponds, in an only apparent paradox, to the blindness and impotence of the 'total spectator'. this unintentionally nihilistic outcome could perhaps be interpreted as a radical and extreme manifestation of the immunitarian paradigm recognized as the very emblem of modernity, owing to which the preservation of life is paradoxically turned around into its negation (see esposito, immunitas). however, what i would like to stress, to go back to anders's diagnosis, is the fact that - in this case at least - this worrying reversal originates in the pathologies of feeling and the denial of fear, which prevent individuals from recognizing their paradoxical condition of spectators and victims at the same time. denial, however, is just one of the unproductive metamorphoses of fear in the global age, and only one of the strategies that the global individual uses to counter the anxious perception of new risks. denial sums up the individualistic and implosive response to the indefinite and unintentional threats produced by techno-economic globalization. in parallel to this there emerges, as i had mentioned, another defence strategy, which responds to what is perceived as the second, fundamental source of danger, essentially generated by economic-cultural globalization: that is, defence against the other. this strategy is specular to the first since it converts into an excess rather than an absence of fear, and i have suggested defining it as communitarian and explosive (here there is a generic allusion to bauman's 'explosive communities' in liquid modernity). it is based on reducing insecurity and indefiniteness through the defence mechanism of projection: namely, the fear is displaced onto indirect and specious objects since these appear easier to define and identify. many of the ethno-religious conflicts that are traversing the planet can at least in part be traced back to this basic defence mechanism which converts indefinite anxiety into definite fear. in this case too, we are dealing with a strategy that is anything but new since, as we will see, it results in the classic mechanism of building a 'scapegoat' (of great use for the issues that follow is the essay by stefano tomelleri, "il capro espiatorio. la rivelazione cristiana e la modernità," studi perugini, no. ( ): - ). however, the novelty lies in the fact that, as in the denial strategy, this strategy seems to be resulting in substantial ineffectiveness. if, as suggested to us by rené girard's enlightening diagnosis, the fundamental goal of creating scapegoats has always been, since the origins of civilization, to keep check on and resolve violence in defence of a given community, today we are instead faced with an escalation in violence which attests to the substantial failure of the scapegoat dynamic. through a fascinating thesis that i can only briefly recall here, girard claims that in truth this loss of effectiveness has distant roots, since it coincides with the end of the processes which made violence ritual and sacred, and with the revelation of the
victimage mechanism brought on by the advent of christianity. in other words, while archaic societies had entrusted the rite of sacrificing the scapegoat with the function of providing a remedy to internal violence in order to found and preserve social order and peaceful coexistence among men, the revelation of christ radically damaged this mechanism since, by disclosing the victim's innocence, for the first time it made people aware of the victimizing and persecutory dynamics. by unmasking the nexus between violence and the sacred, the christian message led to the breakdown of the mythical-ritual universe, and placed people before the unavoidable truth of their violence. thus it weakened the possibility of resolving the violence through the sacrificial mechanism and opened totally new scenarios, affected by a fundamental ambivalence. on one hand, by depriving men of all external justification for their violence, the christian revelation of the victim's innocence opened up the possibility of renouncing the scapegoat logic and resolving the problem of the social bond, without any exclusion or sacrifice; on the other hand, in the absence of ritual antidotes and their power to create order, it exposed men to the spreading of violence and the persistence - in more ambiguous, disguised and clandestine forms - of the victimage mechanism. that this second scenario is the one which, unfortunately, has ended up prevailing is manifestly undeniable; and, paradoxically, it can be pinpointed as originating above all in modernity. while it may be true that modernity - the time of rights, democracy and equality - seems to offer the possibility of transforming violence into 'soft', peaceful and even emancipatory forms of competition and rivalry, it is also true that, for the same reasons, it can provide a breeding ground which favours the heightening of violence. indeed modernity produces an amplification of the mimetic dynamic that girard recognized as the constitutive source of violent conflictuality among men. as has been underlined, the same equality that, à la tocqueville, can be interpreted as a loss of differences, frees the mimetic desire, which becomes unlimited and inevitably exacerbates rivalry among people. in other words, in a society of equals the desire to be according to the other, which pushes the mimetic actor to see the other as model and rival at the same time, triggers a spiral of competitive comparison. even the smallest difference becomes the opportunity for resentment, envy and hate, and can always provide the opportunity for violent clashes. while on one hand democratic indifferentiation and, we could add, narcissistic and postmodern intolerance towards every difference - which tocqueville had prophetically diagnosed - provoke the continuance of rivalry and conflict, on the other hand the sacrificial dynamic, to which premodern societies had entrusted the function of keeping check on violence, seems to have lost its traditional efficacy due to its irreversible disclosure. this means that modern and contemporary societies are exposed to a radical 'crisis of the sacrificial system' which, since it is impossible to find a solution in the scapegoat mechanism, can result in a multiplication of violence and its manifestation in increasingly crude and destructive forms. the loss of the victimage mechanism's efficacy, due to the deritualization process, does not equate to its disappearance, however.
on the contrary, girard once again observes that phenomena of 'sacrificial substitutions' reappear 'in a shameful, furtive, and clandestine manner' so as to avert moral condemnation (and self-condemnation). they take on the shape of psychological violence which is easier to conceal, or they re-explode in the exacerbated form of immolating victims to evil ideologies, as was the case of the genocides in the twentieth century. these mechanisms continue in our world usually as only a trace, but occasionally they can also reappear in forms more virulent than ever and on an enormous scale. an example is hitler's systematic destruction of european jews, and we see this also in all the other genocides and near genocides that occurred in the twentieth century. of course the reference to the nazi genocide is not random, but extremely emblematic of the modern and contemporary reappearance of the victimage mechanism in spite of its disclosure. a first formulation of this can be found in the diagnosis of totalitarianism that franz neumann was already suggesting in the s, as he traced its psychic origins back to the transformation of fear into 'persecutory anxiety'. every time, neumann says, over the course of history a particular social group (whether it can be defined on the basis of class, religion or race) feels threatened by objective dangers which, together with material survival, compromise its prestige and identity, the resulting anxiety is displaced onto groups and people who are endowed ad hoc with the required characteristics, and the guilt is made to converge on them. if we take up the freudian distinction between 'realistic anxiety' and 'neurotic anxiety', neumann shows how fear and uncertainty are transformed into persecutory anxiety through the projective and hence specious creation of an enemy who becomes the subject of hate and aggression. as a result, the masses threatened with disintegration can rediscover their internal cohesion. in the case of nazism and the persecution of the jews, political and ideological manipulation linked up to this social dynamic, took advantage of the mass anxiety and pushed the masses towards 'caesaristic' and regressive identification with a leader libidinally attributed the task of resolving the anxiety by expelling the evil and its presumed carriers. by recognizing the victimage mechanism as originating in the persecutory transformation of anxiety, neumann allows us to see its emotional roots, which girard evidently considers less essential for his so-to-speak ontological diagnosis of violence. however, at the same time, while neumann particularly stresses the totalitarian outcomes of the scapegoat dynamic, girard underlines its persistence in 'all the phenomena of nonritualized collective transference that we observe or believe we observe around us.' although deritualized - and indeed all the more violent for this precise reason - the victimage mechanism continues to act in the same modern democratic societies in all creeping and disguised phenomena of exclusion and discrimination, or in the cyclical explosions of reciprocal aggression and disdain that are fuelled by identity conflicts: we easily see now that scapegoats multiply wherever human groups seek to lock themselves into a given identity - communal, local, national, ideological, racial, religious, and so on.
evidently, here we are coming back to the topic of identity conflict which, as we have seen, is proliferating inside and outside the west, bringing the scapegoat strategy back up-to-date: a strategy which becomes all the more aggressive the more the perception of the threat grows in a global society. by eroding territorial and cultural boundaries, globalization is producing, first of all in western societies, a disturbing proximity of the other. as a result, the other can increasingly be identified with the simmelian figure of the 'stranger within', who challenges the order and cohesion of a given community through a swarming and liminal presence that is felt, as suggested by mary douglas, as potentially contaminating. (neumann stresses the regressive nature of this identification mechanism for the very masses who implement it, since it involves alienation and the relinquishment of one's self: 'since the identification of the masses with the leader is an alienation of the individual member, identification always constitutes a regression' (anxiety and politics, ). 'caesaristic identifications may play a role in history when the situation of masses is objectively endangered, when the masses are incapable of understanding the historical process, and when the anxiety activated by the danger becomes neurotic persecutory (aggressive) anxiety through manipulation.' (ibid., ). it is interesting to see how neumann indeed also alludes to the unconscious nature of the persecutory dynamic: 'hatred, resentment, dread, created by great upheavals, are concentrated on certain persons who are denounced as devilish conspirators. nothing would be more incorrect than to characterize the enemies as scapegoats […] for they appear as genuine enemies whom one must extirpate and not as substitutes whom one only needs to send into the wilderness.' (ibid., ). girard, i see satan fall like lightning, . ibid., .) coming forth in response to the siege of a hybrid and unstemmable multitude that is penetrating the protected spaces of our identity citadels is the ancestral fear of a 'contamination' endangering the need for 'purity' upon which, douglas says, every culture and civilization builds its reassuring separations and classifications. the other (the stranger, he who is different, the migrant, the illegal immigrant) becomes the target upon whom to displace our fears, upon whom to project a persecutory anxiety that transforms him into the person responsible for the dangers threatening a society that is increasingly deprived of the traditional control structures. hence this enables that blaming process which is indispensable for social cohesion and which, however, the anarchic and anonymous logic of globalization seems to be progressively eroding. but since it is no longer possible to rely on ritual expulsion practices or strategies to confine the other to a spatial and territorial elsewhere clearly divided by a definite boundary that traces the separation between an inside and an outside, the exclusion mechanism becomes interiorized and acts at an eminently symbolic level. the exclusion dynamic, as has been underlined, is shifted into the conscience: 'defence and exclusion, no longer possible towards the outside, will be shifted into the conscience, the imagination, the social mythologies and into the self-evident that these hold up.'
thus immunity is ensured through dehumanization processes that transform the stranger within (the metoikos) into an 'inside being' in such a way that he remains an 'outside being' all the same. all this can take place in the insidious and hidden forms of psychological violence and everyday discrimination towards those who have crossed the territorial boundaries of a state and broken the taboo of distance and separation, therefore representing a constant challenge to consolidated privileges and to the 'purity' of identity. or it can occur through cyclical collective mobilization against the weak and marginalized in the attempt to deal with insecurity by displacing the fear onto problems of personal safety, which politics does not then hesitate to exploit, in self-legitimation, in the name of defending public order. but, as we have already seen, it can also convert into a real and proper 'attack on the minorities', in which it is perhaps legitimate to recognize, as suggested by arjun appadurai, the distinctive form of violence spreading to the global level. when global insecurity is added to the delirious fantasy of national purity which appadurai defines as an 'anxiety of incompleteness', the majorities in every single state whose hegemony is threatened tend to transform into 'predatory identities'. their aim becomes to defend the purity of the ethnos by eliminating the element of disturbance represented by the 'minor differences'. the minorities 'are embarrassments to any state-sponsored image of national purity and state fairness. they are thus scapegoats in the classical sense.' more specifically, in the global age they 'are the major site for displacing the anxieties of many states about their own minority or marginality (real or imagined) in a world of a few megastates, of unruly economic flows and compromised sovereignties.' from iraq to ex-yugoslavia, from indonesia to chechnya, from palestine to rwanda, to the emblematic case of the clash between hindus and muslims within a modern democracy like india, the victimage mechanism seems to reassert itself with a fresh violence that tellingly - testimony to the obsession with purity at its origin - seems to repeat itself in particular towards the body. indeed, as appadurai underlines by taking on douglas's perspective, the body becomes subject to unheard-of violations and atrocities (bodies massacred, decapitated, tortured, raped) in view of punishing the minorities for the fact that they 'blur the boundaries between "us" and "them," here and there, in and out, healthy and unhealthy, loyal and disloyal, needed but unwelcome.' nevertheless, it is precisely this obsessive, punitive and purificatory nature that announces the danger that the violence may assume an unstemmable drift. far from producing a stop to the violence, the scapegoat strategy causes its proliferation, through a sort of perverse upping of the ante that seems to bring the brutality of archaic practices, such as sacrifice, and of the starkest materiality back inside the abstract and impersonal space of globalization. but that is not all. today the spiral of violence is further fuelled by a new factor that upsets the logic - to date essentially one-way - of the persecutors-victims relationship.
what happens, unlike for example in the emblematic case of nazism, is that the other tries to overturn his position as victim, and in turn becomes the persecutor, giving rise to a dynamic of hostility and aggression that potentially becomes unlimited owing to its reciprocal and specular nature. suffice it to think of islamic terrorism and the projection it puts upon the west as the image of the other and of evil, against which, by fuelling passions of resentment, a compact and endogamous us is condensed together and built. indeed this proves that the scapegoat, as girard warns, is not necessarily embodied only in the weak and oppressed but also in the rich and powerful. hence, in the grip of dehumanization on one hand and demonization on the other, the world becomes a theatre, through the reciprocal invention of an enemy, of an escalation in violence that has much to do with the persecutory metamorphosis of insecurity and anxiety and very little to do with a presumed 'clash of civilizations'. shifted to the inner self, the victimage mechanism continues to act, hidden from view. nonetheless, it ultimately becomes ineffective since it fails in its original purpose to resolve the fear and keep a check on violence. orphaned of ritualization processes and deprived of an 'elsewhere' that permits the other's spatial and territorial exclusion, the construction of the enemy/victim generates forms of identity cohesion that are as aggressive as they are regressive, fuelled by a reciprocal persecutory projection. far from restoring cohesion and security to a given community, the scapegoat dynamic gives rise to endogamous and reciprocally exclusive processes of building an us, whose foremost and manifest effect is to form what i have defined as immunitarian communities: whether they are the 'voluntary ghettoes' and 'communities of fear' that explode cyclically in a west frightened by the siege of the other and anything but free from regressive phenomena, or ethno-religious communities entrenched around the obsession of identity and homogeneity, willing to reactivate atrocious forms of excluding the other, or lastly global communities that come together around the war/terrorism polarization. the metamorphosis of fear in the global age therefore seems to confirm, at the emotional level, the pathological split between an unlimited individualism and an endogamous communitarianism, which originates in the implementation of defence mechanisms leading not only to the polarization of an absence and excess of pathos, but also, it needs to be stressed, to their substantial inefficacy. on one hand, the denial of fear, we have seen, pushes individuals towards forms of apathy and narcissistic entropy that prevent them from recognizing the new risks produced by global challenges. as a consequence, this produces the individuals' incapacity to perceive their unprecedented condition of spectators and potential victims at the same time, and fuels the illusion of immunity: which means that in the name of entropic self-preservation we end up delivering the whole of humankind to the danger of self-destruction. on the other hand, the persecutory conversion of fear generates perverse and endogamous forms of alliance and solidarity, which thereby result in the reactivation of destructive communities driven by 'primordial loyalties'. this gives rise to the explosive drift of identity conflicts and to an unlimited escalation of violence at the planetary level.
between self-obsession and us-obsession, as the specular polarities of the same immunitarian strategy, we run the risk of not grasping the chance that the global age could actually be capable of offering through the very transformations that it produces and the very challenges that it contains. on one hand, as we will see, the risks that are bearing down on humankind for the first time mean we can think of the latter as a new subject, as a set of individuals linked by their common vulnerability and weakness. therefore, they are able to take care of the world in the sense of the planet, the 'loss' of which would coincide with the disappearance of the only dwelling of living beings that we know of. on the other hand, the multiplication of differences and the slide of the idea of 'other' towards the notion of 'difference', which can neither be assimilated nor expelled into an elsewhere, for the first time makes it possible to rethink the social bond as the solidaristic coexistence of a plurality of individuals, genders, cultures, races, religions, capable of forming a 'world', à la arendt, since they are capable of recognizing not only the necessity but also the potential vitality of reciprocal contamination. these real possibilities are, however, only a chance. insofar as it is a chance, the subjects have the task of knowing how to grasp it. to recall a felicitous suggestion by andré gorz, we could say that to profit from the chance in the first place means 'to learn to discern the unrealized opportunities which lie dormant in the recesses of the present'; or, in a word, to lay a wager on the ability to build alternative scenarios and create possibilities that may not yet have been taken up but are still latent. the expression is inspired by georges bataille who, as already remembered above, proposes the idea of chance meant as the 'possibility of openness'; see on nietzsche (london: athlone, ), originally published as "sur nietzsche," in oeuvres complètes, vol. (paris: gallimard, ). gorz, reclaiming work, . girard, things hidden since the foundation of the world, originally published as des choses cachées depuis la fondation du monde: 'we haven't given up having scapegoats, but our belief in them is percent spoiled. the phenomenon appears so morally base to us, so reprehensible, that when we catch ourselves "letting off steam" against someone innocent, we are ashamed of ourselves.' girard, i see satan fall like lightning, originally published as je vois satan tomber comme l'éclair. in the democratic and authoritarian state, neumann says, there must always be a core of truth that makes this choice particularly dangerous: so in the case of the jews, the core of truth is given by their being 'concrete symbols of a so-called parasitical capitalism, through their position in commerce and finance'; on this see part ii. mary douglas, purity and danger. an analysis of concepts of pollution and taboo. douglas underlines that risk itself becomes a resource at the moral and political level and speaks of a 'forensic theory of danger': 'disasters that befoul the air and soil and poison the water are generally turned to political account: someone already unpopular is going to be blamed for it.' (risk and blame, ). risk and blame. appadurai continues: minorities 'are metaphors and reminders of the betrayal of the classical national project. and it is this betrayal - actually rooted in the failure of the nation-state to preserve its promise to be the guarantor of national sovereignty - that underwrites the worldwide impulse to extrude or to eliminate minorities.' originally published as orrorismo. fear of small numbers, chap. ,
again underlines the nexus between the abstract logic of globalization and the brutality of physical violence - ; for an interesting treatment of the topic see stefano tomelleri.

key: cord- - qhgeirb authors: busby, j s; onggo, s title: managing the social amplification of risk: a simulation of interacting actors date: - - journal: j oper res soc doi: . /jors. . sha: doc_id: cord_uid: qhgeirb a central problem in managing risk is dealing with social processes that either exaggerate or understate it. a longstanding approach to understanding such processes has been the social amplification of risk framework. but this implies that some true level of risk becomes distorted in social actors' perceptions. many risk events are characterised by such uncertainties, disagreements and changes in scientific knowledge that it becomes unreasonable to speak of a true level of risk. the most we can often say in such cases is that different groups believe each other to be either amplifying or attenuating a risk. this inherent subjectivity raises the question as to whether risk managers can expect any particular kinds of outcome to emerge. this question is the basis for a case study of zoonotic disease outbreaks using systems dynamics as a modelling medium. the model shows that processes suggested in the social amplification of risk framework produce polarised risk responses among different actors, but that the subjectivity magnifies this polarisation considerably. as this subjectivity takes more complex forms it leaves problematic residues at the end of a disease outbreak, such as an indefinite drop in economic activity and an indefinite increase in anxiety.

recent events such as the outbreaks in the uk of highly pathogenic avian influenza illustrate the increasing importance of managing not just the physical development of a hazard but also the social response. the management of hazard becomes the management of 'issues', where public anxiety is regarded less as a peripheral nuisance and more as a legitimate and consequential element of the problem (leiss, ). it therefore becomes as important to model the public perception of risk as it does to model the physical hazard - to understand the spread of concern as much as the spread of a disease, for example. in many cases the perception of risk becomes intimately combined with the physical development of a risk, as beliefs about what is risky behaviour come to influence levels of that behaviour and thereby levels of exposure. one of the main theoretical tools we have had to explain and predict public risk perception is the social amplification of risk framework due to kasperson et al ( ). as we explain below, this framework claims that social processes often combine to either exaggerate or underplay the risk events experienced by a society. this results in unreasonable and disproportionate reactions to risks, not only among the lay public but also among legislators and others responsible for managing risk. but since its inception the idea of a 'real', objective process of social risk amplification has been questioned (rayner, ; rip, ) and, although work in risk studies and risk management continues to use the concept, it has remained problematic. the question is whether, if we lose the notion of some true risk being distorted by a social process, we lose all ability to anticipate and explain perplexing social responses to a risk event in a way that is informative to policymakers.
we explore this question in the context of risks surrounding the outbreaks of zoonotic diseases - that is, diseases that cross the species barrier to humans from other animals. recent cases of zoonotic disease, such as bse, sars, west nile virus and highly pathogenic avian influenza (hpai), have been some of the most highly publicised and controversial risk issues encountered in recent times. many human diseases are zoonotic in origin but in cases such as bse and hpai the disease reservoirs remain in the animal population. this means that a public health risk is bound up with risk to animal welfare, and often risk to the agricultural economy, to food supply chains and to wildlife. this in turn produces difficult problems for risk managers and policymakers, who typically want to avoid a general public amplifying the risk and boycotting an industry and its products, but also want to avoid an industry underestimating a risk and failing to practice adequate biosecurity. the bse case in particular has been associated with ideas about risk amplification (eg, eldridge and reilly, ) and continues to appear in the literature (lewis and tyshenko, ). other zoonoses, such as chronic wasting disease in deer herds, have also been seen as recent objects of risk amplification (heberlein and stedman, ). in terms of the social reaction, not all zoonoses are alike. endemic zoonoses like e. coli do periodically receive public attention - for example following outbreaks at open farms and in food supply chains. but it is the more exotic zoonoses like bse and hpai that are more clearly associated with undue anxiety and ideas about social risk amplification. yet these cases also showed how uncertain the best, expertly assessed, supposedly objective risk level can be, and this makes it very problematic to retain the idea of an objective process of social risk amplification. such cases are therefore an important and promising setting for exploring the idea that amplification is only in the heads of social actors, and for exploring the notion that this might nonetheless produce observable, and potentially highly consequential, outcomes in a way that risk managers need to understand. our study involved two main elements, the second of which is the main subject of this article:

1. exploratory fieldwork to examine how various groups perceived risks and risk amplification in connection with zoonoses like the avian influenza outbreaks in ;
2. a systems dynamics simulation to work out what outcomes would emerge in a system of social actors who attributed amplification to other actors.

in the remainder of the paper we first outline the fieldwork and its outcomes, and then describe the model and simulation. although the article concentrates on the latter, the two parts provide complementary elements of a process of theorising (kopainsky and luna-reyes, ): the fieldwork, subjected to grounded analysis, produces a small number of propositions that are built into the systems dynamics model, and the model both operationalises these propositions and explores their consequences when operationalised in this way. the modelling is a basis for developing theory that is relevant to policy and decision making, rather than supporting a specific decision directly. a discussion and conclusion follow.
traditionally, the most problematic aspect of public risk perception has been seen as its sometimes dramatic divergence from expert assessments-and the way in which this divergence has been seen as an obstacle both to managing risks specifically and to introducing new technology more generally. this has produced a longstanding interest in the individual perception of risk (eg, slovic, ) and in the way that culture selects particular risks for our attention (eg, douglas and wildavsky, ) . it has led to a strong interest in risk communication (eg, otway and wynne, ) . and it has been a central theme in the social amplification of risk framework (or sarf) that emerged in the late s (kasperson et al, ) . the notion behind social risk amplification, developed in a series of articles (kasperson et al, ; renn, ; burns et al, ; kasperson and kasperson, ) , is that a risk event produces signals that are processed and sometimes amplified by a succession of social actors behaving as communication 'stations'. they interact and observe each other's responses, sometimes producing considerable amplification of the original signal. a consequence is that there are often several secondary effects, such as product boycotts or losses of institutional trust, that compound the effect of the original risk event. a substantial amount of empirical work has been conducted on or around the idea of social amplification, for example showing that the largest influence on amplification is typically organisational misconduct (freudenberg, ) . it continues to be an important topic in the risk literature, not least in connection with zoonosis risks (eg, heberlein and stedman, ; lewis and tyshenko, ). there has always been a substantial critique of the basic idea of social risk amplification. its implication that there is some true or accurate level that becomes amplified is hard to accept in many controversial and contested cases where expertise is lacking or where there is no expert consensus (rayner, ) . the phenomenon of 'dueling experts' is common in conflicts over environmental health, for instance (nelkin, ) . more generally, the concept of risk amplification seems to suggest that there is a risk 'signal' that is outside the social system and is somehow amplified by it (rayner, ) . this seems misconceived when we take the view that ultimately risk itself is a social construction (hilgartner, ) or overlay on the world (jasanoff, ) . and it naturally leads to the view that contributors to the amplification, such as the media (bakir, ) , need to be managed more effectively, and that risk managers should concentrate on fixing the mistake in the public mind (rip, ) , when often it may be the expert assessment that is mistaken. it thus becomes hard to sustain the idea that there is a social process by which true levels of risk get distorted. and this appears to undermine the possibility that risk managers can have a way of anticipating very high or very low levels of social anxiety in any particular case. once risk amplification becomes no more than a subjective judgment by one group on another social group's risk responses, it is hard to see how risk issues can be dealt with on an analytical basis. however, subjective beliefs about risk can produce objective behaviours, and behaviours can interact to produce particular outcomes. and large discrepancies in risk beliefs between different groups are still of considerable interest, whether or not we can know which beliefs are going to turn out to be more correct. 
in the remainder of this article we therefore explore the consequences of the idea that social risk amplification is nothing more than an attribution, or judgment that one social actor makes of another, and try to see what implications this might have for risk managers based on a systems dynamics model. before this, however, we describe the fieldwork whose principal findings were meant to provide the main structural properties of the model. the aim of the fieldwork was to explore how social actors reason about the risks of recent zoonotic disease outbreaks, and in particular how they make judgments of other actors systematically amplifying or attenuating such risks. this involved a grounded, qualitative study of what a number of groups said in the course of a number of unstructured interviews and focus groups. it follows the general principle of using qualitative empirical work as a basis for systems dynamics modelling (luna-reyes and andersen, ) . focus groups were used where possible, for both lay and professional or expert actors; individual interviews were used where access could only be gained to relevant groups (such as journalists) as individuals. the participants were selected from a range of groups having a stake in zoonotic outbreaks such as avian influenza incidents and are listed in table . the focus groups followed a topic guide that was initially used in a pilot focus group and continually refined throughout the programme. they started with a short briefing on the specific topic of zoonotic diseases, with recent, well-publicised examples. the professional and expert groups were also asked to explain their roles in relation to the management of zoonotic diseases. participants were then invited to consider recent cases and other examples they knew of, discuss their reactions to the risks they presented, and discuss the way the risks had been, or were being, managed. their discussions were recorded and the recordings transcribed except in two cases where it was only feasible to record researcher notes. the individual interviews followed the same format. analysis of the transcripts followed a typical process of grounded theorising (glaser and strauss, ) , in which the aim was to find a way of categorising participants' responses that gave some theoretical insight into the principle of risk amplification as a subjective attribution. the categories were arrived at in a process of 'constant comparison' of the data and emerging, tentative categories until all responses have been satisfactorily categorised in relation to each other (glaser, ) . in glaser's words, 'validity is achieved, after much fitting of words, when the chosen one best represents the pattern. it is as valid as it is grounded'. our approach also drew on template analysis (king, ) in that we started with the basic categories of attributing risk amplification and risk attenuation, not a blank sheet. a fuller account of the analysis process and findings is given in a parallel publication (busby and duckett, ) . the first main theme to emerge from the data was the way in which actors privilege their own views, and construct reasons to hold on to them by finding explanations for other views as being systematically exaggerated or underplayed. it is surprising in a sense that this was relatively symmetrical. 
we expected expert groups to characterise lay groups as exaggerating or underplaying risk, but we also expected lay groups to use authoritative risk statements from expert groups and organisations of various kinds as ways of correcting their own initial and tentative beliefs. but there was no evidence for this kind of corrective process. the reasons that informants gave for why other actors systematically amplify or attenuate risk were categorised under five main headings: cognition, or the way they formed their beliefs; disposition, or their inherent natures; situation, or the particular circumstances; strategy, or deliberate, instrumental action; and structure, or basic patterns in the social or physical world. for example, one group saw the highly pathogenic avian influenza (hpai) outbreak at holton in the uk in as presenting a serious risk and explained the official advice that it presented only a very small risk as arising from a conspiracy between industry and government that the dispositions of the two naturally created. the second main theme was that some groups of informants often lacked specific and direct knowledge about relevant risks, and resorted to reasoning about other actors' responses to those risks. this reasoning involved moderating those observations with beliefs about whether other actors are inclined to amplify or attenuate risk. lay groups received information through the media but they had definite, and somewhat clichéd, beliefs about the accuracy of risk portrayals in the media, for example. thus some informants saw the media treatment of hpai outbreaks as risk amplifying and portrayed the media as having an incentive to sensationalise coverage, but others (particularly virologists) saw media coverage as risk attenuating out of scientific ignorance. a third theme was that risk perceptions often came from the specific associations that arose in particular cases. for example, the holton hpai outbreak involved a large food processing firm that had earlier been involved in dietary and nutritional controversies. the firm employed intensive poultry rearing practices and was also importing partial products from a processor abroad. this particular case therefore bound together issues of intensive rearing, global sourcing, zoonotic outbreaks and lifestyle risks - incidental associations that enabled some informants to perceive high levels of risk and indignation, and portray others as attenuating this risk. the fourth theme was that some actors have specific reasons to overcome what they see as other actors' amplifications or attenuations. they do not just discount another actor's distortions but seek to change them. for example, staff in one government agency believed they had to correct farmers who were underplaying risk and not practicing sufficient bio-security, and also correct consumers who were exaggerating risk and boycotting important agricultural products. such actors do not simply observe other actors' expressed risk levels but try to communicate in such a way as to influence these expressed levels - for example through awareness-raising campaigns. the fieldwork therefore pointed to a model in which actors like members of the public based their risk evaluations on what they were told by others, corrected in some way for what they expected to be others' amplifications or attenuations; discrepancies between their current evaluations and those of others would be regarded as evidence of such amplifications, rather than being used to correct their own evaluations.
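this last proposition can be made concrete with a small illustration. the following python fragment is a minimal sketch, assuming a ratio-based correction and an exponentially smoothed memory; the function names, the smoothing constant and the numbers are our own illustrative assumptions rather than equations taken from the models described below.

```python
# sketch of the attributional proposition from the fieldwork: an actor
# discounts another actor's communicated risk level by the amplification it
# attributes to that actor, and treats any remaining discrepancy as further
# evidence about that amplification rather than as grounds to revise its own
# evaluation. the ratio form, the smoothing constant and all names here are
# illustrative assumptions, not the paper's stated equations.

MEMORY = 0.9  # weight given to the remembered attribution (social memory)

def corrected_risk(communicated, attributed_amplification):
    """discount the other actor's message by its attributed amplification."""
    return communicated / attributed_amplification

def update_attribution(own_risk, communicated, attributed_amplification):
    """fold the observed discrepancy (a ratio, since risks span orders of
    magnitude) into the remembered attribution of amplification."""
    observed_ratio = communicated / own_risk  # >1 looks like amplification
    return MEMORY * attributed_amplification + (1 - MEMORY) * observed_ratio

# example: the other actor repeatedly communicates a risk ten times higher
# than one's own evaluation; the attributed amplification drifts towards 10
# while one's own evaluation is never revised.
own_risk, attribution = 1e-5, 1.0
for _ in range(50):
    message = 1e-4
    attribution = update_attribution(own_risk, message, attribution)
print(f"attributed amplification: {attribution:.2f}, "
      f"corrected message: {corrected_risk(message, attribution):.2e}")
```

the point of the sketch is simply that the discrepancy feeds the attribution, not the actor's own belief, which is the asymmetry the fieldwork suggested.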
the findings also indicated a model in which risk managers would communicate risk levels in a way that was intended to overcome the misconceptions of actors like the public. these are the underpinning elements of the models we describe below. systems dynamics was a natural choice for this modelling on several grounds. first, there is an inherent stress on endogeneity in the basic idea of social risk amplification, and in particular in the notion that it is an attribution. risk responses first and foremost reflect the way people think about risks and think about the responses of other people to those risks. second, the explicit and intuitive representation of feedback loops was important to show the reflective nature of social behaviour: how actors see the impact of their risk responses on other actors and modify their responses accordingly. third, memory plays an important part in this, since the idea that some actor is a risk amplifier will be based on remembering their past responses, and the accumulative capacity of stocks in systems dynamics provides an obvious way of representing social memory. developing a systems dynamics model from the grounded theory therefore followed naturally, and helped to add a deductive capability to the essentially inductive process of grounded theory (kopainsky and luna-reyes, ). kopainsky and luna-reyes ( ) also point out that grounded theory can produce large and rich sets of evidence and overly complex theory, making it important to have a rigorous approach to concentrating on small numbers of variables and relationships. thus, in the modelling we describe in the next section, the aim was to try to represent risk amplification with as little elaboration as possible, so that it would be clear what the consequences of the basic structural commitments might be. this meant reduction to the simplest possible system of two actors, interacting repeatedly over time during the period of an otherwise static risk event (such as a zoonosis outbreak). applications of systems dynamics have been wide-ranging, addressing issues in domains ranging from business (morecroft and van der heijden, ) to military (minami and madnick, ), from epidemiology (dangerfield et al, ) to diffusion models in marketing (morecroft, ), from modelling physical state such as demography (meadows et al, ) to mental state such as trust (martinez-moyano and samsa, ). applications to issues of risk, particularly risk perception, are much more limited. there has been some application of system dynamics to the diffusion of fear and sarf, specifically (burns and slovic, ; sundrani, ), but not to the idea of social amplification as an attribution. probably the closest examples to our work in the system dynamics literature deal with trust. luna-reyes et al ( ), for example, applied system dynamics to investigate the role of knowledge sharing in building trust in complex projects. to make modelling tractable, the authors make several simplifying assumptions including the aggregation of various government agencies as a single actor and various service providers as another actor. each actor accumulates the knowledge of the other actor's work, and the authors explore the dynamics that emerge from their interaction. greer et al ( ) modelled similar interactions - this time between client and contractor - each having its own, accumulated understanding of a common or global quantity (in this case the 'baseline' of work in a project).
martinez-moyano and samsa ( ) developed a system dynamics model to support a feedback theory of trust and confidence. this represented the mutual interaction between two actors (government and public) in a social system where each actor assesses the trustworthiness of the other actor over time, with both actors maintaining memories of the actions and outcomes of the other actor. our approach draws from all these studies, modelling a system in which actors interact on the basis of remembered, past interactions as they make assessments of some common object. the actors are in fact groups of individuals who are presumed to be acting in some concerted way. although this may seem questionable, there are several justifications for doing so: (1) the aim is not to represent the diversity of the social world but to explore the consequences of specific ideas about phenomena like social risk amplification; (2) in some circumstances a 'risk manager' such as a private corporation or a government agency may act very much like a unit actor, especially when it is trying to coordinate its communications in the course of risk events; (3) equally, in some circumstances it may be quite realistic to see a 'public' as acting in a relatively consensual way, whose net, aggregate or average response is of more interest than the variance of response. in the following sections we develop a model in three stages. in the first, we represent the conventional view of social risk amplification; in the second, we add our subjective, attributional approach in a basic form; and in the third we make the attributional elements more realistically complex. the aim is to explore the implications of the principal findings of the fieldwork, and our basic theoretical commitments to social risk amplification as an attribution, with as little further adornment as possible, while also incorporating elements shown in the literature to be important aspects of risk amplification. in the first model, shown in figure , we represent in a simple way the basic notion of social risk amplification. the fundamental idea is that risk responses are socially developed, not simply the sum of the isolated reactions of unconnected individuals. the model represents a population as being in one of two states of worry. this is simpler than the three-state model of burns and slovic ( ), and it is not obvious what a third state would particularly add to the model. there is also no need for a recovered or removed state, as in sir (susceptible-infectious-recovered) models (sterman, , p ), since there is no concept of immunity and it seems certain that people can be worried by the same thing all over again. the flow from an unworried state to a worried state is a function of how far the proportion in the worried state exceeds that normally expected in regard to a risk event such as a zoonotic disease outbreak. members of the public expect some of their number to become anxious in connection with any risk issue: when, through communication or observation, they realise this number exceeds expectation, this in itself becomes a reason for others to become anxious. this observation of fellow citizens is not medium-specific, so it is a combination of observation by word of mouth, social networks and broadcast media. in terms of how this influences perception, various processes are suggested in the literature. for example, there is a variety of 'social contagion' effects (levy and nail, ; scherer and cho, ) relevant to such situations.
social learning (bandura, ) or 'learning by proxy' (gardner et al, ) may also well be important. we do not model specific mechanisms but only an aggregate process by which the observation of worry influences the flow into a state of being worried. the flow out of the worried state is a natural relaxation process. it is hard to stay worried about a specific issue for any length of time, and the atrophy of vigilance is reported in the literature (freudenberg, ). there is also a base flow between the states, reflecting the way in which - in the context of any public risk event - there will be some small proportion of the population that becomes worried, irrespective of peers and public information. this base flow also has the function of dealing with the 'startup problem', in which zero flow is a potential equilibrium for the model (sterman, , p ). the public risk perception in this model stands in relation to an expert, supposedly authoritative assessment of the risk. people worry when seeing others worry, but moderate this response when exposed to exogenous information - the expert or managerial risk assessment. what ultimately regulates worry is some combination of these two elements, and it is this regulatory variable that we call the resultant 'risk perception'. unlike burns and slovic ( ) we do not represent this as a stock, because it is not anyone's belief and so need not have inertia. the fact that various members of the public are in different states of worry means that there is no belief that all share, as such. instead, risk perception is an emergent construct on which flows between unworried and worried states depend (and which also determines how demand for risky goods changes, as we explain below). in the simplest model we take this resultant risk perception as a weighted geometric mean of the risk implied by the proportion of the population worried and the publicly known expert risk assessment. the expert assessment grows from zero toward a finite level, for a certain period, before decaying again to zero. this reflects a time profile for typical risk events - for example zoonotic outbreaks such as sars - where numbers of reported cases climb progressively and rapidly to a peak before declining (eg, leung et al, ). the units for risk perception and the expert assessment are arbitrary, but for exposition are taken as probabilities of individual fatality during a specific risk event. numerical values of the exogenous risk-related variables are based on an outbreak in which the highest fatality probability is fixed at a nominal level; but risks in a modern society tend to vary over several orders of magnitude. typically, individual fatality probabilities of the order of one in a million are regarded as 'a very low level of risk', whereas risks of the order of one in a thousand are seen as very high and at the limit of tolerability for risks at work (hse, ). because both assessed and perceived risks are likely to vary widely, discrepancies between risk levels are represented as ratios. the way in which the expert assessment is communicated to the public is via some homogeneous channel we have simply referred to as the 'media'. in our basic model we represent in very crude terms the way in which this media might exaggerate the difference between expert assessment and public perception. but the sarf literature suggests there is no consistent relationship between media coverage and either levels of public concern or frequencies of fatalities (breakwell and barnett, ; finkel, ), so the extent of this exaggeration is likely to be highly case specific.
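the mechanics described so far - two worry states, a base flow, natural relaxation, and a resultant risk perception formed as a weighted geometric mean - can be sketched in code. the following is a minimal illustration under our own assumptions (simple euler integration, illustrative parameter values, a stylised outbreak profile), not the paper's calibrated model, and it omits the media and demand effects discussed here and below.

```python
# minimal sketch of the base model: a two-state worry population whose inflow
# responds to excess observed worry, with risk perception formed as a weighted
# geometric mean of worry-implied risk and the expert assessment.
import math

def simulate_base_model(days=120, dt=1.0):
    worried = 0.0             # stock: proportion of the population worried
    expected_worry = 0.05     # proportion normally expected to be worried
    base_flow = 0.001         # flow into worry irrespective of peers (startup)
    relax_time = 14.0         # days for worry to atrophy naturally
    contagion = 0.1           # strength of the social-observation effect
    weight = 0.5              # weight on the expert assessment
    normal_risk = 1e-6        # unsurprising baseline risk perception
    history = []
    for step in range(int(days / dt)):
        t = step * dt
        # stylised outbreak profile: the expert assessment rises towards a
        # nominal level and decays once the crisis passes (here, around day 60)
        expert = 1e-4 * (1 - math.exp(-t / 10.0)) * math.exp(-max(0.0, t - 60.0) / 10.0)
        expert = max(expert, 1e-9)
        # risk implied by the proportion worried, relative to expectation
        implied = normal_risk * max(worried / expected_worry, 1e-9)
        # resultant risk perception: weighted geometric mean of the two signals
        perception = implied ** (1 - weight) * expert ** weight
        # flows between the unworried and worried states
        inflow = base_flow + contagion * max(worried - expected_worry, 0.0)
        outflow = worried / relax_time
        worried = min(1.0, max(0.0, worried + (inflow - outflow) * dt))
        history.append((t, expert, perception, worried))
    return history
```

running this over a simulated crisis produces perception and assessment trajectories of the kind discussed in the simulation section below, and gives a concrete sense of how the startup base flow and the relaxation outflow shape the worry stock.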
it is also possible that the media have an effect on responses by exaggerating to a given actor its own responses. the public, for example, could have an inflated idea of how worried they are because newspapers or blogs portray it to be so. but we do not represent this, because it is so speculative and may be indeterminable empirically. finally, the base model also represents the way in which risk perception influences behaviour, in particular the consumption of the goods or services that expose people to the risk in question. the holton uk outbreak of hpai, for example, occurred at a turkey meat processing plant and affected demand for its products; the sars outbreak affected demand for travel, particularly aviation services. brahmbhatt and dutta ( ) even refer to the economic disruption caused by 'panicky' public responses as 'sars type' effects. there are many complications here, not least that reducing consumption of one amenity as a result of heightened risk perception may increase consumption of a riskier amenity. air travel in the us fell after 9/11 but travel by car increased, and aggregate risk levels were said to have risen in consequence (gigerenzer, ). a further complication is that in certain situations, such as bank runs (diamond and dybvig, ), risk perceptions are directly self-fulfilling rather than self-correcting. the most common effect is probably that heightened risk perceptions will lead to reduced demand for the amenity that causes exposure, leading to reductions in exposure and reductions in the expert risk assessment, but it is worth noting that the effect is case specific. the expert risk assessment is therefore not exogenous, and there is a negative feedback loop that operates to counteract rising risk perceptions. as we show later from the simulation outcomes, the base model produces a public risk perception that can be considerably larger than the expert risk assessment. it therefore seems to show 'risk amplification'. but there is no variable that stands for risk in the model: there are only beliefs about risk (called either assessments or perceptions). the idea that social risk amplification is a subjective attribution, not an objective phenomenon, means that this divergence of risk perception and expert assessment does not amount to risk amplification. instead, it says that actors see others as being risk amplifiers, or attenuators, and develop their responses accordingly. this means that we need to add to sarf, and to the basic model of the previous section, the processes by which actors observe, diagnose and deal with other actors' risk assessments or perceptions. what our fieldwork revealed was that the social system did not correct 'mistaken' risk perceptions in some simple-minded fashion. in other words, it was not the case that people formed risk perceptions, received information about expert assessment, and then corrected their perceptions in the correct direction. instead, as we explained earlier, they found reasons why expert assessments, and in fact the risk views of any other group, might be subject to systematic amplification or attenuation. they then corrected for that amplification. risk managers, on the other hand, had the task of overcoming what they saw as mistaken risk responses in other groups, not simply correcting for them.
therefore in the second model, shown in figure , we now have a subsystem in which a risk manager (a government agency or an industrial undertaking in the case of zoonotic disease outbreaks) observes the public risk perception in relation to the expert risk assessment, and communicates a risk level that is designed to compensate for any discrepancy between the two. commercial risk managers will naturally want to counteract risk amplification that leads to revenue losses from product and service boycotts, and governmental risk managers will want to counteract the risk amplification that produces panic and disorder. as beck et al ( ) report, the uk bse inquiry found that risk managers' approach to communicating risk 'was shaped by a consuming fear of provoking an irrational public scare'. the effect is symmetrical to the extent that the public in turn observes discrepancies between managerial communications and its own risk perceptions, and attributes amplification or attenuation accordingly. attributions are based on simple memory of past observations. this historical memory of another actor's apparent distortions is sometimes mentioned in the sarf literature (kasperson et al, ; poumadere and mays, ). this memory is represented as stocks of observed discrepancies, reaching a level $m_i(t)$ for actor $i$ at time $t$. the managerial memory, for example, is $m_g(t) = \int_0^t \left[ r_{\mathrm{public}}(\tau) / r_{\mathrm{expert}}(\tau) - 1 \right] \mathrm{d}\tau$. $m_i(t) > 0$ implies that actor $i$ sees the other actor as exaggerating risk, while $m_i(t) < 0$ implies perceived attenuation. the specific deposits in an actor's memory are not retrievable, and equal weight is given to every observation that contributes to it. the perceived scale of amplification is the time average of the memory content, and the confidence the actor has in this perceived amplification is $1 - e^{-|m(t)|}$, so that confidence grows asymptotically towards unity as the magnitude of the memory increases. the managerial actor modifies the risk level it communicates by the perceived scale of public amplification raised to the power of its confidence, while the public adjusts the communicated risk level it takes account of by the perceived scale of managerial attenuation raised to the power of its confidence in this. in the third model, in figure , we add three elements found in the risk amplification literature that become especially relevant to the idea of risk amplification as a subjective attribution: confusion, distrust and differing perceptions of the significance of behavioural change. the confusion issue reflects the way an otherwise authoritative actor's view tends to be discounted if it shows evidence of confusion, uncertainty or inexplicable change. two articles in the recent literature on zoonosis risk (bergeron and sanchez, ; heberlein and stedman, ) specifically describe the risk-amplifying effect of the authorities seeming confused or uncertain. the distrust issue reflects the observation that 'distrust acts to heighten risk perception...' (kasperson et al, ), and that it is 'associated with perceptions of deliberate distortion of information, being biased, and having been proven wrong in the past' (frewer, , p ). a distinguishing aspect of trust and distrust is the basic asymmetry such that trust is quick to be lost and slow to be gained (slovic, ).
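the following sketch shows one way to realise this memory-and-confidence mechanism. since the source equations are partly garbled in extraction, the exact functional forms here (the ratio-minus-one integrand and the restored multiplicative scale) are our reconstruction and should be treated as illustrative rather than a faithful replica.

```python
# sketch of the attribution subsystem: an actor accumulates observed
# discrepancies between risk beliefs, forms a perceived scale of amplification,
# and corrects the other actor's communications accordingly.
import math

class Attributor:
    """one actor's running attribution of amplification to another actor."""

    def __init__(self):
        self.memory = 0.0    # stock m(t): accumulated observed discrepancy
        self.elapsed = 0.0

    def observe(self, other_risk, own_risk, dt):
        # accumulate the ratio discrepancy; positive memory means the other
        # actor appears to exaggerate risk, negative means attenuate
        self.memory += (other_risk / own_risk - 1.0) * dt
        self.elapsed += dt

    def perceived_scale(self):
        # perceived scale of amplification: time average of the memory content,
        # restored to a multiplicative factor (>1 means amplification)
        if self.elapsed == 0.0:
            return 1.0
        return max(1.0 + self.memory / self.elapsed, 1e-9)

    def confidence(self):
        # confidence in the attribution: 1 - exp(-|m(t)|), growing towards
        # unity as the magnitude of the memory increases
        return 1.0 - math.exp(-abs(self.memory))

    def correct(self, communicated_risk):
        # discount the other actor's communicated risk level by the perceived
        # scale raised to the power of the confidence in that attribution
        return communicated_risk / self.perceived_scale() ** self.confidence()
```

with two such objects, one held by the manager and one by the public, each actor's correction feeds the other's next observation, which is the self-confirming loop that drives the polarisation reported below.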
in figure , the confusion function is based on the rate of change of attributed amplification, not the rate of change of the communication itself, since some change in communication might appear justified if correlated with a change in public perception: $g = 1 - e^{-\gamma |c_g(t)|}$, where $c_g(t)$ is the change in managerial amplification in unit time and $\gamma$ is the confusion parameter. the distrust function is based on the extent of remembered attributed amplification: $f = 1 - e^{-\phi |m_g(t)|}$, where $m_g(t)$ is the memory of managerial risk amplification at time $t$ and $\phi$ is the distrust parameter. there is no obvious finding in the literature that would help us set the values of such parameters. the combination of the confusion and distrust factors is a combination of an integrator and a differentiator. it is used to determine how much weight is given to managerial risk communications in the formation of the resultant risk perception, and it is defined such that as distrust and confusion both approach unity this weight $w$ tends to zero: $w = w_{\max}(1 - g)(1 - f)$. this weight was exogenous in the previous model, so the effect of introducing confusion and distrust is also to endogenise the way observation of worry is combined with authoritative risk communication. the third addition in this model is an important disproportionality effect. the previous models assume that risk managers base their view of the public risk perception on some kind of direct observation - for example, through clamour, media activity, surveys and so on. in practice, the managerial view is at least partly based on the public's consumption of the amenity that carries the risk, for example the consumption of beef during the bse crisis, or flight bookings and hotel reservations during the sars outbreak. the problem is that when a foodstuff like beef becomes a risk object it may be easy for many people to stop consuming it, and such a response can, from the consumer's perspective, be proportionate to even a mild risk assessment. reducing beef consumption is an easy precaution for most of the population to take (frewer, ), and so rational even when there is little empirical evidence that there is a risk at all (rip, ). yet this easy response of boycotting beef may be disastrous for the beef industry, and therefore seem highly disproportionate to the industry, to related industries and to government agencies supporting the industry. unfortunately there is considerable difficulty in quantifying this effect in general terms. recent work (mehers, ) looking at the effect of heightened risk perceptions around the avian influenza outbreak at a meat processing plant suggests that the influence on demand for the associated meat products was very mixed. different regions and different demographic groups showed quite different reactions, for example, and the effect was confounded by actions (particularly price changes) taken by the manufacturer and retailers. our approach is to represent the disproportionality effect with a single exogenous factor - the relative substitutability of the amenity for similar amenities on the supply and demand side. the risk manager interprets any change in public demand for the amenity, multiplied by this factor, as being the change in public risk perception. if the change in this inferred public risk perception exceeds that observed directly (for example by opinion survey), then it becomes the determinant of how risk managers think the public are viewing the risk in question.
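expressed as code, the confusion and distrust functions, the resulting communication weight, and the disproportionality inference look as follows. the values of gamma and phi stand for the confusion and distrust parameters, which, as noted above, have no obvious basis in the literature and are therefore arbitrary illustrative choices here.

```python
# sketch of the confusion/distrust discounting of managerial communications
# and of the disproportionality inference from demand changes.
import math

def confusion(c_g, gamma=1.0):
    # g = 1 - exp(-gamma * |c_g(t)|): driven by change in attributed amplification
    return 1.0 - math.exp(-gamma * abs(c_g))

def distrust(m_g, phi=1.0):
    # f = 1 - exp(-phi * |m_g(t)|): driven by remembered attributed amplification
    return 1.0 - math.exp(-phi * abs(m_g))

def communication_weight(c_g, m_g, w_max=0.5, gamma=1.0, phi=1.0):
    # w = w_max * (1 - g) * (1 - f): the weight on managerial communications
    # tends to zero as confusion and distrust both approach unity
    return w_max * (1.0 - confusion(c_g, gamma)) * (1.0 - distrust(m_g, phi))

def inferred_perception_change(demand_change, substitutability):
    # the disproportionality effect: the risk manager reads a change in demand,
    # scaled by relative substitutability, as a change in public risk perception
    return demand_change * substitutability
```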
this relative substitutability is entirely a function of the specific industry (and so risk manager) in question: there is no 'societal' value for such a parameter, and the effects of a given risk perception on amenity demand will always be case specific. for example, brahmbhatt and dutta ( ) reported that the sars outbreak led to revenue losses in beijing of % in tourist attractions, exhibitions and hotels, but of - % in travel agencies, airlines, railways and so on. the effects are substantial but a long way from being constant. in this section we briefly present the outcomes of simulation with two aims: first, to show how the successive models produce differences in behaviour, if at all, and thereby to assess how much value there is in the models for policymakers; second, to assess how much uncertainty in outcomes such as public risk perception is produced by uncertainty in the exogenous parameters. figure shows the behaviour of the three successive models in terms of public risk perception and expert risk assessment. for the three models, the exogenous variables are set at their modal values, and when variables are shared between models they have the same values. the expert risk assessment is thus very similar for each model, as shown in the figure: rising towards its target level, falling as public risk perception reduces exposure, and then ceasing as the crisis ends around day . in the base model, the public risk perception is eight times higher than the expert assessment at its peak, which occurs some days after that in the expert assessment. but once the attributional view of risk amplification is modelled, this disparity becomes much greater, and it occurs earlier: in the simple and the complex attributional systems the peak discrepancy is many times larger again, in both cases occurring within days of the expert assessment peak. thus the effect of seeing risk amplification as the subjective judgment of one actor about another is, given the assumptions in our models, to polarise risk beliefs much more strongly and somewhat more rapidly. we can no longer call the outcome a 'risk amplification' since, by assumption, there is no longer an objective risk level exogenous to the social system. but there is evidently strong polarisation. there is some qualitative difference in the time profile of risk perception between the three models, as shown in the previous figure, where the peak risk perception occurs earlier in the later models. there are also important qualitative differences in the time profiles of the stock variables amenity demand and worried population, as shown in figure . when the attributional view is taken, both demand and worry take longer to recover to initial levels, and when the more complex attributional elements are modelled (the effects of mistrust, confusion and different perceptions of the meaning of changes in demand), the model indicates that little recovery takes place at all. the scale of the recovery depends on the values of the exogenous parameters, and some of these (as we discuss below) are case specific. but of primary importance is the way the weighting given to managerial communications or expert assessment is dragged down by public attributions. this result indicates the importance of a complex, attributional view of risk amplification.
unlike the base model, the attributional models make it much more likely that there will be an indefinite residue from a crisis - even when the expert assessment of risk falls to near zero. figures and show the time development of risk perception in the third model in terms of the mean outcome, with (a) confidence intervals on the mean and (b) tolerance intervals, over repeated runs with triangular distributions assigned to the exogenous parameters and plausible ranges based solely on the authors' subjective estimates. the exogenous parameters fall into two main groups. the first group comprises case-specific factors that would be expected to vary between risk events. this includes, for example, the relative substitutability of the amenity that is the carrier of the risk, and the latency before changes in demand for this amenity change the level of risk exposure. the remaining parameters are better seen as social constants, since there is no theoretical reason to think that they will vary from one risk event to another. these include factors like the natural vigilance period among the population, the normal flow of people into a state of worry, and the latency before people become aware of a discrepancy between emergent risk perception and the proportion of the population that is in a state of worry. one figure shows the confidence and tolerance intervals with the social constants varying within their plausible ranges and the case-specific factors fixed at their modal values, and the other vice versa: the first shows the effect of our uncertainty about the character of society, whereas the second shows the effect of the variability we would expect among risk events. the substantial difference in mean risk perception between the two figures reflects large differences between means and modes in the distributions attributed to the parameters, which arise because plausible ranges sometimes cover multiple orders of magnitude (as with the confusion, distrust and memory constants). these figures do not give a complete understanding, not least because interactions between the two sets of parameters are possible, but they show a reasonably robust qualitative profile. figure shows the 'simple' correlation coefficients between resultant risk perception and the policy-relevant exogenous parameters over time, as recommended by ford and flynn ( ) as an indication of the relative importance of model inputs. at each day of the simulation, the sample correlation coefficient is calculated for each parameter over the runs. no attempt has been made to inspect whether the most important inputs are correlated, and to refine the model in the light of this. nonetheless the figure gives some indication of how influential the most prominent parameters are: the expert initial assessment level (ie, the original scale of the risk according to expert assessment), the expert assessment adjustment time (ie, the delay in the official estimate reflecting the latest information), the base flow (the flow of people between states of non-worry and worry in relation to a risk, irrespective of the specific social influences being modelled) and the normal risk perception (the baseline against which the resultant risk perception is gauged, reflecting a level of risk that would be unsurprising and lead to no increase in the numbers of the worried).
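a minimal sketch of this screening procedure follows, assuming hypothetical parameter ranges and a stand-in simulate_model function that returns one perception value per simulated day; only the triangular sampling and the per-day correlation follow ford and flynn's recipe, and everything else is placeholder.

```python
# statistical screening: sample exogenous parameters from triangular
# distributions, run the model repeatedly, and correlate each parameter with
# the outcome day by day (after ford and flynn).
import random

def screen(simulate_model, n_runs=200, days=120, params_spec=None):
    # (low, mode, high) ranges below are placeholders, not the paper's values
    params_spec = params_spec or {
        "base_flow": (0.0005, 0.001, 0.005),
        "normal_risk": (1e-7, 1e-6, 1e-5),
    }
    samples, outcomes = [], []
    for _ in range(n_runs):
        p = {k: random.triangular(lo, hi, mode)
             for k, (lo, mode, hi) in params_spec.items()}
        samples.append(p)
        outcomes.append(simulate_model(days=days, **p))  # perception per day

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5 if vx > 0 and vy > 0 else 0.0

    # one correlation coefficient per parameter per simulated day
    return {k: [pearson([s[k] for s in samples],
                        [run[d] for run in outcomes]) for d in range(days)]
            for k in params_spec}
```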
the first of these is case-specific, but the other three would evidently be worth empirical investigation given their influence in the model. it is extremely difficult to test such outcomes against empirical data, because cases differ so widely and it is unusual to find data on simultaneous expert assessments and public perceptions over short-run risk events like disease outbreaks, particularly outbreaks of zoonotic disease. but a world bank paper on the economic effects of infectious disease outbreaks (primarily sars, a zoonotic disease) collected together data gathered on the sars outbreak, and some of it - primarily that of lau et al ( ) - showed the day-by-day development of risk perception alongside reported cases. figure is based on lau et al's data ( ), and shows the number of reported cases of sars as a proportion of the hong kong population at the time, together with the percentage of people in a survey expressing a perception that they had a large or very large chance of infection from sars. the two lines can be regarded as reasonably good proxies for the risk perception and expert assessment outcomes in figure , and they show a rough correspondence: a growth in both perception and expertly assessed or measured 'reality', followed by a decay, in which the perception appears strongly exaggerated from the standpoint of the expert assessment. the perceptual gap is about four orders of magnitude - greater than even the more complex attributional system in our modelling. moreover, the risk perception peak occurs early, and in fact leads the reported cases peak. it is in our second and especially our third models that the perception peak occurs early (although it never leads the expert assessment peak).
the implications of the work
the social amplification of risk framework has always been presented as an 'integrative framework' (kasperson et al, ), rather than a specific theory, so there has always been a need for more specific modelling to make its basic concepts precise enough to be properly explored. at the same time, as suggested earlier, its implication that there is some true level of risk that becomes distorted in social responses has been criticised for a long time. we therefore set out to explore whether it is possible to retain some concept of social risk amplification in cases where even expert opinion tends to be divided, the science is often very incomplete, and past expert assessment has been discredited. zoonotic disease outbreaks provide a context in which such conditions appear to hold. our fieldwork broadly pointed to a social system in which social actors of all kinds privilege their own risk views, in which they nonetheless have to rely on other actors' responses in the absence of direct knowledge or experience of the risks in question, in which they attribute risk amplification or attenuation to other actors, and in which they have reasons to correct for or overcome this amplification. to explore how we can model such processes has been the main purpose of the work we have described. the resulting model provides specific indications of what policymakers need to deal with - a much greater polarisation of risk beliefs, and potentially a residue of worry and loss of demand after the end of a risk crisis. it also has the important implication that risk managers' perspectives should shift, from correcting a public's mistakes about risk to thinking about how their own responses and communications contribute to the public's views about a risk.
our approach helps to endogenise the risk perception problem, recognising that it is not simply a flaw in the world 'out there'. it is thus an important step towards becoming a more sophisticated risk manager, or manager of risk issues (leiss, ). it is instructive to compare this model with models like that of luna-reyes et al ( ), which essentially involve a convergent process arising from knowledge sharing and the subsequent development of trust. we demonstrate a process in which there is knowledge sharing, but a sharing that is undermined by expectations of social risk amplification. observing discrepancies in risk beliefs leads not to correction and consensus but to self-confirmation and polarisation. our findings are in some respects similar to those of greer et al ( ), who were concerned with discrepancies in the perceptions of workload in the eyes of two actors involved in a common project. such discrepancies arose not from exogenous causes but from unclear communication and delay inherent in the social system. all this reinforces the long-held view in the risk community, and of risk communication researchers in particular, that authentic risk communication should involve sustained relationships, and the open recognition of uncertainties and difficulties that would normally be regarded as threats to credibility (otway and wynne, ). the reason is not just the moral requirement to avoid the perpetuation of powerful actors' views, and not just the efficiency requirement to maximise the knowledge base that contributes to managing a risk issue. the reason is also that the structure of interactions can be unstable, producing a polarisation of view that none of the actors intended. actors engaged with each other can realise this and overcome it. a basic limitation on the use of the models to support specific risk management decisions, rather than to give more general insight into social phenomena, is that there are very few sources of plausible data for some important variables in the model, such as the relaxation delay defining how long people tend to stay worried about a specific risk event before fatigue, boredom or replacement by worry about a new crisis leads them to stop worrying. it is particularly difficult to see where values of the case-specific parameters are going to come from. other system dynamics work on risk amplification at least partly avoids the calibration problem by using unit-less normalised scales and subjective judgments (burns and slovic, ). one of the benefits of this exploratory modelling is to suggest that such variables are worthwhile subjects for empirical research. but at present the modelling does not support prediction and does not help determine best courses of action at particular points in particular crises. in terms of its more structural limitations, the model is a small one that concentrates specifically on the risk amplification phenomenon, to the exclusion of the many other processes that, in any real situation, risk amplification is connected with. as such, it barely forms a 'microworld' (morecroft, ). it contrasts with related work such as martinez-moyano and samsa's ( ) modelling of trust in government, which similarly analyses a continuing interaction between two aggregate actors but draws extensively on cognitive science. however, incorporating a lot more empirical science does not avoid having to make many assumptions and selections that potentially stand in the way of seeing through to how a system produces its outcomes.
the more elaborate the model, the more there is to dispute, and the more the starkness of an interesting phenomenon is undermined. we have had to make few assumptions about the world, about psychology and about sociology before concluding that social risk amplification, as little more than a subjective attribution, has a strongly destabilising potential. this parsimony reflects towill's ( ) notion that we start the modelling process by looking for the boundary that 'encompasses the smallest number of components within which the dynamic behaviour under study is generated'. the model attempts to introduce nothing that is unnecessary to working out the consequences of risk amplification as an attribution. as ghaffarzadegan et al ( ) point out in their paper on small models applied to problems of public policy, and echoing forrester's ( ) argument for 'powerful small models', the point is to gain accessibility and insight. having only 'a few significant stocks and at most seven or eight major feedback loops', small models can convey the counterintuitive endogenous complexity of situations in a way that policymakers can still follow. they are small enough to show systems in aggregate, to stress the endogeneity of influences on the system's behaviour, and to clearly illustrate how policy resistance comes about (ghaffarzadegan et al, ). as a result they are more promising as tools for developing correct intuitions, and for helping actors who may be trapped in a systemic interaction to overcome this and reach a certain degree of self-awareness (lane, ). the intended contribution of this study has been to show how to model a long-established, qualitative framework for reasoning about risk perception and risk communication, and in the process to deal with one of the main criticisms of this framework. the idea that in a society the perception of a risk becomes exaggerated to the point where it bears no relation to our best expert assessments of the risk is an attractive one for policymakers having to deal with what seem to be grossly inflated or grossly under-played public reactions to major events. but this idea has always been vulnerable to the criticism that we cannot know objectively whether a risk is being exaggerated, and that expert assessments are as much a product of social processes as lay opinion. the question we posed at the start of the paper was whether, in dropping a commitment to the idea of an objective risk amplification, there is anything left to model and anything left to say to policymakers. our work suggests that there is, and that modelling risk amplification as something that one social actor thinks another is doing is a useful thing to do. there were some simple policy implications emerging from this modelling. for example, once one accepts that there is no objective standard to indicate when risk amplification is occurring, actors are likely to correct for other actors' apparent risk amplifications and attenuations, instead of simple-mindedly correcting their own risk beliefs. this can have a strongly polarising effect on risk beliefs, and can produce residual worry and loss of demand for associated products and services after a crisis has passed. the limitations of the work point to further developments in several directions. first, there is a need to explore various aspects of how risk managers experience risk amplification. for example, the modelling, as it stands, concentrates on the interactions of actors in the context of a single event or issue - such as a specific zoonotic outbreak.
in reality, actors generally have a long history of interaction around earlier events. we take account of history within an event, but not between events. a future step should therefore be to expand the timescale, moving from intra-event interaction to inter-event interaction. the superposition of a longer-term process is likely to produce a model in which processes acting over different timescales interact and cannot simply be treated additively (forrester, ). it also introduces the strong possibility of discontinuities, particularly when modelling organisational or institutional actors like governments, whose doctrines can change radically following elections - rather like the discontinuities that have to be modelled to represent personnel changes and consequences like scapegoating (howick and eden, ). another important direction of work would be a modelling of politics and power. it is a common observation in risk controversies that risk is a highly political construction - being used by different groups to gain resources and influence. as powell and coyle ( ) point out, the system dynamics literature makes little reference to power, raising questions about the appropriateness of our modelling approach to a risk amplification subject - both in its lack of attention to power as an object for modelling, and its inattention to issues of power surrounding the use of the model and its apparent implications. powell and coyle's ( ) politicised influence diagrams might provide a useful medium for representing issues of power, both within the model of risk amplification and in the understanding of the system in which the model might be influential. the notion, as currently expressed in our modelling, that it is always in one actor's interest to somehow correct another's amplification simply looks naïve.
references
greenpeace v. shell: media exploitation and the social amplification of risk framework (sarf)
social learning theory
public administration, science and risk assessment: a case study of the uk bovine spongiform encaphalopathy crisis
media effects on students during sars outbreak
world bank policy research working paper, the world bank east asia and pacific region chief economist's office
social amplification of risk and the layering method
the diffusion of fear: modeling community response to a terrorist strike
incorporating structural models into research on the social amplification of risk: implications for theory construction and decision making
social risk amplification as an attribution: the case of zoonotic disease outbreaks
model-based scenarios for the epidemiology of hiv/aids: the consequences of highly active antiretroviral therapy
bank runs, deposit insurance, and liquidity
risk and culture: an essay on the selection of technological and environmental dangers
risk and relativity: bse and the british media
the social amplification of risk
perceiving others' perceptions of risk: still a task for sisyphus
statistical screening of system dynamics models
nonlinearity in high-order models of social systems
system dynamics - the next fifty years
institutional failure and the organizational amplification of risk: the need for a closer look
trust, transparency, and social context: implications for social amplification of risk
workers' compensation and family and medical leave act claim contagion
how small system dynamics models can help the public policy process
out of the frying pan into the fire: behavioral reactions to terrorist attacks
conceptualization: on theory and theorizing using grounded theory
the discovery of grounded theory
improving interorganizational baseline alignment in large space system development programs
socially amplified risk: attitude and behavior change in response to cwd in wisconsin deer
the social construction of risk objects: or, how to pry open networks of risk
on the nature of discontinuities in system dynamics modelling of disrupted projects
reducing risks, protecting people
bridging the two cultures of risk analysis
the social amplification and attenuation of risk
the social amplification of risk: assessing fifteen years of research and theory
the social amplification of risk
the social amplification of risk: a conceptual framework
qualitative methods and analysis in organizational research: a practical guide
closing the loop: promoting synergies with other theory building approaches to improve systems dynamics practice
social theory and systems dynamics practice
monitoring community responses to the sars epidemic in hong kong: from day to day
the chamber of risks: understanding risk controversies
a tale of two cities: community psychobehavioral surveillance and related impact on outbreak control in hong kong and singapore during the severe acute respiratory syndrome epidemic
contagion: a theoretical and empirical review and reconceptualization
the impact of social amplification and attenuation of risk and the public reaction to mad cow disease in canada
collecting and analyzing qualitative data for system dynamics: methods and models
knowledge sharing and trust in collaborative requirements analysis
a feedback theory of trust and confidence in government
the limits to growth: the -year update
on the quantitative analysis of food scares: an exploratory study into poultry consumers' responses to the h n avian influenza outbreaks in the uk food supply chain
dynamic analysis of combat vehicle accidents
strategy support models
systems dynamics and microworlds for policymakers
modelling the oil producers: capturing oil industry knowledge in a behavioural simulation model. modelling for learning, a special issue of the
science controversies: the dynamics of public disputes in the united states
risk communication: paradigm and paradox (guest editorial)
the dynamics of risk amplification and attenuation in context: a french case study
identifying strategic action in highly politicized contexts using agent-based qualitative system dynamics
muddling through metaphors to maturity: a commentary on kasperson et al. 'the social amplification of risk'
risk communication and the social amplification of risk
should social amplification of risk be counteracted
folk theories of nanotechnologists
a social network contagion theory of risk perception
perception of risk
perceived risk, trust and democracy
business dynamics: systems thinking and modelling for a complex world
understanding social amplification of risk: possible impact of an avian flu pandemic. masters dissertation, sloan school of management and engineering systems division
acknowledgements - many thanks are due to the participants in the fieldwork that underpinned the modelling, and to dominic duckett, who carried out the fieldwork. we would also like to thank the anonymous reviewers of an earlier draft of this article for insights and suggestions that have considerably strengthened it. the work was partly funded by a grant from the uk epsrc.
key: cord- -qt tp t
authors: fong, i. w.
title: litigations for unexpected adverse events
date: - -
journal: medico-legal issues in infectious diseases
doi: . / - - - - _
sha:
doc_id: cord_uid: qt tp t
a -year-old iranian female who immigrated to canada about . years before was referred to an internist for a positive mantoux skin test ( mm in diameter). the subject was previously well, with no symptoms indicative or suggestive of active tuberculosis. a routine tuberculosis skin test had been performed because the patient had applied to be a volunteer at a local hospital. she had no significant past illness or known allergies, and she had never been diagnosed with, nor had known contact with anyone with, active tuberculosis. the subject never ingested alcohol and was not known to have hepatitis or to be a carrier of any hepatitis virus. baseline investigations performed by the internist included a routine complete blood count, routine biochemical tests (liver enzymes, creatinine, and glucose), serum ferritin, and thyroid-stimulating hormone - all of which were normal. a chest radiograph was reported to be normal. follow-up blood work showed normal creatinine, glucose and electrolytes; but the bilirubin had risen to mmol/l, the sgot was m/l, the serum alanine aminotransferase (alt) was m/l (normal - m/l), the alp was m/l, and the prothrombin time was . s. a repeat ultrasonography of the abdomen revealed large ascites and a liver of cm in length with normal contour. over the next weeks, she became drowsy and encephalopathic, and was transferred to a tertiary care hospital, where a liver transplantation was successfully performed (with a live donation from the patient's daughter). pathology of the liver showed a markedly shrunken liver with signs of fulminant hepatitis, and negative stains for hepatitis b antigens. a lawsuit was subsequently launched by the patient (plaintiff) against the physician who prescribed the isoniazid.
the statement of claim alleged the following: (1) isoniazid was directly responsible for the plaintiff's fulminant hepatitis, which resulted in the need for a liver transplant; (2) informed consent was never obtained to prescribe the drug, as the plaintiff was never counseled on the adverse effects, nor given a choice of treatment; (3) use of isoniazid was never indicated, as the patient had no symptoms or signs of active disease; and (4) the physician should have realized that the positive mantoux test was due to a previous bcg vaccination as a child (the defendant was informed of this fact), and therefore there was no need to treat the plaintiff for latent tuberculosis. based on the above claims, the internist was negligent in prescribing isoniazid, and he should have monitored her liver enzymes after initiation of treatment (according to the statement of claim). the lawyer for the plaintiff further stipulated that if his client had never been treated unnecessarily for latent tuberculosis, she would not have suffered from fulminant hepatitis or required a liver transplant. hence, the treating physician provided substandard care, and compensation was sought for the pain and suffering of the plaintiff, as well as of the daughter, who underwent partial hepatectomy for the liver donation. the case under discussion does not fall into the high-risk category for treatment of latent tuberculosis, but may be considered intermediate risk on cursory assessment. although employees and staff of healthcare facilities, especially those involved in direct patient contact, should be offered treatment of latent tuberculosis, there is no such stipulation for volunteers in hospitals. most healthcare facilities screen volunteers for active tuberculosis by mantoux skin test, with a chest radiograph for those with a positive reaction. there is another category under which the subject could be considered: indications for treatment of latent tuberculosis include persons from highly endemic countries within years of immigration with a positive mantoux test ( mm), irrespective of previous bcg vaccination. this group represents one of the largest segments of newly diagnosed patients with active tuberculosis in north america and europe. in recent years, % of all tuberculosis cases in the united states have been among foreign-born persons, and in several european countries > % of tuberculosis cases occur among foreign-born people. a small number of countries with a high burden of tuberculosis (tb) account for % of the tb cases globally. these countries are located predominantly in asia (the south east asia and western pacific regions), africa, brazil (south america), the russian federation (eastern europe), and afghanistan (middle east). the estimated rate of new tb cases (all forms) per , people per year in iran falls in the low-risk category, as found in north america and western europe. the incidence and prevalence of tb in the middle east vary from country to country, and iran actually falls into the relatively lower-risk group. the indication for treatment of latent tb in this case is borderline, or at least very debatable, but most physicians (including internists) may not be aware of this fact. the treatment of choice for latent tb is now standardized: a -month course of isoniazid (inh) mg once daily for adults, with or without pyridoxine (vitamin b ) to prevent peripheral neuritis.
this is believed to be about % effective in preventing future reactivation of tb, but it does not prevent re-infection (with a new strain), which is a risk mainly in highly endemic countries. the main worrisome adverse effect of inh is clinical hepatitis, which can be fatal or can lead to fulminant hepatitis requiring liver transplantation. there are two types of hepatic toxicity seen with inh: a common, transient elevation of the transaminases, seen in - % of patients, which occurs within - months and is benign and asymptomatic; and clinical (symptomatic) hepatitis, which is much less common and age-related, occurring in only about % of treated patients. clinical hepatitis with inh is rare under years of age, increases to about - . % above years, and in persons > years the risk increases to about . %. about % of inh hepatitis occurs in the first few months of treatment, and the remainder occurs later, up to months (if the patient is still on inh). the prognosis of overt inh hepatitis is usually very good if the drug is discontinued promptly at the first sign of clinical hepatitis. the overall mortality is about %, or . per , patients treated with inh. middle-aged black women seem to have the worst prognosis from this complication. in the majority of patients, there is clinical and biochemical resolution of signs and laboratory abnormalities within - months of stopping the drug. occasionally, patients can present with or develop a sub-acute, more protracted course that mimics chronic viral hepatitis and leads to cirrhosis. the pathogenesis of inh hepatotoxicity was initially considered to be an idiosyncratic reaction, but there is increasing evidence that it is a direct toxic effect of metabolite(s). there appears to be a higher risk and greater severity with higher doses, and a higher incidence in slow acetylators. animal experiments show that inh metabolism leads to acetyl hydrazine, which after oxidation forms toxic intermediates. these are thought to produce damaging effects by acetylating or alkylating macromolecules within liver cells, but the exact mechanism of liver cell injury is unknown. in slow acetylators, acetyl hydrazine accumulates and predisposes to hepatotoxicity. another metabolic pathway involves hydrolysis of inh to hydrazine and isonicotinic acid. hydrazine is known to be directly hepatotoxic, and hydrolysis of inh is increased by alcohol and rifampin. the mechanism of age-related hepatotoxicity is unclear, but could possibly be related to the slowing of acetylation with advancing age. most guidelines and recommendations on latent tb strongly discourage treatment with inh in patients with active liver disease. close clinical and biochemical monitoring for liver toxicity is mainly recommended for subjects at high risk for clinical hepatitis, such as older people ( years), those with a history of liver disease, chronic carriers of hepatitis b and c, alcohol abusers, concomitant users of other hepatotoxic drugs, and subjects who suffer from malnutrition or aids. current textbooks of medicine do not recommend routine biochemical monitoring for healthy adults being treated with inh. in these circumstances, baseline liver tests are performed, and patients should be counseled on the symptoms of clinical side effects and monitored clinically. some experts and the manufacturer recommend biochemical monitoring for persons > years old and pregnant women (and those within months post-partum), monthly for months, then afterwards at - month intervals.
inh should be discontinued promptly at the first sign of clinical hepatitis. symptoms of hepatitis may include fatigue, weakness or fever lasting more than days, malaise, unexplained anorexia, right upper quadrant pain or discomfort, and jaundice. if the alt is - times the upper limit of normal, the drug should be discontinued even if the patient is asymptomatic. restarting inh at a small dose in asymptomatic patients has been recommended by some experts. it is of interest that the american thoracic society, the british thoracic society, and the task force of the european respiratory society recommend regular biochemical monitoring of liver function on multidrug treatment for tb only in patients with chronic liver disease or increased serum transaminases prior to treatment; in the case of symptoms of hepatotoxicity, the liver function should be examined. this may be based on the fact that there is no good evidence that routine monitoring of liver function decreases the chance of fulminant hepatitis or fatality, and prompt discontinuation of medications at the first onset of symptoms usually results in full recovery in those with clinical hepatitis. the defendants' lawyer raised a critical question: is it absolutely certain that the fulminant hepatitis suffered by the patient was due to isoniazid? with any serious adverse event, making an assessment requires several steps and investigations to reach a valid conclusion. this involves a process of deduction and exclusion of other etiologies (such as hepatitis viruses) and other agents, and the use of bayes' theorem to assess overall probability (definite, probable, or possible), drawing on posterior and prior probability (based on known literature reports). other considerations include the temporal relationship with use of the medication, compatibility of the clinical features and laboratory data, histopathology data and previous reports, and reproduction of the event by re-challenge with the putative agent. although re-challenge is the most definitive method of proving cause and effect, it is the least used because of the potential risk of harm to the patient and the attendant ethical and moral issues. the temporal relationship, clinical features, laboratory data, and histology of the liver are all compatible with inh-induced hepatitis. however, the investigation excluded well-known causes of viral hepatitis. the patient was also receiving diclofenac, which was started weeks before the clinical diagnosis of hepatitis and - weeks before the onset of symptoms. thus, there is also a temporal relationship between diclofenac treatment and the onset of clinical hepatitis. nsaids in general are known, but rare, causes of drug-induced hepatitis. the incidence of diclofenac-induced clinical hepatitis is about - per , users, and the incubation period varies from to weeks (consistent with the present case). data from the diclofenac monograph (novartis pharmaceuticals) indicate that there is a higher incidence of moderate to severe ( - times the upper limit of normal) and marked (> times normal) elevation of transaminases when compared with other nsaids. in addition, rare cases of severe hepatic reactions, including liver necrosis, jaundice, and fulminant hepatitis (fatal or requiring liver transplant), have been reported with diclofenac. to date, there is no evidence of an enhanced risk of clinical hepatitis in patients receiving both inh and diclofenac or other nsaids. elderly women are more susceptible to nsaid-induced hepatitis.
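as a purely illustrative sketch of the bayesian step described above - with hypothetical incidence figures standing in for the case's actual data, and the simplifying assumptions that the candidate causes are exhaustive and equally compatible with the clinical picture - the posterior probability of each candidate drug is proportional to its background incidence of clinical hepatitis:

```python
# toy bayesian comparison of candidate causes; the rates below are
# hypothetical placeholders, not figures from the case.
def posterior_probabilities(prior_incidence):
    """normalise background incidences into posterior probabilities, assuming
    equal likelihoods given each cause and an exhaustive candidate set."""
    total = sum(prior_incidence.values())
    return {cause: rate / total for cause, rate in prior_incidence.items()}

# e.g. an age-related inh hepatitis rate of ~1% versus a diclofenac hepatitis
# rate of ~3 per 100,000 treated patients
print(posterior_probabilities({"isoniazid": 0.01, "diclofenac": 0.00003}))
# -> {'isoniazid': ~0.997, 'diclofenac': ~0.003}
```

on such hypothetical figures, inh would be the far more probable culprit, which is consistent with the conclusion reached below; a fuller assessment would also weight the temporal fit, laboratory data, and histology of each candidate.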
histopathology of the liver usually reveals zonal or spotty acute hepatocellular necrosis, but there can be granulomas, cholestasis, hepatic eosinophilia, and even chronic active hepatitis with overuse of nsaids. the prognosis after withdrawal of nsaids is usually very good. there is no evidence that concurrent treatment with inh and nsaids increases the risk or severity of hepatitis. treatment for latent tb in the case under discussion was not indicated, although the circumstances could be interpreted as representing a borderline indication for inh. however, the patient should have been offered the choice of no treatment versus therapy for latent tb. the risks versus the benefits should have been discussed, and the potential side effects explained to the patient. the patient should also have been counseled to discontinue the medication at the first symptoms suggestive of clinical hepatitis. monitoring for liver disturbance by biochemical tests is not routinely recommended for patients at low risk of clinical hepatitis, and the physician should not be held responsible for a failure to order these tests. clinical monitoring, however, is standard, and the physician can be held responsible for either a failure to recognize the manifestations of hepatitis or a failure to promptly withdraw all drugs once these signs appeared. it cannot be concluded that inh was irrefutably culpable for the fulminant hepatitis, but based on the relative risk and incidence, it was more likely the cause than diclofenac. in any case, both drugs should have been discontinued immediately at the first signs of clinical hepatitis. for years, a -year-old male had suffered from recurrent bouts of nasal congestion, nasal discharge, and post-nasal drip, with only partial, temporary relief from decongestants, antihistamines, and topical corticosteroids. his fp referred him to an internist and clinical allergist for further management. his past history was negative for any significant medical illness, but the patient had undergone previous surgery for nasal septal deviation and had stopped smoking years before. examination by the allergist revealed inflamed, edematous nasal mucosa with some purulent discharge, and a radiograph of the sinuses demonstrated mucosal thickening of both maxillary antra. based on these findings, the consultant made a diagnosis of chronic rhinosinusitis with an allergic and infectious component. he prescribed intranasal corticosteroids and a -week course of trimethoprim-sulfamethoxazole (tmp-smx). the patient reported that he had been treated by his fp months before with triple sulfonamide antibiotics (trisulphamine) for days without any side effects, and he had no known drug allergies before this visit. towards the end of the -week course of tmp-smx, the patient developed malaise, low-grade fever, and a body rash that started on the face and trunk. the rash rapidly progressed over the next h to involve his limbs, mouth, and eyes, with blistering of the skin. he was admitted to the emergency department of a hospital with a diagnosis of sulfonamide-induced toxic epidermal necrolysis (ten), and further care was provided in the burn unit. as a consequence of this adverse reaction, the patient developed bilateral corneal ulcerations requiring repeated corneal transplants. despite this, he remained blind in the left eye and had severe visual impairment on the right side.
medico-legal action was launched by the patient's lawyer claiming medical malpractice against the allergist for failing to warn the patient of the potential adverse effects of tmp-smx. moreover, the plaintiff claimed that antibiotics were never needed in the first place, and that if he had known of these potential side effects, he would not have agreed to be treated with tmp-smx. the defense retorted that the adverse reaction suffered by the patient was extremely rare, and that the patient had previously been treated with sulfonamides without any reaction. they claimed this reaction could not have been predicted, and that it is not standard medical practice for physicians to list all the rare side effects of licensed drugs on the market. the first relevant issue in this case is the following question: should any antibiotic have been prescribed? and if antibiotics were indicated, was the choice of tmp-smx appropriate? current consensus is that antibiotics are overused and prescribed unnecessarily for sinus disease. sinusitis is commonly due to respiratory viruses and allergic reaction (as in hay fever), and antibiotics are of no value in these situations. purulent nasal discharge can be seen in the above conditions, but is not diagnostic or indicative of bacterial sinusitis. radiographs of sinuses showing thickened mucosa or fluid in the chambers are non-specific and not diagnostic of bacterial sinusitis, as these changes can also be seen in viral infection and allergic sinusitis. the etiology of chronic sinusitis is complex, and there is a lack of consensus on the pathogenesis. multiple factors may predispose to chronic sinusitis; allergy appears to play a prominent role, with or without polyps, and other factors include structural abnormalities (outflow obstruction, retention cysts, etc.) and irritants such as smoking. chronic sinusitis is usually defined as symptoms of sinus inflammation lasting longer than weeks, with documented inflammation (by imaging techniques) at least weeks after appropriate therapy with no intervening acute infection. computerized tomography (ct) is the preferred imaging technique to identify any obstruction and polyps. although antibiotics are commonly used in chronic sinusitis, their benefits have not been established by randomized trials, and the role of bacterial superinfection has not been well defined. the best microbiological data from patients with chronic sinusitis have found that aerobic ( . %) and anaerobic ( . %) pathogens are common in these cases. the most common aerobes were streptococcus species and haemophilus influenzae (nontypable strains), and the most common anaerobes were prevotella species, anaerobic streptococci, and fusobacterium species. management of chronic sinusitis is challenging and involves combined medical and surgical therapy. in cases where there is good clinical and imaging evidence of chronic bacterial sinusitis, empiric antibiotics should be effective against streptococci, h. influenzae, and anaerobes. amoxicillin-clavulanate would be a suitable choice, and for β-lactam-allergic patients, a newer fluoroquinolone with anaerobic activity (moxifloxacin) would be an acceptable alternative. failure to respond usually indicates the need for surgery, which can be performed by endoscopy; in these cases, antibiotic treatment should be guided by sinus culture (obtained by puncture or under endoscopic guidance).
although antimicrobials are commonly used for extended periods ( - weeks) for acute superinfection or exacerbation, no studies have addressed the issue of duration of therapy. although the case under discussion may not meet the diagnostic criteria for chronic bacterial sinusitis, making this diagnosis and instituting antibiotic therapy (although a judgment error) should not be considered gross negligence, nor represent substandard care meriting malpractice litigation. the choice of antibiotic (even if the diagnosis of chronic sinusitis were correct), however, was not a suitable selection: for acute bacterial sinusitis, amoxicillin/ampicillin is considered the drug of choice, and tmp-smx is recommended only as an alternative agent for subjects allergic to penicillin.

what counseling should patients receive when an antibiotic, and specifically tmp-smx, is prescribed? most physicians do not take the time to inform their patients about the adverse effects of prescribed medications. on the other hand, most pharmacists do provide written information on new prescriptions. physicians cannot depend on this, though, nor rely on this service for defense in a court of law. in most situations, physicians may counsel patients on drugs with known high risk of toxicity or side effects; for frequently prescribed medications (such as most oral antibiotics), counseling is often neglected, or only the common adverse effects are mentioned. the incidence of uncomplicated skin reaction (allergic skin rash) to tmp-smx (mainly due to the sulfonamide component) in the general population is about - % of recipients. this consists mainly of toxic erythema and maculopapular eruption, and infrequently urticaria, erythema nodosum, and fixed drug eruption. severe skin reactions in tmp-smx recipients are rare and include stevens-johnson syndrome (sjs), toxic epidermal necrolysis (ten), exfoliative dermatitis, and necrotizing cutaneous vasculitis. previous estimates of severe skin reactions were in , recipients. patients with hiv infection (especially those with aids) have a much higher incidence of cutaneous reaction to tmp-smx. epidermal necrolysis (en) is a rare and life-threatening reaction, mainly drug-induced, which encompasses sjs and ten. these two conditions represent severity variants of an identical process and differ only in the percentage of body surface involved. the incidences of sjs and ten are estimated at . per million person-years and . - . cases per million person-years, respectively. although en can occur at any age, it increases in prevalence after the fourth decade and is more frequent in women. there is some evidence that the risk of en increases with hiv infection, collagen vascular disorders, and cancers. the clinical features of en are characterized by skin and mucous membrane involvement. initially, the skin reaction begins with macules (mainly localized to the trunk, face, and proximal limbs) and then progresses to involve the rest of the body, becoming confluent, with flaccid blisters leading to epidermal detachment. patients may become systemically ill with fever, dehydration, hypovolemia, secondary bacterial infection, esophageal and pulmonary involvement, and complications and death from sepsis. the pathogenesis of en is not completely understood, but studies indicate a cell-mediated cytotoxic reaction against keratinocytes leading to massive apoptosis. early in the process, there is a predominance of cd8 killer t lymphocytes in the epidermis and dermis of bullous lesions; monocytes appear later.
cytotoxic cd8 t cells expressing α/β t-cell receptors are able to kill cells through production of perforin and granzyme b. drugs are the most important causes of en and ten, and > different drugs have been implicated. cd8 oligoclonal expansion corresponds to a drug-specific, major histocompatibility complex (mhc)-restricted cytotoxicity against keratinocytes. the pro-inflammatory cytokines il- and tnf-α, as well as fas ligand, are also present in skin lesions. genetic susceptibility appears to be important: there is a strong association in han chinese between the hla-b leucocyte antigen and sjs induced by carbamazepine, and between the hla-b antigen and sjs induced by allopurinol. high-risk drugs (about ) from six different classes account for % of en reactions. these include allopurinol, sulfonamides, anticonvulsants (carbamazepine, phenobarbital, lamotrigine), nevirapine (a non-nucleoside analog), oxicam nsaids, and thiacetazone. the incubation period for en ranges from to days, but most cases occur within weeks of starting the medication. rare cases can appear within hours of use, or on the same day in patients with a prior reaction. early, non-specific symptoms (fever, headache, rhinitis, myalgias) may precede mucocutaneous lesions by - days, and some patients may also present with pain on swallowing or stinging of the eyes. about one third of patients begin with non-specific symptoms, another third with primary mucous membrane involvement, and the rest present with an exanthema. progression from a localized area to full body involvement can take anywhere from hours to days. the classification of en depends on the area of detachable epidermis, identified by a positive nikolsky sign (dislodgement of the epidermis by lateral pressure) and flaccid blisters. the diagnosis of sjs is made when there is less than % body surface area (bsa) involvement; sjs/ten overlap with - % bsa; and ten with > % bsa involvement. in severe cases of en, the mucous membranes (buccal, ocular, genital) are involved in about %, and % have conjunctival affliction consisting mainly of hyperemia, erosions, chemosis, photophobia, and excessive lacrimation. severe forms of eye involvement can result in shedding of eyelashes, corneal ulceration (as in case ), anterior uveitis, and purulent conjunctivitis. extra-cutaneous complications, mainly seen in severe ten, may include pulmonary disease ( %) with hypoxia, hemoptysis, bronchial mucosal casts, interstitial changes, and acute respiratory distress syndrome (ards), which carries a poor prognosis. gastrointestinal tract involvement is less common, but can include esophageal necrosis, small bowel disease with malabsorption, and colonic disease (diffuse diarrhea and bleeding). renal involvement consists mainly of proteinuria and hematuria, but proximal renal tubular damage can sometimes cause renal failure. late ophthalmic complications occur in about - % and consist of abnormal lacrimation with dry eyes, trichiasis (ingrowing eyelashes), entropion (inversion of the eyelid), and visual impairment or blindness from scarring of the cornea. prognosis of en varies with the severity of illness and prompt withdrawal of the offending agent. the overall mortality of en is - %; it is lower for sjs ( - %) and higher for ten (> %). a prognostic scoring system for ten (scorten) has recently been found useful, although the score's predictive performance is best on day of hospitalization.
the prognostic factors, each given one point, include the following: age > years, heart rate > /min, cancer or hematologic malignancy, bsa involved > %, serum bicarbonate < mm/l, and serum glucose > mm/l. the mortality rate in ten increases with the accumulation of points: a score of - points carries a mortality rate of . %; points, . %; points, . %; points, . %; and > points, a nearly uniform mortality of % (a worked sketch of this tally appears at the end of this passage). management of en or ten consists of prompt removal of the offending agent and symptomatic therapy. patients with a scorten of - can be managed on regular medical wards, whereas those with > points should be transferred to a burn center or intensive care unit (icu). it is most important to maintain hemodynamic support with adequate fluids and electrolyte balance. central venous lines should be avoided because the risk of superinfection is high; peripheral intravenous access should be used instead, particularly as the rash and blistering are greatest proximally. nutritional support should be maintained orally or by nasogastric tube, prophylactic heparin is warranted, and an air-fluidized mattress is preferable. unlike in severe burns, extensive and aggressive debridement of necrotic epidermis is not recommended. there is no indication for prophylactic antibiotics, but patients should be monitored diligently for infection and treated promptly when it is present. there is no standard protocol for skin dressing, and antiseptic use depends on the individual center's experience. eye care should consist of a daily examination, artificial tears, and antiseptic and vitamin a drops every h. regular mouth rinses with antiseptic solution several times a day are recommended. there is no proven specific therapy for any form of en. steroids were initially considered for sjs, but their value is unproven and controversial, and they are not routinely recommended. intravenous immunoglobulin (ivig) is also very controversial: although initial retrospective studies suggested benefit, recent prospective, non-randomized studies have not confirmed any definite value, and some studies showed increased renal failure and mortality with ivig. in one of the largest studies from a single center, ivig was assessed in a prospective, non-comparative study of patients with en, including subjects with ten. there was no evidence of improvement in mortality, progression of detachment, or reepidermalization, and most deaths occurred in elderly patients with initially impaired renal function. thus, ivig is not recommended for en unless given as part of a randomized clinical trial. the death rate with ivig in that study was %, higher than the historical death rate ( %) in controls with ten not treated with ivig at the same center; ivig may thus actually be harmful in patients with en.

one of the issues raised by the plaintiff was that he was not counseled on the potential severe side effects of tmp-smx, and that if he had been aware of the risk, he would not have agreed to take it. is it the responsibility of physicians to explain all potential, albeit rare, adverse effects of any treatment? the courts may take into consideration the standard practice of the physician's peers, or what is considered accepted practice. most physicians (if they counsel patients on medications at all) would mention the most common side effects, but would not usually mention rare adverse effects.
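the sketch below works through the scorten tally described above. note that the commonly published version of the score uses seven factors (including serum urea); the cut-offs and the mortality mapping used here follow commonly cited published values and should be read as assumptions for illustration, not as figures taken from this text.

```python
# Minimal SCORTEN sketch. Thresholds and mortality mapping follow commonly
# cited published values and are assumptions here, not figures from this text.

def scorten(age, heart_rate, has_malignancy, bsa_detached_pct,
            urea_mmol_l, bicarbonate_mmol_l, glucose_mmol_l):
    """Return the SCORTEN point total (one point per risk factor)."""
    return sum([
        age > 40,
        heart_rate > 120,
        has_malignancy,
        bsa_detached_pct > 10,
        urea_mmol_l > 10,
        bicarbonate_mmol_l < 20,
        glucose_mmol_l > 14,
    ])

# Approximate published mortality by point total (assumed values).
MORTALITY = {0: 0.032, 1: 0.032, 2: 0.121, 3: 0.353, 4: 0.583}

score = scorten(age=55, heart_rate=130, has_malignancy=False,
                bsa_detached_pct=35, urea_mmol_l=12,
                bicarbonate_mmol_l=18, glucose_mmol_l=9)
risk = MORTALITY.get(score, 0.90)  # >= 5 points: roughly 90% in published series
print(f"SCORTEN = {score}, predicted mortality ~ {risk:.0%}")
```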
for instance, it would be justifiable to mention that a drug rash could be seen with tmp-smx if the patient happens to be allergic to the drug (which should be discontinued as soon as this occurs). as physicians, we would not usually mention that there is a rare risk of shedding of the skin, blindness, or death. similarly, when prescribing penicillin to patients not known to be allergic to the drug, we generally do not counsel that there is a : , to : , risk of dying from anaphylaxis (which is treatable). yet, if we were to order or prescribe chloramphenicol, it would be expected that we counsel the patient that there is a : , to : , risk of aplastic anemia, which is not treatable except by bone marrow transplantation. hence, it may be asked: what is the best method of informing patients about medication toxicity? is it acceptable to leave this to pharmacists, with the literature they provide on these drugs as the sole form of counseling? no: it is the prescriber's responsibility to obtain informed consent before ordering medications. it may be the best policy for prescribers to list the most common side effects, then occasional severe adverse reactions, and to mention the possibility of other rare, unforeseen adverse reactions (without specifying these latter reactions unless requested by the patient). the details of the counseling may vary with several factors, such as the relative safety profile (therapeutic to toxic ratio), enhanced risk factors for side effects (which may depend on underlying comorbidities or genetic predisposition), and the expected duration of treatment, as the longer an individual is exposed to a drug, the greater the potential for some side effects. the cmpa has provided some guidelines for risk management in prescribing opioids that are useful for all medication orders and may curtail medico-legal cases from drug adverse events. these medico-legal considerations are:

1. is there an appropriate indication for this drug?
2. is the starting dose and need for continuation appropriate?
3. have you considered the need for monitoring that would be reasonable for your patient?
4. have you considered the potential effect of any concomitant medication that might influence the dosing, monitoring, and side effects?
5. have you considered other factors such as comorbidity that might influence the dosing and monitoring?
6. are you prepared to diagnose and manage any adverse event?
7. have you counseled the patient on potential side effects, how to recognize early signs, and necessary actions?
8. when discharging patients, have you provided reasonable information about the risks of adverse reactions, precautions to be observed, and the person to notify?

patients who suffer from adverse effects may be willing to forgive a physician's failure to provide informed consent when the therapy was indicated. however, in situations where the treatment was not indicated, or was of questionable value, any adverse event would likely be unacceptable to the plaintiff or the courts.

a -year-old male with steroid-dependent crohn's colitis (diagnosed years before) called his fp for advice regarding chickenpox, as his young son had recently been diagnosed with it at a daycare center. the patient was experiencing retrosternal and epigastric pain on swallowing. the fp prescribed omeprazole mg once daily and ibuprofen over the phone, without seeing the patient. later in the night of the same day, the man presented to the emergency department of a local hospital.
the er physician noted that the patient was chronically on methylprednisolone mg once daily for crohn's disease, and that he had developed local pustules consistent with early varicella within the past days. however, the main concern of the patient was severe retrosternal, mid-chest pain on swallowing, radiating through to his back, for h. the recorded vital signs showed a temperature of . c, blood pressure of / mmhg, heart rate of /min, and respiratory rate of /min. the examination revealed scattered vesicles/pustules on the patient's face, soft palate, and pharynx. treatment on discharge consisted of liquid bupivacaine swish-and-swallow (topical anesthetic), oxycodone-acetaminophen, and metoclopramide. an electrocardiogram was normal, and the discharge diagnosis listed possible esophageal involvement with varicella. within h, the subject returned to the same er with worsening symptoms and was seen by the same physician. the symptoms consisted of swelling of his face, fever, sweats, productive cough of blood-streaked sputum, and persistent chest pain. examination revealed a very ill-looking male with a temperature of . c, heart rate of /min, blood pressure of / mmhg, and respiratory rate of /min. his face was swollen and edematous with closure of the right eye, extensive vesicles and pustules on the face, edema of the soft palate with inflammation of the gingivae, and numerous skin lesions over the trunk and proximal limbs. oxygen saturation on room air was %, and the chest radiograph was reported as normal. investigations revealed anemia, thrombocytopenia, liver disturbance, and evidence of disseminated intravascular coagulopathy. intravenous acyclovir was started and the patient was transferred to the icu of a tertiary care center, where he died within h of the second presentation. autopsy revealed disseminated varicella with involvement of the brain, lung, heart, liver, esophagus, and stomach. the wife and family of the deceased launched medical malpractice litigation against the fp, the er attending physician, and the local hospital. the charges against the fp were as follows: (1) his care fell below the standard reasonably expected of a general practitioner; (2) he should have advised or warned the patient and provided early treatment, especially since he knew that the patient's son had chickenpox; (3) he knew, or ought to have known, that the deceased was immunosuppressed from chronic steroids and therefore at increased risk; (4) he failed to provide medical assistance and prescribe the correct drug (acyclovir) on presentation; (5) he failed to make the patient aware of the potential complications of his long-term steroid use; and (6) he failed to refer the deceased to an appropriate specialist. the accusations against the er attending physician were similar: (1) his negligence was the direct cause of the deceased's death; (2) his medical care fell below the standard reasonably expected of an er physician; (3) he ought to have known that the patient was immunosuppressed from steroids, and therefore at high risk for complications from chickenpox; (4) he failed to provide proper medical assistance and treatment; (5) he failed to admit the patient on initial presentation and institute intravenous acyclovir; and (6) he failed to consult an appropriate specialist (internist or infectious disease specialist).
damages were sought by the plaintiffs for pain and suffering, deprivation of a husband and father, and loss of the economic benefit afforded to the family from the potential employment earnings of the deceased over the next years (assuming retirement at age ). counsel for the defendants requested expert opinion on two key issues: (1) was the steroid dose the deceased received sufficient to cause immunosuppression? and (2) if appropriate therapy with acyclovir had been started at the initial presentation with chickenpox, would the outcome have been any different?

chickenpox (varicella) has dramatically declined in all age groups, but most markedly in children, since the introduction of the varicella vaccine in north america and developed countries. since the introduction of the vaccine, the decline in varicella-related hospitalization in the us has been greatest among - year-old children, but rates have also declined in older youths and adults. in temperate regions, % of cases of varicella occur in children < years of age, % occur in individuals > years old, and adults (> years) account for only %. the risk of hospitalization and death is greater in young infants and adults than in children, and most varicella-related deaths occur in previously healthy people. although varicella is much less common in adults than in children, % of the deaths from complications occur in adults. in tropical and subtropical countries, the mean age of patients with varicella is higher than in temperate regions, and up to % of immigrants from these areas are susceptible to varicella. healthy children rarely suffer complications of varicella, the most common being secondary bacterial infection (streptococcus and staphylococcus) of the skin and soft tissue. immunocompromised children are predisposed to more severe and progressive disease (up to one third), with multiple organ involvement; the lungs, liver, and central nervous system are the most frequently affected. mortality in these children ranges from % to %, and those with lympho-proliferative malignancies on chemotherapy have the greatest risk. bone marrow transplant recipients also have a high risk of varicella zoster virus (vzv) infection, with a probability of vzv infection of % by year after transplant. in one series of cases of vzv infection, patients presented with chickenpox and others with herpes zoster; the overall vzv infection mortality was . % ( of ), all with disseminated infection in the first months, but the mortality in those with herpes zoster was only . % versus . % in those with varicella. high-dose corticosteroids are also associated with significant complications of varicella and herpes zoster. immunosuppression is most commonly seen with a high daily dose of mg/kg of prednisone, or with moderate doses given for prolonged periods. in a meta-analysis of controlled trials, rates of infectious complications were not increased in patients given less than mg of prednisone daily, or a cumulative dose of less than mg. many experts consider a prolonged daily dose of mg prednisone or equivalent to be immunosuppressive, and the us food and drug administration (fda) states that low doses of prednisone (or similar agents) for prolonged periods may also increase the risk of infection. corticosteroids can suppress several stages of the immune response that lead to inflammation, but the main immunosuppressive effect is on cellular immunity.
thus, steroids can increase the risk and severity of infection with a variety of agents (viruses, bacteria, fungi, and parasites). most notable are agents that require intact cellular immunity for control and eradication, such as herpes viruses, mycobacteria, listeria, nocardia, pneumocystis, candida, cryptococci, toxoplasma, and strongyloides; infections with these agents are increased in patients on prolonged corticosteroids. the effect of corticosteroids on the inflammatory and immune responses is pleomorphic. an earlier study in guinea pigs demonstrated that similar levels of lymphocytopenia were induced by acute and chronic corticosteroid administration, but only chronic treatment was associated with depression of certain cell-mediated lymphocyte functions: chronic cortisone treatment resulted in a marked decrease in both antigen-induced migration inhibitory factor (mif) and proliferation, although mitogen responses remained normal. over the last few decades, corticosteroids have been found to inhibit the function of various cell types: (1) macrophages/monocytes: inhibition of cyclooxygenase-2 and phospholipase a2 (interrupting the prostaglandin and leukotriene pathways), and suppression of cytokine production and release of interleukin (il)- , il- , and tumor necrosis factor (tnf)-α; (2) endothelial cells: impairment of endothelial leucocyte adhesion molecule-i (elam-i) and intracellular adhesion molecule-i (icam-i), which are critical for leucocyte localization; (3) basophils: blockade of ige-dependent release of histamine and leukotriene c4; (4) fibroblasts: inhibition of the arachidonic acid pathway (as with monocytes) and suppression of growth factor-induced dna synthesis and fibroblast proliferation; and (5) lymphocytes: inhibition of the production or expression of il- , il- , il- , il- , tnf-α, gm-csf, and interferon-γ. the association of steroid therapy with increased risk, severity, and complications of vzv infection has been well established for decades: patients receiving high-dose corticosteroids are at risk for disseminated disease and fatality, whereas patients on low-dose schedules are not at increased risk. esophagitis and gastrointestinal involvement by vzv are distinctly rare, and have been described both in immunocompromised hosts and in apparently healthy subjects as complications of chickenpox or herpes zoster. autopsy studies of disseminated varicella in children with acute lymphoblastic leukemia or lymphoma on chemotherapy have demonstrated involvement of the esophagus, small bowel, colon, liver, spleen, and pancreas. fulminant and fatal cases of varicella hepatitis have been described predominantly in immunosuppressed children and adults, but also in healthy people. a rare case of adult varicella with small bowel involvement, presenting with abdominal pain and gastrointestinal bleeding, has been reported in a patient on chronic steroids (for asthma); in that report, however, the patient appears to have been on a moderately high dose of methylprednisolone ( mg daily). in an immunocompetent young adult on inhaled steroids for asthma, varicella has been reported to cause diffuse abdominal pain and tenderness with hepatic, esophageal, and pulmonary involvement, with recovery after acyclovir therapy. bullous and necrotic ulcerative lesions of the esophagus and stomach have long been described in the pathology literature of fatal varicella. stomach and small bowel changes detected by radiological imaging have also been reported in a case of chickenpox.
occasionally, healthy adults with varicella may have mild symptoms of esophagitis that respond to histamine h2-blockers, suggesting temporary esophageal reflux. shingles esophagitis has also been seen on endoscopy in patients without widespread dissemination of herpes zoster, with a benign course. the deceased patient (case ) was receiving mg daily of methylprednisolone prior to his presentation with chickenpox. this dose is equivalent to mg prednisone and would not normally be considered immunosuppressive. however, the course of the disease, with widespread dissemination and fatality, resembles that of an immunocompromised host. how can we explain this reaction? the possibilities include: (1) an inaccurate history of the steroid dose provided by the patient; (2) the rare occurrence of dissemination and fatality in healthy adults; (3) an unrecognized immunocompromised state, such as hiv infection or rare genetic mutations or polymorphisms in genes involved in cellular immunity; and (4) a higher free active concentration of the drug than would be expected. methylprednisolone (medrol) is % bound to protein, mainly albumin, and a decrease in serum albumin of - % could increase the active unbound drug by almost the same proportion (a rough arithmetic sketch of this argument appears at the end of this passage). on admission to hospital, the patient's serum albumin was g/l (lower limit of normal g/l), % of the normal lower limit. although serum albumin can decrease in acute illness from varicella, the half-life of circulating albumin is days; thus, even after days of chickenpox, it should not have decreased more than % below normal, even if his liver had stopped producing any protein (which is not likely). hence, the patient probably had a chronically low serum albumin from his chronic colitis, and his free concentration of corticosteroid should have been greater than % of his expected active drug, which is equivalent to mg prednisone/day. can this information absolve the defendants of responsibility for the patient's adverse outcome? it could be argued by the defendants that it is not common knowledge or usual practice to consider the protein-binding effects of drugs on their toxicity, and that the fp and er physicians would not be expected to be cognizant of these facts. the defendants maintain that their management did not fall below the expected standard of care, and that most reasonable physicians would not have considered the patient immunocompromised on such a low dose of prednisolone; the outcome was unpredictable, and only in hindsight was it evident that the deceased was likely immunocompromised and susceptible to a higher risk of an adverse outcome. experts for the plaintiffs' side argued that the involved physicians should have been aware that adults (even normal hosts) are at greater risk of severe disease and complications from chickenpox than children are. therefore, the fp and er physician were remiss in not prescribing an antiviral drug (acyclovir). the er physician should have admitted the deceased at the first presentation and started intravenous acyclovir, as he suspected visceral dissemination (esophagitis) with varicella, irrespective of the immune state of the patient. previous randomized controlled trials (rcts) of oral acyclovir therapy for uncomplicated varicella in healthy adults have reported mild clinical benefit (decreased symptoms, fever, and time to cutaneous healing), but only in those initiating treatment within h of the rash; late treatment ( - h) had no benefit.
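returning to the protein-binding argument above, here is a rough arithmetic sketch of how a fall in albumin raises the free (active) fraction of a highly bound drug. the binding fraction and albumin values are hypothetical stand-ins, and the linear scaling of bound capacity with albumin is a first-order assumption.

```python
# Rough arithmetic sketch of the protein-binding argument. The binding
# fraction and albumin values are hypothetical placeholders, not case data.
# First-order assumption: albumin-bound capacity scales with albumin level.

protein_bound = 0.77          # fraction of methylprednisolone bound (assumed)
albumin_normal = 35.0         # g/L, lower limit of normal (assumed)
albumin_patient = 21.0        # g/L, hypothetical value 40% below normal

albumin_drop = 1 - albumin_patient / albumin_normal   # 0.40
displaced = protein_bound * albumin_drop              # drug newly unbound
free_normal = 1 - protein_bound                       # baseline free fraction
free_patient = free_normal + displaced                # free fraction when low

print(f"albumin down {albumin_drop:.0%}; "
      f"extra free drug = {displaced:.0%} of the dose "
      f"(free fraction {free_normal:.0%} -> {free_patient:.0%})")
# The newly unbound drug is roughly the same proportion of the dose as the
# fall in albumin, which is the point made in the text: a nominally low dose
# can behave like a substantially higher effective (free) dose.
```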
the low frequency of serious complications (pneumonia, encephalitis, or death) precluded any evaluation of acyclovir on these outcomes. in immunocompromised patients with vzv infection, later initiation of therapy ( h after onset of rash) may still be of value. although there is no rct proving the benefit of intravenous acyclovir in normal adults with varicella complicated by visceral involvement, observational and cohort studies suggest benefit. thus, intravenous acyclovir continues to be the standard therapy for healthy adults and immunocompromised hosts with clinically significant visceral disease (pneumonia, encephalitis) or dissemination. chronic corticosteroid therapy can have numerous side effects and complications, and it is important for physicians to counsel their patients on these potential adverse events and to provide a risk-benefit assessment. many organs and systems in the body can be adversely affected by chronic steroid therapy (endocrine, bone, eyes, muscle, brain, immune system, skin, etc.). it is important to counsel on the potential increased risk of infectious diseases, and certain precautions should be taken before embarking on chronic therapy. these include a mantoux skin test, with treatment for latent tuberculosis in those with positive reactions who are about to receive prednisone mg/day for days, and a baseline chest radiograph for active or inactive disease beforehand. it is also recommended that steroid-dependent children undergo vzv antibody testing and, if this is negative, be offered varicella vaccination. it seems prudent to apply these guidelines to adults on chronic steroid therapy as well. for patients with previous chickenpox or adequate antibodies, varicella zoster vaccine may be considered to reduce the risk and severity of shingles. this live attenuated vaccine has been found effective and is recommended for persons years of age to reduce the burden of illness and the incidence of postherpetic neuralgia. presently, this vaccine is not indicated in immunocompromised adults, so it should be administered before starting prolonged steroids. the product monograph of zostavax™ (merck) states that the varicella zoster vaccine is contraindicated in patients receiving high-dose corticosteroids, but not in individuals on inhaled or low-dose steroids. the varicella vaccine has been found safe in children with moderate immune deficiency, but it is contraindicated in those with substantial suppression of cellular immunity (as with high-dose steroids). what should have been the appropriate course of action in this case? once the fp was notified that the patient's child had chickenpox, he should have counseled the father and determined his past history of, or antibody level against, vzv. for patients considered non-immune and severely immunosuppressed (moderate- to high-dose corticosteroids, mg/day), vzv immune globulin should be offered and treatment with acyclovir instituted at the first sign of varicella. since the deceased was considered to be receiving a low dose of steroid, it would have been more appropriate to offer treatment with acyclovir at the first sign of a typical rash, or to provide a prescription to be filled within h of the onset of varicella.
references:
ahfs drug information
changes in the transmission of tuberculosis in new york city from to
advanced survey of tuberculosis transmission in a complex socioepidemiologic scenario with a high proportion of cases in immigrants
trends in tuberculosis incidence - united states
the global plan to stop tb - . world health organization
global tuberculosis control: epidemiology, strategy, financing. world health organization report
drug-induced liver disease and environmental toxins
hepatic toxicity of antitubercular agents. role of different drugs, cases
risk factors for isoniazid (inh)-induced liver dysfunction
harrison's principles of internal medicine
isoniazid antituberculosis drug-induced hepatotoxicity: concise up-to-date review
emergency issues in head and neck infections. in: emerging issues and controversies in infectious disease
infectious rhinosinusitis in adults: classification, etiology and management
bacteriologic findings associated with chronic bacterial maxillary sinusitis in adults
adverse reactions to trimethoprim-sulfamethoxazole
epidermal necrolysis (stevens-johnson syndrome and toxic epidermal necrolysis)
medication use and the risk of stevens-johnson syndrome or toxic epidermal necrolysis
scorten: a severity-of-illness score for toxic epidermal necrolysis
performance of the scorten during the first five days of hospitalization to predict the prognosis of epidermal necrolysis
toxic epidermal necrolysis: does immunoglobulin make a difference?
intravenous immunoglobulin treatment for stevens-johnson syndrome and toxic epidermal necrolysis: prospective non-comparative study showing no benefit in mortality or progression
adverse events - physician-prescribed opioids: risk identification for all physicians
fitzpatrick's dermatology in general medicine
the epidemiology of varicella and its complications
varicella complications and cost
varicella in children with cancer: seventy-seven cases
infection with varicella zoster virus after marrow transplantation
varicella and herpes zoster: changing concepts of the natural history, control and importance of a not-so-benign virus (first of two parts)
glucocorticoids and infection
hormones and synthetic substitutes: adrenals
immunosuppressive effects of glucocorticosteroids: differential effects of acute vs chronic administration on cell-mediated immunity
adrenocorticotropic hormone, adrenocortical steroids and their synthetic analogs: inhibitors of synthesis and actions of adrenocortical hormones
varicella and herpes zoster: changes in concepts of natural history, control and importance of a not-so-benign virus (second of two parts)
the human herpesviruses: an interdisciplinary perspective
disseminated varicella at autopsy in children with cancer
varicella hepatitis in the immunocompromised adult: a case report and review of the literature
fatal varicella in an adult: case report and review of gastrointestinal complications of chickenpox
digestive manifestations in an immunocompetent adult with varicella
visceral lesions associated with varicella
cimetidine in "chickenpox esophagitis"
shingles esophagitis: endoscopic diagnosis in two patients
treatment of adult varicella with acyclovir: a randomized placebo-controlled trial
acyclovir halts progression of herpes zoster in immunocompromised patients
treatment of varicella-zoster virus infections in severely immunocompromised patients
early treatment with acyclovir for varicella pneumonia in otherwise healthy adults: retrospective controlled study and review
in the clinic: tuberculosis
a vaccine to prevent herpes zoster and postherpetic neuralgia in older patients (the shingles prevention study group)
varicella vaccine in children with acute lymphoblastic leukemia and non-hodgkin lymphoma
general recommendations on immunization. recommendations of the advisory committee on immunization practices (acip)

key: cord- -alxtoaq authors: smerecnik, chris m. r.; mesters, ilse; verweij, eline; de vries, nanne k.; de vries, hein title: a systematic review of the impact of genetic counseling on risk perception accuracy date: - - journal: j genet couns doi: . /s - - -z sha: doc_id: cord_uid: alxtoaq this review presents an overview of the impact of genetic counseling on risk perception accuracy in papers published between january and february. the results suggest that genetic counseling may have a positive impact on risk perception accuracy, though some studies observed no impact at all, or an impact only for low-risk participants. several implications for future research can be deduced. first, future researchers should link risk perception changes to objective risk estimates, define risk perception accuracy as the correct counseled risk estimate, and report both the proportion of individuals who correctly estimate their risk and the average overestimation of the risk. second, as the descriptions of the counseling sessions were generally poor, future research should include more detailed descriptions of these sessions and link their content to risk perception outcomes to allow interpretation of the results. finally, the effect of genetic counseling should be examined for a wider variety of hereditary conditions. genetic counselors should provide the necessary context in which counselees can understand risk information, use both verbal and numerical risk estimates to communicate personal risk information, and use visual aids when communicating numerical risk information. recent advances in genetic research have enabled us to identify individuals at risk for a wide variety of medical conditions due to their genetic makeup (collins et al.). at the same time, these advances have created the need to educate and guide these individuals (lerman et al.). informing them of their hereditary risk and of the options for how to deal with this risk is the primary aim of genetic services (wang et al.). genetic services involve both genetic counseling and genetic testing; of these, genetic counseling in particular aims to enable at-risk individuals to accurately identify, understand, and adaptively cope with their genetic risk (biesecker; pilnick & dingwall). the national society of genetic counselors' (nsgc) task force defines genetic counseling as "the process of helping people understand and adapt to the medical, psychological, and familial implications of genetic contributions to disease" (resta et al.). as such, genetic counselors are faced with three important tasks: (1) to interpret family and medical histories to enable risk assessment, (2) to educate counselees about issues related to heredity, preventive options (e.g., genetic testing), and personal risk, and (3) to facilitate informed decisions and adaptation to personal risk (cf.
trepanier et al.). the latter task may be considered the "core" (i.e., the desired outcome) of genetic counseling, with the former tasks in service of its fulfillment. informed decision making and adaptation to personal risk, however, are abstract concepts that cannot easily be assessed, and several measures have therefore been developed to assess the efficacy of genetic counseling. kasparian, wakefield and meiser summarized the available measurement scales, which include satisfaction, knowledge, psychological adjustment, and risk perception measures. although each of these measures significantly contributes to our understanding of the effect of genetic counseling, risk perception measures (and especially risk perception accuracy) may be regarded as one central concept. indeed, several influential models of health behavior, such as the health belief model (janz & becker), protection motivation theory (rogers), and the extended parallel process model (witte), posit that adequate risk perception acts as a motivator to take (preventive) action and, as such, is a prerequisite of preventive behavior. moreover, risk perception and risk perception accuracy have been shown to be related to several other important outcomes of genetic counseling, such as coping (nordin et al.), worry (hopwood et al.), and anxiety. the effect of genetic counseling on risk perception has been heavily examined during the past two decades, from early research into reproductive genetic counseling (e.g., humphreys & berkeley) to recent studies into genetic predispositions to cancer (e.g., bjorvatn et al.). while these studies are valuable in their own right, few have investigated the effect of genetic counseling on risk perception accuracy; indeed, to facilitate informed decision making and adaptation to personal risk, counselees must have accurate risk perceptions. in their meta-analysis, meiser and halliday identified only six studies that assessed the effects of genetic counseling on risk perception accuracy. their meta-analysis showed that individuals at risk for breast cancer perceive their own risk significantly more accurately after genetic counseling; in particular, they observed an average increase of . % in the proportion of participants who accurately estimated their personal risk after counseling. a systematic review by butow and colleagues a year later confirmed the positive impact of genetic counseling on breast cancer risk perception accuracy, although - % of counselees continued to overestimate their risk even after counseling. research thus suggests that genetic counseling may indeed improve risk perception accuracy in some individuals. however, meiser and halliday and butow et al. only included studies examining breast cancer risk, and to date there is no systematic review or meta-analysis examining the effect of genetic counseling on perception of genetic risks in general. thus, the purpose of the present review is twofold: (1) to provide an updated overview of the impact of genetic counseling on risk perception accuracy in papers published between january and february, and (2) to extend the results of meiser and halliday's meta-analysis and butow et al.'s systematic review to other genetic conditions. we searched the pubmed, embase, web of science, eric and psycinfo databases.
we also used the search engine google scholar to find papers and grey literature (literature not published in a journal, e.g., in press or under review, but nevertheless available on the internet) on risk perception accuracy and genetic counseling. to this end, we used the search term "(risk perception or perceived risk or perceived susceptibility or susceptibility estimate or risk estimate) and (genetic counsel* or genetic risk or familial risk or genetic predisposition)" (a sketch of running this query programmatically appears after the criteria list below). where available in the databases, we used the standardized, subject-related indexing terms for the concepts in the search term. we also manually searched several journals, including the journal of genetic counseling. the selection procedure was performed independently by two reviewers, and the review process consisted of three phases. during the first phase, papers were reviewed based on title only. in the second phase, the reviewers examined the abstracts of papers that could not be definitively included or excluded based on their title; papers thought to be relevant to the review based on their abstracts were included, and those judged irrelevant were excluded. in the third phase, the reviewers examined the papers included during the previous two phases for content. as recommended by the cochrane guidelines (higgins & green), we erred on the safe side during the whole selection process: if in doubt, we included the paper for more extensive review in the subsequent phase. the following inclusion and exclusion criteria were used to determine whether papers were eligible for the review:

1. studies should be published after (i.e., the upper limit of the meiser and halliday meta-analysis, since one goal of this review was to provide an update of that analysis); studies published before were excluded (n= ; e.g., evans et al.).
2. studies should focus on genetic risk perception; studies which did not (n= ; e.g., clementi et al.) or which discussed only the effect of genetic mutations, prevalence, incidence, morbidity, or mortality were excluded (n= ).
3. studies should examine the effect of genetic counseling on risk perception accuracy; that is, they should explicitly link perceived risk to objective risk estimates to examine whether the two more closely align after (rather than before) counseling. studies were excluded if they examined changes in risk perception without linking them to some objective risk estimate (n= ; e.g., burke et al.), if they investigated risk perception as a determinant of genetic counseling participation (n= ; e.g., collins et al.), or if they focused on the effectiveness of decision aids as compared to standard genetic counseling (n= ; e.g., warner et al.).
4. to accurately assess whether genetic counseling affected risk perception accuracy, studies should employ either a prospective or a randomized controlled trial design. studies using other designs were excluded (n= ; e.g., cull et al.).
5. risk perception accuracy should be assessed as a quantitative outcome measure; studies were excluded if they assessed risk perception as a qualitative outcome measure (n= ).
6. studies should focus on at-risk individuals; those focusing on intermediaries (e.g., genetic counselors, nurses) were excluded (n= ).
7. studies should describe original research published in a peer-reviewed journal in english. studies describing secondary data or reviewing other studies, as well as editorials, commentaries, book reviews, bibliographies, resources, and policy documents, were excluded (n= ; e.g., palmero et al.) as they provided too little detail.
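the sketch below shows how the boolean search string given above could be run programmatically against pubmed via the ncbi e-utilities esearch endpoint. this is an illustrative reconstruction of the search idea, not the authors' actual tooling; the endpoint and parameters are standard e-utilities usage.

```python
# Illustrative sketch: running the review's boolean search string against
# PubMed via the NCBI E-utilities esearch endpoint. This reconstructs the
# idea of the search, not the authors' actual procedure.
import requests

SEARCH_TERM = (
    "(risk perception OR perceived risk OR perceived susceptibility "
    "OR susceptibility estimate OR risk estimate) AND "
    "(genetic counsel* OR genetic risk OR familial risk "
    "OR genetic predisposition)"
)

resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": SEARCH_TERM,
            "retmode": "json", "retmax": 100},
    timeout=30,
)
result = resp.json()["esearchresult"]
print(f"{result['count']} hits; first PMIDs: {result['idlist'][:5]}")
```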
risk perception outcomes were abstracted by two authors independently, using standardized extraction forms. in the event of disagreement, the authors discussed the particular paper until they reached consensus. we abstracted the characteristics of the study, the participants, and the genetic counseling session, as well as the results and quality of the study (cf. higgins & green). figure presents the flowchart of the study selection process. from the initial sample of eligible papers from the database searches and the unique papers from the google scholar, journal, reference list and key author searches, a total of papers were eligible for extensive review; of these, were included in the review. table lists the included papers and information about the study design, genetic counseling session content, criteria for risk perception accuracy, measurement time points, and, finally, the risk perception outcomes. given the heterogeneity of the studies, we decided against pooling them in a meta-analysis. concerning the content and quality of the genetic counseling sessions, four studies mentioned using a genetic counseling protocol (bjorvatn et al.; bowen et al.; kaiser et al.; van dijk et al.), two mentioned using a standardized counseling script (codori et al.; tercyak et al.), and an additional three used audiotapes as a content check of the counseling session (hopwood et al.; kelly et al.), while the remaining twelve did not mention the use of any protocol, standardized script, or audio- or videotapes as a content check. in-depth analyses of the content (see table) revealed that a majority of the studies described counseling sessions with similar content; however, four studies did not provide a description of the counseling session at all (hopwood et al.; huiart et al.; lidén et al.; nordin et al.). comparing the descriptions of the counseling sessions of the remaining fifteen studies to the recommendations of the nsgc task force, we observed that only six of these mentioned the first task, "interpretation of family and medical histories to enable risk assessment" (bjorvatn et al.; bowen et al.; hopwood et al.; kelly et al.; pieterse et al.; tercyak et al.; van dijk et al.). likewise, only five studies explicitly mentioned performing the second task, "educate counselees about issues related to heredity and treatment and preventive options" (bjorvatn et al.; codori et al.; kelly et al.; van dijk et al.). although judging whether counselors "facilitated decision making and adaptation to personal risk" is difficult, we did observe six studies claiming to advise counselees on surveillance (bjorvatn et al.; kaiser et al.; rimes et al.; rothemund et al.; tercyak et al.), which may be regarded as facilitating informed decisions. the included studies used two different types of measures to determine the effect of genetic counseling on risk perception accuracy: several studies reported changes in the proportion of individuals who accurately perceive their risk, while others reported the degree of overestimation or underestimation as a measure of risk perception accuracy. where available, we report both types of measures (see table). overall, the studies indicate that genetic counseling has a positive impact on risk perception accuracy (cf. table); however, some studies observed no effect on risk perception accuracy at all, or an effect only for low-risk individuals (cf. table).
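to make the two outcome measures concrete before turning to the results, here is a minimal sketch computing both from pre- and post-counseling data; all risk values are invented for illustration.

```python
# Sketch of the two risk-perception-accuracy outcomes used in the review:
# (i) proportion of counselees whose perceived risk equals the counseled
# (objective) risk, and (ii) mean overestimation relative to that risk.
# All numbers are invented for illustration.

counseled = [20, 20, 40, 10, 25]   # objective risk estimates (%)
pre =       [50, 20, 80, 30, 25]   # perceived risk before counseling (%)
post =      [20, 20, 50, 10, 30]   # perceived risk after counseling (%)

def proportion_accurate(perceived, objective):
    """Fraction of counselees whose estimate matches the counseled risk."""
    return sum(p == o for p, o in zip(perceived, objective)) / len(objective)

def mean_overestimation(perceived, objective):
    """Average excess, in percentage points, among overestimators only."""
    overs = [p - o for p, o in zip(perceived, objective) if p > o]
    return sum(overs) / len(overs) if overs else 0.0

for label, perceived in (("pre", pre), ("post", post)):
    print(f"{label}: accurate = {proportion_accurate(perceived, counseled):.0%}, "
          f"mean overestimation = {mean_overestimation(perceived, counseled):.1f} pts")
```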
the studies assessing the proportion of individuals who accurately estimated their risk (see table, subsection i) showed an average increase of approximately % (range: - %) in counselees who correctly estimated their risk after counseling, from an average of % pre-counseling to an average of % post-counseling. however, on average % (range: - %) continued to overestimate and . % (range: - %) continued to underestimate their risk even after counseling. the studies which assessed changes in the average overestimation of participants' perceived risk (see table, subsection ii) still observed an average overestimation of approximately % (range: - %) after counseling, in comparison with % (range: . - %) before counseling; across the studies, the average decrease in overestimation was approximately %. linking the outcome (i.e., risk perception accuracy) to the content of the counseling session (i.e., whether counselors performed the tasks as recommended by the nsgc task force), we observed that the studies in which the counselor gave information about family history and heredity, as well as personal risk estimates, positively influenced risk perception accuracy (bjorvatn et al.; bowen et al.; hopwood et al.; kelly et al.; tercyak et al.), although this improvement was not significant in two studies (pieterse et al.; van dijk et al.). in contrast, the studies that did not mention giving counselees this information observed no significant improvement of risk perception accuracy as a result of genetic counseling (codori et al.; kent et al.; rothemund et al.), with the exception of one study (kaiser et al.). the results for the other two tasks were mixed. while some studies that educated counselees about heredity observed a positive impact on risk perception accuracy (bjorvatn et al.; kelly et al.; van dijk et al.), others did not (codori et al.). similar results were observed for the third task of facilitating informed decision making and adaptation to personal risk: three out of the six studies identified as performing this task observed a positive impact of genetic counseling on risk perception accuracy (bjorvatn et al.; rimes et al.; tercyak et al.), while the other three did not (kaiser et al.; rothemund et al.).
second, the provision of information about the role of family history, as recommended by the nsgc task force, may provide an appropriate context in which counselees can make sense of the risk information (cf. codori et al. ) , resulting in accurate risk perceptions. third, some counselors may go to great lengths to explain risk information in terms the counselees can understand (cf. kent et al. ) . unfortunately, research has shown that verbal and numerical risk estimates often do not coincide. that is, verbal risk information results in more variability in risk perception than does numerical information (gurmankin et al. b ). bjorvatn et al. ( ) , for example, observed incongruence between numerical and verbal measures of risk perception. similarly, hopwood et al. ( ) observed that counselees included a wide range of numerical risk estimates within the same verbal category. the significance of this is discussed below, where we present the implications of our study for clinical practice. finally, several studies (pieterse et al. ; rothemund et al. ) that observed no effect of genetic counseling on risk perception accuracy had small sample sizes, and thus may not have observed a significant effect due to power limitations. the present review has several important implications for future research. first, we selected a large number of studies assessing risk perception changes as a result of genetic counseling. however, we had to exclude of these studies because they did not explicitly link risk perception to an objective risk figure. assuming that researchers are aware of these objective risk figures, future studies should link risk perception changes to objective risk figures to assess changes in risk perception accuracy. a second implication concerns the definition of risk perception accuracy, which differs between studies. for instance, in several studies accurate risk perception is defined as falling within a certain category (e.g., bjorvatn et al. ; kelly et al. ; lidén et al. ) or within % of the counseled risk (e.g., pieterse et al. ; rothemund et al. ) , while the majority define it as the correct counseled risk estimate (e.g., bowen et al. ; hopwood et al. ; tercyak et al. ). additionally, the reviewed studies based the counseled risk estimate on different methods, such as family history assessment (huiart et al. ) , gail's score (bowen et al. ) , or the brcapro procedure (kelly et al. ) . these issues reduce our ability to compare the results of the studies, thereby lessening their value. future researchers should define risk perception accuracy as correct counseled risk, and base their risk estimate on generally accepted and applied methods to allow for better interpretation of the results and comparison between studies. a third, related issue concerns the type of outcome measure used: several studies report changes in the proportion of individuals who correctly perceive their risk, while others report the degree of overestimation or underestimation as a measure of risk perception accuracy. researchers are advised to include both measures in their studies, as both provide valuable information about the effect of genetic counseling on risk perception accuracy. further, we observed that the quality of the genetic counseling descriptions (in those descriptions that were present) was poor. 
although the counseling sessions were labeled as standardized, they were described in general terms, such as "discussion about the risk" and "information was given about how hereditary factors contribute to disease." these general descriptions leave room for substantial differences between counseling sessions. this is especially problematic given that perceptions of genetic risks before genetic counseling can determine the content of the counseling session (julian-reynier et al. ), which in turn tends to alter patient outcomes. differences in the quality of the counseling session content may well explain the fact that not all studies in the present review observed a positive effect on risk perception accuracy. future studies should therefore try to link the content of the counseling session to risk perception to determine which feature of the session actually contributes to improved risk perception accuracy (cf. pieterse et al. or shiloh et al. ).

the present review provides some insight into how the content of the counseling session relates to risk perception accuracy. indeed, the provision of information on the role of family history was observed to positively impact risk perception accuracy, perhaps because it creates a context in which the counselee can understand the information. additionally, forcing numerical risk estimates to fit lay terms to aid counselees' understanding may lead to inaccurate risk perceptions (kent et al. ).

a possible avenue for further research may be to link effectiveness to certain sociodemographic variables, and subsequently to known psychological differences between certain groups; the latter is a more complex process and should thus occur later in time. by associating these psychological differences with the effectiveness of genetic counseling, we may be able to identify the processes responsible for the positive effect of genetic counseling on risk perception accuracy. knowledge of such processes will enable us to match the session's content to these processes and thus to increase the session's effectiveness.

finally, we observed a relative lack of diversity in research on genetic counseling and genetic test result disclosure in terms of the genetic disorder under consideration. although genetic counseling and testing can be effective for a variety of disorders (biesecker ; lerman et al. ; pilnick & dingwall ), most recent studies focus on their impact on cancer risk perception, particularly breast cancer. although genetic counseling on cancer has been shown to positively affect risk perception accuracy, this does not guarantee it will do the same for other genetic conditions. extensive research is needed to assess whether genetic counseling also effectively enhances risk perceptions for other genetic predispositions.

based on the results, we have formulated some implications for practice. first, in accordance with the recommendations of the nsgc task force, we again strongly urge genetic counselors to discuss the role of family history and perform a family history assessment. we suggest that this information is an important factor in accurate risk perception because it may provide the necessary context in which counselees can understand the risk information. indeed, the results seem to suggest that the provision of such information is positively related to risk perception accuracy.
while this implication may seem redundant, as it repeats the earlier recommendations by the nsgc task force, we nonetheless repeat it here since several studies in this review did not mention communicating this information to the counselee (codori et al. ; kaiser et al. ; kent et al. ; rothemund et al. ).

second, while explaining risk information in lay terms seems to be a useful strategy to help counselees better understand their risk (cf. trepanier et al. ), the one study that explicitly mentioned doing so did not observe a significant effect on risk perception accuracy (kent et al. ). moreover, there appears to be incongruence between verbal and numerical risk estimates (e.g., bjorvatn et al. ; hopwood et al. ). both types of risk estimates, however, possess qualities that would make them especially suited for counseling. compared to verbal risk estimates, numerical risk estimates have been shown to increase trust in (gurmankin et al. a) and satisfaction with (berry et al. ) the information. on the other hand, individuals have been shown to more readily use verbal information when describing their risk to others (erev & cohen ) and when deciding on treatment (teigen & brun ). we therefore advise genetic counselors to present numerical risk estimates first, as they provide accurate, objective information. the patient may then be asked what that risk estimate means to him or her. the patient's verbal response will provide an opportunity for further discussion of the meaning and impact of the risk information. genetic counselors should, however, be aware of the disadvantages of verbal information in accurately communicating risk information.

a third, related implication concerns the presentation of numerical risk information. research has shown that visual presentation of risk information (e.g., odds or percentages) may be better understood than written presentation formats. indeed, there seems to be general agreement that graphical formats, in comparison with textual information, are better able to accurately communicate risk information (schapira et al. ; timmermans et al. ), although contradictory evidence has also been published (parrot et al. ). furthermore, graphical information seems to have a larger impact on risk-avoiding behavior than textual information (chua et al. ). we therefore advise genetic counselors to use visual aids when communicating numerical risk information (cf. tercyak et al. ).

overall, this review suggests that genetic counseling may have a positive impact on risk perception accuracy. it has also resulted in several implications for future research. first, future researchers should link risk perception changes to objective risk estimates to assess the effect of genetic counseling on risk perception accuracy. researchers are advised to define risk perception accuracy as the correct counseled risk estimate instead of falling within a certain percentage of the counseled risk. additionally, they should report both the proportion of individuals who correctly estimate their risk and the average overestimation of risk. second, as the descriptions of the counseling sessions were generally poor, future research should include more detailed descriptions of these sessions, and link their content to risk perception outcomes to enable interpretation of the results. finally, the effect of genetic counseling should be examined for a wider variety of hereditary conditions.
genetic counselors are advised to discuss the role of family history and perform a family history assessment to provide the necessary context in which counselees can understand the risk information. they should also use both verbal and numerical risk estimates to communicate personal risk information, and use visual aids when communicating numerical risk information.

references:
over the counter medicines and the need for immediate action: a further evaluation of european commission recommended wordings for communicating risks
goals of genetic counseling
risk perception, worry and satisfaction related to genetic counseling for hereditary cancer
effects of counseling ashkenazi jewish women about breast cancer risk
genetic counseling for women with an intermediate family history of breast cancer
psychological outcomes and risk perception after genetic testing and counselling in breast cancer: a systematic review
risk avoidance: graphs versus numbers
pregnancy outcome after genetic counselling for prenatal diagnosis of unexpected chromosomal anomaly
genetic counseling outcomes: perceived risk and distress after counseling for hereditary colorectal cancer
a vision for the future of genomics research
cancer worries, risk perceptions and associations with interest in dna testing and clinic satisfaction in a familial colorectal cancer clinic
cancer risk perceptions and distress among women attending a familial ovarian cancer clinic
verbal versus numerical probabilities: efficiency, biases, and the preference paradox
the impact of genetic-counseling on risk perception in women with a family history of breast-cancer
the effect of numerical statements of risk on trust and comfort with hypothetical physician risk communication
intended message versus message received in hypothetical physician risk communications: exploring the gap
cochrane handbook for systematic reviews of interventions . .
do women understand the odds? risk perceptions and recall of risk information in women with a family history of breast cancer
risk perception and cancer worry: an exploratory study of the impact of genetic risk counselling in women with a family history of breast cancer
a randomised comparison of uk genetic risk counselling services for familial cancer: psychosocial outcomes
effects of genetic consultation on perception of a family risk of breast/ovarian cancer and determinants of inaccurate perception after the consultation
representing risks: supporting genetic counseling
the health belief model: a decade later
risk perception, anxiety and attitudes towards predictive testing after cancer genetic consultations
psychological responses to prenatal nts counseling and the uptake of invasive testing in women of advanced maternal age
assessment of psychological outcomes in genetic counseling research
subjective and objective risks of carrying a brca / mutation in individuals of ashkenazi jewish descent
the relationship between perceived risk, thought intrusiveness and emotional well-being in women receiving counselling for breast cancer risk in a family history clinic
genetic testing: psychological aspects and implications
genetic counselling for cancer and risk perception
communication and information-giving in high-risk breast cancer consultations: influence on patient outcomes
risk perceptions and knowledge of breast cancer genetics in women at increased risk of developing hereditary breast cancer
long-term outcomes of genetic counseling in women at increased risk of developing hereditary breast cancer
what is the impact of genetic counseling in women at increased risk of developing hereditary breast cancer? a meta-analytic review
coping style, psychological distress, risk perception, and satisfaction in subjects attending genetic counselling for hereditary cancer
genetic counseling and cancer risk perception in brazilian patients at-risk for hereditary breast and ovarian cancer
risk comprehension and judgements of statistical evidentiary appeals. when a picture is not worth a thousand words
risk communication in completed series of breast cancer genetic counseling visits
research directions in genetic counselling: a review of the literature
a new definition of genetic counseling: national society of genetic counselors' task force report
applying cognitive-behavioral models of health anxiety in a cancer genetics service
cognitive and physiological processes in fear appeals and attitude change: a revised theory of protection motivation
perception of risk, anxiety, and health behaviors in women at high risk for breast cancer
frequency or probability? a qualitative study of risk communication formats used in health care
the facilitating role of information provided in genetic counseling for counselees' decisions
verbal probabilities: a question of frame
psychological response to prenatal genetic counseling and amniocentesis
different formats for communicating surgical risks to patients and the effect on choice of treatment
genetic counselling and the intention to undergo prophylactic mastectomy: effects of a breast cancer risk assessment
assessment of genetic testing and related counseling services: current research and future directions
educating women about breast cancer. an intervention for women with a family history of breast cancer
putting the fear back into fear appeals: the extended parallel process model (eppm)

acknowledgements: this study was financially supported by maastricht university and performed at the school for public health and primary care (caphri). caphri participates in the netherlands school of primary care research (care), recognized by the royal dutch academy of science (knaw) in . open access: this article is distributed under the terms of the creative commons attribution noncommercial license (https://creativecommons.org/licenses/by-nc/ . /), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

key: cord- - a fkd v authors: dutta, ankhi; flores, ricardo title: infection prevention in pediatric oncology and hematopoietic stem cell transplant recipients date: - - journal: healthcare-associated infections in children doi: . / - - - - _ sha: doc_id: cord_uid: a fkd v

pediatric patients with malignancies and transplant recipients are at high risk of infection-related morbidity and mortality. children at the highest risk for infections are those with acute myeloid leukemia (aml), relapsed acute lymphoblastic leukemia (all), and hematopoietic stem cell transplant (hsct) recipients. these patients are at high risk for life-threatening bacterial, viral, and fungal infections, which are associated with prolonged hospital stay, poor quality of life, and increased healthcare cost and death. recognition of the risk factors which predispose them to infections, early identification of signs and symptoms of infections, prompt diagnosis, and empiric/definitive treatment are the mainstay in reducing infection-related morbidity and mortality. infection control and prevention programs also play a crucial role in preventing hospital-acquired infections in these immunosuppressed hosts.

there are various factors which contribute to the increased susceptibility to infections in pediatric hematology/oncology (pho) and hsct patients, most prominent of them being disruption of cutaneous and mucosal barriers (oral, gastrointestinal, etc.), microbial gastrointestinal translocation, defects in cell-mediated immunity, and insufficient quantities and inadequate function of phagocytes. goals of infection control and prevention in this population are based on mitigating the risk inherent with the underlying malignancy and associated treatments (i.e., chemotherapy, radiation). this chapter discusses infection control and prevention measures specifically in patients with hematological malignancies as well as hsct recipients.

hand hygiene and standard precautions during the care of pho and hsct patients are key components in reducing the risk of infections. additional isolation precautions may also be undertaken depending on the pathogen isolated and/or the symptoms that the patient is experiencing (e.g., contact precautions would be appropriate in patients experiencing diarrhea). further information on general infection prevention measures can be found in chap. .

minimizing injury to mucosal surfaces and decreasing heavy colonization of the skin reduce the likelihood of microbial invasion through these sites. thus, the importance of meticulous skin care and daily inspection in pho and hsct patients is paramount and provides opportunities to identify areas of inflammation or breakdown early.
skin inspection should be done routinely, with special attention to high-risk areas like intravascular catheter insertion sites and the perineum. rectal thermometers, digital rectal examinations, and suppositories should be avoided to prevent mucosal breakdown. as part of an effort to reduce colonization of cutaneous surfaces, daily chlorhexidine baths have been shown to reduce hais and transmission of multidrug-resistant organisms (mdro) in oncology patients [ , ]. chlorhexidine gluconate (chg) is a cationic bisbiguanide that serves as a topical antiseptic. chg binds to negatively charged bacterial cell wall proteins, altering the bacterial cell wall equilibrium, and helps in reducing bacterial colonization of the skin [ ]. education of patients, families, and staff on the importance of these practices is key to compliance with this preventative strategy and should be made a priority.

many experts recommend that a complete periodontal examination be performed prior to initiation of chemotherapy, with reevaluations throughout the treatment course and after completion [ , ]. oral mucositis, which can be considered an acute inflammation and/or ulceration of the oral/oropharyngeal mucous membranes, is a common adverse effect of chemotherapeutic agents. it can cause oral pain/discomfort as well as difficulties in eating, swallowing, and speech. mucositis is most commonly caused by chemotherapeutic agents which prevent dna synthesis, such as methotrexate, -fluorouracil, and cytarabine, particularly in hsct recipients. oral rinses with normal saline or chg-containing products are recommended - times per day to prevent oral mucositis [ , ]. patients with painful mucositis might not comply with oral care regimens, however, putting them at increased risk for infections from oral flora, such as bacteremia due to viridans streptococci. mouth rinses containing alcohol should be avoided because they can aggravate mucositis. neutropenic patients should also be instructed to brush their teeth carefully in order to prevent gingival injury [ ]. a regular soft toothbrush or an electric brush can be used to minimize trauma [ ]. any elective dental procedure should ideally be performed prior to starting chemotherapy and after discussion with the primary medical team. the absolute neutrophil count, platelet count, and stage of treatment should be considered before performing any dental procedures in this vulnerable population [ , ].

the presence of central venous catheters (cvc) in this population puts them at risk for central line-associated bloodstream infection (clabsi) and its related complications. clabsi is the most commonly reported hai in most pediatric series. among all the pediatric hai reported to the national healthcare safety network (nhsn), % were from oncology units; streptococcus viridans ( %) and klebsiella pneumoniae/oxytoca ( %) were the two most common pathogens in this study [ ]. in the nhsn report, antibiotic resistance was noted to be high in oncology units, including ampicillin and/or vancomycin resistance for enterococcus faecium and fluoroquinolone resistance for escherichia coli [ ]. although less than % of enterobacteriaceae were reported to have carbapenem resistance, the emergence of such organisms in this population is of significant concern [ ]. among candida infections in this population, fluconazole resistance among non-c. albicans and non-c. parapsilosis isolates was up to %, whereas fluconazole resistance in c. albicans and c. parapsilosis was < % [ ].
mucosal barrier injury (mbi)-associated laboratory-confirmed bloodstream infections (mbi-lcbi) have gained attention in recent years [ , ]. these are clabsis related primarily to mucosal barrier injury (i.e., mucositis) and not to the direct presence of the cvc per se. under the nhsn definition, a positive blood culture qualifies as an mbi-lcbi if it yields one or more organisms from selected groups of commensals of the oral cavity or gastrointestinal tract and occurs in the presence of signs and symptoms consistent with mucosal barrier injury (mbi) in pho or hsct patients [ ]. eligible organisms for mbi-lcbi include candida species, enterococcus, enterobacteriaceae, viridans group streptococci, other streptococcus species, and anaerobes [ ]; this classification rule is sketched in the code example at the end of this passage.

specific guidelines for central line insertion and maintenance bundles have been proposed by the centers for disease control and prevention (cdc) and the infectious diseases society of america (idsa) to reduce clabsi rates and healthcare costs [ , ]. several studies have demonstrated that a multifaceted approach reduces clabsi rates in this population [ , ]; this includes standardizing cvc insertion practices and maintenance bundles, tracking cvc infections using standardized definitions, and using dedicated nursing staff or "cvc champions" specifically trained in cvc maintenance and tracking, in conjunction with other infection control methods (including oral and hand hygiene, optimizing the nurse/patient ratio, etc.). clabsi is discussed in greater detail in chap. .

the american society for blood and marrow transplantation recommends a low microbial diet for hsct recipients [ ]. there is little evidence, however, to suggest that this helps in pho patients. routine safety in handling and preparing food should be practiced by patients and parents. in general, consuming unpasteurized milk/cheese, undercooked meat, and raw fruits and vegetables is discouraged during periods of neutropenia to reduce the incidence of infection. the need to minimize the risk of infection, however, should be balanced with the nutritional needs and quality of life of the patient [ , ].

pets can be a great source of companionship and comfort to children; however, there are several diseases that can be transmitted by pets to these immunosuppressed hosts [ ] [ ] [ ]. certain animals, like reptiles, birds, rodents, or other exotic animals, that cannot be immunized and could carry unusual human pathogens should not be kept as pets in households with pho or hsct patients. immunosuppressed patients should avoid petting zoos due to the risk of diseases secondary to enteric pathogens (such as salmonella or campylobacter) [ ] [ ] [ ] [ ]. dogs and cats, preferably more than year old, are generally considered safe for pho and hsct patients. they should be routinely evaluated by veterinarians for diseases and their immunizations kept up-to-date. extreme care should be taken to maintain hand hygiene during and after handling pets [ ] [ ] [ ] [ ]. further information regarding pet therapy is available in chap. .

studies performed in adult oncology patients have consistently shown the benefit of using prophylactic antibiotics in reducing the incidence of bacterial infections [ ]. levofloxacin prophylaxis in adults has been shown to reduce the incidence of fever, bacterial infection, hospitalization rates, and all-cause mortality [ , ].
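as flagged earlier in this passage, the nhsn-style mbi-lcbi rule lends itself to a compact sketch. the organism groups below come directly from the text; the function name and the simplified two-condition rule are illustrative assumptions and omit parts of the full nhsn definition (e.g., neutropenia and gvhd criteria).

```python
# A minimal sketch of the MBI-LCBI classification logic described above.
MBI_ELIGIBLE_ORGANISMS = {
    "candida", "enterococcus", "enterobacteriaceae",
    "viridans group streptococci", "streptococcus", "anaerobes",
}

def is_mbi_lcbi(organism_group: str,
                has_mucosal_barrier_injury: bool,
                is_pho_or_hsct_patient: bool) -> bool:
    """Classify a positive blood culture (already meeting LCBI criteria)
    as MBI-LCBI when it grows an eligible oral/GI commensal and the
    patient has signs consistent with mucosal barrier injury."""
    return (is_pho_or_hsct_patient
            and has_mucosal_barrier_injury
            and organism_group.lower() in MBI_ELIGIBLE_ORGANISMS)

# Example: viridans streptococci bacteremia in an HSCT patient with mucositis.
print(is_mbi_lcbi("viridans group streptococci", True, True))  # True
```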
based upon such data in adults, the idsa guidelines for the use of antimicrobial agents in neutropenic patients with cancer state that fluoroquinolone prophylaxis should be considered for high-risk patients with prolonged severe neutropenia [ ]. pediatric studies on antibiotic prophylaxis are limited. a pediatric pilot study of ciprofloxacin prophylaxis for pediatric patients receiving delayed intensification therapy for acute lymphoblastic leukemia (all) showed a reduction in hospitalization, intensive care admission, and bacteremia when compared to controls [ ]. in another study, levofloxacin prophylaxis in patients with all reduced the odds of febrile neutropenia, possible bacterial infection, and confirmed bloodstream infection by ≥ %. it also reduced the use of other broad-spectrum antibiotics and the incidence of c. difficile infections [ ]. in other studies, however, ciprofloxacin prophylaxis did not decrease the incidence of overall bacteremia, the duration of fever, or mortality in pediatric acute myelogenous leukemia (aml) patients [ ]. furthermore, increasing quinolone resistance among gram-negative organisms is a concern recently observed in the nhsn database of pediatric oncology patients with clabsi [ ]. in addition, the use of antimicrobial prophylaxis in pho could increase the possibility of developing other mdros, invasive fungal infections, or drug-related toxicities. though some authors suggest that antibiotic prophylaxis should be considered in children undergoing induction chemotherapy for all, there is currently insufficient data to inform definitive guidelines for antibiotic prophylaxis to prevent bacterial infections in pediatric oncology patients [ ] [ ] [ ]. notably, an open-label randomized clinical trial of levofloxacin prophylaxis vs. no prophylaxis was recently conducted in children with aml, relapsed all, and hsct recipients. among patients with aml and relapsed all, prophylaxis was associated with a reduction in rates of bacteremia; there was a numeric reduction in bacteremia in the hsct recipients, but this did not achieve statistical significance. it is unclear at this time how these new findings will influence practice and future guidelines [ ].

infections with common respiratory and gastrointestinal viruses can result in significant morbidity and mortality in pho and hsct patients. the most common respiratory viruses encountered include rhinovirus, coronavirus, adenovirus, rsv, parainfluenza, human metapneumovirus, and influenza. common gastrointestinal viruses affecting both healthy and immunocompromised children include norovirus, rotavirus, enteric adenoviruses, and enteroviruses, among others. infection prevention strategies should include education provided to the patient and the family about hand hygiene and prevention techniques, avoidance of ill visitors, disease surveillance in the community and hospital, vaccination against influenza, and prompt identification, testing, and treatment (if possible) of any respiratory viral illness. implementation of routine infection control prevention policies on oncology wards should reduce transmission of common respiratory and gastrointestinal viruses. all visitors should be screened for any signs and symptoms of acute viral illness and restricted from visitation on the unit or contact with any immunocompromised hosts. chapter outlines infection control guidance for hospital visitors in greater detail.
immunization of healthcare workers and household contacts needs special consideration in settings with pho and hsct patients. given the immunosuppressed status of children with malignancy and/or hsct, immunization of those closest to them at home and those caring for them in the hospital is critically important in preventing infections. live attenuated vaccines carry a theoretical risk of being transmitted to an immunocompromised host. live oral polio vaccine, which is no longer administered in the united states, is absolutely contraindicated for people taking care of this high-risk population. however, data suggest that measles, mumps, and rubella (mmr), varicella zoster, and herpes zoster vaccines can be safely provided to healthcare workers and household contacts [ ]. if healthcare personnel develop a rash that cannot be covered within the first days following receipt of the varicella vaccine, they should avoid any contact with immunocompromised patients until all rash has crusted, to avoid the potential risk of transmitting vaccine strain varicella to patients [ ]. infants living in households with persons who are immunocompromised, including pho and hsct patients, may be safely immunized against rotavirus; it is recommended, however, that immunocompromised persons avoid contact with the infant's diapers/stool for weeks following vaccination to minimize the risk of acquiring vaccine strain rotavirus infection [ ]. an inactivated influenza vaccine is preferred for personnel taking care of immunocompromised children, as opposed to live attenuated influenza vaccine [ ]. vaccination against other non-viral pathogens (such as pneumococcus or pertussis) by family members is another important method to minimize the risk of serious infection in pho patients.

hospital environments are designed to minimize the potential for fungal disease in the highest-risk patients. high efficiency particulate air (hepa) filters have been shown to reduce nosocomial infection in hsct patients, and the cdc recommends hepa filters in hsct recipients' rooms. the rooms should also have directed airflow and positive air pressure and be properly ventilated (≥ air changes per hour) [ ]. avoidance of carpets and upholstery is also recommended. since outbreaks secondary to aspergillus have been reported during hospital renovation or construction, appropriate containment should be in place, and strict precautions should be taken to prevent exposure of patients during such periods [ ]. infection control and prevention departments should be involved in the risk assessment, planning, and approval of all construction or renovation projects in healthcare facilities, including inpatient units, clinics, and infusion centers caring for these patients [ ].

cytotoxic chemotherapies and radiation therapy used in the treatment of malignancies are myelosuppressive and result in neutropenia of variable duration and severity. in addition, certain malignancies that originate from bone marrow precursors (i.e., leukemia) or metastasize to the bone marrow (e.g., lymphoma, neuroblastoma, and sarcomas) can result in a decreased number of normal blood cell precursors and consequent neutropenia. hence, pediatric cancer and hsct patients are frequently immunosuppressed and at risk for a wide range of pathogens. febrile neutropenia is a common condition in the pho/hsct population. with regard to this entity, fever is defined as a single temperature > . °c ( °f) or a temperature ≥ . °c ( . °f) on two occasions hour apart.
neutropenia is classified as mild (absolute neutrophil count [anc] > - /mm ), moderate (anc ≥ - /mm ), or severe (anc < /mm ). febrile neutropenia (also known as fever and neutropenia) is the combination of these two events in the patient with malignancy or hsct and is a common complication of cancer treatment. it has been estimated that - % of patients with solid tumors and up to % of patients with hematologic malignancies will develop fever during at least one chemotherapy cycle associated with neutropenia [ ]. moreover, fever may be the only indication of a severe underlying infection, as other signs and symptoms are often absent or minimized due to an inadequate inflammatory response. therefore, physicians must be particularly aware of the infection risks, diagnostic methods, and antimicrobial therapies required for the management of febrile neutropenia in cancer patients.

in the majority of febrile episodes, a pathogen is not identified, with a clinically documented infection occurring in only - % of cases. of these patients, bacteremia occurs in - %, with most episodes seen in the setting of prolonged and/or profound neutropenia (anc < neutrophils/mm ) [ , ]. on the other hand, the most common sites of focal infection include the gastrointestinal tract, lung, and skin [ ].

over the past five decades, the rates, antibiotic resistance, and epidemiologic spectrum of bloodstream pathogens isolated from febrile neutropenic patients have changed substantially under the selective pressure of broad-spectrum antimicrobial therapy and/or prophylaxis [ , ]. early in the development of cytotoxic chemotherapies, during the s and s, gram-negative pathogens predominated in febrile neutropenia. subsequently, during the s and s, gram-positive organisms became more common as the use of indwelling plastic venous catheters became more prevalent; these catheters can allow for colonization and subsequent infection by gram-positive skin flora [ , ]. gram-positive bacteria currently account for - % of culture-positive infections in pediatric cancer patients [ ]. importantly, a recent systematic review of the epidemiology and antibiotic resistance of pathogens causing bacteremia in cancer patients since showed a recent shift from gram-positive to gram-negative organisms [ ]. the main causes of this new trend remain to be determined, but the use and duration of antibiotic prophylaxis are an important factor to consider, as the incidence of gram-negative bacteria was significantly higher in groups who did not receive antibiotic prophylaxis. the use of antibiotic prophylaxis, however, may conceivably select for resistant organisms; increasing rates of antibiotic resistance in both gram-negative and gram-positive bacteria have been reported in the global community as well as in the cancer population and are of significant concern [ , , ]. overall, the most common blood isolate in the setting of febrile neutropenia is coagulase-negative staphylococci. other less common blood isolates include enterobacteriaceae, non-fermenting gram-negative bacteria (such as pseudomonas), s. aureus, and streptococci (see table . ). providers should review the local data at their institution for prevalent blood isolates and antimicrobial susceptibility profiles.

management of febrile neutropenia continues to evolve given the awareness that interventions previously considered standard of care (such as inpatient treatment with intravenous broad-spectrum antibiotics) may be neither necessary nor appropriate for all patients [ ].
it has become increasingly important to identify patients at high risk of infectious complications requiring more aggressive management and monitoring (i.e., inpatient treatment with intravenous antibiotics). in addition, clinicians may be able to identify low-risk patient populations who may be managed in a less aggressive and more cost-effective manner (i.e., in the outpatient setting and/or with oral antibiotics). in order to address these issues, algorithmic approaches to neutropenic fever, infection prophylaxis, diagnosis, and treatment have been developed [ , [ ] [ ] [ ]. it is well established that stratification of patients to determine the risk for complications of severe infection should be undertaken at presentation of fever [ , ]. this determines the type of empiric antibiotic therapy (oral vs. intravenous), venue of treatment (inpatient vs. outpatient), and duration of antibiotic therapy.

generally, the risk for serious infection is directly related to the degree and duration of neutropenia. pediatric patients with mild (anc ≥ ) and brief periods of neutropenia (< days) are less likely to have infectious complications than those with moderate to severe neutropenia (anc ≤ ) lasting more than days [ , , ]. similarly, the risk for bacteremia and septicemia increases dramatically when the anc is < . infectious complications that are more common with severe and prolonged neutropenia include bacteremia, pneumonitis, cellulitis, and abscess formation. it is important to consider individual patient risk, incorporating the latest recommendations for the management of neutropenic fever in children with cancer and hsct [ , ]. patients are generally stratified as either high or low risk as follows:

high-risk patients: anticipated prolonged (> days duration) and profound neutropenia (anc < cells/mm following cytotoxic chemotherapy) and/or significant medical comorbid conditions, including hypotension, pneumonia, new-onset abdominal pain, or neurologic changes [ ]

low-risk patients: anticipated brief (< days duration) neutropenic periods in those with no or few comorbidities [ ]

in addition, risk classification may be based on the multinational association for supportive care in cancer (mascc) score (table . ) [ ]. a mascc risk score of ≥ is recommended as the threshold for the definition of low risk, with % of such patients developing serious medical complications compared to % of those scoring < [ ]. however, the mascc score was developed and validated in adults and has not been validated in a pediatric population. the consensus in the field is for all patients considered to be at high risk by mascc or by clinical criteria to be treated as inpatients with empiric iv antibiotic therapy. carefully selected low-risk patients may be candidates for oral and/or outpatient empiric antibiotic therapy; a sketch of this stratification logic follows below. table . summarizes the recommendations for the management of febrile neutropenia based on the recommendations of the idsa and the international pediatric fever and neutropenia guideline panel. importantly, in neutropenic febrile patients with an obvious source of infection on clinical exam, management should be tailored to that source. of note, adequate antibiotic stewardship is of utmost importance during the treatment of neutropenic patients in order to decrease the incidence of antibiotic-related adverse drug events, the prevalence of antibiotic resistance, and treatment costs.
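the stratification logic just described can be summarized in a short sketch. the numeric thresholds in the source tables were lost in extraction, so the values below (anc < 100 cells/mm3, > 7 days of anticipated neutropenia, mascc score >= 21 as the low-risk cutoff) are the commonly cited idsa/mascc figures and should be read as assumptions rather than a verbatim restatement of this chapter.

```python
# Hedged sketch of high- vs low-risk stratification for febrile neutropenia.
def stratify_febrile_neutropenia(anticipated_neutropenia_days: int,
                                 anc_cells_per_mm3: int,
                                 significant_comorbidity: bool,
                                 mascc_score: int | None = None) -> str:
    """Return 'high' or 'low' risk for a febrile neutropenic patient,
    combining clinical criteria with an optional MASCC score."""
    if mascc_score is not None and mascc_score < 21:
        return "high"  # below the commonly used MASCC low-risk threshold
    if (anticipated_neutropenia_days > 7
            and anc_cells_per_mm3 < 100) or significant_comorbidity:
        return "high"  # prolonged, profound neutropenia or comorbidity
    return "low"       # candidate for oral and/or outpatient therapy

# Example: AML patient with expected 14-day neutropenia and ANC of 50.
print(stratify_febrile_neutropenia(14, 50, False))  # 'high' -> inpatient IV therapy
```

note that the clinical criteria are checked even when the mascc score is reassuring, reflecting the consensus that high risk by either mascc or clinical criteria warrants inpatient iv therapy.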
blood cultures must be closely monitored, and once a microorganism has been identified, an appropriate plan for antibiotic de-escalation and/or treatment duration should be promptly instituted. for high-risk patients, the guideline recommendations include monotherapy with an antipseudomonal β-lactam, a fourth-generation cephalosporin, or a carbapenem as empirical therapy in pediatric high-risk fn, depending on the local prevalence of multidrug-resistant gram-negative rods (strong recommendation, high-quality evidence), reserving the addition of a second gram-negative agent or a glycopeptide for patients who are clinically unstable, when a resistant infection is suspected, or for centers with a high rate of resistant pathogens.

invasive fungal diseases (ifd) are one of the leading causes of morbidity and mortality in pho and hsct patients and present many diagnostic and therapeutic challenges. one of the principal risk factors contributing to the development of ifd is the patient's oncologic diagnosis. patients with aml and high-risk or relapsed all, recipients of allogeneic hsct, and those with chronic or severe acute graft-versus-host disease (gvhd) are at the highest risk of ifd [ , ]. often a combination of other risk factors is present in these patients, which may include prolonged neutropenia, high-dose corticosteroid use, immunosuppressive therapy, parenteral nutrition, presence of a cvc, preceding antibiotic therapy, presence of bacterial coinfection, oral mucositis, and admission to an intensive care unit [ , ]. the highest risk of ifd is during periods of profound neutropenia, which for hsct recipients occurs during the first days posttransplant and during neutrophil engraftment [ ]; for pho patients, the highest risk period is during induction chemotherapy [ ].

in an era of growing prophylactic antifungal use, children receiving mold-active agents have been shown to be at higher risk of non-aspergillus species fungal infection [ ]. voriconazole prophylaxis in adults has been shown to be an independent risk factor for mucormycoses [ ]. likewise, breakthrough trichosporonosis has also been reported in patients receiving micafungin as prophylaxis [ ]. these phenomena are likely in part related to the selection of fungi with reduced intrinsic susceptibility to the prophylactic agent. the most common ifd are invasive aspergillosis (ia) and invasive candidiasis (ic), with a recent upward trend seen in non-aspergillus mold infections [ ] [ ] [ ]. among aspergillus species, a. fumigatus is the most common, followed by a. flavus and a. niger [ ]. among non-aspergillus molds, mucormycoses (rhizopus, mucor, absidia) are most frequently reported, followed by a number of other species (e.g., fusarium, scedosporium, curvularia, exserohilum, etc.) [ ]. among ic, c. albicans is the single most common candida species, but non-albicans candida species (especially c. parapsilosis and c. tropicalis) have been increasingly reported in this population [ ].

ifd should be suspected in patients with fever and neutropenia lasting for more than days without any identifiable cause [ ]. ic can present as septic shock or may have more non-specific findings, such as fever, cough, nausea/vomiting, abdominal pain, and cutaneous lesions, depending on the site of involvement. in children, the most common sites of ic are the lungs, liver, and spleen, but dissemination can occur to other organs, including the heart, eyes, or brain. disseminated disease is an independent risk factor for death in children with ic [ ].
the primary sites of ia are the lungs, skin, and sinuses [ ]. the clinical presentation of fungal rhinosinusitis may include fever, rhinorrhea, nasal congestion, and facial pain; many cases, however, may not present with any symptoms and may be diagnosed based on imaging performed in a persistently febrile patient with profound and prolonged neutropenia. cutaneous lesions can present as macules, papules, or nodular ulcerative lesions with or without surrounding erythema and tenderness. the clinical presentation of disease secondary to other molds, such as fusarium or scedosporium, is indistinguishable from ia. mucormycoses deserve special mention, since dissemination and death rates are higher in ifd caused by these species when compared to ia [ ].

early recognition and prompt treatment of ifd are crucial for optimal management. diagnostic tests should include blood cultures (though often with low sensitivity), cultures of appropriate sterile sites (such as urine or csf), and diagnostic biopsies of involved sites for culture and histopathology. fungal biomarkers can be used both as a screening test during high-risk periods and as an adjunct diagnostic test in patients with suspected ifd, especially during periods of prolonged fever and neutropenia. galactomannan (gm) is a cell wall component released by aspergillus species which can be detected in blood, bronchoalveolar lavage fluid, and cerebrospinal fluid. a cutoff value of a gm optical index of ≥ . in blood and a bronchoalveolar lavage fluid level of ≥ is considered a positive test, though an optimum cutoff value is not well defined in children [ , ]. invasive fungal disease due to fungi other than aspergillus species may have negative galactomannan tests. β-d-glucan is a cell wall component found in many (but not all) species of fungi, and an elevated serum β-d-glucan assay can be caused by ic, ia, and other molds [ , ]. the optimum cutoff value of β-d-glucan for a positive test is unknown in children, but ≥ pg/ml is used in most studies [ ]. both gm and β-d-glucan assays have variable sensitivity and specificity among children and should be interpreted with caution. the sensitivity of gm has been reported to range from to % in children with malignancy and ia [ , ]; by contrast, the β-d-glucan assay has high sensitivity for ifd (~ %) but suffers from poor specificity [ ]. false-positive β-d-glucan results can be due to systemic bacterial or viral coinfection, receipt of antibiotics (such as piperacillin-tazobactam or amoxicillin-clavulanate), hemodialysis, receipt of albumin or intravenous immunoglobulin, material containing glucan, oral mucositis, and other gi mucosal breakdown [ ]. other pcr-based fungal diagnostic tests are under investigation but currently have low sensitivity and specificity. gm and β-d-glucan monitoring twice weekly is suggested to evaluate treatment response in those with confirmed/probable disease and as a screening tool in patients at high risk for ifd [ , ].

all pho and hsct patients with febrile neutropenia that persists beyond days and/or those with suspected ifd should undergo computed tomography of the chest, abdomen, and pelvis, and of other areas if indicated [ ]. the most common findings on imaging suggestive of ifd are pulmonary nodules, especially those with a halo sign, air crescent sign, or cavitations. hepatosplenic and renal nodules should also raise suspicion of ifd. other studies to consider include an echocardiogram and a dilated retinal examination, especially in patients with disseminated candidiasis.
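a hedged sketch of biomarker interpretation follows. the chapter's own cutoff values were garbled in extraction; the numbers used here (gm optical index >= 0.5 in serum and >= 1.0 in bal fluid, beta-d-glucan >= 80 pg/ml) are the commonly published, largely adult-derived cutoffs and are assumptions, and, as noted above, both assays have variable sensitivity and specificity in children.

```python
# Assumed cutoffs (commonly published values, not this chapter's tables).
GM_SERUM_CUTOFF = 0.5    # galactomannan optical index, blood
GM_BAL_CUTOFF = 1.0      # galactomannan optical index, BAL fluid
BDG_CUTOFF_PG_ML = 80    # beta-D-glucan, pg/mL

def interpret_biomarkers(gm_serum=None, gm_bal=None, bdg_pg_ml=None):
    """Flag which fungal biomarkers are positive for a given sample set;
    None means the assay was not performed."""
    flags = []
    if gm_serum is not None and gm_serum >= GM_SERUM_CUTOFF:
        flags.append("GM positive (serum)")
    if gm_bal is not None and gm_bal >= GM_BAL_CUTOFF:
        flags.append("GM positive (BAL)")
    if bdg_pg_ml is not None and bdg_pg_ml >= BDG_CUTOFF_PG_ML:
        flags.append("beta-D-glucan positive (check false-positive causes)")
    return flags or ["no biomarker positivity"]

# Example: serum GM of 0.8 and beta-D-glucan of 120 pg/mL.
print(interpret_biomarkers(gm_serum=0.8, bdg_pg_ml=120))
```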
if symptoms of sinusitis or new lesions on the palate are present, a prompt nasal endoscopic examination and ct of the sinuses are warranted.

there are three main classes of antifungals used in patients with ifd: (1) polyenes, which include amphotericin b (amb) and its lipid formulations (liposomal amb is most commonly used in pho and hsct patients); (2) triazoles (fluconazole, itraconazole, voriconazole, and posaconazole); and (3) echinocandins (caspofungin, micafungin, anidulafungin).

antifungal prophylaxis should be considered in patients who are at high risk for ifd, including hsct recipients and those undergoing intensive remission-induction therapy or salvage-induction therapy [ , ]. a high incidence of ifd has been reported in children with aml (newly diagnosed and relapsed) [ ] and patients with relapsed all [ ], and such patients may be considered candidates for prophylaxis. among hsct recipients, those with an unrelated donor or a partially matched donor are at higher risk of ifd [ ]. recent studies show that children with aml receiving antifungal prophylaxis have reduced rates of induction mortality and resource utilization compared to those who did not receive prophylaxis [ ]. posaconazole was found to be superior to fluconazole or itraconazole in reducing the incidence of ifd in children [ ]. echinocandins have been shown to be as or more effective for ifd prophylaxis than triazoles, especially in hsct recipients, with fewer adverse effects, and can be an alternative option for prophylaxis [ ]. the idsa and the european conference on infections in leukemia (ecil- ) recommend using posaconazole, voriconazole, or micafungin during prolonged neutropenia to prevent ifd [ , ]. posaconazole is recommended for prophylaxis in patients with gvhd who are at high risk of ia [ ]. the variable absorption of oral azoles in children should be taken into consideration when choosing oral antifungals.

for patients with prolonged fever and neutropenia without an alternative explanation, consideration must be given to the possibility of an active fungal infection. empiric antifungal therapy should be considered for neutropenic patients with persistent or recurrent fevers after - days of antibiotic therapy and whose overall duration of neutropenia is expected to be > days [ ]. in low-risk patients, routine use of empiric antifungals is not recommended [ ]. liposomal amphotericin b or an echinocandin, both of which are fungicidal, are the first-line options for empiric antifungal treatment [ ]. there is insufficient data to provide specific guidance for patients with concern for a new fungal infection who are already receiving mold-active (i.e., anti-aspergillus) prophylaxis; however, some experts suggest switching to a different mold-active antifungal [ ]. surgical debridement of any fungal lesions or abscesses and prompt removal of the cvc in the event of fungemia are crucial to reduce the progression of ifd.

therapeutic drug monitoring (tdm) should be performed for patients receiving voriconazole, itraconazole, and posaconazole. there is extreme variability in triazole serum levels among pediatric patients owing to diversity in bioavailability in this population. for voriconazole tdm, a serum trough level between and mcg/dl has been considered safe and effective in preventing breakthrough ifd in children [ ]. for posaconazole, a trough level of . mg/l- mg/l has been shown to be effective [ ].
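the tdm targets quoted above were garbled in extraction; the sketch below uses trough windows commonly cited in the pediatric literature (voriconazole roughly 1-5.5 mg/l, posaconazole roughly 0.7-3 mg/l) as illustrative assumptions, not as this chapter's recommendations.

```python
# Assumed trough-level target windows in mg/L (illustrative only).
TDM_TARGETS_MG_L = {
    "voriconazole": (1.0, 5.5),
    "posaconazole": (0.7, 3.0),
}

def check_trough(drug: str, trough_mg_l: float) -> str:
    """Compare a measured triazole trough against the assumed target window."""
    low, high = TDM_TARGETS_MG_L[drug]
    if trough_mg_l < low:
        return "subtherapeutic: risk of breakthrough IFD; consider dose increase"
    if trough_mg_l > high:
        return "elevated: assess for toxicity; consider dose reduction"
    return "within the assumed target range"

# Example: a low voriconazole trough in a child on prophylaxis.
print(check_trough("voriconazole", 0.6))  # subtherapeutic
```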
because azoles increase the toxicity of vinca alkaloids, high doses of cyclophosphamide, and anthracyclines, they should not be co-administered with these agents. the antifungal agents most commonly used in pho and hsct patients, and their indications, are noted in table . . although combination antifungal therapy is not well studied in children, it is used frequently in this population. pediatric data are variable regarding the benefit of combination antifungal therapy but overall report an increase in adverse events [ ]; the risk of systemic toxicity must therefore be taken into account when considering the use of antifungal combinations. combination therapy could be considered in patients with refractory disease or as salvage therapy. granulocyte transfusions for profound or persistent neutropenia, adjunctive cytokines (e.g., granulocyte colony-stimulating factor [gcsf]), and reduction of immunosuppression with tapering of steroids are recommended as adjuncts to antifungal agents in the treatment of ifd [ ].

in summary, children and adolescents with malignancy have additional risk factors for healthcare-associated infections. meticulous attention to personal and oral hygiene, diet, environmental safety, and appropriate immunizations should be practiced in this high-risk population. the use of antimicrobial prophylaxis should be considered in periods of severe neutropenia to prevent bacterial and fungal infections as necessary. prompt diagnosis and management strategies to prevent infectious complications are key to preventing morbidity and mortality in these immunocompromised hosts.

references:
daily bathing with chlorhexidine and its effects on nosocomial infection rates in pediatric oncology patients
infection prevention in the cancer center
oral and dental considerations in the pediatric leukemic patient
guideline on dental management of pediatric patients receiving chemotherapy, hematopoietic cell transplantation, and/or radiation
pathogen distribution and antimicrobial resistance among pediatric healthcare-associated infections reported to the national healthcare safety network
antibiotic use during infectious episodes in the first months of anticancer treatment-a swedish cohort study of children aged - years
mucosal barrier injury laboratory-confirmed bloodstream infection: results from a field test of a new national healthcare safety network definition
the centers for disease control and prevention definition of mucosal barrier injury-associated bloodstream infection improves accurate detection of preventable bacteremia rates at a pediatric cancer center in a low- to middle-income country
strategies to prevent central line-associated bloodstream infections in acute care hospitals: update
guidelines for the prevention of intravascular catheter-related infections
preventing clabsis among pediatric hematology/oncology inpatients: national collaborative results
rapid cycle development of a multifactorial intervention achieved sustained reductions in central line-associated bloodstream infections in haematology oncology units at a children's hospital: a time series analysis
guidelines for preventing infectious complications among hematopoietic cell transplantation recipients: a global perspective
high rates of potentially infectious exposures between immunocompromised patients and their companion animals: an unmet need for education
pet ownership in immunocompromised children--a review of the literature and survey of existing guidelines
should immunocompromised patients have pets?
antibiotic prophylaxis for patients with acute leukemia
levofloxacin to prevent bacterial infection in patients with cancer and neutropenia
antibacterial prophylaxis after chemotherapy for solid tumors and lymphomas
clinical practice guideline for the use of antimicrobial agents in neutropenic patients with cancer: update by the infectious diseases society of america
a pilot study of prophylactic ciprofloxacin during delayed intensification in children with acute lymphoblastic leukemia
levofloxacin prophylaxis during induction therapy for pediatric acute lymphoblastic leukemia
clinical and microbiologic outcomes of quinolone prophylaxis in children with acute myeloid leukemia
effect of levofloxacin prophylaxis on bacteremia in children with acute leukemia or undergoing hematopoietic stem cell transplantation: a randomized clinical trial
updated recommendations of the advisory committee on immunization practices for healthcare personnel vaccination: a necessary foundation for the essential work that remains to build successful programs
red book: report of the committee on infectious diseases
guidelines for environmental infection control in health-care facilities. recommendations of cdc and the healthcare infection control practices advisory committee (hicpac)
management of fever in neutropenic patients with different risks of complications
repeated blood cultures in pediatric febrile neutropenia: would following the guidelines alter the outcome? pediatr blood cancer
etiology and clinical course of febrile neutropenia in children with cancer
changes in the etiology of bacteremia in febrile neutropenic patients and the susceptibilities of the currently isolated pathogens
contemporary antimicrobial susceptibility patterns of bacterial pathogens commonly associated with febrile patients with neutropenia
emergence of carbapenem resistant gram negative and vancomycin resistant gram positive organisms in bacteremic isolates of febrile neutropenic patients: a descriptive study
changing epidemiology of infections in patients with neutropenia and cancer: emphasis on gram-positive and resistant bacteria
recent changes in bacteremia in patients with cancer: a systematic review of epidemiology and antibiotic resistance
management of febrile neutropenia in malignancy using the mascc score and other factors: feasibility and safety in routine clinical practice
guideline for the management of fever and neutropenia in children with cancer and hematopoietic stem-cell transplantation recipients: update
guidelines for the use of antimicrobial agents in neutropenic patients with cancer
outpatient management of fever and neutropenia in adults treated for malignancy: american society of clinical oncology and infectious diseases society of america clinical practice guideline update summary
the multinational association for supportive care in cancer risk index: a multinational scoring system for identifying low-risk febrile neutropenic cancer patients
clinical practice guideline for the use of antimicrobial agents in neutropenic patients with cancer: update by the infectious diseases society of america
invasive mycoses in children receiving hemopoietic sct
a prospective, international cohort study of invasive mold infections in children
epidemiology and outcomes of invasive fungal infections in allogeneic haematopoietic stem cell transplant recipients in the era of antifungal prophylaxis: a single-centre study with focus on emerging pathogens
invasive mold infections in pediatric cancer patients reflect heterogeneity in etiology, presentation, and outcome: a -year, single-institution, retrospective study
in etiology, presentation, and outcome: a -year, single-institution, retrospective study antifungal prophylaxis in pediatric hematology/oncology: new choices & new data. pediatr blood cancer breakthrough zygomycosis after voriconazole treatment in recipients of hematopoietic stem-cell transplants trichosporonosis in pediatric patients with a hematologic disorder results from a prospective, international, epidemiologic study of invasive candidiasis in children and neonates risk factors for mortality in children with candidemia invasive mucormycosis in children: an epidemiologic study in european and non-european countries based on two registries practice guidelines for the diagnosis and management of aspergillosis: update by the infectious diseases society of america ecil- ): guidelines for diagnosis, prevention, and treatment of invasive fungal diseases in paediatric patients with cancer or allogeneic haemopoietic stem-cell transplantation clinical practice guideline for the management of candidiasis: update by the infectious diseases society of america threshold of galactomannan antigenemia positivity for early diagnosis of invasive aspergillosis in neutropenic children galactomannan antigenemia in pediatric oncology patients with invasive aspergillosis beta-d-glucan screening for detection of invasive fungal disease in children undergoing allogeneic hematopoietic stem cell transplantation guideline for primary antifungal prophylaxis for pediatric patients with cancer or hematopoietic stem cell transplant recipients antifungal prophylaxis associated with decreased induction mortality rates and resources utilized in children with new-onset acute myeloid leukemia antifungal prophylaxis with posaconazole vs. fluconazole or itraconazole in pediatric patients with neutropenia key: cord- -ucmzbezx authors: hardell, lennart; carlberg, michael title: health risks from radiofrequency radiation, including g, should be assessed by experts with no conflicts of interest date: - - journal: oncol lett doi: . /ol. . sha: doc_id: cord_uid: ucmzbezx the fifth generation, g, of radiofrequency (rf) radiation is about to be implemented globally without investigating the risks to human health and the environment. this has created debate among concerned individuals in numerous countries. in an appeal to the european union (eu) in september , currently endorsed by > scientists and medical doctors, a moratorium on g deployment was requested until proper scientific evaluation of potential negative consequences has been conducted. this request has not been acknowledged by the eu. the evaluation of rf radiation health risks from g technology is ignored in a report by a government expert group in switzerland and a recent publication from the international commission on non-ionizing radiation protection. conflicts of interest and ties to the industry seem to have contributed to the biased reports. the lack of proper unbiased risk evaluation of the g technology places populations at risk. furthermore, there seems to be a cartel of individuals monopolizing evaluation committees, thus reinforcing the no-risk paradigm. we believe that this activity should qualify as scientific misconduct. most politicians and other decision-makers using guidelines for exposure to radiofrequency (rf) radiation seem to ignore the risks to human health and the environment. 
the fact that the international agency for research on cancer (iarc) at the world health organization (who) in may 2011 classified rf radiation in the frequency range of 30 khz to 300 ghz as a 'possible' human carcinogen, group 2b ( , ), is being ignored. this was recently exemplified in a hearing at the parliament in tallinn, estonia ( ). an important factor may be the influence on politicians of individuals and organizations with inherent conflicts of interest (cois) and their own agenda in supporting the no-risk paradigm ( , ). the international commission on non-ionizing radiation protection (icnirp) has repeatedly ignored scientific evidence on adverse effects of rf radiation on humans and the environment. their guidelines for exposure are based solely on the thermal (heating) paradigm; they were first published in 1998 (icnirp 1998), updated in 2009 (icnirp 2009) and have now been newly published in 2020 (icnirp 2020), with no change of concept, still relying only on thermal effects of rf radiation on humans. the large amount of peer-reviewed science on non-thermal effects has been ignored in all icnirp evaluations ( , ). additionally, icnirp has successfully maintained its obsolete guidelines worldwide. cois can be detrimental, and it is necessary to be as unbiased as possible when assessing health risks. three points should be emphasized. firstly, the evidence regarding health risks from environmental factors may not be unambiguous, and therefore informed judgements must be made. furthermore, there are gaps in knowledge that call for experienced evaluations, and no conclusion can be reached without value judgements. secondly, paradigms are defended against the evidence and against external assessments by social networks in the scientific community. thirdly, the stronger the impact of decisions about health risks on economic, military and political interests, the more strongly stakeholders will try to influence these decision processes. since the iarc evaluation in 2011 ( , ), the evidence on human cancer risks from rf radiation has been strengthened, based on human cancer epidemiology reports ( ) ( ) ( ), animal carcinogenicity studies ( ) ( ) ( ) and experimental findings on oxidative mechanisms ( ) and genotoxicity ( ). therefore, the iarc category should be upgraded from group 2b to group 1, a human carcinogen ( ). the deployment of the fifth generation, 5g, of rf radiation is a major concern in numerous countries, with groups of citizens trying to implement a moratorium until thorough research on adverse effects on human health and the environment has been performed. an appeal for a moratorium, currently signed by > international scientists and medical doctors, was sent to the european union (eu) in september 2017 ( ), so far with no eu response ( ). several regions have implemented a moratorium on the deployment of 5g, motivated by the lack of studies on health effects, for instance geneva ( ). in the present article, the current situation in switzerland is discussed as an example ( ). additionally, the icnirp evaluation is discussed ( ). several swiss citizens have brought to our attention that associate professor martin röösli chairs two important government expert groups in switzerland, despite possible cois and a history of misrepresentation of science ( , ).
these groups are the beratende expertengruppe nis (berenis; the swiss advisory expert group on electromagnetic fields and non-ionising radiation) ( ), and subgroup , the mobile communications and radiation working group of the department of the environment, transport, energy and communications (eidgenössisches departement für umwelt, verkehr, energie und kommunikation), which evaluates rf-radiation health risks from 5g technology ( , ). the conclusions made in the recent swiss government 5g report are biased and can be found here ( , ). this 5g report concluded that there is an absence of short-term health impacts and an absence of, or insufficient evidence for, long-term effects [see table (tableau ) on page in the french version ( ) and table (tabelle ) on page in the german version ( )]. furthermore, it was reported that there is limited evidence for glioma, neurilemmoma (schwannoma) and co-carcinogenic effects, and insufficient evidence for effects on children from prenatal exposure or from their own mobile phone use. regarding cognitive effects, fetal development and fertility (sperm quality), the judgement was that the evidence on harmful effects is insufficient. these evaluations are strikingly similar to those of the icnirp (see appendix b in icnirp; ). other important endpoints, such as effects on the blood-brain barrier, cell proliferation, apoptosis (programmed cell death), oxidative stress (reactive oxygen species) and gene and protein expression, were not evaluated. accordingly, this swiss evaluation is scientifically inaccurate and in opposition to the opinion of numerous scientists in this field ( ). in addition, electromagnetic field (emf) scientists from numerous countries, all with published peer-reviewed research on the biologic and health effects of non-ionizing electromagnetic fields (rf-emf), have stated that: 'numerous recent scientific publications have shown that rf-emf affects living organisms at levels well below most international and national guidelines. effects include increased cancer risk, cellular stress, increase in harmful free radicals, genetic damages, structural and functional changes of the reproductive system, learning and memory deficits, neurological disorders, and negative impacts on general wellbeing in humans. damage goes well beyond the human race, as there is growing evidence of harmful effects to both plant and animal life' ( ). we are concerned that the swiss 5g report may be influenced by ties to mobile phone companies (cois) on the part of one or several members of the evaluating group. funding from telecom companies is an obvious coi. martin röösli has been a member of the board of the telecom-funded swiss research foundation for electricity and mobile communication (fsm) and has received funding from the same organization ( ) ( ) ( ). it should be noted that the fsm is a foundation that formally serves as an intermediary between industry and researchers. according to their website, among the five founders of fsm who 'provided the initial capital of the foundation', four are telecommunications companies: swisscom, salt, sunrise and 3g mobile (liquidated in ). the fifth founder is eth zurich (a technology and engineering university). there are only two sponsors, swisscom (telecommunications) and swissgrid (energy), who 'support the fsm with annual donations that allow for both the management of the foundation and research funding' ( ). the same situation applies to being a member of icnirp (table i) ( ).
in , the ethical council at karolinska institute in stockholm stated that being a member of icnirp is a potential coi. such membership should always be declared. this verdict was based on activities by anders ahlbom in sweden, at that time a member of icnirp, but it is a general statement ( - - ; dnr, - - ). in summary: 'it is required that all parties clearly declare ties and other circumstances that may influence statements, so that decision makers and the public may be able to make solid conclusions and interpretations. aa [anders ahlbom] should thus declare his tie to icnirp whenever he makes statements on behalf of authorities and in other circumstances' (translated into english). cois with links to industry are of great importance; these links may involve direct or indirect funding of research, payment of travel expenses, participation in conferences and meetings, presentation of research, etc. such circumstances are not always declared, as exemplified above. a detailed description was recently presented for icnirp members ( ). icnirp is a non-governmental organization (ngo) based in germany. members are selected via an internal process; the organization lacks transparency and does not represent the opinion of the majority of the scientific community involved in research on health effects from rf radiation. independent international emf scientists in this research area have declared that: 'in , the icnirp released a statement saying that it was reaffirming its guidelines, as in their opinion, the scientific literature published since that time has provided no evidence of any adverse effects below the basic restrictions and does not necessitate an immediate revision of its guidance on limiting exposure to high frequency electromagnetic fields. icnirp continues to the present day to make these assertions, in spite of growing scientific evidence to the contrary. it is our opinion that, because the icnirp guidelines do not cover long-term exposure and low-intensity effects, they are insufficient to protect public health' ( ). icnirp acknowledges only thermal effects of rf radiation; the large body of research on detrimental non-thermal effects is therefore ignored. this was further discussed in a peer-reviewed scientific comment article ( ). in , icnirp published 'icnirp note: critical evaluation of two radiofrequency electromagnetic field animal carcinogenicity studies published in ' ( ). it is surprising that this note claims that the histopathological evaluation in the us national toxicology program (ntp) study on animals exposed to rf radiation was not blinded ( , ). in fact, unfounded critique of the ntp study had already been rebutted ( ); however, this seems to have had little or no impact on this icnirp note casting doubt on the findings of the animal study: 'this commentary addresses several unfounded criticisms about the design and results of the ntp study that have been promoted to minimize the utility of the experimental data on rfr [radiofrequency radiation] for assessing human health risks. in contrast to those criticisms, an expert peer-review panel recently concluded that the ntp studies were well designed, and that the results demonstrated that both gsm- and cdma-modulated rfr were carcinogenic to the heart (schwannomas) and brain (gliomas) of male rats' ( ).
in contrast to the opinion of the icnirp commission members, the iarc advisory group of scientists from countries has recently stated that the cancer bioassays in experimental animals and the mechanistic evidence warrant a high-priority re-evaluation of rf radiation-induced carcinogenesis ( ). surprisingly, the iarc classification of rf-emf exposure as group 2b ('possibly' carcinogenic to humans) from 2011 was concealed in the background material to the new icnirp draft guidelines. notably, one of the icnirp commission members, martin röösli ( ), was also one of the iarc experts evaluating rf carcinogenicity in may 2011 ( ). he should be well aware of the iarc classification. the iarc classification contradicts the scientific basis for the icnirp guidelines, making novel guidelines necessary and providing a basis to halt the rollout of 5g technology. therefore, the icnirp provides scientifically inaccurate reviews for various governments. one issue is that only thermal (heating) effects of rf radiation are considered, and all non-thermal effects are dismissed. an analysis from the uk demonstrates these inaccuracies ( ), also discussed in another article ( ). all members of the icnirp commission are responsible for these biased statements that are not based on solid scientific evidence. icnirp release of novel guidelines for rf radiation. on march , 2020, icnirp published its novel guidelines for exposure to emfs in the range of 100 khz to 300 ghz, thus including 5g ( ). the experimental studies demonstrating a variety of non-thermal biological/health effects ( , ) are not considered, as in the previous guidelines ( , ). additionally, icnirp increased the reference levels for the general public, averaged over min, for rf frequencies > - ghz (those that will be used for 5g in this frequency range), from w/m (tables and in ref. no. ) to w/m (table in ref. no. ), which paves the way for even higher exposure levels from 5g than the already extremely high ones. dosimetry is discussed in appendix a of the icnirp guidelines ( ). the discussion on 'relevant biophysical mechanisms' should be criticized. the only mechanism considered by icnirp is temperature rise, which may also occur with 5g exposure, apart from the established non-thermal biological/health effects ( , ). it is well known among experts in the emf-bioeffects field that the recorded cellular effects, such as dna damage, protein damage, chromosome damage and reproductive declines, and the vast majority of biological/health effects, are not accompanied by any significant temperature rise in tissues ( ) ( ) ( ) ( ). the ion forced-oscillation mechanism ( ) should be referred to as a plausible non-thermal mechanism of irregular gating of electrosensitive ion channels on cell membranes, resulting in disruption of the cell's electrochemical balance and initiating free radical release and oxidative stress in the cells, which in turn causes genetic damage ( , ). the irregular gating of ion channels on cell membranes is associated with changes in the permeability of the cell membranes, which icnirp admits in its summary ( ). health risks are discussed in appendix b of the icnirp guidelines ( ). again, only thermal effects are considered, whereas the literature on non-thermal health consequences is disregarded ( , , ). in spite of public consultations on the draft, the final published version on health effects is virtually identical to the draft version, and comments seem to have been neglected ( ).
in the following section, appendix b on health effects ( ) is examined. icnirp notes that expert groups such as the eu scientific committee on emerging and newly identified health risks (scenihr ) and the swedish radiation safety authority (ssm) have produced several international reports regarding this issue (ssm , ssm , ssm ), and states: 'accordingly, the present guidelines have used these literature reviews as the basis for the health risk assessment associated with exposure to radiofrequency emfs rather than providing another review of the individual studies'. in the years since its previous statement ( ), icnirp has not managed to conduct a novel evaluation of health effects from rf radiation. however, as shown in table i, several of the present icnirp members are also members of other committees, such as the eu scientific committee on emerging and newly identified health risks (scenihr), the swedish radiation safety authority (ssm) and the who, thus creating a cartel of individuals known to propagate the icnirp paradigm on rf radiation ( , , , ). in fact, six of the seven expert members of the who, including emilie van deventer, were also included in icnirp ( , ). emilie van deventer, the team leader of the radiation programme at the who (the international emf project), is an observer on the main icnirp commission, and the ssm seems to be influenced by icnirp: among its current seven external experts (danker-hopfe, dasenbrock, huss, harbo poulsen, van rongen, röösli and scarfi), five are also members of icnirp, and van deventer used to be part of the ssm. as discussed elsewhere ( ), it is unlikely that a person's evaluation of the health risks associated with exposure to rf radiation would differ depending on which group the person belongs to. therefore, by selecting group members, the final outcome of the evaluation may already be predicted (the no-risk paradigm). additionally, we believe that this may compromise a sound scientific code of conduct. the scenihr report from ( ) has been used to legitimate the further expansion of wireless technology and has been the basis for its deployment in a number of countries. one method applied in the scenihr report to dismiss cancer risks involves the selective inclusion of studies: excluding studies reporting cancer risks and including some investigations of inferior epidemiological quality. the report has been heavily criticized by researchers with no coi ( ). regarding the ssm, only yearly updates are available and no overall evaluations are made; therefore, no thorough review is presented. over the years, icnirp has dominated this committee (table i). therefore, it is unlikely that the opinion of the ssm will differ from that of icnirp. in 2014, the who launched a draft of a monograph on rf fields and health for public comments ( ). it should be noted that the who issued the following statement: 'this is a draft document for public consultation. please do not quote or cite'. icnirp completely ignored that request and used the aforementioned document. the public consultations on the draft document were dismissed and never published. in addition to van deventer, five of the six members (mann, feychting, oftedal, van rongen and scarfi) of the core group in charge of the who draft were also affiliated with icnirp, which constitutes a coi (table i). scarfi is a former member of icnirp ( ). several individuals and groups sent critical comments to the who on the numerous shortcomings in the draft of the monograph on rf radiation. in general, the who never responded to these comments, and it is unclear to what extent, if any, they were even considered.
nevertheless, the final version of the who 'in-depth review' has never been published. the authors of the present article were part of a team that applied to review sr -human cancer. on december , , the following reply was received from the who radiation programme: 'after careful review, we have decided to choose another team for this systematic review'. transparency is of importance for the whole process. therefore, a query was sent to the who requesting information on the following points: 'who did the evaluation of the groups that answered the call? what criteria were applied? how many groups had submitted and who were these? which groups were finally chosen for the different packages?'. in spite of the request being sent four times (january , january , april and april , ), there has been no reply from the who. this appears to be a secret process behind closed doors. these circumstances have also been reported in microwave news ( ). it is important to comment on the current icnirp evaluation. notably, on february , , two weeks before the icnirp publication, the who team on public health, environmental and social determinants of health issued a statement on 5g mobile networks and health: 'to date, and after much research performed, no adverse health effect has been causally linked with exposure to wireless technologies' ( ). this statement is not correct based on current knowledge ( , , , ) and was issued without a personal signature. the lack of research on 5g safety has been previously discussed ( ). furthermore, there is no evidence that can 'causally link' an adverse effect to an exposure; causality is not an empirical fact, it is an interpretation. in the following section, only one (cancer) of the eight different end points in the icnirp publication ( ) is discussed, since it concerns our main research area. viii) cancer. 'in summary, no effects of radiofrequency emfs on the induction or development of cancer have been substantiated. the only substantiated adverse health effects caused by exposure to radiofrequency emfs are nerve stimulation, changes in the permeability of cell membranes, and effects due to temperature elevation. there is no evidence of adverse health effects at exposure levels below the restriction levels in the icnirp ( ) guidelines and no evidence of an interaction mechanism that would predict that adverse health effects could occur due to radiofrequency emf exposure below those restriction levels'. the icnirp draft ( ) has been previously described to some extent ( ). the published final version on health effects is virtually identical to the draft. it cannot be taken at face value as scientific evidence of no risk from rf radiation. one example is the following statement (p. ): '…a set of case-control studies from the hardell group in sweden report significantly increased risks of both acoustic neuroma and malignant brain tumors already after less than five years since the start of mobile phone use, and at quite low levels of cumulative call time'. this allegation is not correct according to our publication on glioma ( ). in the shortest latency group (> - years), the risk of glioma was not increased (odds ratio (or), . ; % ci, . - . ) for use of wireless phones (mobile phone and/or cordless phone). there was a statistically significant increased risk of glioma per h of cumulative use (or, . ; % ci, . - . ) and per year of latency (or, . ; % ci, . - . ) ( ). these published results are in contrast to the icnirp claims.
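the dispute above turns on standard case-control arithmetic: an odds ratio (or) from a 2x2 table, with a confidence interval (ci) computed on the log scale. a minimal sketch in python; the counts below are hypothetical placeholders, not data from any study cited here:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """odds ratio and approximate 95% ci for a 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(or)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(a=120, b=380, c=90, d=410)
print(f"or = {or_:.2f} (95% ci {lo:.2f}-{hi:.2f})")
```

an or above unity whose ci excludes 1.0 is what the text calls a statistically significant increased risk; an or above unity whose ci spans 1.0 is elevated but not statistically significant.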
regarding acoustic neuroma, the corresponding detailed results are reported in our previous study ( ). the shortest latency period (> - years) yielded an or of . ( % ci, . - . ) for use of wireless phones; the risk increased per h of cumulative use (or, . ; % ci, . - . ) and per year of latency (or, . ; % ci, . - . ) ( ). therefore, the allegation by icnirp is false. it is remarkable that icnirp is so uninformed and that their writing is based on a misunderstanding of the peer-reviewed published articles, as exemplified above. additionally, our studies ( , ) and another study by coureau et al ( ), as well as the iarc evaluation from 2011 ( , ), are not included among the references. several statements by icnirp are made without any scientific references. on the other hand, the danish cohort study on mobile phone use ( ) is included, in spite of the fact that it was judged by iarc ( , ), as well as in our review ( ), to be uninformative. a biased article written by authors including icnirp members, used to 'prove' the no-risk paradigm for rf radiation carcinogenesis ( ), is cited by icnirp. notably, the article has not undergone relevant peer review, and we believe that it should not have been published in its current version. the shortcomings of the aforementioned article are discussed in the following sections. as discussed below, another claim ( ) regarding increased risk of brain tumors associated with use of wireless phones is incorrect: 'however, they are not consistent with trends in brain cancer incidence rates from a large number of countries or regions, which have not found any increase in the incidence since mobile phones were introduced'. the criticism of the icnirp draft guidelines by the emf call ( ) can also be applied to the current icnirp publication. the call has been signed by scientists and medical doctors, as well as ngos: 'the international commission on non-ionizing radiation protection (icnirp) issued draft guidelines on th july for limiting exposure to electric, magnetic and electromagnetic fields ( khz to ghz). these guidelines are unscientific, obsolete and do not represent an objective evaluation of the available science on effects from this form of radiation. they ignore the vast amount of scientific findings that clearly and convincingly show harmful effects at intensities well below icnirp guidelines. we ask the united nations, the world health organization, and all governments to support the development and consideration of medical guidelines that are independent of conflict of interests in terms of direct or indirect ties to industry, that represent the state of medical science, and that are truly protective'. in the recent report on icnirp published by two members of the european parliament, it is concluded: 'that is the most important conclusion of this report: for really independent scientific advice we cannot rely on icnirp. the european commission and national governments, from countries like germany, should stop funding icnirp. it is high time that the european commission creates a new, public and fully independent advisory council on non-ionizing radiation' ( ). published article. this section discusses an article with conclusions not substantiated by scientific evidence, representing a biased evaluation of cancer risks from mobile phone use; it is an example of a lack of objectivity and impartiality ( ). the aforementioned article was used by icnirp ( ) to validate the claim that no risks have been found for brain and head tumors.
therefore, the article should be discussed in further detail. the aforementioned article has numerous severe scientific deficiencies. one is that the results on use of cordless phones as a risk factor for brain tumors are not discussed. in fact, the detailed results on cordless phones in the studies by hardell et al ( , ) are omitted. when discussing glioma risk, all results on cumulative use of mobile phones, as well as on ipsilateral or contralateral use in relation to tumor localization in the brain, are omitted from the figures in the main text. some results in the article by röösli et al ( ), such as those on cumulative use, can be found in the supplementary material, although the increased risk among heavy users is disregarded ( , , , ). in supplementary figure , all odds ratios regarding long-term (≥ years) use of mobile phones are above unity (>1.0) for glioma and neuroma ( ). no results are provided for ipsilateral mobile phone use (mobile phone use on the same side as the tumor localization), which is of great biological importance. results on cumulative use, latency and ipsilateral use are especially important for risk assessment and have shown a consistent pattern of increased risk for brain and head tumors ( , ). in the aforementioned article, recall bias is discussed as the reason for the increased risk ( ). the studies by hardell et al ( , ) included all types of brain tumors. in one analysis, meningioma cases in the same study were used as the 'control' entity ( ), and still a statistically significant increased risk of glioma was identified for mobile phone use (ipsilateral or, . ; % ci, . - . ; contralateral or, . ; % ci, . - . ) and for cordless phone use (ipsilateral or, . ; % ci, . - . ; contralateral or, . ; % ci, . - . ). if the results were 'explained' by recall bias, similar results would have been obtained for both glioma and meningioma; this type of analysis would then not have yielded an increased glioma risk. also, for acoustic neuroma, a statistically significant increased risk was found using meningioma cases as 'controls' ( ). therefore, the results in the studies by hardell et al ( , ) cannot be explained by a systematic difference in the assessment of exposure between cases and controls. these important methodological findings were disregarded by röösli et al ( ). in the analyses of long-term use of mobile phones, a danish cohort study on mobile phone use is included ( ), which was concluded to be uninformative in the iarc evaluation ( , ). a methodological shortcoming of that study was that only private mobile phone subscribers in denmark between 1982 and 1995 were included in the exposed group ( ). the most exposed group, comprising corporate users of mobile phones, was excluded and instead included in the unexposed control group consisting of the rest of the danish population. users with a mobile phone subscription after 1995 were not included in the exposed group and were thus treated as unexposed at the time of the cut-off of the follow-up. no analysis of laterality of mobile phone use in relation to tumor localization was performed. notably, this cohort study is now included in the risk calculations, although martin röösli was a member of the iarc evaluation group and should have been aware of the iarc decision. the numerous shortcomings of the danish cohort study, discussed in detail in a peer-reviewed article ( ), are omitted in the article by röösli et al ( ).
regarding animal studies, a study by falcioni et al ( ) at the ramazzini institute on rf radiation carcinogenesis is only mentioned as a reference, and the results are not discussed. in fact, these findings ( ) provide supportive evidence for the risk found in human epidemiology studies ( ), as well as for the results of the ntp study ( , ). furthermore, the results of incidence studies on brain tumors are not presented in an adequate way. there is much emphasis on the swedish cancer register data ( , ), but the numerous shortcomings in the reporting of brain tumor cases to that register are not discussed. these shortcomings have been presented in detail in a previous study ( ), but are disregarded by röösli et al ( ). there is clear evidence from several countries of increasing numbers of patients with brain tumors, such as in sweden ( , ), england ( ), denmark ( ) and france ( ). the article by röösli et al ( ) does not represent an objective scientific evaluation of the risk of brain and head tumors associated with the use of wireless phones, and should thus be disregarded. by omitting results of biological relevance and including studies that have been judged to be uninformative, the authors come to the conclusion that there are no risks: 'in summary, current evidence from all available studies including in vitro, in vivo, and epidemiological studies does not indicate an association between mp [mobile phone] use and tumors developing from the most exposed organs and tissues'. röösli et al ( ) disregard the concordance of increased cancer risk in human epidemiology studies ( , , , ), animal studies ( ) ( ) ( ) ( ) and laboratory studies ( , , ). it is unfortunate that the review process for the aforementioned article was not of adequate quality. finally, there is no statement in the article on the specific funding of this particular work, which is not acceptable. only a limited number of comments on general funding are provided. it is not plausible that there was no funding for the study. we believe that, due to its numerous limitations, the aforementioned article should not have been published. cefalo. in 2011, a case-control study on mobile phone use and brain tumor risk among children and adolescents, termed cefalo, was published ( ). the study appears to have been designed to misrepresent the true risk, since the following question regarding cordless phone use was asked: 'how often did [child] speak on the cordless phone in the first years he/she used it regularly?'. there are no scientifically valid reasons to limit the investigation to the first years of use. the result is a misrepresentation and a wrong exposure classification, since aydin et al ( ) willingly omitted any increase in the child's use of, and exposure to, cordless phone radiation after the first years of use. this unscientific treatment of cordless phone exposure was not mentioned in the article other than in a footnote to a table and in the methods section ( ); however, no explanation was provided: 'specifically, we analyzed whether subjects ever used baby monitors near the head, ever used cordless phones, and the cumulative duration and number of calls with cordless phones in the first years of use'. since previous studies have demonstrated that these phone types, in addition to mobile phones, increase brain tumor risk ( , ), we believe that the exclusion of a complete exposure history on the use of cordless phones represents scientific misconduct.
in a critical comment, the authors of the present study wrote: 'further support of a true association was found in the results based on operator-recorded use for cases and controls, which for time since first subscription > . years yielded or . ( % ci . - . ) with a statistically significant trend (p = . ). the results based on such records would be judged to be more objective than face-to-face interviews, as in the study that clearly disclosed to the interviewer who was a case or a control. the authors disregarded these results on the grounds that there was no significant trend for operator data for the other variables - cumulative duration of subscriptions, cumulative duration of calls and cumulative number of calls. however, the statistical power in all the latter groups was lower since data was missing for about half of the cases and controls with operator-recorded use, which could very well explain the difference in the results' ( ). our conclusion was that: 'we consider that the data contain several indications of increased risk, despite low exposure, short latency period, and limitations in the study design, analyses and interpretation. the information certainly cannot be used as reassuring evidence against an association, for reasons that we discuss in this commentary' ( ). this is in contrast to the authors, who claimed that the study was reassuring of no risk in a press release from martin röösli, july ( ). considering the results and the numerous scientific shortcomings of the study ( ), the statements in these press releases are not correct. there is no doubt that several individuals included in table i are influential, being members of, and having consulting assignments with, several organizations, such as icnirp, berenis, the ssm, the programme electromagnetic fields and health of zonmw in the netherlands, and the rapid response group of the japan emf information center ( ). in fact, there appears to be a cartel of individuals working on this issue ( ). associate professor martin röösli has had the chance to provide his view on the content of the present article relating to him. the only message from him was in an e-mail dated january , : 'just to be clear, all my research is funded by public money or not-for-profit fundations [foundations]. i think you will not help an important debate if you spread fake news'. obviously, as described in the present article, his comment is not correct considering his funding from the telecom industry ( , ). as shown in table i, few individuals, and mostly the same ones, are involved in the different evaluations of health risks from rf radiation, and they will thus propagate the same views on the risks in agencies of different countries in line with the icnirp views ( , ). therefore, it is unlikely that they will change their opinions when participating in different organizations. furthermore, their competence in natural sciences, such as medicine, is often low or non-existent due to a lack of education in these disciplines ( ). therefore, any chance of solid evaluations of medical issues is hampered. additionally, it must be concluded that if the 'thermal only' dogma is dismissed, this will have wide consequences for the whole wireless community, including permissions for base stations, regulation of wireless technology and marketing, and plans to roll out 5g, and it would therefore have a large impact on the industry. this may explain the resistance to acknowledging the risk by icnirp, the eu, the who, the ssm and other agencies.
however, the most important aspects to consider are human wellbeing and a healthy environment. telecoms can make a profit in a variety of ways, and wireless is just one of them. they have the capacity to maintain profits by using different techniques, such as optical fiber, that will provide more data with less rf radiation exposure - particularly when considering the liability they are incurring through their misguided insistence on wireless expansion, which may ultimately catch up with them in the form of lawsuits, such as those previously experienced by the asbestos and tobacco companies ( , ). a recent book describes how deception is used to capture agencies and hijack science ( ). there are certain tools that can be used for this. one is to re-analyze existing data using methods that are biased towards predetermined results ( ). for example, this can be performed by hiring 'independent experts' to question scientific results and create doubt ( , ). as clearly discussed in a number of chapters of the books ( - ), front groups may be created to gain access to politicians and to influence the public with biased opinions. other methods may involve intimidating and harassing independent scientists who report health risks based on sound science, or removing all funding from scientists who do not adhere to the no-risk, pro-industry paradigm. another tool is economic support and the courting of decision makers with special information sessions that mislead them on science and mask bribery ( , , , - ). an industry with precise marketing goals has a big advantage over a loose scientific community with little funding. furthermore, access to regulatory agencies and overwhelming them with comments on proposed regulations is crucial ( ). counteracting all these actions is time consuming and not always successful ( ). nevertheless, it is important that these circumstances are explored and published in the peer-reviewed literature as historical notes for future use. based on the swiss and icnirp experiences, some recommendations can be made. one is to include only unbiased and experienced experts without cois in the evaluation of health risks from rf radiation. all countries should declare a moratorium on 5g until independent research, performed by scientists without any ties to the industry, confirms whether or not it is safe. earlier wireless generations and wifi are also considered not to be safe, but 5g will be worse regarding harmful biological effects ( , , ). the authors of the present article recommend a campaign to educate the public about the health risks of rf radiation exposure and the safe use of the technology, such as the deployment of wired internet in schools ( ), as previously recommended by the european council resolution in ( ) and the emf scientist appeal ( ). additionally, it is recommended that governments take steps to markedly decrease the current exposure of the public to rf radiation ( , ). notably, dna damage has been identified in peripheral blood lymphocytes using the comet assay technique, and in buccal cells using the micronucleus assay, in individuals exposed to rf radiation from base stations ( ). finally, an alternative approach to the flawed icnirp safety standards may be the comprehensive work of the european academy for environmental medicine (europaem) emf working group, which has resulted in safety recommendations that are free from the icnirp shortcomings ( ). recently, the international guidelines on non-ionising radiation (ignir) have accepted the europaem safety recommendations ( ).
the bioinitiative group has recommended similar safety standards based on non-thermal emf effects ( ). the who and all nations should adopt the europaem/bioinitiative/ignir safety recommendations, supported by the majority of the scientific community, instead of the obsolete icnirp standards. in conclusion, it is important that all experts evaluating scientific evidence and assessing health risks from rf radiation have no cois or bias. being a member of icnirp and being funded by the industry, directly or through an industry-funded foundation, constitute clear cois. furthermore, it is recommended that the interpretation of results from studies on the health effects of rf radiation take sponsorship by the telecom or other industry into account. it is concluded that icnirp has failed to conduct a comprehensive evaluation of the health risks associated with rf radiation. the latest icnirp publication cannot be used for guidelines on this exposure. data sharing is not applicable to this article, as no datasets were generated or analyzed during the present study. lh and mc contributed to the conception, design and writing of the manuscript. both authors read and approved the final manuscript.
references:
who international agency for research on cancer monograph working group: carcinogenicity of radiofrequency electromagnetic fields
iarc monographs on the evaluation of carcinogenic risks to humans: non-ionizing radiation
as regards the deployment of the fifth generation, 5g, of wireless communication
inaccurate official assessment of radiofrequency safety by the advisory group on non-ionising radiation
world health organization, radiofrequency radiation and health - a hard nut to crack (review)
international commission on non-ionizing radiation protection: guidelines for limiting exposure to time-varying electric, magnetic, and electromagnetic fields (up to ghz)
international commission on non-ionizing radiation protection: icnirp statement on the 'guidelines for limiting exposure to time-varying electric, magnetic and electromagnetic fields (up to ghz)'
international commission on non-ionizing radiation protection (icnirp): guidelines for limiting exposure to electromagnetic fields ( khz to ghz)
thermal and non-thermal health effects of low intensity non-ionizing radiation: an international perspective
cancer epidemiology update, following the iarc evaluation of radiofrequency electromagnetic fields (monograph )
mobile phone and cordless phone use and the risk for glioma - analysis of pooled case-control studies in sweden
national toxicology program: ntp technical report on the toxicology and carcinogenesis studies in hsd:sprague dawley sd rats exposed to whole-body radio frequency radiation at a frequency ( mhz) and modulations (gsm and cdma) used by cell phones
report of final results regarding brain and heart tumors in sprague-dawley rats exposed from prenatal life until natural death to mobile phone radiofrequency field representative of a 1.8 ghz gsm base station environmental emission
oxidative mechanisms of biological activity of low-intensity radiofrequency radiation
evaluation of the genotoxicity of cell phone radiofrequency radiation in male and female rats and mice following subchronic exposure
evaluation of mobile phone and cordless phone use and glioma risk using the bradford hill viewpoints from on association or causation
appeals that matter or not on a moratorium on the deployment of the fifth generation, 5g, for microwave radiation
environmental health trust: three-year moratorium on g and g in
head of swiss radiation protection committee accused of 5g-swindle. nordic countries deceived
the international commission on non-ionizing radiation protection: conflicts of interest, corporate capture and the push for 5g
brain and salivary gland tumors and mobile phone use: evaluating the evidence from various epidemiological study designs
berenis - swiss expert group on electromagnetic fields and non-ionising radiation
office fédéral de l'environnement: téléphonie mobile et 5g: le conseil fédéral décide de la suite de la procédure
département fédéral de l'environnement, des transports, de l'énergie et de la communication: groupe de travail téléphonie mobile et rayonnement: présentation d'un rapport factuel global
groupe de travail téléphonie mobile et rayonnement: rapport téléphonie mobile et rayonnement. publié par le groupe de travail téléphonie mobile et rayonnement sur mandat du detec
herausgegeben von der arbeitsgruppe mobilfunk und strahlung im auftrag des uvek
un groupe de travail fédéral temporise sur les risques de santé et ne fixe pas de limite aux rayonnements
emfscientist: international appeal: scientists call for protection from non-ionizing electromagnetic field exposure
swiss research foundation for electricity and mobile communication: organisation
swiss research foundation for electricity and mobile communication: publications
swiss research foundation for electricity and mobile communication: annual report
swiss research foundation for electricity and mobile communication: sponsors and supporters
international commission on non-ionizing radiation protection (icnirp): icnirp note: critical evaluation of two radiofrequency electromagnetic field animal carcinogenicity studies published in
commentary on the utility of the national toxicology program study on cell phone radiofrequency radiation data for assessing human health risks despite unfounded criticisms aimed at minimizing the findings of adverse health effects
iarc monographs priorities group: advisory group recommendations on priorities for the iarc monographs
international commission on non-ionizing radiation protection: guidelines for limiting exposure to time-varying electric, magnetic and electromagnetic fields ( khz to ghz)
international commission on non-ionizing radiation protection: commission
iarc monographs on the evaluation of carcinogenic risks to humans
systematic derivation of safety limits for time-varying 5g radiofrequency exposure based on analytical models and thermal dose
exposure of insects to radio-frequency electromagnetic fields from to ghz
effects of electromagnetic fields on molecules and cells
the effects of radiofrequency fields on cell proliferation are non-thermal
comparing dna damage induced by mobile telephony and other types of man-made electromagnetic fields
chromosome damage in human cells induced by umts mobile telephony radiation
mechanism for action of electromagnetic fields on cells
electromagnetic fields act via activation of voltage-gated calcium channels to produce beneficial or adverse effects
europaem emf guideline for the prevention, diagnosis and treatment of emf-related health problems and illnesses
scientific committee on emerging and newly identified health risks (scenihr): opinion on potential health effects of exposure to electromagnetic fields (emf). european commission
comments on scenihr: opinion on potential health effects of exposure to electromagnetic fields
world health organization: radio frequency fields: environmental health criteria monograph
consultation on the scientific review for the upcoming who environmental health criteria
microwave news: will who kick its icnirp habit? non-thermal effects hang in the balance. repacholi's legacy of industry cronyism
world health organization: 5g mobile networks and health
pooled analysis of case-control studies on acoustic neuroma diagnosed - and - and use of mobile and cordless phones
mobile phone use and brain tumours in the cerenat case-control study
cellular telephones and cancer - a nationwide cohort study in denmark
review of four publications on the danish cohort study on mobile phone subscribers and risk of brain tumors
the emf call: scientists and ngos call for truly protective limits for exposure to electromagnetic fields ( khz to ghz)
brain tumour risk in relation to mobile telephone use: results of the interphone international case-control study
increasing rates of brain tumours in the swedish national inpatient register and the causes of death register
mobile phones, cordless phones and rates of brain tumors in different age groups in the swedish national inpatient register and the swedish cancer register during -
brain tumours: rise in glioblastoma multiforme incidence in england - suggests an adverse environmental or lifestyle factor
microwave news: spike in 'aggressive' brain cancer in denmark
brain cancers: times more new cases of glioblastoma in according to public health france
indication of cocarcinogenic potential of chronic umts-modulated radiofrequency exposure in an ethylnitrosourea mouse model
tumor promotion by exposure to radiofrequency electromagnetic fields below exposure limits for humans
mobile phone use and brain tumors in children and adolescents: a multicenter case-control study
childhood brain tumour risk and its association with wireless phones: a commentary
kein erhöhtes hirntumorrisiko bei kindern und jugendlichen wegen handys
reassuring results from first study on young mobile users and cancer risk
swedish radiation safety authority: declaration of disqualification, conflicts of interest and other ties for experts and specialists of the swedish radiation safety authority
electromagnetic radiation safety: icnirp's exposure guidelines for radio frequency fields
swiss research foundation for electricity and mobile communication: list of funded research projects
swiss research foundation for electricity and mobile communication: sponsors and supporters
secret ties in asbestos - downplaying and effacing the risks of a toxic mineral
greenwashing: the swedish experience
the triumph of doubt: dark money and the science of deception
doubt is their product. how industry's assault on science threatens your health
corporate ties that bind. an examination of corporate manipulation and vested interest in public health
towards 5g communication systems: are there health implications?
5g wireless telecommunications expansion: public health and environmental implications
measurements of radiofrequency radiation with a body-borne exposimeter in swedish schools with wi-fi. front public health
radiofrequency radiation from nearby mobile phone base stations - a case comparison of one low and one high exposure apartment
compared with results on brain and heart tumour risks in rats exposed to . ghz base station environmental emissions
international guidelines on non-ionising radiation: guidelines. ignir's latest independent guidelines on emf exposure are available now to download and use
a rationale for biologically-based exposure standards for low-intensity electromagnetic radiation
swedish radiation safety authority: publications
the authors would like to thank mr. reza ganjavi for valuable comments. no funding was received. the authors declare that they have no competing interests. key: cord- -p lijyu authors: rodriguez-proteau, rosita; grant, roberta l. title: toxicity evaluation and human health risk assessment of surface and ground water contaminated by recycled hazardous waste materials date: - - journal: water pollution doi: . /b sha: doc_id: cord_uid: p lijyu prior to the 1970s, principles involving the fate and transport of hazardous chemicals from either hazardous waste spills or landfills into ground water and/or surface water were not fully understood. in addition, national guidance on proper waste disposal techniques was not well developed. as a result, there were many instances where hazardous waste was not disposed of properly, such as the love canal environmental pollution incident. this incident led to the passage of the resource conservation and recovery act (rcra) of 1976. this act gave the united states environmental protection agency regulatory control of all stages of the hazardous waste management cycle. presently, numerous federal agencies provide guidance on methods and approaches used to evaluate potential health effects and assess risks from contaminated source media, i.e., soil, air, and water. these agencies also establish standards of exposure, or health benchmark values, for the different media, which are not expected to produce environmental or human health impacts. the risk assessment methodology used by various regulatory agencies comprises the following steps: i) hazard identification; ii) dose-response (quantitative) assessment; iii) exposure assessment; iv) risk characterization. the overall objectives of risk assessment are to balance risks and benefits; to set target levels; to set priorities for program activities at regulatory agencies, industrial or commercial facilities, or environmental and consumer organizations; and to estimate residual risks and the extent of risk reduction. the chapter will provide information on the concepts used in estimating risk and hazard due to exposure to ground and surface waters contaminated from the recycling of hazardous waste and/or hazardous waste materials for each of the steps in the risk assessment process. moreover, this chapter will provide examples of contaminated-water exposure pathway calculations, as well as information on current guidelines, databases, and resources such as current drinking water standards, health advisories, and ambient water quality criteria.
finally, specific examples of contaminants released from recycled hazardous waste materials, and case studies evaluating the human health effects of contamination of ground and surface waters by recycled hazardous waste materials, will be provided and discussed. after world war ii, industries began to produce a whole new generation of industrial and consumer goods made of synthetic organic chemicals such as plastics, solvents, detergents, and pesticides. industries profited enormously from the production and marketing of these products, and consumers became accustomed to the convenience of synthetic products as well as cheap, convenient, throwaway packaging materials. as the industrial production of these products increased, so did the production, accumulation, and disposal of hazardous waste. prior to 1976, facilities that handled and/or disposed of hazardous waste were not provided with detailed regulations and/or guidance on proper waste handling and disposal techniques and, as a result, there were many instances where hazardous waste was improperly disposed. when chemicals are improperly disposed of in the environment, abandoned hazardous waste sites are created that potentially affect human health and cost our society billions of dollars, due to the high cost not only of evaluating human health and environmental impacts but also of performing site clean-ups. an example of one of the most well known incidents of improper disposal of hazardous waste was the love canal environmental pollution incident [ ]. this incident led to the passage of the resource conservation and recovery act (rcra) of 1976. this act gave the united states environmental protection agency (usepa) regulatory control of all stages of the hazardous waste management cycle, from 'cradle to grave'. beginning in the 1970s, congress passed several other acts designed to protect human health and the environment (table ). based on the legislative directives in these acts, the usepa has issued numerous rules, regulations, and guidance documents to ensure that the use, disposal, processing, and handling of hazardous waste do not result in impacts to human health or the environment. state governments are authorized to implement the rules and regulations promulgated by the usepa, to permit facilities that handle hazardous waste in their states, and to create additional state rules and regulations that apply to the operations of facilities in their specific state. the emphasis in recent years is on preventing pollution by recycling hazardous waste, followed by proper disposal practices. rcra defines recyclable materials as 'hazardous wastes that are reclaimed to recover a usable product'. recycling is a broad term that applies to those who use, reuse, or reclaim waste, whether as an ingredient to make a product or as an effective substitute for a commercial product. a material is reclaimed if it is processed to recover a useful by-product or if it forms the starting material for another process. the systematic scientific approach of evaluating potential adverse health effects resulting from human exposure to hazardous agents or situations proceeds by the following steps: i) hazard identification; ii) dose-response (quantitative) assessment; iii) exposure assessment; iv) risk characterization [ ].
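these four steps culminate in simple exposure arithmetic for the drinking-water pathway, previewing the exposure pathway calculations mentioned in the abstract. a minimal sketch of the standard usepa-style ingestion equations, cdi = (c x ir x ef x ed) / (bw x at), risk = cdi x sf and hq = cdi / rfd; the concentration, slope factor, and reference dose below are hypothetical placeholders, not values from this chapter:

```python
def chronic_daily_intake(c_mg_per_l, ir_l_day=2.0, ef_d_yr=350,
                         ed_yr=30, bw_kg=70.0, at_days=70 * 365):
    """chronic daily intake (mg/kg-day) for drinking-water ingestion:
    cdi = (c x ir x ef x ed) / (bw x at)."""
    return (c_mg_per_l * ir_l_day * ef_d_yr * ed_yr) / (bw_kg * at_days)

# hypothetical ground water contaminant, for illustration only
c = 0.005     # mg/l, assumed concentration
sf = 0.091    # (mg/kg-day)^-1, assumed oral slope factor
rfd = 0.003   # mg/kg-day, assumed oral reference dose

cdi_cancer = chronic_daily_intake(c)                       # at = 70-year lifetime
cdi_noncancer = chronic_daily_intake(c, at_days=30 * 365)  # at = exposure duration

print(f"excess lifetime cancer risk ~ {cdi_cancer * sf:.1e}")
print(f"hazard quotient ~ {cdi_noncancer / rfd:.2f}")
```

note the design convention: for carcinogens the intake is averaged over a full lifetime, whereas for noncarcinogens it is averaged over the exposure duration; this follows from the no-threshold assumption for carcinogens discussed later in the chapter.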
the overall objectives of risk assessment are to balance risks and benefits; to set target levels; to set priorities for program activities at regulatory agencies, industrial or commercial facilities, or environmental and consumer organizations; and to estimate residual risks and the extent of risk reduction [ ]. diversity in risk assessment methodology helps ensure that all possible risk models and outcomes have been considered and minimizes the potential for error [ ]. this section will provide information on the concepts used in estimating risk and hazard due to exposure to ground water and surface water contaminated from the recycling of hazardous waste and/or hazardous waste materials for each of the aforementioned steps. the first step in the risk assessment process is an evaluation of all human and animal data to determine what health effects occur after exposure to a chemical. well-conducted human studies are preferred, but occupational or accidental exposures to chemicals also provide useful information. in most cases, however, the results from animal studies are used as models to predict effects in humans, since animal studies allow for controlled dose-response investigations and detailed, thorough toxicological analysis. some toxicants produce health effects immediately following exposure, such as air pollutants that can produce eye irritation in individuals after a few minutes of exposure. other effects, such as organ damage due to metals and solvents, may not become manifest for months or years after first exposure. the time from the first exposure to the observation of a health effect is called the latent period. the length of this period is dependent on various factors, such as the type of pathology induced by the compound/chemical of potential concern (copc), dose, and dose rate, as well as host characteristics such as age at first exposure, gender, race, species, and strain. other host factors that influence susceptibility to environmental exposures include genetic traits; preexisting diseases; behavioral traits such as smoking; coexisting exposures; and medication and vitamin supplementation [ ]. genetic studies include investigations of the effects of chemicals on genes and chromosomes (genetic toxicology), and ecogenetics, a relatively new field, describes a host's genetic variation in predisposition and resistance to copc exposure [ ]. ecogenetics involves studies of specific exposures ranging from pharmaceuticals (a subfield known as pharmacogenetics) to pesticides, inhaled pollutants, foods, food additives, and allergic and sensitizing agents [ ]. moreover, induction of a health effect at the molecular level may occur after a single exposure, after repeated exposures, or after long-term continuous exposure. the length of the induction period may be a function of the same variables as the latent period. effective exposure time refers to the exposure time that occurred up to the point of induction [ ]. ineffective exposure is readily observed in dose-response curves as a saturation of response in the high dose range. an experimental study must follow the subjects beyond the length of the minimum latent period to observe all effects and cases associated with exposure. under ideal circumstances, a study will follow subjects for their lifetime. lifetime follow-up is common for animal studies but uncommon for epidemiology studies [ ]. qualitative assessment of hazard information should include a consideration of the consistency and concordance of the findings.
such assessments should include a determination of the consistency of the toxicological findings across species and target organs, an evaluation of consistency across duplicate experimental conditions, and the adequacy of the experiments to detect the adverse endpoints of interest [ ]. for consideration of whether a copc is a carcinogen, qualitative assessment of animal or human evidence is performed by many agencies, including the usepa and the international agency for research on cancer (iarc). similar evidence classifications are used for the animal and human evidence categories by both agencies. these evidence classifications feed into overall weight-of-evidence (woe) carcinogenicity classification schemes; the alphanumeric classification levels recommended by usepa [ ] are shown in table :

table . usepa's carcinogenicity classification scheme [ ]:
- group a: human carcinogen
- group b (b1/b2): probable human carcinogen
- group c: possible human carcinogen
- group d: not classifiable as to human carcinogenicity
- group e: evidence of noncarcinogenicity for humans; no evidence of carcinogenicity in adequate studies in at least two species or in both epidemiological and animal studies

usepa's woe carcinogenicity classification scheme was first recommended in the guidelines for carcinogen risk assessment (usepa, 1986, hereafter the "cancer guidelines") [ ]. however, the guidelines for carcinogen risk assessment, review draft (usepa, 1999, hereafter the "draft cancer guidelines") [ ] instead recommend a woe narrative describing a summary of the key evidence for carcinogenicity. the draft cancer guidelines will serve as interim guidance until usepa issues final cancer guidelines [ ].

for evaluating chemical mixtures of noncarcinogens, mumtaz and durkin [ ] suggest that the interaction data (i.e., independent joint action, similar joint action, and synergistic action) and the qualitative and quantitative interaction matrix be taken into consideration when determining the hazard index. a qualitative woe scheme for evaluating chemical mixtures is shown in table ; the woe takes into consideration the copcs, the data, the reference doses/concentrations, and the hazard index based on additivity [ ]:

table . weight-of-evidence classification scheme for qualitative assessment of chemical mixtures, from mumtaz and durkin [ ]:
- mechanistic understanding (i, ii, and iii): i. direct and unambiguous mechanistic data; ii. mechanistic data on related compounds; iii. inadequate or ambiguous mechanistic data
- toxicologic significance (a, b, and c): a. direct evidence of toxicologic significance of interaction; b. probable evidence of toxicologic significance based on related compounds; c. unclear evidence of toxicologic significance
- exposure modifiers (1 and 2): 1. anticipated exposure duration and sequence; 2. different exposure duration or sequence; a. in vivo data; b. in vitro data; i. anticipated route of exposure; ii. different route of exposure
- direction of interaction: the mixture is additive (=), greater than additive (>), or less than additive (<)

figure illustrates the woe determination for each chemical mixture by a symbol indicating the direction of the interaction, followed by the alphanumeric expression in table . the first two components are the major factors in ranking the quality of the mechanistic data supporting the risk assessment. because toxicity studies must be evaluated to determine the quantitative dose-response relationship between the magnitude of exposure and the extent and severity of the adverse effect, a brief description of various toxicity tests will be provided.
different methodologies are used to characterize dose-response relationships, depending on whether the chemical has been identified as a carcinogen or a noncarcinogen. carcinogens are assumed to pose some risk at any exposure level [ ]. four classes of toxicant-induced health effects include: i) cancer: genotoxic and nongenotoxic mechanisms; ii) hereditary effects: genotoxic mechanisms; iii) developmental effects: genotoxic or nongenotoxic mechanisms; iv) organ/tissue effects: nongenotoxic mechanisms [ ].

the evaluation of chemicals for acute toxicity is necessary for the protection of public health and the environment. acute toxicity testing is generally performed by the probable route of exposure in order to provide information on health hazards likely to arise from short-term exposure by that route ( table ) [ ]. as shown in table , there are four toxicity categories, ranging from i to iv, based on increasing doses. generally, acute studies evaluate oral, dermal, and inhalation exposure as well as eye and skin irritation and dermal sensitization. acute inhalation studies are performed for one to seven days, while intermediate studies are performed for seven days to several months [ ]. an evaluation of acute toxicity data includes the relationship of the exposure to the copc and the incidence and severity of all abnormalities, gross lesions, body weight changes, effects on mortality, and any other toxic effects.

an acute exposure is considered to be a one-time or short-term exposure with a duration of less than or equal to 24 h. acute toxicity testing is conducted for up to 14 days of exposure, and subacute testing for 14-28 days. testing periods for the evaluation of developmental effects are shorter, since developmental toxicity can occur after brief periods of exposure. subchronic testing is typically conducted for 90 days to 1 year, since subchronic exposures are considered to be multiple or continuous exposures occurring for approximately 10% of an experimental species' lifetime. chronic exposures are assumed to be multiple exposures occurring over an extended period of time, or over a significant fraction of the animal's or the individual's lifetime.

to minimize the number of animals used and to take full account of their welfare, usepa recommends the use of data from structurally related substances or mixtures [ ]. review of existing toxicity information on chemical substances that are structurally related to the copc may provide enough information to make preliminary hazard evaluations and may reduce the need for testing. for example, if a chemical can be predicted to have corrosive potential based on structure-activity relationships (sars), dermal or eye irritation testing does not need to be performed in order to classify it as a corrosive agent. all the human carcinogens that have been identified have produced positive results in at least one animal model. in the absence of adequate human data, it is plausible to regard agents and/or mixtures for which sufficient evidence of carcinogenicity in animals exists as a possible carcinogenic risk to humans [ ]; therefore, chemicals that cause tumors in animals are presumed to cause tumors in humans. in general, the most appropriate rodent bioassays are those that test the exposure pathways most relevant to human exposure pathways, i.e., inhalation, oral, dermal, etc.
because it is feasible to combine bioassays, it is desirable to tie these bioassays to mechanistic studies, biomarker studies, and genetic studies in order to understand the mechanism(s) of toxicity and/or carcinogenicity [ ]. a typical experimental design includes two different species and both genders, with at least 50 subjects per experimental group, using near-lifetime exposures. for dose-response purposes, a minimum of three dose levels should be used. the highest dose, typically the maximum tolerated dose (mtd), is based on the findings from a 90-day study to ensure that the test dose is adequate for the assessment of chronic toxicity and carcinogenic potential. the lowest dose level should produce no evidence of toxicity. in oral studies, the animals are dosed with the copc on a 5-day per week basis for a period of at least 18 months for mice and hamsters and 24 months for rats [ ]. for dermal studies, animals are treated with the copc for at least 6 h per day on a 5-day per week basis for a comparable period, and time must be allowed for the skin to recover before the next dosing. the copc is applied uniformly over a shaved area that is approximately 10% of the total body surface area [ ]. the animals are evaluated for an increase in the number of tumors, the size of tumors, and the number of rare tumors seen and/or expressed. even without overt toxicity, a high dose may trigger events different from those triggered by low-dose exposures. these bioassays can also be evaluated for uncontrolled effects by comparing weight vs time and mortality vs time curves [ ]. a divergence between the control group and the experimental group in the weight vs time curve indicates a disruption of normal homeostasis due to high-level dosing; a divergence in the mortality vs time curves indicates an uncontrolled effect [ ]. the national toxicology program (ntp) criterion for classifying a chemical as a carcinogen is that it must be tumorigenic at at least one site in one sex of f344 rats or b6c3f1 mice.

validation and application of short-term tests (stt) are important in risk assessment because these assays can be designed to provide information about mechanisms of effects. short-term toxicity experiments include in vitro tests and short-term in vivo tests, ranging from bacterial mutation assays to more elaborate in vivo short-term tests such as skin-painting studies in mice and altered rat liver foci assays. these studies determine whether copcs are mutagenic, indicating that they have the potential to be carcinogens as well. in general, stt are fast and inexpensive compared with lifetime rodent cancer bioassays [ ]. positive results of stt have been used to predict potential carcinogenicity. common stt include the following: the ames salmonella/microsome mutagenesis assay (sal); assays for chromosome aberration (abs); sister chromatid exchange induction (sce) in chinese hamster ovary cells; and the mouse lymphoma l5178y cell mutagenesis assay (moly). there are several limitations to stt: stt cannot replace long-term rodent studies for the identification of carcinogens; the available tests do not detect all classes of copcs that are active in the carcinogenic process, such as hormones; and negative results from stt cannot rule out carcinogenicity [ ]. the most convincing evidence for human risk is a well-conducted epidemiological study in which an association between exposure to a copc and a disease has been observed.
these studies compare copc-exposed individuals with non-copc-exposed individuals [ ]. the major types of epidemiology studies are cross-sectional studies, cohort studies, and case-control studies. cross-sectional studies survey groups of humans to identify risk factors and disease; these studies are not very useful for establishing a cause-and-effect relationship. cohort studies evaluate individuals on the basis of their exposure to the copc under investigation, and these individuals are monitored for the development of disease; prospective studies monitor individuals who initially are disease-free to determine if they develop the disease over time. in case-control studies, subjects are selected on the basis of disease status and are matched accordingly, and the exposure histories of the two groups are compared to identify key consistent features; thus, all case-control studies are retrospective studies [ ]. epidemiological findings are evaluated by the strength of association, consistency of observations, specificity, appropriateness of the temporal relationship, dose responsiveness, biological plausibility and coherence, verification, and biological analogy [ ]. a disadvantage of epidemiological studies is that an accurate measure of the concentration or dose that the copc-exposed individuals receive is not available, so estimates must be employed to quantify the relationship between exposure and adverse effects. moreover, the control group is a major determinant of whether a statistically significant adverse effect can be detected. the various types of control groups are: the regional general population; the general population of a state; the local general population; and workers in the same or a similar industry who are exposed to lower or zero levels of the toxicant under study [ ].
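to make the contrast between the study designs concrete, the sketch below (illustrative only, not from the source) computes the association measures conventionally reported for them: the relative risk for a cohort study and the odds ratio for a case-control study, from hypothetical 2x2-table counts.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """risk ratio for a cohort study: incidence in exposed / incidence in unexposed."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

def odds_ratio(case_exposed, case_unexposed, control_exposed, control_unexposed):
    """odds ratio for a case-control study: odds of exposure among cases / odds among controls."""
    return (case_exposed / case_unexposed) / (control_exposed / control_unexposed)

# hypothetical counts, for demonstration only
print(relative_risk(30, 1000, 10, 1000))   # rr = 3.0
print(odds_ratio(30, 70, 10, 90))          # or ~ 3.86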
dose-response assessment is the fundamental basis of the quantitative relationship between exposure to an agent and the incidence of an adverse response. the procedures used to define the dose-response relationship for carcinogens and for noncarcinogens differ. for carcinogens, a non-threshold (zero-threshold) dose-response relationship is used when there are known or assumed risks of an adverse response at any dose above zero; non-threshold toxicants include hereditary disease toxicants, genotoxic carcinogens, and genotoxic developmental toxicants. for noncarcinogens, a threshold (nonzero-threshold) relationship is used to evaluate toxicants that are known or assumed to produce no adverse effects below a certain dose or dose rate; threshold toxicants include nongenotoxic carcinogens, nongenotoxic developmental toxicants, and organ/tissue toxicants [ ]. the two approaches are discussed separately in this section. the toxicity factors used to evaluate oral exposure and inhalation exposure are expressed in different units to account for the unique differences between these two routes of exposure. cancer slope factors (csfs), in units of (mg/kg/day)^-1, and reference doses (rfds), in units of mg/kg/day, are used to quantify the relationship between dose and effect for oral exposure, whereas unit risk factors (urfs), in units of (µg/m³)^-1, and reference concentrations (rfcs), in units of mg/m³, are used to describe the relationship between ambient air concentration and effect for inhalation exposure.

the urf and rfc methodology accounts for the species-specific relationships of exposure concentration to deposited/delivered doses in the respiratory tract by employing animal-to-human dosimetric adjustments that are different from those employed for oral exposure. the interaction with the respiratory tract and the ultimate disposition are considered, as are the physicochemical characteristics of the inhaled agent and whether the exposure is to particles or gases. most important is the type of toxicity observed, since direct effects on the respiratory tract (i.e., portal-of-entry effects) must be considered as opposed to toxicity remote from the portal of entry [ ]. given the differences between oral and inhalation exposure, route-to-route extrapolation of oral toxicity values to inhalation toxicity values may not be appropriate; please refer to appendix b of the soil screening guidance [ ] for a discussion of issues relating to route-to-route extrapolation.

carcinogenic assessment assumes that exposure to any amount of a carcinogenic substance increases carcinogenic risk. thus, zero risk does not exist (a non-threshold response), because there is no carcinogen exposure concentration low enough that it will not increase the risk of cancer. a genotoxic carcinogen alters the information coded in dna; thus, it is reasonable to assume that these agents do not have a threshold, so that a risk of cancer exists no matter how low the dose. there are three stages of genotoxic carcinogenesis: initiation, promotion, and progression. initiation refers to the induction of an irreversible change in dna caused by a mutagen; the initiator may be a direct-activating carcinogen or a carcinogenic metabolite. promotion refers to the possibly reversible replication of initiated cells to form a "benign" lesion. promoters are not genotoxic or carcinogenic, but they enhance the tumorigenic response initiated by a primary or secondary carcinogen when administered at a later time. complete carcinogens have both initiation and promotion properties [ ]. nongenotoxic carcinogenesis does not involve direct interaction of a carcinogen with dna. mechanisms of nongenotoxic carcinogenesis include accelerated replication, which may increase the frequency of spontaneous mutations or increase the susceptibility of dna to damage. cancer may be secondary to organ toxicity and may occur only at high dose rates. moreover, many nongenotoxic cancer mechanisms are species-specific, and results from certain rodent species may not apply to humans [ ].

several approaches and models are used to provide estimates of the upper limit on lifetime cancer risk per unit of dose or per unit of ambient air concentration, i.e., the csf or the urf, respectively. upper-bound excess cancer risk estimates may be calculated using models such as the one-hit, weibull, logit, log-probit, or multistage models [ , ]. the linearized multistage model is considered one of the more conservative models and is typically used because the mechanism of cancer is not well understood and no one model may be more predictive than another [ , ]. because the risk assessor generally needs to extrapolate beyond the region of the dose-response curve for which experimentally observed data are available, models derived from mechanistic assumptions involve the use of a mathematical equation to describe dose-response relationships that are consistent with biological mechanisms of response [ ].
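as a concrete instance of the simplest of the models listed above, the sketch below (illustrative, with a hypothetical potency parameter q) evaluates the one-hit model, p(d) = 1 - exp(-q·d), and shows how it reduces to the linear form p(d) ≈ q·d at low doses -- the behavior that underlies slope-factor-style linear extrapolation.

import math

def one_hit(dose, q):
    """probability of response under the one-hit model."""
    return 1.0 - math.exp(-q * dose)

q = 0.5  # hypothetical potency, (mg/kg/day)^-1
for dose in (1e-4, 1e-2, 1.0):
    exact = one_hit(dose, q)
    linear = q * dose  # low-dose linearization
    print(f"dose={dose:g}: exact={exact:.6g}, linear approx={linear:.6g}")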
"hit models" for cancer modeling assume that i) an infinite number of targets exist, ii) after a minimum of targets have been modified, the host will elicit a toxic response, iii) a critical target is altered if a sufficient number of hits occurs, and iv) the probability of a hit in the lowdose range is proportional to the dose of copc [ ] . the one-hit linear model is the simplest mechanistic model where only one hit or critical cellular interaction is required for cell function to be altered. multi-hit models describe hypothesized single-target multi-hit events as well as multi-target events in carcinogenesis. biologically based dose-response (bbdr) modeling reflects specific biological process [ ] . because a large number of subjects would be required to detect small responses at very low doses, several theoretical mathematical extrapolation models have been proposed for relating dose and response in the subexperimental dose range: tolerance distribution models, mechanistic models, and enhanced models. these mathematical models generally extrapolate low-dose carcinogenic risks to humans based on effects observed at the high doses in experimental animal studies. the linear interpolation model interpolates between the response observed at the lowest experimental dose and the origin. linear interpolation is recommended due to its conservatism, simplicity, and reliance because it is unlikely to underestimate the true-low dose risk [ ] . there is no universally agreed upon method for estimating an equivalent human dose from an animal study. however, several methods are currently being used to obtain an estimate of the equivalent human dose. the first method calculates an equivalent human dose from an animal study by scaling the animal dose rate for animal body weight. to derive an equivalent human dose from animal data, the draft cancer guidelines recommend adjusting the daily applied oral doses experienced over a lifetime in proportion to bw [ ] . for noncarcinogens, an uncertainty factor is employed to estimate the equivalent human dose from an animal study if pharmacokinetic data is not available. noncarcinogenic dose-response assessment utilizes a point of effects method which selects the highest dosage level tested in humans or animals at which no adverse effects were demonstrated and applies uncertainty factors or margins of safety to this dosage level to determine the level of exposure where no health effects will be observed, even for sensitive members of the population. also, benchmark dose modeling may be conducted if the experimental data are adequate. animal bioassay data are generally used for dose-response assessment; however, the risk assessor is normally interested in low environmental exposures of humans, which are generally below the experimentally observable range of responses seen in the animal assays. thus, low-dose extrapolation and animal-to-human risk extrapolation methods are required and constitute major aspects of dose-response assessment. 
human and animal dose rates are frequently reported in terms of the following abbreviations:

- loel: lowest observed effect level (mg/kg·day), the lowest dose that produces a statistically or biologically significant effect
- loael: lowest observed adverse effect level (mg/kg·day), the lowest dose that produces a statistically or biologically significant adverse effect
- noel: no observed effect level (mg/kg·day), a dose that does not produce a statistically or biologically significant effect
- noael: no observed adverse effect level (mg/kg·day), a dose that does not produce a statistically or biologically significant adverse effect

a key factor in determining which noael or loael to use in calculating a reference dose (rfd) is exposure duration. as mentioned previously, acute animal studies are typically conducted for up to 14 days, subacute studies for 14 to 28 days, and subchronic studies for 90 days to 1 year; chronic studies are conducted for a significant portion of the lifetime of the animal. animals may experience health effects during short-term exposure that differ from the effects observed after long-term exposure, so short-term animal studies of less than 90 days should not be used to develop chronic rfds, except for the development of interim rfds or developmental rfds. exceptionally high-quality oral exposure studies of more than 90 days may be used as a basis for developing an rfd, whereas the inhalation route is preferred for deriving an rfc [ ]. please note that the same approaches used to develop the rfd are used to develop the rfc, the only differences being the route of exposure, the animal-to-human dosimetric adjustments, and the units (i.e., mg/m³ for the rfc vs mg/kg/day for the rfd).

the highest dose level that does not produce a significantly elevated increase in an adverse response is the noael. the noael from the critical study, i.e., the study showing the health effect that occurs at the lowest dose, should be used for criteria development. however, if a noael is not available, then the loael can be used, provided a loael-to-noael uncertainty factor (uf) is applied. significance generally refers to both biological and statistical criteria and is dependent on the number of dose levels tested, the number of animals tested at each dose, and the background incidence of the adverse response in the control groups [ ]. noaels can be used as a basis for risk assessment calculations such as rfds and acceptable daily intake (adi) values. adi and rfd values should be viewed as conservative estimates of levels below which adverse effects would not be expected; exposures at doses greater than the adi or rfd are associated with an increased probability (but not a certainty) of adverse effects [ ]. who uses adi values for pesticides and food additives to define "the daily intake of a chemical which, during an entire lifetime, appears to be without appreciable risk on the basis of all known facts at that time" [ ]. in order to remove the value judgments implied by the words "acceptable" and "safety", the adi and safety factor (sf) terms have been replaced with the terms rfd and uf/modifying factor (mf), respectively. usepa publishes rfds and rfcs in either iris or usepa's health effects assessment summary tables (heast). rfd and adi values are typically calculated by dividing the noael by the uf and/or mf:

rfd = noael / (uf × mf)
adi = noael / sf

the uncertainty factor (uf) may range from 1 to 10,000, depending on the nature and quality of the data, and is determined by multiplying the different individual ufs together to account for five areas of scientific uncertainty [ ].
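a minimal sketch of the rfd/adi arithmetic above; the noael and the individual uncertainty factors (which the next paragraph describes) are hypothetical values chosen only to make the calculation concrete.

def rfd_from_noael(noael_mg_per_kg_day, ufs, mf=1.0):
    """rfd = noael / (composite uf * mf), with the composite uf the product of individual factors."""
    composite_uf = 1.0
    for uf in ufs:
        composite_uf *= uf
    return noael_mg_per_kg_day / (composite_uf * mf)

# hypothetical: noael = 50 mg/kg/day; uf_a = 10 (animal-to-human),
# uf_h = 10 (human variability), uf_s = 10 (subchronic-to-chronic)
print(rfd_from_noael(50.0, ufs=[10, 10, 10]))  # 0.05 mg/kg/day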
the uf is primarily used to account for potential differences between animal and human sensitivity to a particular compound. the uf_h and uf_a account for possible intra- and interspecies differences, respectively. as mentioned previously, a uf_s is used to extrapolate from a subchronic-duration study to a situation more relevant to a chronic study, and a uf_l is used to extrapolate from a loael to a noael. an additional uf is used to account for inadequate numbers of animals, incomplete databases, or other experimental limitations, and a modifying factor (mf) can be used to account for further scientific uncertainties. in general, the magnitude of each individual uf is assigned a value of one, three, or ten, depending on the quality of the studies used in developing the rfd or rfc. a uf is reduced whenever there is experimental evidence of concordance between animal and human pharmacokinetics and when the mechanism of toxicity has been established.

recently, benchmark dose modeling has been recommended by usepa instead of the noael approach. criticism of the noael approach exists because of its limitations, which include the following: i) the noael must be one of the experimental doses tested; ii) once that dose is identified, the remaining doses are irrelevant; iii) larger noaels may occur in experiments with few animals, thereby resulting in larger rfds; iv) the noael approach does not identify the actual response at the noael, which will vary based on experimental design. these limitations of the noael approach led to the benchmark dose (bmd) method [ ]. the dose-response is modeled, and the lower confidence bound on the dose (bmdl) at a specified response level, the benchmark response (bmr), is calculated [ ]. the bmdlx (with x representing the x-percent bmr) is used as an alternative to the noael value in the rfd calculation. thus, the calculation of the rfd becomes:

rfd = bmdl_x / (uf × mf)

advantages of the bmd approach include: i) the ability to account for the full dose-response curve; ii) the inclusion of a measure of variability; iii) the use of responses within the experimental range; and iv) the use of a consistent benchmark response level for rfd calculations across studies [ ].
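a sketch of the bmd-based rfd calculation above, rfd = bmdl_x / (uf × mf); the bmdl and uf values are hypothetical, and in practice the bmdl comes from fitting a dose-response model to the experimental data and taking the lower confidence bound on the dose at the chosen benchmark response.

def rfd_from_bmdl(bmdl_mg_per_kg_day, uf, mf=1.0):
    """rfd derived from a benchmark dose lower confidence limit (bmdl)."""
    return bmdl_mg_per_kg_day / (uf * mf)

# hypothetical bmdl10 (10% benchmark response) of 12 mg/kg/day, composite uf of 100
print(rfd_from_bmdl(12.0, uf=100))  # 0.12 mg/kg/day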
there are numerous informational databases and resources that provide risk assessors with essential information. usepa publishes rfds, rfcs, csfs, and urfs in the integrated risk information system (iris) and in the health effects assessment summary tables (heast); the information in iris, followed by heast, should be used preferentially before all other sources. a recent review of other available resources was published in a special volume of toxicology. articles by poore et al. [ ] and brinkhuis [ ] provide a thorough review of u.s. government databases, such as usepa's iris at http://www.epa.gov/iriswebp/iris/, the national center for environmental assessment (ncea), atsdr's chemical-specific toxicology profiles and acute, subchronic, and chronic minimal risk levels (mrls), and hazdat at http://www.atsdr.cdc.gov/hazdat.html, among many others. the reviewers provide advice on effective search strategies as well as strategies for finding the appropriate toxicology information resources.

exposure occurs when a human contacts a chemical or physical agent. exposure assessment examines a wide range of exposure parameters pertaining to the environmental scenarios of people who may be exposed to the agent under study. the information considered in the exposure assessment includes monitoring studies of chemical concentrations in environmental media and/or food; modeling of the environmental fate and transport of contaminants; and information on the activity patterns of different population subgroups. the principal pathways by which exposure occurs, the pattern of exposure, the determination of copc intake by each pathway, and the number of persons exposed, including whether there are sensitive subpopulations that need to be evaluated, are also included in the evaluation. in this step, the assessor characterizes the exposure setting with respect to the general physical characteristics of the site, the site copcs, and the characteristics of the populations on or near the site.

hazard identification/evaluation consists of sampling and analysis of soil, ground water, surface water, air, and other environmental media at contaminated sites. a common method used in screening substances at a site is comparison with background levels in soil or ground/surface water [ ], determining whether a chemical is detected, whether the detection limit for that chemical is below reference concentrations, and the frequency of detection [ ]. once a list of copcs has been identified at the site, available data on chemical characteristics such as structure, solubility, stability, ph sensitivity, electrophilicity, and chemical reactivity, together with toxicity data, are collected and evaluated to ascertain the nature of the health effects associated with exposure to these chemicals. in many cases, toxicity information on chemicals is limited, and knowing the copc's characteristics can provide important information for hazard identification [ ]. also, sars are useful in assessing the relative toxicity of chemically related compounds.

during this phase of the exposure assessment, the major pathways by which the previously identified populations may be exposed are identified. the locations of contaminated media, sources of release, fate and transport of copcs, pathways and exposure points, routes of exposure (i.e., ingestion of drinking water, dermal contact when showering), and the location and activities of the potentially exposed population are explored. for example, the common on-site pathways evaluated when conducting an rcra remediation baseline risk assessment, where unauthorized chemical releases have occurred, include direct contact with soil, either by ingestion of soil and/or by inhalation of volatile chemicals or contaminated dust [ ]. the migration of chemicals off-site can occur via wind-blown dust and vapor emissions from soil, leaching of chemicals to ground water with subsequent movement off-site, and surface water run-off. these off-site chemicals can eventually accumulate in other transport media, such that the copc ends up in vegetation, crops, meat, milk, and fish that will eventually be consumed by humans. exposure points and routes of exposure (ingestion, inhalation) are identified for each exposure pathway. it is necessary to identify populations likely to receive especially high exposures and populations likely to be unusually sensitive to the chemical's effects. examples of possible points of exposure and exposure routes arising from ground water or surface water (i.e., the source medium) used for drinking water are shown in table .
please note that not all of these exposure pathways (e.g., volatilization from water with inhalation in an enclosed space) are typically evaluated when performing a risk assessment on contaminated drinking water, since the techniques and exposure parameters for evaluating some of these routes of exposure are not well developed. additional pathways to consider for surface water may include recreational exposures (i.e., swimming, boating), ingestion of contaminated fish, shellfish, etc., and dermal exposure to contaminated sediment. finally, an attempt should be made to develop a number of exposure scenarios. exposure scenarios are combinations of "exposure pathways" to which a single "receptor" may be subjected [ ]. for example, a residential adult or child receptor may be exposed through all of the exposure routes in table (i.e., drinking water, showering/bathing, washing/cooking food, and volatilization from ground water or drinking water into an enclosed space). an industrial receptor may only be exposed through the drinking water pathway and volatilization from ground water into an enclosed space, and not through showering/bathing or washing/cooking, because these activities are not allowed at an industrial site. exposure scenarios are generally conservative and are not intended to be entirely representative of actual scenarios at all sites. the scenarios allow for a standardized and reproducible evaluation of risks across most sites and land-use areas [ ]. this conservatism allows for protection of potential receptors not directly evaluated, such as special subpopulations and regionally specific land uses.

the magnitude, frequency, and duration of exposure for each pathway are evaluated next. for each potential exposure pathway, the chemical doses received by each exposure route need to be calculated. because chemical concentrations can vary, many different studies might be required to get a complete picture of the chemical's distribution patterns within the environment. off-site sampling and analysis are the preferred methods to determine exposure concentrations in the environmental media at the point of exposure. because sampling data form the foundation of a risk assessment, it is important that site investigation activities be designed and implemented with the overall goals of the risk assessment in mind [ ]. for example, it is essential that appropriate analytical methods with proper quality assurance/quality control documentation be employed and that the analytical methods be sensitive enough to detect the copcs at concentrations below health-protective reference concentrations. after the sampling data are collected and evaluated, statistical techniques may be used to calculate the representative concentration of copcs that will be contacted over the exposure area (a minimal example follows this paragraph). different statistical techniques may be required for the determination of representative concentrations in ground water vs surface water [ ]. fate and transport models can be used to estimate current concentrations in media and/or at locations for which sampling was not conducted. in addition, increases in future chemical concentrations, in media that are currently contaminated or that may become contaminated, can be predicted by fate and transport modeling; detailed discussions of these models are contained elsewhere in this book. each scenario described in the exposure assessment should be accompanied by an estimated exposure dose for each pathway.
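the source does not name a specific statistical technique; one common choice for the representative concentration mentioned above is the 95% upper confidence limit (ucl) on the arithmetic mean. the sketch below computes a student's-t ucl from hypothetical ground water samples; usepa guidance offers other estimators for skewed data sets.

import statistics
from scipy import stats

def ucl95_t(samples):
    """one-sided 95% upper confidence limit on the mean (student's-t)."""
    n = len(samples)
    mean = statistics.fmean(samples)
    se = statistics.stdev(samples) / n ** 0.5
    t_crit = stats.t.ppf(0.95, df=n - 1)  # one-sided 95% t quantile
    return mean + t_crit * se

groundwater_ug_per_l = [2.1, 3.4, 1.8, 5.0, 2.7, 4.2, 3.1, 2.9]  # hypothetical data
print(f"95% ucl = {ucl95_t(groundwater_ug_per_l):.2f} ug/l")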
once the exposure pathways are determined, the estimated risks and hazards from each exposure pathway can be characterized. exposure estimates for the oral pathway are expressed in terms of the mass of substance in contact with the body per unit body weight per unit time (i.e., intakes), whereas exposure estimates for inhalation pathways are expressed as the mass of substance per unit volume of air (i.e., inhalation concentrations). the general equation for calculating intakes (mg/kg/day) is as follows [ ]:

intake = (c × cr × ef × ed) / (bw × at)

where
- intake: the amount of chemical at the exchange boundary (mg/kg body weight·day)
- c: copc concentration, the average concentration contacted over the exposure period
- cr: contact rate, the amount of contaminated medium contacted per unit time or event
- ef: exposure frequency (days/year)
- ed: exposure duration (years)
- bw: body weight, the average body weight over the exposure period (kg)
- at: averaging time, the period over which exposure is averaged (days)

each exposure pathway has a slightly different variation of this basic equation. please refer to appendix a for examples of the equations used to calculate intakes for the major exposure pathways from ground and surface waters, as well as examples of the exposure parameters employed to calculate intakes: appendices a- and a- , ingestion of drinking water; appendices a- and a- , ingestion of contaminated fish tissue; appendices a- and a- , dermal contact with contaminated water; and appendix a- , inhalation of volatiles from contaminated ground water or surface water. please refer to kasting and robinson [ ] and exposure to contaminants in drinking water [ ] for additional information on the various issues involved in the assessment of dermal exposure to water.
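a minimal sketch of the general intake equation above, specialized to ingestion of drinking water; the parameter values are hypothetical residential-style defaults used only to make the arithmetic concrete.

def intake_mg_per_kg_day(c_mg_per_l, cr_l_per_day, ef_days_per_yr,
                         ed_years, bw_kg, at_days):
    """general intake equation: (c * cr * ef * ed) / (bw * at)."""
    return (c_mg_per_l * cr_l_per_day * ef_days_per_yr * ed_years) / (bw_kg * at_days)

# hypothetical residential adult; noncarcinogenic averaging time (at = ed * 365)
chronic_intake = intake_mg_per_kg_day(
    c_mg_per_l=0.005, cr_l_per_day=2.0, ef_days_per_yr=350,
    ed_years=30, bw_kg=70.0, at_days=30 * 365)
print(f"{chronic_intake:.2e} mg/kg/day")  # ~1.4e-04 mg/kg/day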
the exposure parameters (e.g., cr, ef, ed, bw, and at) for each pathway are derived after an extensive literature review and statistical analysis [ ]. for example, the information on water ingestion rates, body weights, and fish ingestion rates for adults, children, and pregnant women used to develop the national ambient water quality criteria was obtained from the following documents: the exposure factors handbook [ ]; the national health and nutrition examination survey (nhanes iii) [ ]; and the united states department of agriculture (usda) continuing survey of food intakes [ ]. exposure parameters may represent central tendency or average values, or maximum or near-maximum values [ ]. science policy decisions that consider the best available data, and risk management judgments regarding the population to be protected, are both used to choose appropriate exposure parameters. usepa emphasizes that exposure assessments should strive to achieve an overall dose estimate that represents a "reasonable maximum exposure" (rme). the intent of the rme is to estimate a conservative exposure scenario that is within the range of possible exposures yet well above the average case (above the 90th percentile of the actual distribution); however, estimates that are beyond the true distribution should be avoided. if near-maximum or maximum values were chosen for every exposure parameter, the combination of all those maximum values would result in an unrealistically high assessment of exposure. using probabilistic risk assessment, cullen demonstrated that if only two exposure parameters are chosen at maximum or near-maximum values, and the other parameters are chosen at median values, the resulting risk and hazard estimates represent an rme (above the 90th percentile level) [ ]. risk assessors should therefore identify the most sensitive parameters and use maximum or near-maximum values for only one or a few of those variables; central tendency or average values should be used for all other parameters [ ].

when central tendency and/or maximum values are chosen for the exposure parameters used to calculate intake for an exposure pathway, single point estimates of risk and hazard are calculated (i.e., a deterministic technique). however, probabilistic techniques like monte carlo analysis can be employed to provide different percentile estimates of risk and hazard (e.g., 50th percentile or 95th percentile estimates) as well as to characterize variability and uncertainty in the risk assessment. monte carlo simulation is a statistical technique by which a quantity is calculated repeatedly, using values randomly selected from the entire frequency distribution of one or more exposure parameters for each calculation. usepa recommends using computerized monte carlo simulations to provide probability distributions for dose and risk estimates, by incorporating ranges for individual assumptions rather than a single dose or risk estimate [ ]. obtaining better estimates of the distribution of contaminant levels is a major focus of recent risk assessment research; to obtain such estimates, several techniques, such as generating subjective uncertainty distributions and monte carlo composite analyses of parameter uncertainty, have been applied [ ]. these approaches can provide a reality check that is useful in generating more realistic exposure estimates [ ]. also, high-end exposure estimates (heees) and theoretical upper-bound estimates (tubes) are now recommended for specified populations, as is the calculation of exposure for highly exposed individuals [ ]. an heee represents an estimate of exposure above the ninetieth percentile, while tubes represent exposure levels that exceed the exposures experienced by all individuals in the exposure distribution and assume limits for all exposure variables [ ]. please refer to the policy for use of probabilistic analysis in risk assessment at the usepa and the guiding principles for monte carlo analysis at http://www.epa.gov/ncea/mcpolicy.htm [ ].
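an illustrative monte carlo sketch of the technique just described: exposure parameters are drawn repeatedly from assumed distributions (every distribution and parameter below is hypothetical) and the resulting intake distribution is summarized at the 50th and 95th percentiles.

import random
import statistics

def simulate_intakes(n=10_000, seed=1):
    """repeatedly sample exposure parameters and compute drinking-water intakes."""
    random.seed(seed)
    intakes = []
    for _ in range(n):
        c = random.lognormvariate(mu=-5.3, sigma=0.6)   # concentration, mg/l (hypothetical)
        cr = random.triangular(1.0, 3.0, 2.0)           # ingestion rate, l/day
        ef = random.triangular(200, 365, 350)           # exposure frequency, days/yr
        bw = random.normalvariate(70, 12)               # body weight, kg
        ed, at = 30, 30 * 365                           # duration (yr) and averaging time (days)
        intakes.append((c * cr * ef * ed) / (bw * at))
    return intakes

intakes = simulate_intakes()
q = statistics.quantiles(intakes, n=100)
print(f"50th percentile: {q[49]:.2e} mg/kg/day")
print(f"95th percentile: {q[94]:.2e} mg/kg/day")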
risk characterization, the last step in the risk assessment process, links the toxicity evaluation (hazard identification and dose-response assessment) to the exposure assessment. estimates of the upper-bound excess lifetime cancer risk and of the noncarcinogenic hazard for each pathway, each copc, and each receptor identified during the exposure assessment are calculated. another important component of risk characterization is the clear, transparent communication of the risk and hazard estimates, along with an uncertainty analysis of those estimates, to the risk manager. cancer risk is usually expressed as an estimated rate of excess cancers in a population exposed to a copc for a lifetime or a portion of a lifetime [ ]. oral intakes are multiplied by the csf, dermal intakes are multiplied by the csf adjusted for gastrointestinal (gi) absorption, and lifetime average inhalation concentrations are multiplied by the urf to obtain risk estimates, as shown below.

for evaluating the risk from oral exposure, the intakes from all ingestion pathways are summed (i.e., ingestion of drinking water, ingestion of fish, etc.), and the total intake is then multiplied by the csf:

risk_oral = intake_oral × csf

where
- intake_oral: the combined amount of copc from all oral pathways at the exchange boundary (mg/kg/day) (appendices a- to a- )
- csf: cancer slope factor (mg/kg/day)^-1

for evaluating dermal exposure, the dermally absorbed dose (dad) is calculated (appendices a- and a- ) and multiplied by an adjusted slope factor, csf_dermal. the csf is typically derived from oral dose-response relationships based on administered dose, whereas the dermal intake estimates are based on absorbed dose. therefore, if the csf is based on an administered dose, it should be adjusted for gastrointestinal absorption when gastrointestinal absorption is significantly less than 100% (e.g., <50%) [ ]. if an estimate of the gastrointestinal absorption fraction (abs_gi) is available for the compound and abs_gi is significantly less than 100% [ , ], then the oral dose-response factor, based on an administered dose, can be converted to an absorbed-dose basis by dividing the csf by abs_gi to form csf_dermal:

risk_dermal = dad × csf_dermal

where
- dad: dermally absorbed dose (mg/kg/day) (appendices a- and a- )
- csf_dermal: dermal cancer slope factor (mg/kg/day)^-1; csf_dermal = csf / abs_gi

when abs_gi values are not available from bast and borges [ ] for a compound, usepa regional guidance [ ] recommends default abs_gi values for volatile organics, for semi-volatile and nonvolatile organics, and for inorganics.

for the evaluation of inhalation exposure, the lifetime average inhalation concentration is multiplied by the urf:

risk_inhalation = c_inh × urf

where
- c_inh: concentration of copc at the exchange boundary (µg/m³) (appendix a- )
- urf: unit risk factor (µg/m³)^-1

to obtain a conservative total risk estimate, the risks for an individual copc from each pathway are summed, and then the risks from all copcs are summed:

risk_total = Σ_i risk_i

where risk_total is the sum of the risk estimates from all pathways for all i copcs. however, usepa is still developing approaches to deal with the uncertainties associated with combining risk estimates for chemical mixtures across different routes of exposure (i.e., inhalation, oral, and dermal), since differences in the properties of the cells that line the surfaces of the air pathways and the lungs, the gastrointestinal tract, and the skin may result in different intake patterns of chemical mixture components depending on the route of exposure. another consideration in dealing with chemical mixtures is that the chemicals in a mixture may partition to the contact media differently [ ]. a risk estimate of 1 × 10^-6, 1 × 10^-5, or 1 × 10^-4 is interpreted to mean that an individual has not more than, and likely less than, a 1 in 1,000,000, a 1 in 100,000, or a 1 in 10,000 chance, respectively, of developing cancer from the exposure being evaluated. the range of carcinogenic risks considered acceptable by the usepa is 10^-4 to 10^-6.
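a minimal sketch of the route-specific cancer risk equations above, summed across pathways and copcs; all intakes and toxicity values below are hypothetical placeholders.

def cancer_risk(oral_intake, csf, dad=0.0, abs_gi=1.0, c_inh=0.0, urf=0.0):
    """risk = intake_oral*csf + dad*(csf/abs_gi) + c_inh*urf, for a single copc."""
    csf_dermal = csf / abs_gi if abs_gi < 1.0 else csf  # adjust only if gi absorption < 100%
    risk_oral = oral_intake * csf
    risk_dermal = dad * csf_dermal
    risk_inh = c_inh * urf          # c_inh in ug/m3, urf in (ug/m3)^-1
    return risk_oral + risk_dermal + risk_inh

# hypothetical two-copc example; per-copc risks are summed for the total
total_risk = sum([
    cancer_risk(oral_intake=1.4e-4, csf=0.02, dad=2.0e-6, abs_gi=0.5),
    cancer_risk(oral_intake=3.0e-5, csf=0.5),
])
print(f"total risk = {total_risk:.1e}")  # ~1.8e-05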
for chronic exposures to noncarcinogens, the intake of a copc is compared to the appropriate rfd (i.e., the oral rfd or the rfd_dermal) or to the rfc to form the hazard quotient (hq) [ ]. oral intakes are compared to the rfd, dermally absorbed doses (dads) are compared to the rfd_dermal (i.e., the rfd adjusted for gi absorption; refer to the previous section for the procedures for adjusting toxicity factors for gi absorption), and inhalation concentrations are compared to the rfc, to obtain hazard quotients for each route of exposure:

hq_oral = intake_oral / rfd

where
- intake_oral: the combined amount of copc from all oral pathways at the exchange boundary (mg/kg/day) (appendices a- to a- )
- rfd: oral reference dose (mg/kg/day)

with hq_dermal = dad / rfd_dermal and hq_inhalation = c_inh / rfc defined analogously. the total hazard index (hi) for an individual copc from all routes of exposure is the sum of the hqs from all applicable pathways (oral, dermal, or inhalation):

hi_i = Σ hq_i

where hi_i is the sum of the hazard quotients from all relevant pathways for the i-th copc. to be conservative, a total hi can be calculated by summing the his from each individual copc. "if the overall hi value is less than one, public health risk is considered to be very low"; however, "if the hi value is equal to or greater than one, then the exposure assessment and hazard characterization should be investigated more thoroughly." if the hi exceeds one, the hazard estimates may be refined by grouping the copcs that affect the same target organ or have the same mechanism of action and summing only the hqs from similar-acting copcs [ , ]. ideally, chemicals would be grouped according to effect-specific toxicity criteria, information on chemicals exhibiting multiple effects would be available, and their exact mechanisms of action would be known. in practice, rfds and rfcs are available for just one of the several possible endpoints of toxicity for a chemical, and the data are often limited to gross toxicological effects in an organ or an entire organ system. because the list of these specific endpoints of toxicity is limited, it is best to consult a toxicologist during this step of the hazard evaluation [ ]. the hq and hi must be calculated for each copc and exposure pathway to characterize the hazard. the hi provides a rough estimate of possible toxicity and requires careful interpretation [ ]; it does not account for the number of individuals who might be affected by exposure or for the severity of the effects. usepa recommends that an hi of 1.0 be used as the target level for noncancer health effects.

in the "real world", exposures generally involve complex mixtures of copcs. there are three basic types of joint action for mixtures: i) independent joint action, which describes copcs that act independently, have different modes of action, and are not expected to affect one another's toxicity; ii) similar joint action, or "dose addition", which describes a mixture in which the copcs produce similar but independent effects; and iii) synergistic action, in which the toxicity of the mixture cannot be assessed from its individual ingredients but depends on knowledge of their combined toxicity [ ] ( table and fig. ). the total hi can exceed the target hazard level as a result of either one or more copcs with an hq exceeding the target hazard level, or the summation of several copc-specific hqs that are each individually less than the target hazard level.
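an illustrative sketch of the hq/hi logic above, including the target-organ grouping refinement; the copc names, intakes, and rfds are hypothetical.

copcs = [
    # (name, intake mg/kg/day, rfd mg/kg/day, target organ) -- hypothetical values
    ("chem_a", 2.0e-3, 5.0e-3, "liver"),
    ("chem_b", 1.0e-4, 3.0e-4, "liver"),
    ("chem_c", 4.0e-3, 2.0e-2, "kidney"),
]

def hazard_indices(copcs):
    """sum hazard quotients (intake/rfd) within groups sharing a target organ."""
    by_organ = {}
    for name, intake, rfd, organ in copcs:
        hq = intake / rfd
        by_organ[organ] = by_organ.get(organ, 0.0) + hq
    return by_organ

for organ, hi in hazard_indices(copcs).items():
    flag = "investigate further" if hi >= 1.0 else "low concern"
    print(f"{organ}: hi = {hi:.2f} ({flag})")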
it is important to note that the numbers generated by risk assessors should not be viewed as accurate measures, or even as predictors, of the rates of adverse health effects in human populations [ ]. the calculated estimates are routinely based on assumptions recognized as being conservative; thus, these numbers should be used as tools open to interpretation on a site-by-site basis, and it is important for the risk manager to be informed of the uncertainties in the risk assessment process. significant limitations and uncertainties can exist throughout the entire risk assessment process, so a discussion of uncertainty should accompany the risk assessment analysis so that the limitations of the quantitative results are taken into consideration. both qualitative and quantitative methods have been developed to analyze the uncertainty associated with risk assessment; a quantitative analysis may be conducted using either a sensitivity analysis or a probability analysis. listed below are various reasons why uncertainty exists in a risk assessment analysis [ , ]:

- deficient control groups
- differences in smoking habits between an epidemiology study group and a risk group
- differences in, or lack of consideration of, pharmacokinetics and/or mechanisms of toxicity between species
- failure to diagnose, or misdiagnosis of, the cause of mortality
- inappropriate experimental study design
- lack of knowledge regarding the combined biological effects of exposure to multiple toxicants
- limitations in data regarding the nature and magnitude of levels in the environment
- low-dose extrapolation from high-dose experimental conditions
- reliance on mathematical models
- toxicant interaction with another agent
- use of animal studies in the determination of risk for humans

it is important to make clear the distinction between risk assessment and risk management. risk assessors generate risk estimates, whereas a risk manager considers these risk estimates together with other scientific information and integrates them into societal decisions [ ]. for example, risk managers consider data analysis, technical concerns, economic concerns, and social/political concerns, in addition to comparing the risk estimates to an acceptable level set by federal or state health agencies [ ]. generally, trade-offs or compromises are made between the lowest possible risk and society's demand for jobs and economic growth. examples of questions that may be asked by risk managers are: "is a particularly deadly type of cancer in a narrow population worse or better than widespread effects of a non-lethal nature? can this decision be successfully defended in court?" in general, risk management decisions may be based more on political and economic factors than on risk factors [ ].

risk assessment and risk management are an integral part of the contemporary regulatory scene. risk management refers to the selection and implementation of the most appropriate regulatory action based on: i) goals; ii) social and political factors; iii) available control technology; iv) costs and benefits; v) the results of the risk assessment; vi) acceptable risk; and vii) an acceptable number of cases [ ]. another aspect to consider, in either the risk assessment or the risk management phase, is cumulative risk. cumulative risk evaluation considers all "involuntary" risks to which a receptor may be exposed from a variety of environmental sources, such as: i) automobile exhaust emissions; ii) leaking underground storage tanks; iii) untreated sewage; iv) agricultural land runoff; v) industrial process air emissions; and vi) conventional combustion-related air emissions [ ]. however, at the present time, definitive guidance from usepa regarding the evaluation of cumulative risk is not available.

a refined site-specific risk assessment takes into account the specific characteristics of the site, all relevant pathways to which a receptor is exposed, and other site-specific information. this represents a "forward" calculation method, in which risk and hazard estimates are calculated.
however, each state or usepa regional office may utilize slightly different exposure factors, exposure scenarios, target risk and hazard levels, or different procedures to account for childhood exposure, cumulative risk, etc. a refined site-specific risk assessment is a very time- and resource-intensive process that involves numerous science policy decisions; however, its risk and hazard estimates are typically more realistic than those of a generic screening-level risk assessment. in contrast, media-specific comparison values can be calculated by a "backward" calculation method based on standardized equations, usepa toxicity values, standard exposure pathways or scenarios, default exposure factors, and conservative target risk and hazard levels.

the usepa office of water has derived drinking water standards and health advisories to evaluate the levels of contaminants in public drinking water supplies. to evaluate the levels of contaminants in surface water, usepa publishes guidance documents [ ] as well as national recommended water quality criteria. state and tribal agencies then develop water quality standards for each water body in the state, based on usepa guidance and the use designation of the individual water body. usepa must review proposed state water quality standards before they become legally enforceable. if drinking water standards and/or state and tribal water quality standards are available for the copcs present at a site, these standards generally must be used to evaluate human health impacts to ground water and/or surface water, respectively. water quality standards apply to surface waters of the united states, including rivers, streams, lakes, oceans, estuaries, and wetlands. water quality standards consist, at a minimum, of three elements: 1) the "designated beneficial use" or "uses" of a water body or segment of a water body; 2) the water quality "criteria" necessary to protect the uses of that particular water body; and 3) an antidegradation policy. typical designated beneficial uses of water bodies include public water supply, propagation of fish and wildlife, recreation, agricultural water use, industrial water use, and navigation.

if information concerning the copcs is not present in the drinking water and/or state and tribal water quality standards databases, or if additional exposure pathways need to be included during the site assessment, then media-specific comparison values are available from the soil screening guidance [ ], from several usepa regional offices, and from individual state governments ( table ). these benchmark values may be used as a tool to perform initial site screenings or, if applicable, as initial cleanup goals. the different media-specific comparison values are generic, but they can be recalculated using more site-specific information and the guidance provided at the applicable web addresses ( table ); however, they usually do not consider all potential human health exposure pathways, nor do they consider ecological concerns. many of the databases listed in table also provide copc concentrations in soil, calculated with fate and transport models, that are protective of ground water and surface water. if information concerning the copcs is not present in these databases, or if additional exposure pathways need to be included during the site assessment, then a detailed toxicity evaluation and risk assessment may need to be conducted based on state or other regulatory agency guidelines.

usepa was granted the authority to set drinking water standards by the safe drinking water act (sdwa) of 1974.
the sdwa has since been amended, notably in 1986 and 1996. the responsibility for implementing drinking water standards is delegated to states and tribes. usepa is responsible for identifying contaminants to regulate, establishing priorities among the contaminants of greatest concern, and then deriving national primary drinking water regulations. the sdwa applies to public water systems that provide water for human consumption through at least 15 service connections or that regularly serve at least 25 individuals. the standards apply to the water delivered to any user of a public water system. the standards are not applicable to private wells, although state and local governments do set rules to protect the users of these wells. well owners are urged to test their wells annually for nitrate and coliform bacteria, to test for other compounds if a problem is suspected, and to take precautions to ensure the protection and maintenance of their drinking water supplies. even though these drinking water standards do not apply to private wells, many states adopt them as ground water standards or use them to evaluate whether the concentrations of contaminants in ground water are above a level of concern.

the office of water establishes national primary drinking water standards and secondary drinking water regulations, as well as health advisories; the derivation of these standards, regulations, and health advisories is discussed below. the drinking water standards and health advisories tables may be reached from the office of science and technology (ost) home page at http://www.epa.gov/ost, under the ost programs heading.

national primary drinking water standards are regulations that the usepa sets to control the levels of contaminants in the nation's drinking water. maximum contaminant level goals (mclgs) are the maximum levels of a contaminant in drinking water at which no known or anticipated chronic adverse effect on the health of persons would occur, allowing an adequate margin of safety; mclgs are non-enforceable public health goals. maximum contaminant levels (mcls) are enforceable standards that are set as close to the mclgs as possible but take into consideration the availability of treatment technologies and techniques, as well as whether reliable analytical methods capable of detecting low concentrations of contaminants are available. the derivations of mclgs and mcls are discussed in the following sections.

for noncarcinogens (not including microbial contaminants), the mclg is based on the rfd; the definition and derivation of the rfd have been discussed previously. the rfd is first adjusted for an adult, with a body weight assumed to be 70 kg and a consumption of 2 l of water per day, to produce the drinking water equivalent level (dwel):

dwel (mg/l) = (rfd (mg/kg/day) × 70 kg) / (2 l/day)

the dwel represents the concentration of a substance in drinking water that is not expected to cause any adverse noncarcinogenic health effects in humans over a lifetime of exposure, assuming that the only exposure to the chemical comes from drinking water. however, exposure to the chemical can also occur through other pathways and routes of exposure; therefore, the mclg is calculated by reducing the dwel in proportion to the amount of exposure contributed by drinking water relative to other sources (e.g., food, air). in the absence of actual exposure data, this relative source contribution (rsc) is generally assumed to be 20%. the final value is in mg/l and is generally rounded to one significant figure:

mclg (mg/l) = dwel × rsc
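a minimal sketch of the dwel and mclg arithmetic above for a noncarcinogen, using the standard default assumptions noted in the text (70-kg adult, 2 l/day, rsc = 20%); the rfd is hypothetical.

def dwel_mg_per_l(rfd_mg_per_kg_day, bw_kg=70.0, di_l_per_day=2.0):
    """drinking water equivalent level: dwel = rfd * bw / di."""
    return rfd_mg_per_kg_day * bw_kg / di_l_per_day

def mclg_mg_per_l(rfd_mg_per_kg_day, rsc=0.20):
    """mclg = dwel * rsc (relative source contribution)."""
    return dwel_mg_per_l(rfd_mg_per_kg_day) * rsc

print(f"dwel = {dwel_mg_per_l(0.005):.3f} mg/l")   # 0.175 mg/l for a hypothetical rfd
print(f"mclg = {mclg_mg_per_l(0.005):.4f} mg/l")   # 0.035 mg/l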
the final value is in mg/l and is generally rounded to one significant figure:

mclg (mg/l) = dwel · rsc

if the chemical is considered to be a class a or b carcinogen, it is assumed that there is no dose below which the chemical can be considered safe; therefore, the mclg is set at zero. if a chemical is a class c carcinogen and scientific data indicate that there is a threshold below which carcinogenesis does not occur, then the mclg is set at a level above zero that is safe. in the past, the mclg for class c carcinogens was based on an rfd approach that applied an additional uncertainty factor of 10 to account for the possible carcinogenic potential of the chemical; if there were no reported noncancer effects, the mclg was based on a nominal lifetime excess cancer risk of 10^-5 to 10^-6, if data were adequate. the office of water is now moving toward the guidance contained in the draft cancer guidelines [ , ], which allows standards for nonlinear carcinogens to be derived based on low-dose extrapolation and a mode-of-action approach. for microbial contaminants that may present a public health risk, the mclg is set at zero because ingesting one protozoan, virus, or bacterium may cause adverse health effects. usepa is conducting studies to determine whether there is a safe level above zero for some microbial contaminants; so far, however, no such level has been established. as mentioned previously, maximum contaminant levels (mcls) are enforceable standards that are set as close to mclgs as possible but take into consideration the availability of treatment technologies and techniques as well as whether reliable analytical methods capable of detecting low concentrations of contaminants are available. if there is no reliable analytical method, then a treatment technique (tt) is set rather than an mcl. a tt is an enforceable procedure or level of technological performance which public water systems must follow to ensure control of a contaminant. in addition, mcls take into account an economic analysis to determine whether the benefits of enforcing the standard justify the costs. for group a and group b carcinogens, mcls are usually promulgated at the 10^-4 to 10^-6 risk level. secondary drinking water regulations are non-enforceable federal guidelines that take into account whether a chemical produces cosmetic effects, such as tooth or skin discoloration, or aesthetic effects, such as affecting the taste, odor, or color of drinking water. because there are some 15 contaminants in drinking water (e.g., aluminum, chloride, copper, and fluoride) that are not considered to be health threatening, secondary maximum contaminant level (smcl) guidelines have been established for public water systems that voluntarily test the water. these secondary standards give the public water systems guidance on removing the contaminants. in most cases, the state health agencies and public water systems monitor and treat their drinking water for secondary contaminants. in order to provide information and guidance concerning drinking water contaminants for which national regulations currently do not exist, the usepa health and ecological criteria division, office of water, in cooperation with the office of research and development, prepares health advisories (ha). these detailed has are used to "estimate concentrations of the contaminant in drinking water that are not anticipated to cause any adverse noncarcinogenic health effects over specific exposure durations" [ ].
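the branching just described can be summarized in a short, hedged sketch; the class labels follow the weight-of-evidence scheme cited in the text, but the branch structure is our simplification of office of water practice, not an official algorithm.

```python
# simplified mclg decision logic by carcinogen class; not an official epa
# procedure, only an illustration of the branches described in the text.

def mclg(carcinogen_class, rfd=None, threshold_evidence=False,
         bw=70.0, di=2.0, rsc=0.20):
    dwel = rfd * bw / di if rfd is not None else None
    if carcinogen_class in ("a", "b"):
        return 0.0                      # no safe dose assumed
    if carcinogen_class == "c":
        if threshold_evidence and dwel is not None:
            return (dwel / 10) * rsc    # rfd approach with an extra uf of 10
        return None                     # derive from a nominal 1e-5 to 1e-6 risk
    if dwel is None:
        raise ValueError("an rfd is required for noncarcinogens")
    return dwel * rsc

print(mclg("a"), mclg("c", rfd=0.01, threshold_evidence=True))  # 0.0 and ~0.007
```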
they include a margin of safety to protect sensitive members of the population (e.g., children, the elderly, and pregnant women). has are not legally enforceable in the united states, are only used for guidance by federal, state, and local officials, and are subject to change as new information becomes available. included in the has is information on analytical and treatment technologies. has are provided for acute or short-term effects as well as chronic effects. the one-day ha, the ten-day ha, and the longer-term ha are based on the assumption that all exposure to the contaminant comes from drinking water, whereas the lifetime ha takes into account other sources such as food, air, etc. the following types of has have been developed [ ]. one-day ha: the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for up to one day of exposure. a one-day ha is generally based on data from acute human or animal studies involving up to about seven days of exposure. the protected individual is assumed to be a 10-kg child with an assumed volume of drinking water (di) of 1 l ingested/day. ten-day ha: the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for up to ten days of exposure. a ten-day ha is generally based on subacute animal studies involving roughly 7-30 days of exposure. similarly to the one-day ha, the protected individual for the ten-day ha is assumed to be a 10-kg child with an assumed di of 1 l ingested/day. longer-term ha: the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for up to approximately seven years (10 % of an individual's lifetime) of exposure, with a margin of safety. a longer-term ha is generally based on subchronic animal studies involving about 90 days to one year of exposure. the protected individuals are assumed to be a 10-kg child with an assumed di of 1 l ingested/day and a 70-kg adult with an assumed di of 2 l ingested/day. lifetime ha: the concentration of a chemical in drinking water that is not expected to cause any adverse noncarcinogenic effects for a lifetime of exposure. a lifetime ha is generally based on chronic or subchronic animal studies. the protected individual is assumed to be a 70-kg adult with an assumed di of 2 l ingested/day. a dwel is calculated and multiplied by an rsc of 20 % to account for exposure from drinking water as well as from other sources (food, air, etc.); therefore, the lifetime ha is derived similarly to the mclg. the following general formula is used to derive the one-day, ten-day, and longer-term has and the dwel:

ha or dwel (mg/l) = noael or loael (mg/kg/day) · bw (kg) / (uf · di (l/day))

health advisories for the assessment of carcinogenic risk: (1) if a contaminant is recognized as a human or probable human carcinogen (group a or b), a carcinogenic slope factor (csf) is derived based on the techniques discussed above. the slope factor is then used to determine the concentrations of the chemical in drinking water that are associated with theoretical upper-bound excess lifetime cancer risks of 10^-4, 10^-5, or 10^-6. the following formula is used to calculate the concentration predicted to contribute an incremental risk level (rl) of 10^-4, 10^-5, or 10^-6:

cdw (mg/l) = rl · bw / (csf · di)

where
cdw = concentration in drinking water at the desired rl (mg/l)
rl = desired risk level (10^-4, 10^-5, or 10^-6)
bw = assumed body weight of an adult human (70 kg)
csf = carcinogenic potency (slope) factor for humans ((mg/kg/day)^-1)
di = assumed water consumption of an adult human (2 l/day).
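the sketch below implements the ha formula and the cancer back-calculation reconstructed above; the noael, uf, and csf values are hypothetical, while the body weights and intakes are the standard ha assumptions quoted in the text.

```python
# ha and cancer-risk back-calculation; the example inputs are hypothetical.

def health_advisory(noael_or_loael, uf, bw, di):
    """ha (mg/l) = noael or loael (mg/kg/day) * bw / (uf * di)."""
    return noael_or_loael * bw / (uf * di)

def conc_at_risk_level(rl, csf, bw=70.0, di=2.0):
    """cdw (mg/l) = rl * bw / (csf * di)."""
    return rl * bw / (csf * di)

one_day = health_advisory(5.0, uf=100, bw=10.0, di=1.0)   # 10-kg child, 1 l/day
lifetime_1e6 = conc_at_risk_level(1e-6, csf=0.05)          # adult, 10^-6 risk
print(one_day, lifetime_1e6)  # 0.5 mg/l and 7e-04 mg/l
```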
(2) if a dwel was calculated for a class a, b, or c carcinogen based on an rfd study (i.e., treating it as a noncarcinogen), then the carcinogenic risk associated with lifetime exposure to the dwel can be calculated to assist the risk manager in assessing the overall risks for comparison. the theoretical upper-bound cancer risk associated with lifetime exposure to the dwel is calculated as follows:

risk = dwel · di · csf / bw

usepa is required by the clean water act of 1972 to develop, publish, and revise ambient water quality criteria (awqc). the awqc "involves the calculation of the maximum water concentration for a pollutant that ensures drinking water and/or fish ingestion exposures will not result in human intake of that pollutant (i.e., the water quality criteria level) in amounts that exceed a specified level based upon the toxicological endpoint of concern" [ ]. in october 2000, usepa issued new guidelines [ ] that replaced the 1980 awqc national guidelines [ ]. the new awqc guidelines incorporated significant scientific advances in the following key areas: cancer risk assessment (the 1986 cancer guidelines [ ] vs the 1999 draft cancer guidelines [ , ]); risk assessments for class c carcinogens using nonlinear low-dose extrapolation; non-cancer risk assessments (the benchmark dose approach and categorical regression); exposure assessments (consideration of non-water sources of exposure); and bioaccumulation in fish (bioaccumulation factors, bafs, are recommended for all compounds to calculate concentrations in fish tissue). in addition, the procedures for deriving awqc under the cwa were made more consistent with the procedures for deriving mclgs under the sdwa. this section discusses guidelines from the methodology for deriving ambient water quality criteria for the protection of human health, hereafter referred to as the awqc methodology guidance [ ], accessible at http://www.epa.gov/ost/humanhealth/method/index.html. state and tribal environmental agencies are responsible for developing ambient water quality standards (awqs) for each water body in the state based on the guidance provided by usepa [ ] and the uses that water bodies have been designated for (i.e., drinking water supply, recreation, fish protection, etc.). these designated uses are part of the water quality standards, provide a regulatory goal for the water body, and define the level of protection assigned to it. the watershed assessment, tracking & environmental results database (waters), accessible at http://www.epa.gov/waters/, provides information on the water body designation for each individual state and tribe. the exposure pathways typically evaluated for awqc are direct ingestion of drinking water obtained from that water body and consumption of fish/shellfish obtained from that water body. when an awqc is set, anticipated exposures from other sources (e.g., food, air) are taken into account for noncarcinogenic effects and for carcinogenic effects evaluated by the margin of exposure (moe) approach (i.e., class c carcinogens, using the weight-of-evidence (woe) cancer guideline terminology). the amount of exposure attributed to each source compared to total exposure is called the relative source contribution (rsc) for that source. the rsc is typically set at 20 %, but if a site-specific assessment is conducted for a particular water body and it can be demonstrated that other sources of exposure are not likely to occur, then the rsc can be set as high as 80 %.
an exposure decision tree approach is described in the methodology guidance to assist in calculating a site-specific rsc for a water body [ ]. the allowable dose (typically, the rfd) is then allocated via the rsc approach to ensure that the criterion is protective enough, given the other anticipated sources of exposure:

awqc (mg/l) = rfd · rsc · bw / (di + Σi (fli_i · baf_i))

where
awqc = ambient water quality criterion (mg/l)
rfd = reference dose for non-cancer effects (mg/kg-day)
rsc = relative source contribution factor to account for non-water sources of exposure; it may be either a percentage (multiplied) or an amount subtracted, depending on whether multiple criteria are relevant to the chemical
bw = human body weight (default = 70 kg for adults)
di = drinking water intake (default = 2 l/day for adults)
fli_i = fish intake at trophic level (tl) i (i = 2, 3, and 4); defaults for total intake are 0.0175 kg/day for the general adult population and sport anglers and 0.1424 kg/day for subsistence fishers. the trophic-level breakouts for the general adult population and sport anglers are tl2 = 0.0038 kg/day, tl3 = 0.0080 kg/day, and tl4 = 0.0057 kg/day
baf_i = bioaccumulation factor at trophic level i (i = 2, 3, and 4), lipid normalized (l/kg)

the following equation is used for deriving awqc for chemicals evaluated with a nonlinear low-dose extrapolation (margin of exposure), based on guidance in the draft cancer guidelines:

awqc (mg/l) = (pod / uf) · rsc · bw / (di + Σi (fli_i · baf_i))

where
pod = point of departure for carcinogens based on a nonlinear low-dose extrapolation (mg/kg/day), usually a loael, noael, or led10
uf = uncertainty factor for carcinogens based on a nonlinear low-dose extrapolation (unitless)

for carcinogens, only two water sources (i.e., drinking water and fish ingestion) are considered when awqc are derived. awqc for carcinogens are determined with respect to the incremental lifetime risk posed by a substance's presence in water, and are not set with regard to an individual's total risk from all sources of exposure [ ]. the 1986 cancer guidelines are the basis for the iris risk numbers that were used to derive the current awqc, except for a few compounds developed using the revised cancer guidelines [ , ]. each new assessment applying the principles of the draft cancer guidelines [ , ] will be subject to peer review before being used as the basis of revised, updated awqc. the cancer-based awqc is calculated using the risk-specific dose (rsd) and the other input parameters listed below. the rsd and the awqc for carcinogens are calculated for the specific targeted lifetime cancer risk (i.e., 10^-4, 10^-5, or 10^-6), using the following two equations:

rsd (mg/kg/day) = target cancer risk / csf
awqc (mg/l) = rsd · bw / (di + Σi (fli_i · baf_i))

where
rsd = risk-specific dose (mg/kg/day)
target cancer risk = 10^-4, 10^-5, or 10^-6 (lifetime incremental risk)
csf = cancer slope factor ((mg/kg-day)^-1)

exposure parameters based on a site-specific or regional basis can be substituted to reflect regional or local conditions and/or specific populations of concern. these include the relative source contribution, the fish consumption rate, and the baf (including factors used to derive bafs, such as the concentration of particulate organic carbon applicable to the awqc (kg/l) or the concentration of dissolved organic carbon applicable to the awqc (kg/l), the percent lipid of fish consumed by the target population, and the species representative of given trophic levels). states and tribes are encouraged to make adjustments using the information and instructions provided in the awqc methodology guidance [ ].
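a sketch of the awqc equations reconstructed above follows, using the methodology defaults quoted in the text (70-kg adult, 2 l/day, rsc of 20 %, trophic-level fish intakes in kg/day); the rfd, csf, and baf values are hypothetical.

```python
# awqc for noncarcinogens and carcinogens; the inputs are made-up examples.

FLI = {2: 0.0038, 3: 0.0080, 4: 0.0057}   # default tl fish intakes (kg/day)

def fish_term(bafs):
    return sum(FLI[i] * bafs[i] for i in FLI)

def awqc_noncarcinogen(rfd, bafs, rsc=0.20, bw=70.0, di=2.0):
    """awqc (mg/l) = rfd * rsc * bw / (di + sum_i fli_i * baf_i)."""
    return rfd * rsc * bw / (di + fish_term(bafs))

def awqc_carcinogen(target_risk, csf, bafs, bw=70.0, di=2.0):
    """rsd = target_risk / csf; awqc = rsd * bw / (di + sum_i fli_i * baf_i)."""
    rsd = target_risk / csf
    return rsd * bw / (di + fish_term(bafs))

bafs = {2: 100.0, 3: 300.0, 4: 1000.0}    # hypothetical lipid-normalized bafs (l/kg)
print(awqc_noncarcinogen(0.01, bafs))      # ~0.0134 mg/l
print(awqc_carcinogen(1e-6, 0.5, bafs))    # ~1.3e-05 mg/l
```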
the national water quality standards database (wqsdb), at the web address http://www.epa.gov/wqsdatabase/, provides access to several wqs reports that provide information about designated uses, water body names, state numeric water quality standards, and epa recommended numeric water quality criteria. the wqsdb allows users to compare wqs information across the nation using standard reports. some states and tribes use an incidental ingestion value (ii) instead of the di value when the water body is used for recreational purposes and not as a source of drinking water; however, an ii value is not used to develop national awqc. the small default ii value is assumed to result from swimming and other activities, and the fish intake value is assumed to remain the same. besides protection of human health, awqc are developed based on other criteria, such as organoleptic effects, aquatic life protection, sediment quality protection, nutrient criteria, microbial pathogens, biocriteria, excessive sedimentation, flow alterations, and wildlife criteria. for example, the national recommended water quality criteria table (http://www.epa.gov/ost/standards/wqcriteria.html) lists freshwater or saltwater criteria maximum concentration (cmc) values, which are the acute limits for a priority pollutant for the protection of aquatic life in freshwater or saltwater. the freshwater or saltwater criterion continuous concentration (ccc) value is the chronic limit for a priority pollutant for the protection of aquatic life in freshwater or saltwater, respectively. the table also includes criteria for organoleptic effects for pollutants, developed to prevent undesirable taste and/or odor imparted by them to ambient water. in some cases, a water quality criterion based on organoleptic effects or aquatic life protection will be more stringent than a criterion based on toxicologic endpoints. information and links to guidance documents relating to these subjects may be reached from the office of water, water quality criteria and standards program page at http://www.epa.gov/waterscience/standards/. as more knowledge is gained about the waste generated and disposed of in landfills by our society, there is great concern about the toxic effects that this waste has on our environment as well as on animal and human health. over the past decade, there have been numerous attempts to recycle various waste products generated by our society. this section will review some of the recent literature on recycled hazardous waste materials. recycled concrete pavement used as aggregate for the construction of highways can produce effluent with a high ph that can enter the underdrains [ ]. when portland cement is recycled, the concrete consists of limestones and minerals of which a large fraction is lime (cao), silica (sio2), alumina (al2o3), and iron oxide (fe2o3). ca(oh)2 is only sparingly soluble in water, and the saturated solution has a ph of about 12.4 at 25 °c; the ph of the water effluent in underdrains is correspondingly high. at this ph, caco3 precipitates out and forms deposits on the screen [ ]. the deposition on the screens produces clogging and causes scale to form in the underdrain, thereby causing vegetative kill around the outlet. in addition to recycled concrete, rubber is recycled for asphalt pavements. recycling of rubber provides a means of disposal of scrap tires and reduces the quantity of construction materials needed for the asphalt. asphalt pavement contains hot mix asphalt with and without crumb rubber modifier.
the use of rubber tires reduces the weight of the asphalt and provides a good drainage medium, as well as extending the life of the asphalt [ ]. while there is an apparent benefit to recycling rubber, the minnesota pollution control agency has found that the use of waste tires in subgrade roadbeds can lead to leaching into the run-off water. in acidic conditions, leaching of barium, cadmium, lead, chromium, selenium, and zinc occurred from the asphalt, while in basic conditions there was leaching of polynuclear aromatic hydrocarbons. thus, the recommended allowable levels (rals) for drinking water standards may be exceeded in areas where there is recycled rubber in the asphalt. paper or wood itself does not contain any hazardous chemicals unless the paper undergoes recycling. the recycling process requires de-inking of waste paper prior to recovery of the fiber, generating a sludge that contains particles of ink and fibers too short to be converted to a finished paper product [ ]. de-inking chemicals such as sulfur, chlorine, cadmium, and fluorine are present in the sludge generated. a sludge is any solid, semi-solid, or liquid waste generated from a municipal, commercial, or industrial wastewater treatment plant, water supply treatment plant, or air pollution control facility, exclusive of treated effluent from a wastewater treatment plant [ ]. thus, hazardous waste can be generated from the paper recycling process. a commodity used by numerous industries is plastic. because of the enormous amount of plastic disposed of by consumers on a daily basis, it has become a common recycled item at many facilities. some metal sites will recycle the plastic insulation generated by their facility. the recycled plastic from such a facility generally includes metals such as lead, copper, manganese, and zinc, as well as dioxins, polychlorinated naphthalenes, and polychlorinated biphenyls. a leachate, i.e., contaminated run-off water, is formed from the plastic "fluff" during the recycling process. the leachate runs off into the water drains, carrying hazardous chemical residue into soil and ground water. the plastic fluff is generally recycled on site into tiles, cushions, traffic cones, fenders, and highway barriers. the non-recyclable material and contaminated soil are generally taken to an off-site landfill. another common way to recycle plastic is to use the "sink-float" process, in which paper, fiber, and metal can be separated from the plastics and then recycled. the "sink-float" process uses water: the heavy items sink and the light items float [ ]. it has been demonstrated that recycled plastics can be used as construction material as an alternative to lumber. this product is made from used bottles collected at curbside for recycling. the recycled plastics undergo sorting to remove unpigmented polyethylene milk/water jugs and polyethylene terephthalate carbonated beverage bottles. the leftover plastic material is referred to as curbside tailings (ct). ct consists predominantly of polyolefins (polyethylene and polypropylene), with the remainder made of polyethylene terephthalate, polystyrene, polyvinyl chloride, and other plastics [ ]. the ct product has reasonable strength compared to wood. weis et al. evaluated three ct recycled plastic formulations in fiddler crabs, snails, and algae [ ]. it was found that limb regeneration of the fiddler crabs was accelerated with all three formulations, but that the formulations had no effect on fertilized eggs or larval development.
there was, however, a significant reduction in the sperm fertilization success rate [ ]. furthermore, none of the three ct plastic formulations had an effect on the survival rate of snails or of the algal species tested. the presence of metals in sludge and wastewater is a current problem. for instance, agricultural land fertilized with sludge also generally receives cadmium (cd++) from aerial deposition and phosphatic fertilizers. cd++ is considered a hazardous chemical and has been shown to produce toxicity of the lung and kidney and to be carcinogenic in rats [ ]. the highest concentrations of cd++ are found in tobacco, lettuce, spinach, and other leafy products/vegetables. using crop uptake data from field trials, it is possible to relate the potential human dietary intake of cd++, on which the hazard depends, to soil concentrations of cadmium [ ]. transfer via farm animals to meat and dairy products for human consumption is thought to be minimal, even allowing for some direct ingestion of sludge-treated soil by the animals. background soil contains only low levels of cd++; most of the cd++ present in raw sewage is transferred to the sludge, primarily by sedimentation. in order for cd++ uptake by roots to occur, cd++ must be present in its soluble form adjacent to the root membrane for some finite period [ ]. generally, a decrease in soil ph will enhance the solubility of cd++, which will increase the crop uptake of cd++. who/usepa have agreed on a maximum acceptable daily uptake of cd++ (expressed in µg/day); considerably higher daily intakes, sustained over a period of decades, would be necessary to produce toxicity to the kidney. farm animals fed fodder crops grown on sludge-treated soil will absorb only a small percentage of the cd++ ingested [ ]. in addition to recycling cd++, lead (pb)-edta wastewater also undergoes recycling. edta is a chelating agent used in the soil washing process for the decontamination of pb-contaminated soil. kim et al. [ ] outline a method to recycle pb-edta wastewater by substituting the pb in the pb-edta complex with fe3+ ions at a low ph, followed by precipitation of the released pb ions with phosphate or sulfate ions; the fe3+-edta complex is then precipitated at a high ph with naoh. the recycled edta solution can be reused several times without losing its extractive power [ ]. recycling computers can be extremely hazardous if they are not properly disposed of. many parts of the computer are toxic. to begin with, the cathode ray tube (crt) glass may be classified as a hazardous waste due to its high pb concentration. the liquid crystal display (lcd), which contains benzene-based material for the liquid crystal, is also considered hazardous. in addition, the mercury switch, mercury relay, lithium battery, ni-mh battery, ni-cd battery, and polychlorinated biphenyl (pcb) capacitor are all hazardous materials. because of this, taiwan has recently established guidelines for the proper disposal of computers and/or computer parts [ ].
the nine guidelines are: i) landfill or incineration of scrap computers shall be avoided; ii) the phosphorescent coatings which have been applied to the glass panel of the crt must be removed; iii) all batteries (li, ni-cd, ni-mh) must be removed by non-destructive means; iv) all pcb capacitors exceeding a specified diameter and height (in cm) must be removed; v) all mercury-containing parts must be removed; vi) the crt must be ventilated before being stored inside a building; vii) the high-pb-content funnel glass of the crt must be properly treated; viii) the lcd of a notebook computer must be removed by non-destructive means; and lastly, ix) plastic that contains the flame retardant bromine shall be treated properly. hopefully, this model can be used in other countries where computer waste is becoming a major issue of environmental concern. organic solvents have many applications in industry, such as the formulation of products, the thinning of products prior to use, or the cleaning of materials by removal of contaminants. during these applications, solvent emission and waste solvent generation occur. most organic solvents are known to have adverse effects on both human health and the environment. solvents may affect the body through inhalation and skin contact and lead to either acute or chronic poisoning [ ]. the effects of acute poisoning include narcosis; irritation of the throat, eyes, or skin; dermatitis; and even death, while the effects of chronic poisoning include damage to the blood, lung, kidney, and gastrointestinal and/or nervous systems. in addition, many solvents are flammable in nature. waste management of organic solvents includes source reduction, recycling, treatment, and disposal [ ]. case studies indicate that dry cleaning facilities use perchloroethylene (perc), and workers around the cleaning machines are subject to high health risks; thus, vapor recovery systems are used to reduce perc emissions, especially from older machines [ ]. riess et al. [ ] evaluated the recyclability of flame-retarded polymers containing brominated flame retardants from televisions (tvs) and personal computers (pcs) obtained from a recycling company. the flame-retarded polymers identified in the tvs were high-impact polystyrene, acrylonitrile butadiene styrene, polystyrene, and polyphenylene oxide-polystyrene; those found in the pcs were acrylonitrile butadiene styrene, polyphenylene oxide-polystyrene, high-impact polystyrene, polystyrene, and polyvinyl chloride. recycling may be practical if a proportion of new material is added to the mixture [ ]. the denver potable reuse pilot project was begun to recycle wastewater effluent to achieve potable water quality while being economically competitive with conventional technology. moreover, this project sponsored the first large-scale risk assessment studies using experimental animals [ ]. after ten years, this pilot project was converted to a demonstration treatment plant to address many of the technical and non-technical issues. the objectives of the reuse demonstration project were "(i) to establish end product safety, (ii) to demonstrate the reliability of the process, (iii) to generate public awareness, (iv) to generate regulatory agency acceptance, and (v) to provide data for a large-scale implementation" [ ]. however, insuring end-product water safety proved difficult to demonstrate, because the health standards established for drinking water were not intended to apply to treated wastewaters.
thus, additional criteria were used to prove that the effluent was suitable for human consumption. the criteria used in this project were the following:
- the product was compared with the national primary and secondary drinking water regulation values
- the product was compared with federal or state regulated parameters
- effluent levels were compared with the levels suggested to be hazardous
- concentrations in the product water were compared to denver's current drinking water criteria or to other "acceptable" water supplies in the u.s. and/or worldwide
- whole-animal studies (i.e., chronic toxicity, oncogenicity, and reproductive tests) were conducted using denver's current drinking water as a comparison standard
the denver project used two dosage groups per water sample: reclaimed water from the demonstration plant with reverse osmosis treatment (ro) and denver drinking water from the foothills water treatment plant (dw). ro and dw were administered to fischer 344 rats and b6c3f1 mice at dosages at least 500 times the amount found in the original water samples. ultrafiltration water treatment samples (uft) were administered only to rats at the high dose (500x), and distilled water was used as the control in both the rat and mouse studies. in addition to the chronic toxicity studies, reproductive toxicity studies were performed to identify potential adverse effects on reproductive performance, intrauterine development, and growth and development of the offspring; the teratology phase was designed to identify potential embryotoxicity and teratogenicity. administration of ro, uft, and dw water at 500 times the amount found in the original water samples over the full chronic study period in rats did not result in any toxicologic or carcinogenic effects [ ]. the survival rate was slightly higher among the female rats than among the male rats. a variety of neoplasms was seen in all treatment groups (table ). the "c" cell tumors in the thyroids were not considered treatment related, because these neoplasms were within the anticipated ranges for the age and strain of the rat. similar results were seen in the chronic mouse studies, where no toxicity or carcinogenicity was seen after high-dose treatment, and the survival rate was identical to that of the rats. the organs most affected by the treatment were the hematopoietic system, liver, lung, and pituitary gland [ ]. the remarkable finding of the reproductive studies was "the absence of treatment-related effects on reproductive performance, growth, mating capacity, survival of offspring, or fetal development in any of the treatment groups" [ ]. the denver project met the objectives outlined at the start of the project, and all three of the toxicity studies demonstrated that concentrations 500 times the original amount seen in the sample water did not cause any notable toxicity. thus, secondary wastewater can be recycled into safe drinking water for human consumption. chemical mixtures have always been an issue of concern when addressing/assessing toxicity to the environment and to humans. an interagency agreement between atsdr and the ntp resulted in participation in a public health service (phs) activity related to the superfund act (cercla, the comprehensive environmental response, compensation and liability act) [ ]. yang was the lead scientist at the national institute of environmental health sciences (niehs)/ntp for the development of the "superfund toxicology program".
particular focus centered on chemical mixtures of environmental concern, especially groundwater contaminants derived from hazardous waste disposal and agricultural activities. yang states that obtaining a "representative" sample is practically impossible [ ]. a core sample from one location of a site will certainly differ from a core sample from a different location of the site. also, core samples taken from the exact same location at different times of day and/or on different days will differ, because weather, activity at the site, and the composition of the waste can change, degrading existing compounds or synthesizing new ones. thus, yang proposed a strategy to study chemical mixtures [ ]:
1. study chemical mixtures between binary and complex mixtures, to avoid duplication of earlier studies that evaluated the two extremes
2. study chemically defined mixtures, to make determination and mechanistic studies manageable
3. study chemical mixtures related to groundwater contamination, because groundwater contamination is among the most critical environmental issues
4. study chemical mixtures at environmentally realistic concentrations, to assess the potential health effects of long-term, low-level exposure to environmental pollution
5. study chemical mixtures with potential for life-time exposure.
a chemical mixture of groundwater contaminants from hazardous waste sites and agricultural activities was created. this formulated mixture contained 25 chemicals that simulated groundwater contamination, as shown in table . the concentrations selected represent the average survey values of hazardous waste disposal sites representing all usepa regions. even though such a mixture may never exist in reality, new insights may be gained to help extrapolate potential health effects from laboratory animals to humans. for most of the endpoints examined in this study, the results were negative. the negative results of this study were significant, because the various mixtures were tested at levels orders of magnitude higher than potential human exposure levels [ ]. insights gained from yang's project were: i) the effects will be subtle and marginal; ii) toxicologic interactions are possible at environmentally realistic levels of exposure; iii) toxic responses may arise from unconventional toxicologic endpoints (immunosuppression, myelotoxicity); iv) possibly subclinical residual effects may become more interactive with subsequent insults from chemical, physical, and/or biological agents; and v) negative results do not indicate safety for humans, because the studies were done on rodents. subsequent work showed that low doses of this mixture increased the acute toxicity of high doses of known hepatic and renal toxicants [ ]. recently, niehs has begun to focus on simpler mixtures of chemicals that share common mechanisms of action rather than complex mixtures. over the past several decades, much effort has been made to establish national guidance on proper waste handling and disposal techniques, such that there are now many local, state, national, and federal agencies that provide guidelines to protect the surface and ground waters for humans.
these guidelines also provide methods and approaches used to evaluate potential health effects and assess risks from contaminated source media (i.e., soil, air, and water), as well as to establish standards of exposure, or health benchmark values, in the different media which are not expected to produce environmental or human health impacts. the use of the risk assessment methodology by the various regulatory agencies, following the steps of i) hazard identification, ii) dose-response assessment, iii) exposure assessment, and iv) risk characterization, balances the risks and benefits and sets the "acceptable" target levels of exposure to ground water and surface water.
notes: a) for noncarcinogenic effects, at = ed; for carcinogenic effects, at = 70 years. b) see the exhibit of the interim dermal guidance document [ ]. different regulatory or state agencies may recommend different exposure parameters based on scientific policy or risk management decisions.
references:
- waste management guide. the bureau of national affairs
- industrial waste recycling. in: jessup dh (ed) waste management guide: laws, issues, and solutions. the bureau of national affairs
- revised rcra inspection manual. oswer directive
- quantitative risk assessment for environmental and occupational health
- casarett and doull's toxicology: the basic science of poisons
- the emerging field of ecogenetics
- guidelines for carcinogen risk assessment. federal register
- guidelines for carcinogen risk assessment. review draft. office of research and development
- a weight-of-evidence scheme for assessing interactions in chemical mixtures
- approaches and challenges in risk assessments of chemical mixtures. in: yang rsh (ed) toxicology of chemical mixtures
- health effect test guidelines: acute toxicity testing. us epa, office of prevention, pesticides, and toxic substances
- chlorethoxyfos: review of a repeated exposure inhalation study and evaluation of that study by the hazard identification assessment review committee. us epa, office of prevention, pesticides, and toxic substances
- biologic markers in risk assessment for environmental carcinogens
- health effect test guidelines: combined chronic toxicity/carcinogenicity. us epa, office of prevention, pesticides, and toxic substances
- methods for derivation of inhalation reference concentrations and application of inhalation dosimetry. us epa, office of research and development
- soil screening guidance: technical background document. us epa, office of solid waste and emergency response
- health advisories of drinking water contaminants. us epa, office of water
- assessment and management of chemical risks
- risk assessment in the remediation of hazardous waste sites
- methodology for deriving ambient water quality criteria for the protection of human health
- issues in qualitative and quantitative risk analysis for developmental toxicology
- toxicology information resources at the environmental protection agency
- risk assessment guidance for superfund, vol 1: human health evaluation manual (part a)
- assessment protocol for hazardous waste combustion facilities
- can we assign an upper limit to skin permeability?
- international life science institute (ilsi): exposure to contaminants in drinking water, estimating uptake through the skin and by inhalation
- memorandum on body weight estimates based on nhanes iii data, including data tables and graphs.
- analysis conducted and prepared by westat, under epa contract
- usda: continuing survey of food intakes by individuals, and diet and health knowledge survey
- measures of compounding conservatism in probabilistic risk assessment
- guiding principles for monte carlo analysis. risk assessment forum
- chemical risk assessment numbers: what should they mean to engineers?
- risk assessment guidance for superfund, vol 1: human health evaluation manual (part e, supplemental guidance for dermal risk assessment)
- derivation of toxicity values for dermal exposure: supplemental guidance to rags. region iv bulletins, human health risk assessment, waste management division
- supplementary guidance for conducting health risk assessment of chemical mixtures
- guidelines for the health risk assessment of chemical mixtures
- the toxicity of poisons applied jointly
- a practical guide to understanding, managing, and reviewing environmental risk assessment reports
- addendum: region risk management. draft human health risk assessment protocol for hazardous waste combustion facilities
- epa year guidance document
- guidelines and methodology used in the preparation of health effect assessment chapters of the consent decree water criteria documents
- implementing the food quality protection act. us epa, office of prevention, pesticides, and toxic substances
- remediation of hazardous effluent emitted from beneath newly constructed road systems and clogging of underdrain systems
- assessment of water pollutants from asphalt pavement containing recycled rubber in rhode island. the rhode island department of transportation
- waste-to-energy plant for paper industry sludge disposal: technical-economic study
- superfund at work: hazardous waste cleanup efforts nationwide. us epa, solid waste and emergency response
- toxicity of construction materials in the marine environment: a comparison of chromated-copper-arsenate-treated wood and recycled plastic
- the control of the heavy metals health hazard in the reclamation of wastewater sludge as agricultural fertilizer
- cadmium: a complex environmental problem. part ii
- recycling of lead-contaminated edta wastewater
- management of scrap computer recycling in taiwan
- management, disposal and recycling of waste industrial organic solvents in hong kong
- analysis of flame retarded polymers and recycling materials. chemosphere
- health effect studies on recycled drinking water from secondary wastewater. in: yang rsh (ed) toxicology of chemical mixtures
- toxicology of chemical mixtures derived from hazardous waste sites or application of pesticides and fertilizers. in: yang rsh (ed) toxicology of chemical mixtures
- toxicology studies of a chemical mixture of groundwater contaminants: hepatic and renal assessment, response to carbon tetrachloride challenge, and influence of treatment-induced water restriction
- texas natural resource conservation commission: texas risk reduction program rule
- review draft addendum to the methodology for assessing health risks associated with indirect exposure to combustor emissions
- estimating exposure to dioxin-like compounds
- review draft: development of human health-based and ecologically-based exit criteria for the hazardous waste identification project, vols i and ii. office of solid waste
the generic intake equation is:

intake = (cw · ir · ef · ed) / (bw · at)

a) a number of studies have shown that an age-adjusted approach should be used to calculate intakes of carcinogens for children, to take into account the differences in ingestion rates, body weights, and exposure durations between young children and older individuals [ ]. b) the exposure parameters were taken from the texas risk reduction program rule [ ] and are provided as examples only; different regulatory or state agencies may recommend different exposure parameters based on scientific policy or risk management decisions [ ]. c) use only when an rfd is based on health effects in children [ ]. d) the office of water is in the process of preparing an exposure assessment technical support document in which an age-adjusted approach will be used to calculate fish intakes of carcinogens for children, for the same reason [ ].
the chemical concentration in fish is estimated as follows:
cf = cw · bcf, for copcs with low log kow
cf = cw · baf, for copcs with high log kow
cf = csed · bsaf, for dioxins, furans, and polychlorinated biphenyls
where
cf = chemical concentration in fish (mg/kg), fresh weight (fw)
cw = chemical concentration in water (mg/l)
bcf = bioconcentration factor (l/kg fw)
baf = bioaccumulation factor (l/kg fw)
csed = chemical concentration in sediment (mg/kg)
bsaf = biota-sediment accumulation factor (unitless)
please refer to reference [ ] for a detailed discussion of the procedures used to calculate the chemical concentration in fish; different regulatory or state agencies may recommend different procedures based on scientific policy or risk management decisions [ , ]. please refer to the appendix of reference [ ] for bcf, baf, and bsaf values and the procedures for calculating these values; see also [ , ]. bsafs are used to account for the transfer of copcs from the bottom sediment to the lipid in fish [ ].
for inhalation of volatiles from water (organic compounds, non-steady state; not applicable to inorganics), the concentration at the exchange boundary is obtained from the water concentration via a volatilization factor:
cinh = cw · vf
where
cinh = the concentration of the copc at the exchange boundary (mg/m3)
cw = chemical concentration in water (mg/l)
vf = volatilization factor ((mg/m3)/(mg/l))
ef = exposure frequency (days/year)
ed = exposure duration (years)
at = averaging time (the period over which exposure is averaged)
specific fate and transport models are used to derive volatilization factors to quantify the transfer of volatile copcs from ground water into an enclosed space, from ground and surface waters into ambient air, etc.; these fate and transport models are discussed elsewhere in this book. the exposure parameters for ef, ed, and at given in the appendix can be used for the residential adult, residential child, and commercial/industrial worker for some pathways, but site-specific exposure parameters may need to be developed for other pathways.
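the sketch below implements the generic intake equation above; the exposure parameters are examples only (a residential drinking-water scenario), echoing the caveat that different agencies may recommend different values.

```python
# chronic daily intake from the generic equation; example parameters only.

def intake(cw, ir, ef, ed, bw, at_years):
    """chronic daily intake (mg/kg-day) = cw*ir*ef*ed / (bw*at)."""
    return cw * ir * ef * ed / (bw * at_years * 365.0)

# noncarcinogenic effects: at = ed; carcinogenic effects: at = 70 years
cdi_noncancer = intake(cw=0.05, ir=2.0, ef=350, ed=30, bw=70.0, at_years=30)
cdi_cancer    = intake(cw=0.05, ir=2.0, ef=350, ed=30, bw=70.0, at_years=70)
print(cdi_noncancer, cdi_cancer)  # ~1.4e-03 and ~5.9e-04 mg/kg-day
```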
along with this development, biosafety concerns have emerged pointing to the risks for human health and in a lesser extent for the environment associated to the handling of animal cell cultures. the management of these risks requires a thorough risk assessment of both the cell cultures and the type of manipulation prior the start of any activity. it involves a case-by-case evaluation of both the intrinsic properties of the cell culture genetically modified or not and the probability that it may inadvertently or intentionally become infected with pathogenic micro-organisms. the latter hazard is predominant when adventitious contaminants are pathogenic or have a better capacity to persist in unfavourable conditions. consequently, most of the containment measures primarily aim at protecting cells from adventitious contamination. cell cultures known to harbour an infectious etiologic agent should be manipulated in compliance with containment measures recommended for the etiologic agent itself. the manipulation of cell cultures from human or primate origin necessitates the use of a type ii biosafety cabinet. the scope of this chapter is to highlight aspects relevant for the risk assessment and to summarize the main biosafety recommendations and the recent technological advances allowing a mitigation of the risk for the handling of animal cell cultures. specified-pathogen-free tse transmissible spongiform encephalopathies biosafety is a concept that refers to the need to protect human health and the environment from the possible adverse effects of pathogenic and/or genetically modified organisms and micro-organisms used in basic research, research and development (r&d) and modern biotechnology. to this end, a case-by-case risk assessment is conducted which consists in the identification and characterisation of potential effects, which may be intended or unintended, together with an assessment of the likelihood and consequences should any effect occur. depending on the risks identified, risk management measures can be proposed. the use of cell cultures may fall within the scope of one or several regulatory provisions which consider an assessment of biological risks. for example in europe, tissue culture work will in many cases involve the use of genetically modified cell lines as well, in which case a risk assessment should be made in accordance with the provisions of the directive / /ec related to the contained use of genetically modified micro-organisms (european commission ). cell culturing activities aiming at manufacturing biopharmaceuticals are covered by the regulation (ec) no / and its amending acts laying down procedures for the authorisation and supervision of medicinal products for human and veterinary use (european commission a), whereas activities that involve the use of human cells and tissues for application to the human body falls within the scope of the directive / /ec and its amending acts (european commission b) . the manipulation of animal cell cultures also exposes the worker to potential biological risks which are considered under the provision of the european directive / / ec (european commission ) . it should be pointed out that guidelines aiming at mitigating the biological risks for the laboratory workers, public health and the environment have been issued by some scientific advisory bodies or competent authorities (world health organization ; centers for disease control and prevention a; swiss expert committee for biosafety ). 
while biosafety recommendations (as outlined hereafter) are principally aimed at providing maximal protection of human health (including that of laboratory workers) and the environment, it is recognised that many of the precautionary measures will also directly benefit the quality of research activities involving animal cell cultures. indeed, cross-contamination (lucey et al.; capes-davis et al.; stürzl et al.; jäger et al.; johnen et al.; macleod et al.) or inadvertent contamination with infectious micro-organisms (bacteria, fungi, yeasts, viruses, and prions) plagues many researchers, often leading to unproductive data, misinterpretation of results, and a considerable waste of time and energy (mahy et al.; drexler and uphoff; mirjalili et al.; cobo et al.; pinheiro de oliveira et al.). it should also be emphasized that even though good manufacturing practice (gmp) aims at protecting the product, some of the gmp measures are compatible with biosafety measures and prove to be complementary. the objective of this chapter is to address and review biorisk assessment and management considerations for diagnostic and research activities involving cell cultures. from an historical perspective, the assessment of biological risks has an empirical basis and has resulted from the awareness of the scientific community with regard to the risks associated with the handling of pathogenic organisms, as demonstrated through many reported cases of laboratory-acquired infections, followed by the potential risks associated with experiments involving recombinant dna. in the 1970s, some initiatives for implementing measures guaranteeing the safe use of recombinant dna were linked to the safety measures which at that time were already successfully applied in microbiology for the containment of pathogenic organisms (national institutes of health). the conjunction of these two aspects constitutes two pillars of biosafety and has led to a classification system of organisms into risk classes or risk groups. so far, many risk assessments have been carried out by the scientific community on the use of pathogenic organisms (genetically modified or not), regardless of the scale or the purpose of the activity. the basis for this risk assessment methodology takes into account the most recent scientific and technical data and uses a scientifically sound approach. the methodology for the biological risk assessment of contained use activities involving pathogenic and/or genetically modified organisms (gmo) identifies and takes into account the probability of occurrence and the severity of a potential negative effect on public health (including exposed workers) and/or the environment. as a result of this methodology, a well-characterised risk will lead to the choice of appropriate preventive measures, encompassing the adoption of an appropriate containment level, the use of safety equipment including personal protective equipment, work practices, and waste management. the risk assessment methodology is commonly used, and its five-step approach is described in the flow diagram referred to below. the first step (1) takes into account the characteristics of the organism(s) used and, in the case of genetic modification, the genetic material introduced and the resulting gmo. based on information relative to their harmful characteristics, natural pathogenic micro-organisms can be categorised into several classes of risk or risk groups.
this classification takes into account the severity of the disease that pathogenic organisms may cause to human or animal health, their ability to spread amongst the population, and the availability of prophylaxis or efficient treatment (world health organization). for zoopathogens, the classification system is mainly based on the definitions of the world organisation for animal health (oie), which categorises animal pathogens into four groups according to their risk to animal health and, more recently, their risk to human health as well (world organisation for animal health). micro-organisms that are unlikely to cause disease are classified into class of risk 1, while etiologic agents responsible for severe diseases with a high potential of transmissibility and for which no prophylaxis or treatment is available are assigned to class of risk 4. as such, pathogenic organisms are categorised from class of risk 2 up to class of risk 4. some periodically revised reference lists issued by international and national authorities or advisory committees classify natural biological agents (not genetically modified) into risk groups or assign the biosafety level under which these should be manipulated. in the second step (2), the magnitude of the identified negative effects, such as human diseases (including allergenic or toxic effects) or the transfer of genetic material to other organisms, is characterized. in the third step (3), an assessment is performed of the exposure of the laboratory worker, the population, and/or the environment to the considered organism, and of the consequences of each negative effect should it occur. in the fourth step (4), a characterization of the risk is performed, resulting in the assignment of the risk level associated with the contained use of the organism(s). on this basis, the containment measures and other protection measures (e.g. safe work practices, safety equipment, and biological waste management) to be adopted are determined. there are four levels of risk to which contained uses can be assigned, with the level of risk increasing from 1 to 4 (fig.: flow diagram summarizing the biorisk assessment and management methodology). the final step (5) consists in definitively classifying the contained use activity by conducting a re-assessment of the whole procedure before starting the research, diagnosis, or production activity. in this chapter, the risk assessment methodology applied to animal cell cultures is developed and illustrated by examples. it is important to mention that such a risk assessment is always performed on a case-by-case basis by the scientist(s) responsible for the activity and the biosafety officer (or biosafety professional), in compliance with local guidance and regulatory requirements. the risk assessment applied to animal cell cultures relies on a thorough evaluation of both the intrinsic properties of the cell culture (including subsequent properties acquired as a result of genetic modification(s)) and the possibility that the cell culture may inadvertently be contaminated or deliberately infected with pathogenic micro-organisms. it also includes an exposure assessment, which means that the type of manipulation carried out with the cell cultures is taken into account. the assessment of cell cultures harbouring pathogens follows the same principles as the assessment of the pathogens themselves.
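to make the stepwise logic concrete, here is a deliberately simplified toy sketch, not an actual regulatory algorithm: the level of risk of a contained use starts from the highest risk class among the organisms handled and may be adjusted case by case (e.g. non-infectious life-cycle stages, aerosol-generating manipulations).

```python
# toy illustration of the five-step logic described above; the adjustment
# flags and the max() rule are our simplification, not a regulatory rule.

def contained_use_risk_level(risk_classes, mitigating=False, aggravating=False):
    level = max(risk_classes)        # step 1: organism characteristics
    if mitigating and level > 1:     # steps 2-3: effects and exposure assessment
        level -= 1
    if aggravating and level < 4:
        level += 1
    return level                     # step 4: risk level / containment level 1-4

print(contained_use_risk_level([2, 3]))                 # 3
print(contained_use_risk_level([3], mitigating=True))   # 2
```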
first, the main characteristics of the organism (a comprehensive description of the pathogen) are considered, taking into account the following parameters (not in order of importance): (1) the pathogenicity and, when available, the infectious dose; (2) the mode of transmission; (3) the host range; (4) the epidemiology (the assignment to an appropriate risk group may depend on the geographic localisation), the possible reservoir and vector(s), and the zoonotic potential; and (5) the stability and persistence of the organism in the environment (i.e., survival outside the host). in addition, information related to the physicochemical properties of the pathogenic organism is considered, such as: (1) susceptibility to disinfectants; (2) physical inactivation; and (3) drug susceptibility (e.g. sensitivity and known resistance to antibiotics or antiviral compounds). finally, aspects related to the disease caused by the pathogen are also taken into consideration. this includes (1) the availability of an effective prophylaxis, (2) the availability of an efficient therapy, and (3) any reported case(s) of laboratory-acquired infection (lais). although underestimated, many cases of lais related to the handling of cell cultures and/or virus suspensions have been reported, among them laboratory workers' exposures to (recombinant) vaccinia viruses amplified in cell culture resulting in infections (openshaw et al.; mempel et al.; moussatché et al.; wlodaver et al.; lewis et al.; centers for disease control and prevention). recommendations for working safely with vaccinia virus have been reviewed recently, together with an overview of the reported cases of lais involving this virus (isaacs). the risk assessment of cell cultures that are genetically modified basically follows a comparative approach: the characteristics of the gmo are compared to those of the non-modified (wild-type) organism from which it is derived, under corresponding situations of use. the distinctive feature of the risk assessment of genetically modified cell cultures, which consists of the evaluation of the recipient cell, the vector, the donor organisms, and the inserted genetic material (insert), is developed in a later section. good knowledge and characterisation of the intrinsic properties of cells are key to successful and safe cell culturing. with respect to the biological risks and the risk assessment associated with the manipulation of animal cell cultures, three properties intrinsic to cell cultures should be considered: the species of origin, the cell type or the type of tissue or organ from which the cell line is derived, and the status of the culture. with respect to the species of origin, and based on the fact that pathogens usually have specific species barriers, it is considered that the closer the genetic relationship of the cell culture is to humans, the higher the risk is to humans. the incidence of harbouring organisms that could cause harm to human health is therefore considered higher in human or primate cells compared to cells of non-human origin (brown). accordingly, mammalian cells other than human or primate cells are considered to represent less risk, followed by avian and invertebrate cells. however, it should be kept in mind that some infectious agents are able to cross the species barrier and to persist in new host species, leading to zoonotic diseases. it has been acknowledged for many years that roughly 75 % of emerging infectious diseases are zoonotic (chomel et al.).
well documented cases of viruses that have crossed the species barrier from animal reservoirs to humans include hantavirus (murine reservoir), haemorrhagic fever viruses (ebola, marburg) (peters et al.), avian influenza virus (reperant et al.) and the severe acute respiratory syndrome (sars) associated coronavirus (ksiazek et al.; herman et al.). these examples show that incidences of cross-species transfer can occur and that occupational risks related to exposure to infected animal tissues or cell cultures should not be underestimated (mahy and brown; louz et al.). cells may differ dramatically in their in vivo half-life depending on the cell type or the type of tissue from which they are derived. for example, intestinal cells and certain leukocytes have a half-life of a few days, human erythrocytes have approximately a 100-120-day half-life, healthy liver cells have a life span of several months, whereas, in adults, there is a slow loss of brain cells with little replacement. partly due to this fact, some cell lines can be more readily obtained than others. the establishment of cell lines is often achieved through a series of (generally uncontrolled) mutations that occur when cells are cultured for a longer period. it is known that cells cultured for extensive periods of time display changing growth properties. a reduction of the doubling time, as a result of transformation, may give cells the ability to overgrow the rest of the population and to survive for a large (effectively infinite) number of passages compared to primary cells with a finite life span. therefore, the establishment of cell cultures of a certain cell type upon extensive passage relies on the positive selection of cells that have a growth advantage. these transformed cells can have an increased tumorigenic potential and may present a greater risk of becoming, or being, fully neoplastic upon accidental (gugel and sanders) or deliberate introduction into the human body. therefore, taking the tumorigenic potential into account, the following cell types may be ranked in increasing order of risk: epithelial and fibroblast cells, gut mucosa, endothelium, neural tissues, and haematogenous (e.g. blood, lymphoid) cells and tissue. a third inherent property to consider is the status of the cell culture. diagnostic and research activities involve the manipulation of primary cultures or cell lines as well as continuous cell lines derived from primary cultures. primary cell cultures and cell strains are produced directly from organs or tissues and are often the most accurate in vitro tool for reproducing typical cellular responses observed in vivo. however, as they are characterised by a finite life span, the time available for characterisation and detection of contaminating agents remains limited. also, because typical cell characteristics are often lost during the passage of cells, primary cell cultures are repeatedly obtained from fresh tissue, resulting in increased risks of potential contaminating pathogens. a feature that distinguishes continuous cell lines from primary cell cultures is the ability to survive, if not infinitely, at least for a great number of passages. these immortalised cells are obtained by isolating cells from tumours, by mutating primary cells with mutagens, by using viruses or recombinant dna to generate indefinitely growing cells, or by cell fusion of primary cells with a continuous cell line. due to their increased life span, the time left for thorough characterisation and detection of contaminating agents is considerably increased.
within this respect, well-characterised cell lines present the lowest risks compared to primary cultures or less characterised cell lines, as their origin, source and suitability are well known and well defined. for cell lines obtained from external sources (e.g. a different laboratory), cross-contamination of cell lines and/or a lack of proof of identity is actually a widespread problem (buehring et al.; capes-davis et al.). in order to have at least evidence of the species of origin of a cell line and to be able to conduct a thorough risk assessment, it may be necessary to fully characterise the cell lines used. for this purpose, a number of techniques are available, such as cytogenetic analysis, dna fingerprinting, pcr, flow cytometry and isoenzyme analysis (matsuo et al.; cabrera et al.). many micro-organisms benefit from a cell's machinery to complete their life cycles and to disseminate. hence, the study of a pathogen's life cycle or immune escape mechanisms requires the intentional in vitro infection of animal (or human) cells. the identification of potential hazards associated with infected cell cultures requires a consideration of the intrinsic cell properties and the inherent properties of the infecting pathogen. the latter implies an assessment of a number of pathogen-specific criteria, along with aspects such as the existence of effective treatment or prophylaxis. on the basis of these criteria, the who defines a classification system that enables the categorisation of micro-organisms into four risk groups (world health organization). a fundamental rule is that the biological risk of infected cell cultures will depend on the infecting pathogen's class of risk. for example, cell cultures deliberately infected with hepatitis c virus (hcv) in order to produce virus particles are assigned to class of risk 3, as hcv is a class of risk 3 virus. human cells infected with an airborne pathogen like species of the mycobacterium tuberculosis complex are also assigned to class of risk 3 and require the adoption of biosafety level 3 containment. however, as discussed below, the class of risk to which the infected cell cultures are assigned will not necessarily indicate the level of containment to be implemented, as the latter will also be determined by the nature of the work carried out with these cells. an example is the infection of bovine leukocytes with theileria parva, a tick-transmitted, intracellular protozoan of veterinary importance and the causative agent of east coast fever among domestic livestock. it is an animal pathogen that is not pathogenic to humans. the sporozoite form (infective form) invades bovine lymphocytes, where it develops into a non-infective form (schizonts) and induces host cell transformation and clonal expansion of the cell. these infected bovine leucocytes may be categorised under the corresponding class of risk for animals, while the biosafety level appropriate for handling is determined by the presence or absence of the infectious form of the parasite. cell culture can also be coupled with electron microscopy to identify viral diseases of unknown cause, as shown in a recent study published by the centers for disease control and prevention (goldsmith et al.). in case of outbreaks, harvested tissues from dead or living infected patients are inoculated onto a permissive cell line (generally vero e6 cells) and subsequently subjected to electron microscopy for morphological analysis of the causal virus.
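returning to the authentication techniques mentioned at the start of this passage, dna fingerprinting is commonly performed by comparing short tandem repeat (str) profiles; the sketch below computes a simple allele-sharing score, and the 0.8 decision threshold is a commonly cited authentication convention that i am assuming here rather than quoting from the chapter.

# minimal sketch of STR-profile comparison for cell line identity checks;
# the 0.8 threshold is an assumption based on common authentication practice
def str_match_score(profile_a: dict, profile_b: dict) -> float:
    """Fraction of shared STR alleles: 2*shared / (alleles in A + alleles in B)."""
    shared = total = 0
    for locus in set(profile_a) | set(profile_b):
        a = set(profile_a.get(locus, ()))
        b = set(profile_b.get(locus, ()))
        shared += len(a & b)
        total += len(a) + len(b)
    return 2 * shared / total if total else 0.0

query = {"D5S818": (11, 12), "TH01": (6, 9.3), "TPOX": (8, 8)}
reference = {"D5S818": (11, 12), "TH01": (6, 9.3), "TPOX": (8, 11)}
if str_match_score(query, reference) >= 0.8:
    print("consistent with the reference line; proceed with authentication")
else:
    print("possible cross-contamination or misidentification")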
although alternative methods such as high-throughput dna sequencing are available to identify a micro-organism without prior in vitro expansion, cell culture followed by electron microscopy remains the complementary approach of choice to molecular methods for the unbiased diagnosis of ill-defined infectious disease. some of the diagnostic activities presented by the authors required the adoption of bsl-3 measures to handle the cell cultures. this illustrates how activities involving the in vitro amplification of unknown viruses may represent a risk for laboratory personnel and require the adoption of an appropriate containment level and work practices. adventitious contamination of cell cultures is a major drawback for any activity that involves cell culturing (langdon). causative agents of cell contamination include bacteria, fungi, mycoplasmas, parasites, viruses, prions and even other animal cells. beside the fact that contamination of cell cultures may place experimental results in question or may lead to the loss of cell cultures, one of the main biosafety concerns when manipulating animal cell cultures for research, diagnosis or production purposes is the fact that they may provide a support for contaminating agents that cause harm to human health. generally, bacterial or fungal contamination can be readily detected because of the capacity of these organisms to overgrow cell cultures. typically, they cause increased turbidity, a ph shift of the media (change in media colour), slower growth of the cells and cell destruction. antibiotics may be used to prevent bacterial contamination of cultures; however, continuous use of antibiotics in cultures may lead to the development of resistant organisms with slow-growing properties, which are much more difficult to detect by direct visual observation. compared to bacterial or fungal infections, mycoplasma contamination gives more problems in terms of incidence, detectability, prevention and eradication. mycoplasma, an intracellular bacterium, is one of the most common cell culture contaminants. it may go unnoticed for many passages and can change several cell properties such as growth, metabolism, morphology and genome structure (paddenberg et al.; mcgarrity and kotani). it has also been reported to influence the yield of virus production in infected cells (hargreaves and leach). mycoplasmal contamination is also a biosafety concern, because some of the contaminating mycoplasma spp. belong to risk group 2. together with m. arginini, m. orale, m. pirum and m. fermentans, pathogenic organisms like m. gallisepticum and m. hyorhinis (animal pathogens) and m. pneumoniae and m. hominis (human pathogens) account for the large majority of mycoplasma contaminants in cell cultures. primary sources of contamination with m. orale, m. fermentans and m. hominis in the laboratory are infected people who handle cell cultures and suspensions of viruses. sources of m. arginini and m. hyorhinis are usually animal donors of tissues and biological constituents used for cell culture, e.g. calf serum and trypsin (razin and tully; pinheiro de oliveira et al.). contamination of cell cultures by mycoplasma has also been reported to occur via aerosols (o'connell et al.). viral contamination merits particular attention because infected cells may pose a serious harm to human health, especially when the infected cells are able to release infectious particles.
human cells may be infected by various viruses like hepatitis viruses, retroviruses, herpes viruses or papillomaviruses. although cell cultures of non-human origin may pose less risk, it should be emphasised that many viruses have a broad host range and can cross species barriers. since a number of non-human viruses are capable of infecting and/or replicating in human cells in vitro, the possibility that they could infect human cells in vivo, should human exposure occur, should be carefully considered. well-known viral contaminants of primate tissues or of cells from non-human origin that can cause human disease are listed in a table in the original chapter (with entries citing, among others, whitley for herpes simplex viruses and butel for papovaviruses). while contamination with some viruses may be associated with changes in cell morphology or behaviour, such as the formation of syncytia (hiv, herpes viruses), swelling of cells (adenoviruses), or haemagglutination or haemadsorption, viral contamination may be harder to detect when cytopathic effects remain absent. viral contamination could also trigger adverse effects as a result of recombination events or phenotypic mixing between contaminating components and experimentally introduced agents, creating agents with new properties. for example, experimental results suggested that htlv-i or htlv-ii undergo phenotypic mixing with hiv-1 in htlv/hiv-1 co-infected cells, leading to an increase of the pathogenicity of hiv-1 by broadening the spectrum of its cellular tropism to cd4-negative cells (lusso et al.). another example is the contamination of murine cell cultures by lymphocytic choriomeningitis virus (lcmv). lcmv is an arenavirus that establishes a silent, chronic infection in mice but causes aseptic meningitis, encephalitis or meningoencephalitis in humans. the significance of lcmv contamination has been reinforced by the description of cases of laboratory-acquired lcmv infections arising from contaminated murine tumour cell lines (mahy et al.). manipulation of lcmv-infected material, or of material with an increased likelihood of lcmv contamination, necessitates the adoption of appropriate containment measures. adventitious contamination with parasites may be an issue when handling primary cell or organ cultures originating from a donor organism that is known or suspected to be infected with a specific parasite. as the life cycle of most parasites comprises distinct developmental stages, transmission and survival of the parasite will strongly depend on the ability of the invasive stage to recognise and invade specific host cells. but even with cells developing the non-infectious form of parasites, possible harmful effects remain to be considered, since natural modes of transmission could be bypassed during the manipulation of infected cells. it is recognised that most parasitic laboratory-acquired infections are caused by needle-stick injuries (herwaldt). finally, the use of bovine-derived products as tissue culture supplements may also lead to contamination with unconventional agents that cause transmissible spongiform encephalopathies (tse), the so-called prions (solassol et al.; cronier et al.; vorberg et al.). contrary to the majority of infectious agents, tse agents are resistant to most of the physical and chemical methods commonly used for decontamination of infectious agents. it has been shown that neuroblastoma cell lines, primary cultured neurons and astrocytes can serve as hosts (butler et al.).
although many studies have suggested that the risk of propagation of tse agents in tissue culture cells cultivated in the presence of bovine serum potentially contaminated with tse was restricted to neurons or brain-derived cell cultures, it has been shown recently that non-neuronal cells can also support tse infection, suggesting that any cell line expressing the normal host prion protein could have the potential to support propagation of tse agents (vilette et al.; vorberg et al.). while the understanding of the transmission of prions is still in progress (natural transmission seems mainly to take place via the oral route in humans and animals), investigators using cell cultures need to take into account the different routes by which these agents may be transmitted experimentally. aerosol transmission of mouse scrapie has been successfully demonstrated in mice (haybaeck et al.). in cervids, chronic wasting disease (cwd) has already been proposed as a natural airborne pathogen (denkers et al.). in the case of creutzfeldt-jakob disease (cjd), there is to date no proof of release of prions into aerosols. animal cell cultures can also harbour pathogens that are unknown or whose tropism has not yet been defined. examples described in the literature include viruses such as hepatitis g virus (linnen et al.), hhv-8 (moore et al.), tt virus (nishizawa et al.) or human metapneumovirus (van den hoogen et al.). cell cultures can be contaminated from different sources. infected organisms, or the infected animal cells or tissues from which a cell line has been established, are the primary source of contamination. accidental contamination can also occur through the material used for cell culturing, including glassware, storage bottles and pipettes, due to incorrect maintenance or sterilisation. before the use of disposable material, the lip of the culture flask and the outside of the used pipette were important sources of contamination with mycoplasma (mcgarrity). nowadays, the use of disposable and sterile pipettes has significantly decreased the likelihood of adventitious contamination. a third source of contamination resides in culture media and their components, such as serum, basic culture media, salt solutions and enzymes (trypsin, pronase and collagenase). for example, media and additives derived from bovine sources are often contaminated with bovine viral diarrhoea virus (bvdv) (levings and wessman). as mentioned above, the relative resistance of tse agents may also be an issue when using bovine-derived products as tissue culture supplements. finally, a non-filtered air supply, clothing, personnel and the floor can be sources of airborne contamination (hay). genetically modified (gm) animal cell cultures are employed for a number of different activities. for example, the expression of transgenes and the production of proteins of interest whose function depends on methylation, sulfation, phosphorylation, lipid addition or glycosylation may necessitate the capacity of higher eukaryotic cells to perform post-translational modifications. gm animal cell cultures may also be chosen for the replication of defective recombinant or even wild-type viruses. the risk assessment of gm cells should follow the five-step methodology outlined above, which means that an evaluation of each individual aspect in the process of genetic modification should be performed. this includes an evaluation of the recipient cell, the vector and the donor organism properties, and an assessment of the characteristics of the inserted genetic material.
a comprehensive risk assessment of genetically modified cells expressing transgenes should also take into account the risk associated with the transgene products. a gene product may be intrinsically harmful (e.g. toxic properties) or could induce hazardous properties via its expression in gm cells, depending upon the genome integration site, promoter activity and the expression of regulatory sequences governing expression. the risk assessment for transgenes is not straightforward and demands appropriate consideration. comprehensive reviews have specifically addressed this topic (bergmans et al.; van den akker et al.). genetic modification may confer an expanded life span, immortalisation or an increased capacity for tumour induction. however, it is unlikely that recombinant properties obtained by genetic modification would have an adverse effect upon release of the recombinant animal or human cells into the environment. cells (genetically modified or not) have difficulty surviving in a hostile environment where control of temperature and osmolality is lacking or where cell-specific nutrients (e.g. glucose, vitamins, lipids) are unbalanced or missing. hence, the survival of such primary cells or cell lines outside of proper conditions is unlikely. apart from the fact that gm cell cultures may harbour pathogens and pose serious biological risks to human health (as discussed above), recombinant cells are more likely to cause harm when entering the body of animals or humans. however, the extent of the harmful effect remains hard to predict. it should be kept in mind that the lack of histocompatibility between recombinant cells and the host organism remains a major obstacle for these cells to survive and multiply, as the natural immune response of the healthy (immunocompetent) host will recognise foreign cells and eventually destroy them. this is also one of the main reasons why the culturing of cells originating from the laboratory worker is not allowed for research and diagnostic activities (the risk associated with autologous cells). particular attention should be paid to the use of packaging cell lines. these are established cell lines that are deliberately and stably transfected with "helper constructs" to ensure the replication and packaging of replication-deficient viral vectors. for example, in the case of retroviral packaging cell lines, the expression of "helper genes" allows high-level constitutive production of viral proteins (e.g. gag, pol and env proteins), which are missing in the genome of the replication-deficient viral vector but are crucial for viral replication. one of the main biosafety issues related to the use of packaging cell lines is the fact that replication-competent viruses may be generated as a result of (homologous) recombination between the replication-deficient viral vector and viral sequences present in the packaging cell. these events could result in the formation of viruses with novel yet unwanted properties, such as the generation of replication-competent viruses. one of the strategies to engineer safer generations of packaging cell lines consists of minimising the likelihood of generating replication-competent viruses by separating viral functional elements into different expression plasmids, thereby increasing the number of recombination events necessary to generate replication-competent viruses (dull et al.), or by reducing or eliminating the sequence homology between the viral vector and the helper sequences.
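the design rationale of split-genome packaging systems can be expressed with a toy probability model; the independence assumption below is an idealisation of mine, not a claim from the chapter. if generating a replication-competent virus (rcv) requires n independent recombination events, each with a small probability p_i, then

P(\mathrm{RCV}) \;\approx\; \prod_{i=1}^{n} p_i \;=\; p^{\,n} \quad \text{if } p_i \equiv p .

splitting the helper functions over additional plasmids increases n, shrinking the product geometrically, while reducing sequence homology instead lowers each individual p_i.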
however, endogenous retrovirus genomes expressed in safer generations of retroviral packaging cell lines may still give rise to unwanted recombination events (chong et al.). this means that the possibility of generating replication-competent viruses cannot be ruled out. clearly, the risk group of the transfected packaging cell line will depend on the risk group of the viral vector itself. consequently, the risk assessment of packaging cell lines should be based on the biosafety of the produced viral vectors, including an evaluation of their infectivity, spectrum of host range, capacity for integration (insertional mutagenesis), stability, and the physiological role of the transgene(s) if expressed (baldo et al.). aside from the identification and characterisation of hazards intrinsic to the cell culture, a thorough risk assessment must also consider the exposure pathways through which cell cultures may present a risk to human health or the environment. this necessitates an evaluation of the type of manipulation, because the processes, methods and/or equipment involved may increase or decrease the likelihood of exposure and hence the resulting potential risks. for instance, while established cell lines inherently present low risks, large-scale operations involving the culturing of large volumes (which can range into the thousands of litres) are prone to contamination when inadequate containment measures are applied. this is exemplified by continuous processes such as cell cultivation in bioreactors, where an appropriate design of seals, valves, pumps and transfer lines is required to guarantee long-term sterility of the operation and to avoid inadvertent contamination. conversely, the handling of cell cultures belonging to a given risk group may present less risk once they have been fixed with glutaraldehyde or formaldehyde/acetone for immunostaining, and may therefore require less stringent containment measures. manipulations that are common to research and diagnostic activities and that warrant consideration with respect to the risk assessment of animal cell cultures are described hereunder:
- procedures generating aerosols: pipetting, vortexing, centrifugation, opening of wet cups, etc.;
- handling cells outside of a class ii bsc: flow cytometric analysis and cell sorting constitute a special case of cell manipulation in which cells are handled outside of a bsc. the use of a fixative is in many cases not appropriate (e.g. viable cell sorting for subsequent further cell culturing) and the risk of aerosol formation can be particularly high, especially during sorting experiments and upon instrument failure such as a clogged sort nozzle. all scientists in the field of flow cytometry must be aware of the potential hazards associated with their discipline, and only experienced and well-trained operators should perform potentially biohazardous cell sorting. general recommendations approved by the international society of analytical cytology should help to set a basis for biosafety guidelines in flow cytometry laboratories (schmid). some standard operating procedures and methods have also been described for ensuring cell sorting under optimal biosafety conditions, even under bsl-3 conditions (lennartz et al.; perfetto et al.);
- altering culture conditions: changing the availability of cell-specific nutrients, growth factors or signal molecules, or adopting co-culture techniques, may have significant effects on animal cell cultures, as it may result in altered neoplasia (stoker et al.),
altered expression of (proto-)oncogenes or cell surface glycoproteins, and the release of endogenous viruses (cunningham et al.). as a consequence, changing culture conditions may lead to altered susceptibility of cultured cells to biologic agents such as viruses (anders et al.; vincent et al.);
- manipulations involving the use of needles or sharps: due to injuries, cell material may be accidentally transferred directly into an operator's tissue and/or blood stream;
- in vivo experiments involving animals: major risks are self-inoculation (needlestick injury) and exposure to aerosols. laboratory workers handling infected rodents, or cell cultures originating from infected animals, put themselves at risk by directly exposing cuts, open wounds or mucous membranes to infected body fluids, or by inhaling infectious aerosolised particles of rodent urine, faeces or saliva. the risk can be minimised by using animals or cell cultures from sources that are regularly tested for the virus.
finally, the purpose of cell culturing should be taken into consideration, as many clinical approaches such as stem cell therapy, gene therapy and xeno- or allotransplantation involve cell culturing ex vivo for therapeutic purposes. the latter clearly justifies more careful consideration regarding safety, ethical, social and regulatory issues, which will not be addressed in this chapter (food and drug administration; european medicines agency; ich). the assessment of the biological risks related to animal cell cultures and of the type of manipulation allows the determination of an adequate containment level in order to optimally protect human health and the environment. the implementation of an appropriate containment level includes a list of general and more specific work practices and containment measures. the precautionary measures listed below should be applied whenever handling animal cell cultures. many of these measures focus on reducing the risk of contamination with adventitious agents. it should be emphasised that the drawing up of an appropriate set of standard operating procedures and the adequate training of staff are of crucial importance. as a general rule, cell cultures known to harbour an infectious etiologic agent should be manipulated in compliance with the containment measures recommended for that etiologic agent. when cell cultures are not known to harbour infectious agents, cells may be considered free of contaminating pathogens as long as a number of conditions are fulfilled. first, this implies the use of well-characterised cell lines or controlled cell sources for primary cells, such as specified-pathogen-free (spf) animals. if no well-characterised cell lines or spf sources are available, tests for the detection of likely contaminating agents should be negative. second, whenever cell cultures are manipulated, media sources should be pathogen free and appropriate containment measures should be adopted to reduce potential contamination during sampling or subsequent manipulation of cells (re-feeding and washing steps). as the history of a cell culture may be poorly documented, when a given cell culture is manipulated for the first time in the laboratory it often remains unclear whether all appropriate measures have been implemented, regardless of the fact that it may have been manipulated for years in another laboratory facility. in this case, cell cultures should be considered potentially infectious and should be manipulated in a class ii bsc.
if the presence of adventitious agents of a higher risk group is considered likely, the cell line should be handled under the appropriate containment level until tests have proven the absence of such organisms. good documentation of the history of cell cultivation is mandatory. the extent to which cell cultures should be checked for likely contaminants strongly depends on the nature of the activity. for example, guidelines have been issued aiming at minimising any potential risk of transmission of infectious agents with respect to the use of animal cell cultures for the industrial production of biopharmaceuticals (european medicines agency; food and drug administration; ich; world health organization). hardly any guidance has been provided on the extent to which possible contaminants should be detected when animal cell cultures are used for in vitro research or diagnostic activities, or for purposes other than therapeutics or the production of biopharmaceuticals. the choice of detection technique depends on the contaminating pathogen, and often a combination of methods is recommended for important samples such as master cell banks. the implementation of bsl-2 measures is adequate for most of the work carried out with cell cultures of human or primate origin. bsl-1 measures may be considered provided that all manipulations occur in a class ii bsc and the cell culture is a well-characterised and certified cell line that presents no increased risk resulting from genetic modification or a contaminating pathogen. the precautionary measures to be applied whenever handling animal cell cultures include:
- to conduct the contained use in compliance with good cell culture practice (gccp), especially in industrial settings for vaccine production;
- to avoid opening of culture vessels or contact with culture fluid through a defective culture vessel, stopper or poor technique, because of the ever-present likelihood of contamination with airborne pathogens;
- to treat each new culture that is manipulated for the first time in the laboratory facility as potentially infectious;
- to clean up any culture fluid spill immediately with an appropriate and validated disinfection protocol;
- to work with one cell line at a time and to disinfect the work surfaces between two operations involving cell lines;
- to aliquot growth medium and other substrates so that the same vessel is not used for more than one cell line;
- to mitigate cross-contamination by avoiding pouring actions;
- to ensure the biosafety cabinet (bsc) is used by adequately trained staff, i.e. turned on for a period before and after use, with bsc surfaces thoroughly disinfected after each work session and the bsc not cluttered with unnecessary materials;
- to restrict the use of antibiotics in growth media;
- to quarantine new cell cultures in a dedicated bsc or separate laboratory until the culture has been shown negative in appropriate tests;
- to carry out quality control of cells, demonstrating the absence of likely contaminating pathogens, on a regular basis or whenever necessary;
- to handle cell cultures from undefined sources as risk group 2/class of risk 2 organisms;
- if the presence of adventitious agents of a higher risk class is expected, to handle the cell line under the appropriate containment level until tests have proven safety.
(adapted from pauwels et al.) the handling of virally infected cells in a bsl-1 facility could be envisaged if no viral particles are detectable in the supernatant of the infected cells.
however, it should be emphasised that procedures for viral clearance are virus-host specific and that variables such as vector titre and the infection protocol can influence the degree and rate of clearance (bagutti et al.). since the implementation of good laboratory practices and the use of a bsc are usually the norm in most laboratories dealing with cell culturing, we think that bsl-1 laboratories can relatively easily be upgraded to bsl-2 facilities by implementing a restricted number of simple additional safety measures. it is important to note that horizontal laminar air flows and clean benches minimise the risk of adventitious contamination of the cell cultures; however, they offer no protection for the manipulator or the environment. bearing in mind that biosafety measures are intended to provide maximal protection of human health and the environment, the sole use of a horizontal laminar "clean bench" should be prohibited. based on, but not limited to, the key features of the risk assessment and the type of manipulation performed, as discussed in the preceding paragraphs, we developed a flow diagram providing cell culture users with schematic guidance for the assignment of an appropriate containment level when manipulating human or primate cells in vitro. this flowchart is indicative and should be applied and/or reconsidered according to case-specific conditions and the risk assessments proper to the activities performed. whenever possible, approaches developed to reduce ab initio the risk associated with the handling of animal cell cultures should be favoured. such approaches involve the use of dedicated instruments or safer biological material. the examples hereunder illustrate how the choice to apply one or several of these approaches is determined on a case-by-case basis, depending on the intrinsic characteristics of the biological material and the intended use. during the last couple of years, instruments enabling automated mechanical passaging and nutrient supply for in vitro cell expansion have been developed for primary cells and for adherent and non-adherent mammalian cells (kato et al.; thomas and ratcliffe). some of these developments also provide a safer approach with respect to biosafety considerations by including a biosafety module in the design of such automated platforms. for example, a long-term cell culture device ensuring both the protection of the product and of the operator has been designed for the culturing of embryonic stem cells in an antibiotic-free medium by means of an integrated automation platform using a class ii bsc confining the microwells (a liquid-handling robot contained in a bsc) (hussain et al.). an automation system has also been deployed for the large-scale production of hiv-1 pseudovirus for hiv vaccine trials in compliance with gclp. this robust automated system for the cultivation of 293t/17 cells is contained in a class ii bsc that guarantees the protection of the workers, the environment and the product. this system can be implemented to produce other biological reagents under standardised large-scale conditions (schultz et al.). lowering the hazards associated with the cell cultures handled is another approach, applied for example in the characterisation of pandemic influenza viruses under emergency situations. while such activities typically require bsl-3 conditions, inactivation protocols applied to influenza virus cultures allow the virological and immunological assays to be performed under bsl-2 conditions (jonges et al.).
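the decision logic that such a flowchart encodes can be sketched as follows for human or primate cells; the branch structure is my illustrative reading of the rules given in the text, not the published diagram itself.

from typing import Optional

def containment_level(known_pathogen_class: Optional[int],
                      well_characterised: bool,
                      certified_no_added_risk: bool,
                      in_class_ii_bsc: bool) -> int:
    # cultures known to harbour an infectious agent follow the containment
    # recommended for that agent
    if known_pathogen_class is not None:
        return known_pathogen_class
    # uncharacterised or first-time cultures are treated as potentially
    # infectious and handled in a class II BSC
    if not well_characterised:
        return 2
    # well-characterised, certified lines manipulated in a class II BSC:
    # a lower containment level may be considered
    if certified_no_added_risk and in_class_ii_bsc:
        return 1
    return 2

for example, containment_level(None, True, True, True) returns 1, while a culture deliberately infected with a class of risk 3 virus returns 3.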
another example where a less stringent containment level could be assigned relates to the use of insect cells and gm bacteriophage lambda capsids in cases that would usually involve the maintenance of infectious stocks of highly pathogenic or emerging influenza viruses (domm et al.). the bacteriophages are "decorated" with the viral glycoproteins of interest in an insect cell-derived system, so that hemagglutinin-displaying bacteriophages can be used in hemagglutination-inhibition assays on pathogenic influenza viruses. the same approach has already been applied to the hiv envelope protein (mattiacio et al.). with regard to hiv-1 in vitro testing, a non-infectious cell-based assay to assess hiv-1 susceptibility to protease inhibitors was also developed recently (buzon et al.). here again, activities that normally require a relatively high containment level can be performed in facilities with less expensive infrastructure. it also shows that careful consideration of the material handled can lead to cost-effective choices while maintaining the same objectives. biosafety is an internationally recognised concept referring to the maximal protection of laboratory workers, public health and the environment from the possible adverse effects associated with the use of organisms and micro-organisms (genetically modified or not). within this respect, activities involving the use of animal cell cultures for fundamental research, r&d or in vitro diagnostic purposes pose biosafety considerations as well. before starting any such activity, biosafety considerations should be addressed by performing a biological risk assessment allowing the identification and characterisation of any potential adverse effect, together with an evaluation of the likelihood and consequences should any effect occur. such an approach is taken on a case-by-case basis, taking into account the type of manipulation and the type of cell culture handled. a risk assessment will result in biosafety recommendations in view of the implementation of an adequate containment level. considering the very limited persistence capacity of animal and human cells outside optimised culture conditions, and the fact that many cell lines have a long history of safe use, it is generally considered unlikely that cell cultures may inherently cause harm to humans or the environment. the main hazard associated with the handling of animal cell cultures resides in the presence of adventitious pathogenic micro-organisms, which are often difficult to detect and hence less controllable. in contrast to their host cells, adventitious organisms can persist in more hostile conditions and may present risks for human health or the environment in case they are pathogenic (bean et al.; walther and ewald; kallio et al.; kramer et al.). for this reason, a risk assessment of cell cultures will frequently lead to a risk assessment of the potential adventitious contaminants, the organisms used for the cells' immortalisation (viruses, viral sequences, etc.) and/or the micro-organisms intentionally used to experimentally infect them. though the assignment of biosafety containment level requirements cannot be generalised and should be performed on a case-by-case basis, it is recognised that most of the containment measures primarily aim at protecting cells from adventitious contamination in order to mitigate potential risks for the laboratory worker.
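the evaluation of likelihood and consequences mentioned above is often summarised in a qualitative risk matrix; the scale labels and cell values below are illustrative assumptions of mine, not taken from the chapter.

# illustrative qualitative risk matrix: likelihood x consequence -> risk level
LIKELIHOOD = ["rare", "possible", "likely"]
CONSEQUENCE = ["minor", "moderate", "severe"]
MATRIX = [  # rows: likelihood, cols: consequence
    ["low",    "low",    "medium"],
    ["low",    "medium", "high"],
    ["medium", "high",   "high"],
]

def qualitative_risk(likelihood: str, consequence: str) -> str:
    return MATRIX[LIKELIHOOD.index(likelihood)][CONSEQUENCE.index(consequence)]

print(qualitative_risk("possible", "severe"))  # -> high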
except for authenticated cell lines proven to present no risk, activities involving cell cultures of human or primate origin should generally be performed under a containment level involving the use of a class ii biosafety cabinet and good laboratory practice. whilst the occurrence of adverse effects related to the handling of cell cultures cannot be excluded, a thorough risk assessment and the implementation of the appropriate containment level offer optimal protection for the laboratory worker and, by extension, public health and the environment. continuous efforts have been made to mitigate the risk associated with the handling of cell cultures in laboratory settings. these biorisk management measures (i.e. the adoption of the appropriate containment level) can be coupled with the implementation of automation platforms, allowing the reduction of cell culture cross-contamination and of accidental contamination with infectious agents. it is anticipated that future efforts contributing to the improvement of the quality of the data necessary for the risk assessment, of the containment measures, and of the awareness of biosafety considerations within the scientific community will also directly benefit the quality of research activities involving animal cell cultures.
authentication: the process by which the true origin and identity of cell lines are determined; it should form an essential part of any cell culture operation.
biosafety: in the context of this chapter, biosafety relates to the evaluation of the potential risks to human health and the environment associated with the use of genetically modified organisms (gmos) or pathogenic organisms.
biosafety cabinet (class ii): a safety cabinet with a front aperture through which the operator can carry out manipulations inside the cabinet, constructed so that the laboratory worker is protected and the risk of product contamination and cross-contamination is low. the escape of airborne particulate contamination generated within the cabinet is controlled by means of an appropriate filtered internal airflow and filtration of the exhaust air (hepa filters).
contained use: any activity in which (micro-)organisms are genetically modified or in which such organisms (pathogenic or not) are cultured, stored, transported, destroyed, disposed of or used in any other way, and for which specific containment measures are used to limit their contact with the general population and the environment.
culture type (primary cell cultures, diploid cell lines, continuous cell lines): primary cell cultures are established directly from tissues of animals and are often the most appropriate in vitro tool for reproducing typical cellular responses observed in vivo. however, as typical cell characteristics are lost during passaging, these cultures must be obtained from fresh tissue that may contain, or may become inadvertently contaminated with, pathogens. consequently, primary cell cultures may potentially present increased risks compared to continuous, established cell lines. diploid cell lines are similar to primary cells, are considered non-tumourigenic and have a finite capacity for serial propagation. they are used for the preparation of viral vaccines and are of human or monkey origin. continuous cell lines are immortalised cells that may survive an almost infinite number of serial passages.
these cells are obtained either by isolating cells from tumours (neoplastic origin), by treating primary cells with mutagens, oncogenic viruses or recombinant dna (oncogenes), or by cell fusion of primary cells with a continuous cell line. as a consequence, the class of risk of these cell lines is often correlated with the class of risk of the primary cells from which they are derived. due to the "immortality" of continuous cell lines, their ability to induce tumours also has to be considered.
transfection: gene transfer of dna into eukaryotic cells using non-viral delivery methods.
transduction: gene transfer of dna into eukaryotic cells using viral vectors.
infection: the process occurring when a (wild-type) virus that typically replicates and spreads into neighbouring cells mediates the transfer of genes belonging to its own genome or of extra genes it may carry.
viral vector: a protein particle derived from a replicative virus that contains genetic information in the form of rna or dna. a viral envelope may be present as well.
the approved list of biological agents, 3rd edn. health and safety executive
disruption of 3d tissue integrity facilitates adenovirus infection by deregulating the coxsackievirus and adenovirus receptor
washout kinetics of viral vectors from cultured mammalian cells
general considerations on the biosafety of virus-derived vectors used in gene therapy and vaccination
survival of influenza viruses on environmental surfaces
belgian classifications for micro-organisms based on their biological risks
identification of potentially hazardous human gene products in gmo risk assessment
detection of human t-cell lymphoma/leukemia virus type i dna and antigen in spinal fluid and blood of patients with chronic progressive myelopathy
threat to humans from virus infections of non-human primates
cell line cross-contamination: how aware are mammalian cell culturists of the problem and how to monitor it? in vitro cellular and developmental biology
scrapie-infected murine neuroblastoma cells produce protease-resistant prion proteins
a non-infectious cell-based phenotypic assay for the assessment of hiv-1 susceptibility to protease inhibitors
identity tests: determination of cell line cross-contamination
check your cultures! a list of cross-contaminated or misidentified cell lines
biosafety in microbiological and biomedical laboratories (bmbl), 5th edn
laboratory-acquired vaccinia virus infection, united states (virginia). morbidity and mortality weekly report
wildlife, exotic pets, and emerging zoonoses
a replication-competent retrovirus arising from a split-function packaging cell line was generated by recombination events between the vector, one of the packaging constructs, and endogenous retroviral sequences
isolation of a new human retrovirus from west african patients with aids
microbiological contamination in stem cell cultures
prions can infect primary cultured neurons and astrocytes and promote neuronal cell death
activation of primary porcine endothelial cells induces release of porcine endogenous retroviruses
polio vaccines, simian virus 40, and human cancer: the epidemiologic evidence for a causal association
b virus infection in man
foamy viruses - a world apart
aerosol and nasal transmission of chronic wasting disease in cervidized mice
nonhuman primate-associated viral hepatitis type a: serologic evidence of hepatitis a virus infection
use of bacteriophage particles displaying influenza virus hemagglutinin for the detection of hemagglutination-inhibition antibodies
mycoplasma contamination of cell cultures
a third-generation lentivirus vector with a conditional packaging system
simian herpesviruses and their risk to humans
[directive] on the protection of workers from risks related to exposure to biological agents at work
[regulation] of the european parliament and of the council laying down community procedures for the authorization and supervision of medicinal products for human and veterinary use and establishing a european medicines agency. off j
directive / /ec of the european parliament and of the council on setting standards of quality and safety for the donation, procurement, testing, processing, preservation, storage and distribution of human tissues and cells
[directive] on the contained use of genetically modified micro-organisms
food and drug administration (fda), centers for biologics evaluation and research
cell culture and electron microscopy for identifying viruses in diseases of unknown cause
correspondence: needle-stick transmission of human colonic adenocarcinoma
aids as a zoonosis: scientific and public health implications
the influence of mycoplasma infection on the sensitivity of hela cells for growth of viruses
operator-induced contamination in cell culture systems
aerosols transmit prions to immunocompetent and immunodeficient mice
biosafety risk assessment of the severe acute respiratory syndrome (sars) coronavirus and containment measures for the diagnostic and research laboratories
laboratory-acquired parasitic infections from accidental exposures
outbreak of lymphocytic choriomeningitis virus infections in medical center personnel
chronic neurodegenerative disease associated with htlv-ii infection
encephalomyelitis due to infection with herpesvirus simiae
reproducible culture and differentiation of mouse embryonic stem cells using an automated microwell platform
[ich] step consensus guideline, quality of biotechnological products: viral safety evaluation of biotechnology products derived from cell lines of human or animal origin, cpmp/ich. the european agency for the evaluation of medicinal products, human medicines evaluation unit
working safely with vaccinia virus: laboratory technique and review of published cases of accidental laboratory infections
hiding in plain view: genetic profiling reveals decades old cross contamination of bladder cancer cell line ku7 with hela
cross-contamination of a urotsa stock with t24 cells - molecular comparison of different cell lines and stocks
accidental human vaccination with vaccinia virus expressing nucleoprotein gene
influenza virus inactivation for studies of antigenicity and phenotypic neuraminidase inhibitor resistance profiling
prolonged survival of puumala hantavirus outside the host: evidence for indirect transmission via the environment
a new subtype of human t-cell leukemia virus (htlv-ii) associated with a t-cell variant of hairy cell leukemia
a compact, automated cell culture system for clinical scale cell expansion from primary tissues
brief report: infections of a laboratory worker with simian immunodeficiency virus
how long do nosocomial pathogens persist on inanimate surfaces? a systematic review
a novel coronavirus associated with sars
cell culture contamination: an overview
improving the biosafety of cell sorting by adaptation of a cell sorting system to a biosafety cabinet
bovine viral diarrhea virus contamination of nutrient serum, cell cultures and viral vaccines
ocular vaccinia infection in laboratory worker
a tale of two clades: monkeypox viruses
molecular cloning and disease association of hepatitis g virus: a transfusion-transmissible agent
infection of laboratory workers with hantavirus acquired from immunocytomas propagated in laboratory rats
cross-species transfer of viruses: implications for the use of viral vectors in biomedical research, gene therapy and as live-virus vaccines
henrietta lacks, hela cells, and cell culture contamination
cd4-independent infection by human immunodeficiency virus type 1 after phenotypic mixing with human t-cell leukemia viruses
where have all the cell lines gone?
zoonosis and haemorrhagic fever
emerging zoonoses: crossing the species barrier
virus zoonoses and their potential for contamination of cell cultures
efficient dna fingerprinting method for the identification of cross-culture contamination of cell lines
dense display of hiv-1 envelope spikes on the lambda phage scaffold does not result in the generation of improved antibody response to hiv-1 env
spread and control of mycoplasmal infection of cell cultures
cell culture mycoplasmas
laboratory acquired infection with recombinant vaccinia virus containing an immunomodulating construct
microbial contamination of cell cultures: a years study
primary characterization of a herpesvirus agent associated with kaposi's sarcoma
accidental infection of laboratory worker with vaccinia virus
laboratory safety monograph: a supplement to the nih guidelines for recombinant dna research. national institutes of health
a novel dna virus (ttv) associated with elevated transaminase levels in posttransfusion hepatitis of unknown etiology
aerosols as a source of widespread mycoplasma contamination of tissue cultures
accidental infections of laboratory worker with recombinant vaccinia virus
internucleosomal dna fragmentation in cultured cells under conditions reported to induce apoptosis may be caused by mycoplasma endonucleases
animal cell cultures: risk assessment and biosafety recommendations
standard practice for cell sorting in a bsl-3 facility
filovirus contamination of cell cultures
detection of contaminants in cell cultures, sera and trypsin
detection and isolation of type c retrovirus particles from fresh and cultured lymphocytes of a patient with cutaneous t-cell lymphoma
detection, isolation, and continuous production of cytopathic retroviruses (htlv-iii) from patients with aids and pre-aids
molecular and diagnostic procedures in mycoplasmology
adaptive pathways of zoonotic influenza viruses: from exposure to establishment in humans
how to develop a standard operating procedure for sorting unfixed cells
an automated hiv-1 env-pseudotyped virus production for global hiv vaccine trials
marburg and ebola virus infections in laboratory non-human primates: a literature review
the origin and evolution of hepatitis viruses in humans
prion propagation in cultured cells
unintended spread of a biosafety level 2 recombinant retrovirus
aerosols: an underestimated vehicle for transmission of prion diseases
the embryonic environment strongly attenuates v-src oncogenesis in mesenchymal and epithelial tissues, but not in endothelia
kaposi's sarcoma-derived cell line slk is not of endothelial origin, but is a contaminant from a known renal carcinoma cell line
classification of organisms
recommendation on the safe handling of human and animal cells and cell cultures
automated adherent human cell culture (mesenchymal stem cells)
environmental risk assessment of replication competent viral vectors applied in clinical trials: potential effects of inserted sequences
a newly discovered human pneumovirus isolated from young children with respiratory tract disease
persistent infection of some standard cell lines by lymphocytic choriomeningitis virus: transmission of infection by an intracellular agent
emergent human pathogen simian virus 40 and its role in cancer
ex vivo propagation of infectious sheep scrapie agent in heterologous epithelial cells expressing ovine prion protein
cytokine-mediated downregulation of coxsackievirus-adenovirus receptor in endothelial cells
susceptibility of common fibroblast cell lines to transmissible spongiform encephalopathy agents
pathogen survival in the external environment and the evolution of virulence
biology of b virus in macaque and human hosts: a review
herpes simplex viruses
laboratory-acquired vaccinia infection
world health organization: who requirements for the use of animal cells as in vitro substrates for the production of biologicals. who technical report series
laboratory biosafety manual
manual of diagnostic tests and vaccines for terrestrial animals, chapter on biosafety and biosecurity in the veterinary microbiology laboratory and animal facilities
acknowledgments: the authors are grateful to their colleagues dr. didier breyer and dr. aline baldo (scientific institute of public health wiv-isp, brussels, belgium) for their useful review of this manuscript.
key: cord- -jckfzaf authors: walsh, patrick f.
title: intelligence and stakeholders date: - - journal: intelligence, biosecurity and bioterrorism doi: . / - - - - _ sha: doc_id: cord_uid: jckfzaf this chapter underscores the need for more explicit and strategic engagement of stakeholders (scientists, clinicians, first responders, amongst others) by the intelligence community. the chapter argues that the intelligence community will increasingly rely on their expertise to build more valid and reliable assessments of emerging bio-threats and risks. however, the discussion also identifies some of the limitations and challenges stakeholders themselves face in understanding complex threats and risks. stakeholders such as scientists, clinicians, first responders and others (including agricultural scientists and veterinarians) can all be critical stakeholders for intelligence communities. without them it would be almost impossible to see how the ic alone could fulfil its mission to identify, prevent, disrupt and treat potential and emerging bio-threats and risks. indeed, as seen in an earlier chapter, 'the scientific community' brings a lot of expertise to the intelligence community about how to assess bio-threats and risks in a number of different ways and contexts. these include understanding potential risks through gof (gain of function) experiments, the development of biosensors, and knowledge about the weaponisation, pathogenicity and transmissibility of various bio-agents. that chapter also briefly surveyed the central roles that scientists working in epidemiology and forensics play in the prevention, disruption and treatment of bio-threats and risks. additionally, it highlighted the critical role the scientific community plays in helping the intelligence community better frame its understanding of potential threats and risks emerging from the fast-changing biotechnology and synthetic biology sectors. this chapter provides a thematic analysis of how important stakeholders can contribute to reducing current and emerging bio-threats and risks. in contrast to the preceding chapter, which focused on what the intelligence community can do internally to better equip itself to manage bio-threats and risks, this chapter surveys what important external stakeholders can bring to the table to improve intelligence capability and to reduce bio-threats and risks themselves. paraphrasing research impact scholar mark reed's definition, i define a stakeholder of the intelligence community as any person, organisation or group that is affected by, or can affect, a decision, action or issue relevant to preventing, disrupting or treating bio-threats and risks (reed). specifically, i am referring to stakeholders in the scientific, research, clinical, policy, first responder and private sectors that can provide capability and expertise to the intelligence community and/or contribute to biosecurity through their own actions. in particular, the thematic analysis of the role of stakeholders in this chapter is organised around three sub-headings: prevention, disruption and treatment. traversing the literature, and interviews with a select number of stakeholders, shows that there is a large and diverse number of individuals and organisations that could potentially play a role in either preventing, disrupting or treating future bio-threats and risks. in the biological context, surveillance is the ongoing collection, analysis, and interpretation of data to help monitor for pathogens in plants, animals, and humans; food; and the environment.
the general aim of surveillance is to help develop policy, guide mission priorities, and provide assurance of the prevention and control of disease. in recent years, as concerns about the consequences of a catastrophic biological attack or emerging infectious diseases grew, the term bio-surveillance became more common in relation to an array of threats to our national security. bio-surveillance is concerned with two things: (1) reducing, as much as possible, the time it takes to recognize and characterize biological events with potentially catastrophic consequences; and (2) providing situational awareness, that is, information that signals an event might be occurring, information about what those signals mean, and information about how events will likely unfold in the near future (gao : ). this definition highlights how the functions and roles of bio-surveillance have changed from a narrow concern with mapping disease in the public health sector to a diverse array of knowledge and capabilities that are vital to understanding bio-threats in the national security context. the definition also underscores the multiple ongoing challenges in improving bio-surveillance capabilities and their utility in the national security context. three key challenges in particular remain for improving national bio-surveillance capabilities: methodological issues, information sharing, and integration. the information sharing and integration issues have already been discussed in chapter , so this section will focus on the bio-surveillance methodology issues. by methodological issues, i am referring both to the technical methods (biosensors) and to the broader disciplinary approaches to bio-surveillance that now inform debates amongst stakeholders on how to improve bio-surveillance capabilities.

from a technical perspective, there has been a range of biosensor research from inside and outside the ic to detect the release of dangerous pathogens into the environment. perhaps the most well known of these initiatives, biowatch, was developed by dhs with the aim of detecting aerosolised bio-attacks with high-risk bio-agents in major us cities. the program, however, has had mixed success relating to the reliability of results and delays in publication once samples were collected from the field (gao ). the dhs tried to speed up detection times by moving from the first-generation manual systems to acquiring a newer generation of autonomous systems, though testing difficulties remained. further analysis of alternatives by the dhs, however, found the advantages of an autonomous system over the current manual system insufficient to justify the cost of a full technology switch (gao : ). in the us, research continues to improve the robustness, sensitivity, specificity, timeliness and cost of biosensor equipment. while conventional pcr-based methods and immunoassays are still being used, other biochemical, microbiological and genetic solutions are being trialled, such as the incorporation of antibodies and peptide molecules, which may greatly reduce detection times to minutes instead of several hours (kim et al. ).
leaving aside efforts to improve aerosolised biosensors, the expected rapid growth of synthetic biology and biotechnology, and the potential (however unknown) that bioengineered material may be used maliciously in a way that threatens public safety or national security, may shift the focus to other scientific research that can detect signals of bio-engineering, including the types of changes made, their location and, possibly in the future, where the changes were made. iarpa commissioned a new program, finding engineering linked indicators (felix), to meet such objectives. iarpa is seeking interest from a range of scientists (synthetic biologists, microbiologists, immunologists, statisticians and computer scientists) to carry out research projects addressing the two main focus points of felix (eaves ). if this research can produce reliable results, it will provide another useful collection and analysis point for the ic by allowing the detection of previously undetectable signatures of bio-engineered material in bio-criminal and terrorism cases.

in addition to the various technical innovations in biosensors, a range of other bio-surveillance methods have been deployed. in the late s, the us cdc pioneered syndromic surveillance systems, which were initially aimed at improving the early warning of infectious diseases and bio-terrorism and have now evolved to include situational awareness (buehler et al. ). similar syndromic surveillance systems have developed in other 'five eyes' countries, such as the uk's real-time syndromic surveillance team (resst), which draws on four national syndromic surveillance systems fed from several sources. additionally and more recently, the robert koch institute is creating an early warning system based on machine learning and natural language processing that will include 'appealing' interactive web applications and be linked to the german electronic reporting and information system demis (robert koch institute ). syndromic surveillance systems are a critical adjunct to traditional public health lab surveillance: they strive to provide real-time or near-real-time collection, analysis and dissemination of health data to enable early identification and management of public health threats. because they are not based on lab-confirmed diagnoses, they assess a wider set of health-related data including clinical signs, absenteeism, pharmacy sales or collapses in animal health production (buehler ). a clear benefit of syndromic surveillance is that it can be cheaper, faster and potentially more transparent than a state's public health lab surveillance system. however, as with the use of big volumes of data more broadly in the ic, data quantity, quality and structural variation all impact the utility, accuracy and timeliness of some rapid epidemic intelligence drawn from internet-based surveillance methods (yan et al. ). increasingly these syndromic surveillance systems rely on the use of big data, machine learning and analytics. additionally, web-based epidemic detection systems like the biocaster portal developed by the national institute of informatics in tokyo (collier ) and canada's global public health intelligence network (gphin), an event-based surveillance system which scans news feeds globally, have also contributed to syndromic surveillance (mawudeku et al. ). several event-based internet surveillance systems have grown in number in the last decade.
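the statistical core of many of the syndromic systems described above is comparatively simple aberration detection over daily syndrome counts. the sketch below is a minimal illustration in the family of the us cdc's ears c2 algorithm (a short moving-average baseline with a guard band, alerting when a day's count sits several standard deviations above the baseline mean); the daily counts, the variance floor and the exact settings here are illustrative assumptions rather than any deployed system's configuration.

```python
# toy ears-c2-style aberration detector for daily syndromic counts.
# baseline length, guard band and threshold follow commonly described
# c2 settings; the data and the sigma floor are invented for the example.
from statistics import mean, stdev

def c2_alerts(counts, baseline=7, guard=2, threshold=3.0):
    """Flag days whose count exceeds the baseline mean by more than
    `threshold` standard deviations. The baseline excludes a guard band
    of the most recent days so an unfolding outbreak doesn't mask itself."""
    alerts = []
    for t in range(baseline + guard, len(counts)):
        window = counts[t - guard - baseline : t - guard]
        mu = mean(window)
        sigma = max(stdev(window), 0.5)  # floor to avoid dividing by ~zero
        if (counts[t] - mu) / sigma > threshold:
            alerts.append(t)
    return alerts

# simulated daily emergency-department visits for influenza-like illness
daily_ili = [12, 14, 11, 13, 12, 15, 13, 14, 12, 13, 41, 45, 50]
print(c2_alerts(daily_ili))  # -> [10, 11, 12]: the spike days are flagged
```

real systems layer day-of-week effects, reporting delays and multiple data streams on top of this, which is where the data quality and structural variation problems noted above begin to bite.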
using the pubmed, scopus and google scholar databases, o'shea's study identified a number of event-based internet systems, all using different technologies and data sources to gather, process and disseminate data to detect infectious disease outbreaks (o'shea ). in line with the broader ic development of exploiting social media analytics discussed in chapter , dhs piloted another approach to bio-surveillance. the pilot involved dhs trialling various social media analytics using self-reported information on facebook and twitter to detect pandemics and acts of terrorism, given social media feeds can provide close to real-time reporting of symptoms, sickness, and access to hospitals or pharmaceuticals (insinna ). additionally, other private companies have entered the bio-surveillance space, providing novel methods for capturing bio-surveillance data. wilson's discussion of how a private company (veratect corporation) assessed signal recognition in global media reports to provide warning on the emergence of the h n influenza pandemic shows how the ic's warning culture methodology can be usefully employed alongside what he described as the 'risk adverse forensically oriented response culture favoured by traditional public health practitioners' (wilson : ). the veratect case shows that the private sector has a role in developing better bio-surveillance capability as well.

as can be seen from the brief discussion above, there are different views amongst bio-surveillance scholars and practitioners about the merits of each methodological approach, particularly their ability to predict the 'next pandemic'. can, for example, a national bio-surveillance system informed by one or more of the methods discussed above predict the emergence of the next pandemic or outbreak, particularly of novel viruses? some scientists argue that predicting a micro-evolutionary process in some biological agents such as a virus (i.e. a short-term emergence or cross-species transition) is incredibly difficult given evolutionary and epidemiological timescales are fundamentally different. geoghegan and holmes argue that it would instead be better to build surveillance capability that 'assesses the fault line of disease emergence at the human-animal interface, particularly those shaped by ecological disturbances' ( : ). others have argued differently. scientists working on the usaid-funded predict program and the global virome project examine disease hotspots globally in order to sequence (rather ambitiously) almost all the viruses in birds and mammals that could potentially spill over into humans. in particular, researchers working on the global virome project believe that predicting which viruses might spill over from animals to humans is possible. geoghegan and holmes respond that focusing on disease hotspots relies on very small amounts of data that can be unreliable, given spillovers are rare events. they give the example of saudi arabia, which has not classically been a hotspot, yet mers recently jumped into humans from camels there. sequencing these viruses may provide useful evolutionary information, but geoghegan and holmes argue it won't necessarily provide early warning of what is going to affect us (geoghegan and holmes ). other scientists are trying to change the ecology of disease itself, which presumably in some cases would make the early warning of some pandemics easier.
in recent years, the scientific community has increasingly exploited crispr gene editing techniques to change the genetic makeup of malaria mosquitoes. additionally, advances in gene drives have recently been shown to change the ecological parameters of disease. gene drives are artificial 'selfish' genes that can force themselves into nearly 100% of an organism's offspring instead of the usual 50% (a short simulation of this inheritance arithmetic is sketched below). currently there is a global research effort funded by the gates foundation to cause female mosquitoes to become sterile within a limited number of generations, with the objective of releasing the genetically altered mosquitoes into malarial areas (regalado ). the fbi, however, has concerns that gene drives could be misused to create a 'designer plague' (ibid.).

in addition to the 'predictability' challenges presented by various bio-surveillance methods, there are also differences of opinion amongst members of the bio-surveillance community about what an effective bio-surveillance system looks like. on what metrics can an 'effective' bio-surveillance system be evaluated, given the multiple methodological approaches and systems that have developed? clinician and public health security specialist jim wilson has argued that the development of an effective global surveillance and response system is probably at least a decade or more away (wilson : ). in the interim, we are left with multiple approaches of varying validity and reliability. so, based on the current fragmented bio-surveillance efforts, how do we learn the lessons that will enable the implementation of long-awaited national bio-surveillance capabilities? how do we know if progress is being made towards that goal? importantly, beyond national efforts, how do we assess the current capability of state and local agencies to contribute to national bio-surveillance capabilities? where are the gaps and vulnerabilities in the current sub-national bio-surveillance and detection systems? (gao ).

compounding the current challenge of evaluating bio-surveillance capabilities in order to construct a viable national approach is the fact that different bio-surveillance systems have been created for different end users (e.g. animal and human). the blue ribbon project report into animal health detailed information sharing challenges in animal health bio-surveillance and its integration with other bio-surveillance data, including in human health (blue ribbon report : ). this lack of integration makes it difficult to assess how information collected for animal or agricultural bio-surveillance could improve national approaches to bio-surveillance, particularly in scenarios where the emergence of disease could be an intentional or malevolent act. different approaches to bio-surveillance have been informed by multi-disciplinary perspectives, which can be both a strength and a weakness in developing a national perspective. current efforts across the 'five eyes' to develop fully national and integrated bio-surveillance capabilities remain works in progress, and the political will to steward them into being seems insufficient. for example, in the us a program designed to provide a national bio-surveillance and integration system was eliminated in the president's budget request (blue ribbon report : ). any evaluation of the effectiveness of various methods and approaches for building a national bio-surveillance capability also needs to consider how national efforts can both enhance and leverage global bio-surveillance capabilities.
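to make the gene drive inheritance arithmetic mentioned above concrete, the following minimal sketch iterates the expected drive-allele frequency in a randomly mating population. the conversion rate, starting frequency and generation count are illustrative assumptions only, not figures from the gates foundation programme or any other research effort discussed here.

```python
# minimal sketch of why a gene drive spreads: heterozygous carriers pass the
# drive allele to nearly 100% of offspring rather than the mendelian 50%.
# all parameters below are invented for illustration.

def next_generation(p, conversion=0.95):
    """Expected drive-allele frequency after one generation of random mating,
    where `conversion` is the probability a heterozygote is 'homed' into a
    drive homozygote in the germline."""
    het = 2 * p * (1 - p)            # hardy-weinberg heterozygote share
    hom = p * p + het * conversion   # homing converts most heterozygotes
    het_remaining = het * (1 - conversion)
    return hom + 0.5 * het_remaining  # new allele frequency

freq = 0.01  # release drive carriers at a 1% allele frequency
for generation in range(12):
    freq = next_generation(freq)
print(round(freq, 3))  # ~1.0: the drive approaches fixation in ~a dozen generations
```

the same arithmetic run with conversion set to zero leaves the allele frequency essentially unchanged, which is the ordinary mendelian case and shows why homing, not release size, does the work.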
gaps and impediments in global biosurveillance have become increasingly evident to the world in the wake of the largest ebola epidemic ever, in which these challenges impacted the ability to prevent, detect, and respond. under the looming threat of mers-cov, leishmaniasis, influenza, multidrug-resistant tuberculosis, and plague, the global public health community now realizes the urgent need to address shortcomings in global biosurveillance and the broader public health security system. properly preparing for the next major outbreak hinges on our willingness to transform global health surveillance systems and those of countries with fragile health infrastructures (shaikh et al. : - ). in some respects, the challenges in developing bio-surveillance capabilities at the global level are similar to those at the national level, including siloed systems, inadequate training and technical expertise, different information and communication technology (ict) standards, concerns over data sharing and confidentiality, poor interoperability, and inadequate analytical approaches and tools. there is likely no single bio-surveillance method, technique or tool that is going to detect disease outbreaks in real time, particularly unusual ones which might imply malicious intent. a fully integrated approach to bio-surveillance may rely on more than one method or capability which together can provide reliable and valid bio-surveillance data and early warning at the national and global levels. it may mean investigating ways that older legacy systems can be integrated, or at least made interoperable, with newer mobile platforms such as mobile or wireless health technologies, particularly in the developing world (shaikh et al. ). it should be clear by now that improving bio-surveillance capabilities is essential to improving the prevention of natural and suspicious outbreaks of disease. it is important for the 'five eyes' intelligence and law enforcement communities to understand broadly the theoretical and practical developments in bio-surveillance so that they are able to more effectively leverage relevant knowledge on bio-threats and risks.

a second cluster of stakeholders useful in the prevention of bio-threats and risks (both natural and malicious) are those working in national, regional and global health. the ebola epidemic was a recent reminder of the consequences of weak public health capability and infrastructure in failing to prevent, identify and respond quickly to infectious disease. the ebola epidemic also had a catalytic effect on many public health authorities', practitioners' and researchers' views about the capability of the traditional un response to global health crises, mainly coordinated through the who. many public health watchers now argue the need for a broader, more effective focus: not just on prevention of and response to infectious disease, but one that also reframes the issue as a matter of human security. adherents to this view make a compelling point when seen through the ebola case, which continues to have significant impact on the economic and social stability of the countries affected (sparrow ; marston et al. ; who ; mmwr ). beyond west africa, similar vulnerabilities in capabilities such as disease surveillance, detection, contact tracing, clinical care, community engagement and communications exist globally, as was also seen with the proliferation of zika in latin america and the caribbean and mers in the middle east.
the commission on a global health risk framework for the future, which met after the ebola crisis, estimated that an investment of . billion per year would be needed for better detection and response tools. the same commission report also estimated the economic cost of global pandemics at $ billion per year (schnirring ; dzau and sands ). effective national bio-surveillance relies not only on what 'five eyes' countries can do to improve the scientific and technical capability of bio-surveillance, but also on how they can improve bio-surveillance globally, particularly in at-risk areas. beyond effective bio-surveillance, effective prevention of pandemics, whether natural, accidental or malicious, relies on good global (multilateral), regional and national public health responses. there are several multilateral instruments, institutions and initiatives that are relevant, but i will focus here on what have become the key ones rather than attempting to traverse in detail all major international health initiatives struck since 9/11. they include the who international health regulations (ihr), un security council resolution 1540, the global health security agenda (ghsa), the biological weapons convention (bwc) and the australia group.

the who international health regulations entered into force in june 2007 to prevent, protect against, control and provide a public health response to the international spread of diseases (the detect, assess and notify events functions have a biosafety and biosecurity dimension) and include all members of the un. the ihr has improved the accountability of countries for progress towards building national core public health capability targets in several areas including, but not limited to, surveillance systems, creating rapid response teams and border management. however, the ihr annual reporting process has been by self-assessment of core capacities to the world health assembly (wha) by all state parties, which has resulted in incomplete or not credible reporting for some member states. the commission on a global health risk framework for the future also expressed concerns over the self-assessment monitoring tool of the ihr, because questions require binary (yes/no) answers, and recommended that the who devise a regular independent mechanism to evaluate country performance against benchmarks (ghrf commission : ). for example, a country can 'tick yes' for having national public health legislation, but other dependent legislation (biosecurity, food safety, environmental health) may not be in place, thereby reducing the country's overall ability to manage a health crisis, and the global community's ability to understand and respond to capability and information gaps in that country (ibid.). some countries continue to be slow or uneven in their reporting of ihr attributes. one study showed that the african region was well below global averages across all attribute measures, with no african state reporting full implementation (kasolo et al. : - ).

the second multilateral instrument relevant to our discussion here is un security council resolution 1540, which calls on all states to prohibit non-state actors from developing, acquiring, manufacturing, possessing, transporting, transferring or using nuclear, chemical or biological weapons and their delivery systems. more importantly, and specific to bio-threats, the bwc has historically played the most significant role in preventing the weaponisation of biology.
the bwc was established in the 1970s and seeks to prohibit the development, production, acquisition, transfer, stockpiling and use of biological and toxin weapons (gerstein ; chevrier and spelling : - ). there was later an attempt by some member states to introduce a verification process, but this was vetoed by the us following inspections of soviet sites under the tripartite agreement between the soviet union, the usa and the uk, the us arguing it could be difficult to certify that a state's biological program was merely defensive rather than offensive. the us also had concerns that inspections of labs could be disruptive or provide opportunities for industrial espionage against legitimately operating biotechnology companies (gerstein : ). historically, there has been a mixed record among 'five eyes' intelligence countries in assessing verification, and therefore non-compliance, with the bwc. koblentz surveyed the role of intelligence (particularly humint) in assessing the former soviet union's offensive bio-weapons program, an effort which resulted in an incomplete picture of moscow's program (koblentz : ). additionally, as discussed in chapter , several 'five eyes' intelligence communities (us, uk and australia) incorrectly assessed that iraq had a mobile offensive bio-weapon capability. intelligence collection on its own can either over- or under-estimate such capabilities.

between the review conferences, several initiatives and activities have been introduced (confidence building measures, meetings of experts, information exchanges) to improve the effectiveness and implementation of the convention. however, state parties are only encouraged to implement relevant national legislation and other measures to prohibit and prevent the development, production, stockpiling, transfer or use of bio-weapons. precisely how they undertake such measures is at the discretion of individual state parties. the bwc has been criticised for several reasons over the years. some of this is warranted, while other criticisms seem not to take into account that the bwc is different from its chemical and nuclear counter-proliferation counterparts. as gerstein argues, 'material is the centre of gravity for nuclear discussions and intent being the center of gravity for biological issues' (gerstein : ). developing nuclear weapons leaves a large, recognizable footprint, whereas the development of an offensive biological weapon requires virtually no specialised equipment (ibid.). the first major criticism of the bwc is that it has no verification mechanism or any other mandatory provisions for monitoring compliance. a second complaint is that for many years it lacked an implementation capability to help states fulfil their obligations. since 2006, the convention has had a small three-person implementation support unit (isu), based in the united nations office for disarmament affairs in geneva, which aims to 'assist, coordinate, and magnify the implementation efforts of the states parties to help states parties help themselves' (lennane : ). in reality though, the isu does not have 'capacity for analysis and coordination other than for the collection of the annually submitted confidence building measures, posting them to the website and organising and attending conferences' (gerstein : ). historically, there has also been a low number of state parties submitting their annual confidence building measures.
although the bwc isu was able to report that a record number of annual confidence building measures had been submitted, this represented only . % of all state parties submitting that year, though the trend line seems to be rising from an earlier low (bwc newsletter : ). a third criticism of the bwc is that it has moved slowly since its inception, and further questions remain about its strategic and operational relevance to preventing bio-threats and risks into the future. such questions are likely fundamental to its long-term viability. despite these shortcomings, however, the bwc has nonetheless created a normative institution for reducing the risk of biological or toxin weapons being used or developed by state and non-state actors (lennane : ). more importantly, as developments in biotechnology continue apace, the bwc does provide a venue where the security implications of dual-use technology can be assessed, which will be critical in 'mitigating these emerging threats' (gerstein : ). the bwc still has an important role in reducing the weaponisation of biology in the future, though its poor funding, particularly of the isu, means that other multilateral measures are needed to amplify the work of the convention.

in addition to the historic proliferation arrangements of the bwc, other international regimes have been implemented, such as the australia group (established in 1985) and the proliferation security initiative (psi, established in 2003). both have counter-proliferation objectives broader than biological weapons, extending to chemical and nuclear weapons. the australia group member countries have collaborated on the development of lists of technologies and materials that could be used in the development of chemical and biological weapons. member countries then commit to monitoring the export or transfer of these materials. the australia group maintains common control lists for dual-use bio-equipment, technology, software, bio-agents and plant and animal pathogens as the basis for promoting common standards and regulations (australia group common control list handbook ). the australia group works in concert with the bwc. the psi was a bush administration initiative that sought to supplement existing non-proliferation regimes by interdicting and seizing illegal weapons or missile technology in planes or ships carrying cargo. the psi also includes intelligence sharing and joint operational activity (national institute for public policy ).

turning the focus slightly away from multilateral counter-proliferation measures, other multilateral initiatives have focused on improving global health security. in some respects the ghsa provides a bridge between traditional, narrow security approaches to biological weapons and a wider securitisation of global health. the ghsa was established in 2014 by the obama administration and is a multi-sectoral approach to global health security seeking to include governments, international organisations and non-government organisations. the ghsa was set up in part to 'advance further the ihr implementation through focused activities to strengthen core capacities and to ensure a world safe and secure from global health threats posed by infectious disease; where we can prevent or mitigate the impact of naturally occurring outbreak and intentional or accidental releases of dangerous pathogens' (heymann et al. : ).
the ghsa is a refreshing approach not only because it seeks to establish a global framework and capacity to assess, measure and sustain advances in global preparedness for epidemic threats, but also because it addresses biosecurity as a public health priority, thereby linking public health and health security with the development, defense and agricultural sectors (cameron ). the underlying logic of the ghsa is that the same attributes needed to prevent, detect and respond to the deliberate use of a bio-agent are those required to manage a natural or accidental outbreak of a biological agent. the ghsa also includes technical targets aligned to three areas: prevention, detection and response (heymann et al. : ). like earlier initiatives, such as the us-sponsored global health initiative (ghi), which was discontinued by the obama administration due to a lack of financial and technical authority to leverage and coordinate multiple us agencies, the ghsa will need to secure ongoing funding from major donors including the us. at a november ghsa ministerial meeting in uganda, the assembled governments signed onto an extension of the ghsa for another five years. us secretary tillerson had issued public support for continuing it, but at the time of writing no commitment by the us for future financial support (beyond fy ) had been made. the ghsa holds promise, but in addition to meeting ongoing funding challenges, the member states signed up to it will need to ensure effective governance is in place to align funding with the global health priorities articulated by the who, world bank, imf and other donors, in order to avoid duplication and promote an effective approach to international health security capabilities (paranjape and franz ).

in summary, this discussion of multilateral security and global health initiatives demonstrates that a diverse number of stakeholders work in these sectors who can play a role in preventing bio-threats and risks, whether natural pandemics or a malicious attack with a biological weapon. it is clear that the 'five eyes' intelligence communities have worked extensively with other member states in counter-proliferation institutions such as the bwc and the australia group for several decades. what remains underdeveloped is how global health security stakeholders and intelligence communities can work more collaboratively towards the mutual goal of global health security, regardless of whether the risks are natural pandemics or result from a bio-terror attack or the theft of a dangerous select agent from a lab. more trusting and formalised contact between global health security stakeholders and those working in the security and intelligence communities can only be mutually beneficial to preventing major bio-threats and risks.

the final cluster of stakeholders that can help prevent bio-threats and risks are, of course, those that specialise in biosafety and its promotion in their research institutes, biotechnology companies, universities and medical facilities. promoting biosafety in environments that work with select agents, and in other facilities that work with less dangerous material which can still cause harm, relies on consistently high risk management practices. in all 'five eyes' countries there have historically been biosafety risk management procedures and practices in place to prevent accidental infection, accidental release, or intentional misuse of biological substances.
however, as noted in chapter , in the last two decades the expansion of synthetic biology, biotechnology and biological science research has meant there are now more people working in more locations on dangerous pathogens: not just in well-regulated liberal democracies such as the 'five eyes' countries, but also in developing countries where biosafety and biosecurity capabilities and practice may be less established, such as parts of africa, the middle east, pakistan and former soviet states (gronvall et al. ; shinwari et al. ). just in terms of the scale of this expansion of facilities working with dangerous pathogens, in the us alone there are thought to be thousands of high containment bsl labs, and in china the number of such labs is increasing too (nature editorial : ). the us and other 'five eyes' countries such as canada have invested in cooperative engagement programs since 9/11 in several former soviet union states. the us defense threat reduction agency (dtra) has led efforts in georgia to reduce bio-risk by securing and consolidating pathogens and training scientists in biosafety and biosecurity technology, regulation and detection. likewise, the cdc has been involved in building public health capacity there, as well as in armenia and azerbaijan (bakanidze et al. : ).

as important as building biosafety capacity in developing countries is, it is clear that much more still needs to be done to build biosafety capacity in the 'five eyes' countries themselves, including finding better ways to understand and comprehensively manage threats and risks in the biosciences environment. biosafety experts such as salerno and gaudioso argue for more comprehensive risk management systems across the global bioscience community 'to avoid an accident that jeopardizes the entire bioscience enterprise' (salerno and gaudioso : xv). their argument is that such a system would supplement existing national and international biosafety regulations by fully risk-managing, at an organisational and unit level, every single potential incident, rather than relying on the generic risk hazard assessments currently done by most facilities (ibid.: ). others have also called for more systematic tools and approaches for managing biosafety incidents in labs dealing with particularly dangerous pathogens such as marburg virus (dickmann et al. ). still others have argued that while 'security awareness is high among employees who work with biological select agents and toxins, it is not pervasive across the entire life research community' (gryphon scientific : ). such a statement does not seem to be hyperbole if one looks at some of the cases of biosafety and security lapses since 9/11 (gao ).

there have been several lapses at the cdc. in june 2014, dozens of workers at the cdc could have been potentially exposed to live anthrax that had not been killed before being shipped from the cdc's bioterrorism rapid response and advanced technology (brrat) bsl lab to a bsl lab in its bacterial special pathogens branch. cdc investigations determined that a substantial number of cdc staff members may have been exposed to viable anthrax cells or spores, though no illnesses or deaths occurred (cdc ). the same report found several breaches of biosafety process and procedure, including failures of policy, training, supervision, judgement and even scientific knowledge (ibid.).
similar biosafety lapses involving cdc labs occurred in january 2014, when a strain of low pathogenic avian influenza a (h n ) unintentionally cross-contaminated with a strain of highly pathogenic avian influenza a (h n ) was shipped from the cdc to the usda (schnirring ). further biosafety breaches were detected in july 2014, this time at the national institutes of health campus in bethesda, maryland, where viable smallpox vials were discovered improperly stored (dennis and sun a). an additional five improperly stored vials were also found at the nih, three of them containing select agents (burkholderia pseudomallei, francisella tularensis and yersinia pestis) (dennis and sun b). in the nih cases, despite their age, the organisms were still viable and could have caused illness. their theft could also have posed a bio-threat and risk to the community. then, after a hiatus during which transfers of biological material between higher- and lower-level bsl labs were suspended, live transfers commenced again. after a further internal cdc review (cdc a, b) some additional safety measures were put into place; however, there was a subsequent lapse when a specimen of chikungunya virus that had not been killed was shipped from a high-security lab in fort collins to a lower-level one (young ). similarly, in 2015 the pentagon shipped live anthrax spores, which were also meant to have been killed, from the dugway proving ground in utah to a number of us states and one international location (burns ). it was later found that dugway and the us dod had been shipping live anthrax nationally and internationally for many years, often without adequate safeguards. other reports suggested that some samples were sent by federal express (sisk ). similarly, the us hhs discovered that a private lab had 'inadvertently sent a toxic form of ricin to one of its training centres multiple times', putting training staff at risk (gao : ). similar biosafety lapses have occurred in the uk, resulting in investigations of government, university and hospital labs (sample ).

as noted in chapter , one possible bio-threat and risk pathway could be the theft of biological substances or information from a biosciences institution. lapses in biosafety arrangements demonstrate, at least in some cases, biosecurity vulnerabilities that could make theft, or even the infiltration of a threat actor into a high containment lab, easier. thefts from labs by insiders have occurred in the past, and a motivated insider can compromise biosafety for a range of reasons. bunn and sagan's edited book insider threats provides a useful taxonomy for thinking about such threats (bunn and sagan ). insiders can be self-motivated: people who at some point decide to become a spy or thief. they can also be recruited: already inside an organisation, but convinced to become part of a plot. an infiltrated insider, by contrast, is associated with some adversary of the organisation and joins it with the purpose of carrying out a malicious act against it. bunn and sagan also refer to inadvertent or non-malicious actors, who pose a threat by making mistakes without really intending to do so, such as leaving a password lying around. finally, the authors refer to the 'coerced insider', who remains loyal in intent but knowingly assists in theft or sabotage to prevent hostile acts against themselves or their loved ones (ibid.: ).
the insider threat posed by bruce ivins' activities in a high containment lab (which resulted in the amerithrax letters of 2001) demonstrates the potentially high threats and risks associated with an insider. the ivins case provides a useful case study in how an organisation's security procedures, and other organisational and cognitive biases, can miss for several years the risks posed by an insider threat actor (stern and schouten : - ). since the amerithrax incident, significant investment has been made to close the biosafety vulnerabilities it revealed. increasingly since 9/11 and amerithrax, a number of policies, procedures and normative behaviours have developed in the scientific community to promote biosafety and biosecurity. these have ranged from safety regulation codes such as the us biosafety in microbiological and biomedical laboratories (bmbl) to more formal legislative and oversight regulations. the latter will be addressed in chapter . there are also technical and policy improvements that can be made in securing both physical and remote access to labs, including the computer systems that house data, which are at risk of theft or hacking (gryphon scientific : ; berger : - ; slayton et al. : - ).

leaving aside the formal legislative and regulatory instruments for promoting biosafety, the development and maintenance of effective risk management across the biosciences also relies on an organisational culture that treats biosafety and biosecurity as an equal priority to other deliverables. a culture of accountability at all levels must also exist if effective risk management is to prevent, identify and treat bio-threats and risks promptly. a rogue insider, who may have been assessed as suitable to work with select agents and seems initially to follow all the relevant biosafety regulations and procedures, could still pose a risk if they have not embraced the organisation's normative biosafety values. to close off opportunities for insider threats, it is critical that the organisation promote relevant biosafety cultural values as much as, and perhaps more than, adherence to formal biosafety regulations. risk management measures must of course be weighed against the ability of scientists to carry out their functions. effective engagement with local law enforcement and the relevant domestic security intelligence organisations in each 'five eyes' country to help scientists build viable biosafety cultures will likely remain important, in addition to internal organisational biosafety initiatives. stern and schouten provide a number of useful suggestions for improving policies and procedures that may help improve biosafety cultures across the biosciences enterprise ( : - ). two that i think would be helpful are, first, developing standard operating procedures for proactively identifying vulnerabilities, including 'red team' exercises to explore how systems could be exploited: in other words, what motivators (financial, psychological, religious, political) might drive an insider threat, and are there ways to assess the signs of such an evolving threat? the second is to 'ensure personnel reliability programs incorporate ongoing assessments of counterintelligence vulnerabilities, including vulnerabilities to self-ascribed whistle-blowers or attention seekers' (ibid.: ).
effective biosafety and biosecurity training is also crucial as the number of labs working with select agents or other dual-use bio-agents proliferates globally, particularly in fragile states. more consistent approaches to training will also be important so that nations can be confident that as many scientists as possible, regardless of the country or context in which they work, understand what bio-risks and threats may emerge and how to prevent or mitigate them (sture et al. ).

as discussed above, there are multiple stakeholders in the scientific community and the global health security and biosafety fields who can play a critical role in preventing bio-threats and risks themselves, as well as supporting the operational efforts of the intelligence community to prevent them. while prevention of bio-threats and risks is one critical dimension in which stakeholders play central roles, another is disruption. although the intelligence community can use a range of knowledge, technologies and methodologies from stakeholders in the scientific community to prevent bio-threats and risks, we have to accept that it will not be possible to detect every criminal or terrorist act. nonetheless, some of the techniques, practices, technologies and knowledge available from stakeholders in the scientific community will still be useful in disrupting bio-threats and risks. in other words, prevention may not always be possible, yet measures can be put in place which detect threats early enough to reduce their impact. as with prevention, disrupting bio-threats and risks will also rely on advice from stakeholders involved in bio-surveillance, public health and biosafety research, amongst others. for example, as discussed earlier, iarpa's commissioning of research into detecting signals of bioengineering changes (felix) may give the intelligence community better capability not only to prevent bioengineering changes that would make it easier for terrorists to carry out attacks on populations, critical infrastructure or biotechnology companies, but also to detect and disrupt the planning stages of such attacks. additionally, as noted earlier, if a high containment lab has a strong biosafety culture, disruption of a bio-threat may be possible simply through colleagues speaking up about suspicious activities in their working environment, rather than through any elaborate disruption techniques and procedures the intelligence community might have in place.

but the knowledge, technologies, techniques and practices for disrupting bio-threats and risks cannot come only from scientific stakeholders in the biosciences; they should also come from other fields and practitioners working in areas where successful disruption operations have taken place. these areas include criminology, policing, engineering, legislation, cyber and counter-intelligence, amongst others. in this section, we examine briefly what other stakeholders and disciplinary perspectives the intelligence community might learn from to build better capabilities for the disruption of bio-threats and risks. are there lessons to be learnt from other stakeholders, disciplines or even other threat contexts that might be relevant to disrupting bio-threats that were not initially detected?
since 9/11, three stakeholder and discipline groups have been investigating and applying disruption strategies to threats and risks, and their knowledge might be relevant to disruption in the bio context: criminology, counter-terrorism and cyber. we will explore each briefly to see how stakeholders (researchers and practitioners) have developed disruption strategies in each field and how these might be employed against bio-threats and risks.

insights from criminology, and the practical application of disruption for crime prevention, have provided a supplementary approach to the traditional law enforcement practice of prosecuting certain crimes through the courts. disruption is not a new concept in criminology and law enforcement practice, though it can be difficult to define in all law enforcement contexts (ratcliffe : ). its meaning, at least in the criminology/policing/law enforcement context, can partly be traced back to broader desires, initially by uk law enforcement and later by other 'five eyes' countries, to move law enforcement away from its traditional reactive mode towards one driven by intelligence. this concept of law enforcement or policing being intelligence driven or led gained significant traction in the criminology and policing literature (walsh ; ratcliffe ; innes and sheptycki ). it was driven initially in the uk by governments' desire to maximise efficiencies and reduce costs by increasing the use of intelligence to drive strategic and operational decision-making. the implementation of intelligence-led policing models in operational policing across 'five eyes' countries has had mixed results, partly due to cultural, financial and leadership issues in agencies that have attempted to put intelligence at the centre of strategic and operational decision-making (walsh ; ratcliffe ). nonetheless, despite these historical challenges, increasing fiscal constraints and the ever-increasing demands on law enforcement in managing both high-volume crimes and complex operating environments in counter-terrorism, cyber and organised crime have meant, at least in many national law enforcement agencies, a greater demand for an intelligence-driven approach (walsh ). this intelligence-driven approach, which promulgated the proactive disruption of crime, was in part an admission that not all crime could be prevented or all offenders prosecuted. additionally, in many law enforcement agencies, such as the australian federal police (afp), the growing volumes of information collected have given intelligence a more central role in triaging the significance of information, adding value to it, and guiding investigators to targets and operations that are high priority or have the greater likelihood of successful prosecution outcomes. in complex organised crime cases such as transnational drug trafficking, people smuggling and even the terrorism and cyber threats we discuss shortly, intelligence-driven disruption strategies have become increasingly popular with many 'five eyes' law enforcement agencies. this has particularly been the case where it can be difficult to completely dismantle an organised crime group, or even to know the full extent of the group's network. disruption operations that attempt to take down threat actors with key roles (e.g.
facilitator, financier, and logistics) may nonetheless reduce the threat posed by an organised crime network even if the network continues to exist. additionally, with some organised crime networks it may be difficult to secure sufficient evidence for prosecution of a more serious offence such as drug importation, but there may be sufficient intelligence to make the criminal environment more hostile for the group's illicit enterprise by arresting key group members for lesser offences such as unexplained wealth or migration irregularities. while disruption does seem a useful tool for preventing or reducing the impact of offenders, the criminology literature demonstrates that it has been difficult to evaluate the effectiveness of intelligence-driven disruption strategies. ratcliffe cites an rcmp disruption attributes tool, which attempts to examine what the disruption activity is aimed at (core business, financial, personnel) and whether the disruption of one or more of these attributes is high, medium or low in impact (ratcliffe : ); a minimal sketch of such a scoring matrix appears below. however, such tools are largely subjective and qualitative, making it difficult to accurately measure the impact of intelligence-driven disruption measures. the other concern about disruption strategies is that they may simply cause displacement, where other criminal enterprises take the place of those removed by law enforcement, or, as innes suggests, 'disrupting a network may just provide a vacuum for more dangerous offenders to step in' (innes and sheptycki : ). finally, the literature suggests that employing effective disruption strategies relies on proactive collection and valid analysis that can lead to timely strategic and operational outcomes, which in turn result in threat mitigation and harm minimisation.

so are there benefits for the intelligence community working on bio-threats and risks in investigating research and practice on disrupting threats in the organised crime context? the answer is a qualified 'yes'. much depends, of course, on the nature of the threat and risk posed. clearly, as with any crime, it is hard to disrupt a bio-threat while it is still in the head of the offender. however, we know that criminal and terrorist acts don't just happen spontaneously; there are usually predicate steps taken by the offender. some of these might happen in very compressed periods, while for other offences planning may take years. either way, and regardless of whether they can be detected by the intelligence community, there are likely to be some signs in the predicate planning stages of an impending threat that provide the intelligence community with opportunities for disruption. it is difficult to say in which bio-threat cases disruption strategies will be most successful. much will depend on how quickly the intelligence community can collect and analyse information that may be indicative of an evolving bio-threat and risk. as discussed previously, good collection and analysis are contingent on having robust core intelligence processes in place and, more importantly, effective intelligence governance. both are needed to ensure intelligence efforts are coordinated across multiple internal intelligence community stakeholders with relevant knowledge, and that information and expertise from external stakeholders (the scientific community) are available to provide earlier warning signs of an emerging bio-threat.
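to make concrete why such attribute-scoring tools remain subjective, the sketch below models a disruption scorecard of the general kind ratcliffe describes. the attribute names echo his examples, but the numeric mapping of high/medium/low ratings and the averaging rule are invented conveniences for illustration, not the rcmp's actual instrument.

```python
# illustrative disruption scorecard: qualitative high/medium/low impact
# ratings per attribute, collapsed to a number. the weights are an invented
# convenience; this collapsing step is precisely where subjectivity enters.
IMPACT = {"low": 1, "medium": 2, "high": 3}

def disruption_score(ratings: dict) -> float:
    """Average an analyst's qualitative ratings across attributes."""
    return sum(IMPACT[r] for r in ratings.values()) / len(ratings)

# an analyst's judgement after an operation against a trafficking network
assessment = {"core business": "high", "financial": "medium", "personnel": "low"}
print(disruption_score(assessment))  # 2.0 on a 1-3 scale
```

two analysts rating the same operation can legitimately produce different scores, which is the evaluation problem the criminology literature flags.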
while it is important not to over-play the potential for success of the kinds of disruption strategies used against traditional organised crime groups, there are likely bio-threat scenarios where disruption strategies may make a difference. arguably, the disruption of bio-threats can be placed on a continuum, with the individual threat actor at one end and a sophisticated organised group at the other. at the individual level one could have the scenario of a lone terrorist actor or a 'mad/bad' scientist. while it may seem difficult to get early warning of the malicious act of a mad/bad scientist, we saw in the earlier discussion of 'insider threats' that it may be possible to disrupt their activity before an amerithrax-style attack occurs. hindsight is twenty/twenty with the bruce ivins amerithrax case, but the lessons learnt from that incident do provide guidance on the sources of collection and analysis required from within the intelligence and scientific communities to aid the disruption of this kind of bio-threat. this does not mean that all similar cases of 'insider threats' will be detected, prevented or disrupted, but more careful collection and analysis of 'odd' behaviour or unusual security lapses by a scientist working in a high containment lab could reveal areas of vulnerability. detection of abnormal changes in an individual's psychological profile and/or in their working environment can provide opportunities for those vulnerabilities to be disrupted. at the other end of the bio-threat scale, a more organised bio-criminal or terrorist planned event may resemble in some respects other illicit criminal markets and networks (drugs, identity fraud, money laundering) and thereby present opportunities for disruption. again, this is not to suggest that disruption of organised bio-threat scenarios will always be possible. as discussed in earlier chapters, since 9/11 the intelligence community has had a mixed record in detecting even state-based wmd programs, and uncovering the intention and capability of non-state actors to exploit dual-use technology for malicious ends remains difficult. however, disruption could be useful in some bio-crimes where there is a bigger network of actors involved in the illicit business. for example, in crime scenarios where food suppliers are not legally registered to import food into a 'five eyes' country because it poses a biosecurity risk, there may be opportunities for parts of the intelligence community (particularly national law enforcement agencies) to work with agriculture, animal health and food regulatory agencies and relevant scientific stakeholders to disrupt illicit food suppliers from a country of concern. equally, there may be opportunities to disrupt the activity of non-compliant biotechnology providers in a 'five eyes' country who provide dual-use equipment to an overseas company with a questionable profile residing in a country vulnerable to terrorist infiltration.

in addition to the useful knowledge that can be gained from criminology and law enforcement practice, there are also perspectives on disruption from contemporary counter-terrorism studies that may have utility in the bio-threat and risk context. as noted above, since 9/11 law enforcement agencies across the 'five eyes' countries have increasingly deployed disruption strategies in countering terrorism, given the preservation of life demands earlier interception of attacks, preferably at the planning stage.
as innes et al. suggest in the case of counter-terrorism operations, one aim is to overtly disrupt planned attacks, which has many effects, including sending a message to other terrorist groups that they may be next, reassuring the community and, where possible, deploying countering violent extremism (cve) strategies in communities where future attacks may arise (innes et al. : ). in the uk in particular, a key plank of counter-terrorism strategy has been disruption at both the strategic and tactical levels. at the strategic level, disruption has involved a number of initiatives, from arresting persons of interest to legislative action and enhanced surveillance (innes et al. : ). in addition to the global influence of groups such as al qaeda and islamic state, the growth in lone actor attacks across the us and european countries (danzell and montanez : ) has also been a significant catalyst for enacting further stringent legislative measures such as detention without trial and control orders (walsh ). all 'five eyes' countries have also adopted further legislative changes that allow the disruption of terrorist attacks by reducing the thresholds of reasonable suspicion law enforcement and intelligence agencies need in order to access both electronic and human intelligence (humint). governments' desire to reduce the threats and risks posed by terrorists by creating increasingly proactive, flexible and permissive legislative environments has also raised concerns about the role of intelligence, secrecy and privacy. these issues will be discussed as they relate to the bio-threat and risk context in chapter . but legislation is only one plank of effective counter-terrorism, and the scale and pace of actual and potential terrorist attacks suggest other disruption strategies are required at the tactical level. innes et al. suggest such strategies might include 'prosecution against an individual or a network for offences other than those they were principally being investigated for and/or interfering with the operations of the criminal enterprise in cases where there is insufficient evidence to secure prosecution' ( : ). they add that, at the tactical level, disruption strategies can 'interfere with the ability of suspected adversaries to operate effectively and efficiently' (ibid.). innes et al. suggest that tactical disruption functions as 'near event interdiction', which can mitigate or minimise harms associated with an actual or planned terrorist attack (ibid.). other counter-terrorism disruption strategies in 'five eyes' countries have included the creation of cve policies and interventions, as well as the disruption or takedown of social media venues advocating politically motivated violence or recruitment to jihadist groups. regardless of the complexity of post-9/11 terrorist attacks, such as the multi-site attacks in paris orchestrated by a group, or the knife attack against two police officers in australia by one individual, the disruption strategies employed by law enforcement and national security intelligence agencies are also likely to be usefully employed in the bio-threat and risk context. just how useful the strategic and tactical disruption strategies used in conventional counter-terrorism will be in the bio-threat context depends on the nature of the intent and capability of the individual threat actor(s) and the risks posed by their actions.
the effectiveness of disruption strategies in the bio-threat context, as with conventional terrorist attacks, is contingent on a range of variables unique to each event. in the bio-threat context, leaving aside large levels of uncertainty about the future threat trajectory for bio-terrorism, effective disruption will rely on law enforcement and intelligence agencies understanding how the intention, capability and opportunities of threat actors operating in a particular environment make an attack possible. intention, capability and opportunities will differ along the threat continuum from individual to group and from state to non-state actor. for example, in the research facility, hospital or high containment laboratory environment, intention, capabilities and opportunities may be shaped by actors that are internal, external or indirectly involved in the facility (perman et al.). threats can also be, as perman et al. suggest, overt or clandestine (ibid.). in some cases, if a scientist is motivated (for religious, environmental or political reasons) to commit an act of violence using a biological agent, it may be easier to disrupt their activities if they are public about their agenda. however, in the case of a clandestine plan it could be very difficult to disrupt an attack launched externally or internally in a contained lab. nonetheless, as we saw with historical cases of lone actor threats such as the bruce ivins amerithrax incident, there are likely predicate steps in the process of carrying out an attack that can be revealed. similarly, in the lesser known case of dr. larry ford, who was suspected of murdering his business partner in a biotech company, police subsequently found a cache of weapons and white supremacist writings, and uncovered allegations that he had attempted to infect six mistresses with biological agents (perman et al.). again, even in cases of lone actors such as this, whose attack planning is more clandestine, there may well be an abundance of 'warning intelligence' that, if collected and assessed in time, might be useful in disrupting a planned lone actor attack. while it can be difficult to disrupt a lone actor plot, more elaborate plots by a group of conspirators could in some circumstances provide greater opportunities for interception and disruption by law enforcement and intelligence agencies. this is because in plots involving multiple actors there are more stages before the attack can be carried out. some stages, such as communications, procuring supplies and transport, also provide points of vulnerability where threat actors can be exposed to authorities and disrupted. so an external threat such as a terrorist attack against a high containment laboratory might involve communications amongst group members, financing of the plan, purchasing of explosives and surveillance of the facility's perimeter. each stage presents opportunities for disruption, provided intelligence and information are available to law enforcement and intelligence agencies. similarly, a theft of intellectual property or biological material from a private sector biotechnology company might result from either an external criminal group or a state actor pressuring or paying an employee to steal information on their behalf. again, intelligence may already exist about the criminal group or the compromised employee that provides opportunities for disruption.
in an ideal world, of course, it would be desirable if all potential bio-threat and risk scenarios could be prevented early in the intent stage, where they are mainly an idea in a perpetrator's head. pre-employment screening, including criminal checks and select agent risk assessments, will identify some individuals who are not suitable to access and work with dangerous biological agents. this will have an early disruptive effect, but it is not foolproof. people can lie about their circumstances in security suitability checks, allowing them to access a secure biological facility and plan malevolent acts rather than just think about them. once they are operating inside a facility, depending on the nature of the planned attack, it can be very difficult for law enforcement and the intelligence community to respond quickly enough to disrupt the attack before it is fully implemented. in all threat scenarios (simple to complex), in addition to the mandatory background checks for workers, each scientific institution needs to develop a full suite of threat assessments that can be updated regularly on different threat actors, including but not limited to visitors, criminals, lone actors (internal and external), terrorist and issue-motivated groups, international terrorist groups and foreign powers (perman et al.). these threat assessments should be developed by an institution's internal security department in collaboration with local law enforcement. the relatively low number of threat scenarios involving bio-agents that have taken place since 9/11 will likely mean that there are many intelligence gaps in assessing the intent, ability and opportunity of different threat types. however, providing baseline threat assessments will begin to build pictures of threat scenarios that should help promote better biosafety measures as well as opportunities to disrupt threats earlier should they begin to emerge. in summary, law enforcement and intelligence agencies working on the bio-threats and risks of the future can learn a lot from their counter terrorism colleagues. since 9/11, countering terrorism has continued to produce lessons for the law enforcement and intelligence communities on how to disrupt emerging terror plots more effectively before they are implemented. the knowledge gained from investigating conventional terrorist attacks that do not involve biology can help those working on future bio-threats and risks see how to optimise the legislative, intelligence, investigative and community response to terrorism, while also learning lessons from contemporary counter terrorism efforts. in particular, the increase in lone actor terrorist attacks in the west, often with little notice, underscores that the available intelligence is frequently either insufficient or of a type that cannot be revealed in court. in these cases, other tactical disruption strategies are gaining traction amongst 'five eyes' countries to mitigate the threat and harm posed by terrorists. similarly, given the complexity of threat scenarios that could arise from the exploitation of dual use biotechnology, it may be difficult in some cases to collect sufficient solid 'evidence', or to use bio-forensics to attribute an attack confidently enough for a conviction on bio-terrorism or bio-criminal activity. nonetheless, the various counter terrorism strategies discussed above point to ways threat actors may be disrupted on lesser offences while also providing a greater intelligence dividend on other individuals involved.
the final knowledge area and stakeholder group that intelligence agencies and investigators working with bio-threats and risks may learn more from is cyber security. as koblentz and mazanec suggest, biological and cyber weapons share many common characteristics, including but not limited to the difficulty of attribution and the way the same technologies can be used for offensive, defensive and civilian applications. the authors argue that because of these similarities the cyber field can likely learn a lot from how bio-threats have been managed historically. this is undoubtedly true, though in this section the focus will be the opposite: what can intelligence and investigative agencies working on bio-threats learn from the cyber threat and capability landscape? even a cursory review of the literature suggests that there are a number of areas where current cyber research and practice could inform the 'five eyes' intelligence communities' understanding of current and emerging bio-threats and risks. space does not allow an exhaustive discussion of all of them, but there are three cyber areas in particular that i believe those working with bio-threats and risks could benefit greatly from knowing more about, both to learn the lessons of the cyber context and to identify good intelligence and investigative practice. these areas are the dark web, cyber terrorism and cyber espionage. i will discuss each briefly in turn. turning first to the dark web, here we are referring to content on the internet that is 'not indexed by standard search engines' (weimann). much of the dark web is hidden or blocked and can only be accessed by specialised browsers. given the relative anonymity it provides, the dark web has seen the proliferation of child pornography, credit card fraud, identity theft, and drugs and arms trafficking, amongst other illicit offences. the dark web has only emerged in recent years, though law enforcement and intelligence agencies have made some inroads into its penetration and disruption. the fbi's shutdown of the dark web site silk road, which operated between february 2011 and october 2013, was to that point the largest and most sophisticated anonymous online marketplace for illicit drugs (zajácz). new technological solutions are also being developed to better identify, collect and analyse illicit activity on the dark web, including darpa's memex software, which helps catalogue dark web sites (weimann). nonetheless, all 'five eyes' intelligence communities will need to continue to develop their collection, analytical and investigative capabilities against dark web content, to profile various illicit marketplaces more accurately and orchestrate impactful disruption activity across multiple markets. although the extent to which illicit markets exist that could benefit bio-threat actors (criminals or terrorists) is unknown, at least in an unclassified sense, law enforcement and intelligence agencies with a watching brief on emerging bio-threats and risks should undoubtedly be exploiting the dark web more for disruption opportunities. a first step might be to map the bio-terrorism literature and identify the researchers who have access to bio-terrorism agents and disease research, along with their domains, institutions and countries, and emerging topics and trends in the field; a minimal sketch of this step follows below.
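to make this mapping step concrete, the following is a minimal python sketch of the kind of analysis discussed next; it assumes a corpus of publication records with author, institution, country, year and keyword fields has already been retrieved, and all record values shown are hypothetical placeholders rather than real data.

```python
from collections import Counter
from itertools import combinations

# hypothetical publication records: in practice these would come from a
# bibliographic database query or a collected open source / dark web corpus
records = [
    {"authors": ["a. researcher", "b. analyst"],
     "institution": "inst-x", "country": "country-a",
     "year": 2016, "keywords": ["anthrax", "aerosol", "detection"]},
    {"authors": ["b. analyst", "c. scientist"],
     "institution": "inst-y", "country": "country-b",
     "year": 2017, "keywords": ["synthetic biology", "dual use"]},
]

# productivity status: publication counts per institution and per country
by_institution = Counter(r["institution"] for r in records)
by_country = Counter(r["country"] for r in records)

# collaboration status: co-authorship pairs form the edges of a simple network
coauthor_edges = Counter(
    pair for r in records for pair in combinations(sorted(r["authors"]), 2)
)

# emerging topics: keyword counts per year surface rising terms over time
topic_trend = Counter((r["year"], kw) for r in records for kw in r["keywords"])

print(by_institution.most_common())
print(by_country.most_common())
print(coauthor_edges.most_common())
print(sorted(topic_trend.items()))
```

in practice such counts would feed network and trend visualisations across thousands of records; the point here is simply that productivity, collaboration and topic signals fall out of very simple aggregations once the corpus is assembled.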
chen shows how informatics research might make it possible to use knowledge mapping techniques to analyse productivity status, collaboration status and emerging topics in the bio-terrorism domain (chen). additionally, other intelligence and investigative teams that are working on non-bio threats such as conventional terrorist attacks, terrorism financing, drug trafficking or even child sexual exploitation may come across offenders who have links to others interested in exploiting dual use biological agents for malevolent objectives. so the work currently being done by intelligence agencies on broader cyber security issues such as cybercrime or cyber terrorism is directly relevant to improving collection and analysis against emerging bio-threats and risks. developments in the second area, cyber terrorism, provide another opportunity for bio-threat intelligence and investigative teams to learn from their colleagues working on cyber threats. in the past we have often thought of the classical 'bio-terrorism' attack as involving the aerosolising and dispersal of a dangerous pathogen like anthrax in a crowded place. this mode of attack may still be chosen in the future by a terrorist group (leaving aside for a minute the technical difficulties of such an attack), though acts committed through cyber means open up other options for a bio-attack. cyber security specialists' knowledge of cyber terrorism is still developing. we have seen, for example, groups like the taliban and islamic state increasingly use computers for recruitment, propaganda and communications, but it remains difficult to know empirically how many current virtual attacks, such as ransomware, can be attributed to terrorists, have led to deaths, or have impacted critical infrastructure in significant ways. such attacks could just as easily be attributed to cyber hackers (criminals) or state sponsored espionage, issues we will return to shortly (riglietti; bernard; heickerö). nonetheless, it is clear that terrorist groups are increasing their use of computers, including the dark web, given that they know intelligence communities are monitoring the surface internet and social media. in august 2014, al-aan tv reported that a laptop belonging to a tunisian member of islamic state, captured in syria, contained thousands of documents from the dark web, including pages on making biological weapons in ways intended to affect the largest possible number of people (weimann). there have also been cases where islamic state has carried out a series of cyber-attacks, 'exclusively computer based, which in one instance even led to the disclosure of private information regarding us government officials, from private conversations to work and email addresses' (riglietti). the final area of cyber security that is useful for bio-threat intelligence and investigative teams to reflect on relates to cyber hacks and espionage. putting hacks and espionage together is not meant to suggest that the two are always linked, though as russian interference in the 2016 us presidential election showed, they can be. china too is pursuing an increasingly sophisticated and aggressive cyber espionage strategy aimed at political interference and stealing intellectual property (inkster). there seems little doubt that the extent of hacking (unauthorised access to a computer or network) being perpetrated by state and non-state actors is on the rise, and network vulnerabilities across the civil and military space remain.
in a recent article, fbi assistant special agent in charge (chicago) todd carroll said that an unauthorised user can be inside a network for a very long time before being detected, 'a lifetime' in cyber terms. carroll went on to say that a large share of business owners do not have a dedicated employee or vendor monitoring for cyber-attacks (stone). we have also seen in recent years the growth of malware and ransomware attacks across the globe. for example, the 2017 wannacry ransomware attack caused hundreds of thousands of infections across some 150 countries, locking down banking, energy and manufacturing systems (schilling). the dark web also gives terrorist and criminal groups opportunities to run botnet campaigns anonymously, remotely operating networks of computers to attack other systems, including critical infrastructure. again, there is insufficient space to provide a full survey of all the cyber hacking and espionage threats, and indeed what to do about them is beyond the scope of this chapter (clarke and knake). nonetheless, hacking attacks, whether by state sponsored actors (espionage) or non-state actors (terrorists or criminals), provide another rich source of knowledge to be collected and assessed that can be used by those working on emerging bio-threats and risks. for example, it would seem unwise for bio-threat intelligence and investigative teams not to learn from the fast changing angles of cyber-attack, given that the physical security of biological institutions, their intellectual property and the kinds of biological products produced in such facilities all rely on secure cyber systems. we have seen in recent years the take-down of government websites and ransomware attacks on both government and private sector networks. more and more information is being shared and stored via the cloud. what would a major ransomware attack that locked down the entire bio-surveillance capability of a public health authority such as the cdc do to national health security? could a cybercriminal group infiltrate the network of a major biodefense company, steal ip and sell it to a terrorist group on the dark web? could research on the genetic sequences of pathogens, stored via the cloud on non-secure networks, be stolen by a terrorist group or state actor to engineer bio-weapons? (blue ribbon study project). in all three areas discussed above, a fuller development of links between those working in the cyber intelligence collection and analysis streams and those who might examine emerging bio-threats and risks is a necessary first step in bringing relevant knowledge and practice from cyber security to bio-threat stakeholders. in this final section, attention turns to two questions: what kinds of stakeholders play a role in treating bio-threats and risks? and, in performing these roles, how can they help the 'five eyes' intelligence communities build better capability (knowledge, practice and technology) for treating actual or emerging bio-threats and risks? as we have seen so far, the management of bio-threats and risks is potentially a crowded enterprise, with many stakeholders (beyond the intelligence communities) playing critical roles. in this section, i have grouped them into three 'types of stakeholder': first responders, science and technology stakeholders, and security stakeholders. these are not three distinct clusters of unique stakeholders that do not interact with each other.
depending on the nature of the bio-incident that has occurred, one would expect to see close interaction amongst the various knowledge brokers and practitioners from each group. for example, a release of a synthetically manufactured select agent in an airport should result in combined strategic and tactical contributions from first responders, engineers and security personnel, rather than each being delivered in isolation. an uncoordinated delivery of knowledge, practice and expertise from multiple stakeholders to treat an unfolding bio-threat/risk will not result in the best outcome for mitigating the risk or disrupting the future potential for similar threats to occur. again, as with previous sections, the focus here is not a deep exploration of the specific knowledge, practice or technology of all stakeholders potentially involved in the treatment of bio-risks; this would be an impossible task. instead, this section will explain briefly what each of the three broad stakeholder categories (first responders, science and technology, and security) can do to treat bio-risks (current or potential), and what intelligence communities can learn from this in ways that extend their capabilities to manage bio-threats and risks. the label 'first responders' is a descriptor for a much broader range of stakeholders, including fire/hazmat crews, paramedics, emergency responders, and health and hospital service providers. each would play a different role in both responding to and treating a bio-incident, depending on the type of biological hazard, their jurisdictional and legislative responsibilities and their fiscal capacity. in all 'five eyes' countries, with perhaps the exception of new zealand (with a smaller population and only one national government), the complexity of response will be particularly governed by the overlapping roles that various local, state and federal first responders might play. obviously in the us, with multiple federal, state and local agencies, the coordination of first responder efforts in a bio-incident presents more challenges than in other 'five eyes' countries such as australia and the uk, which have fewer agencies and jurisdictions. there is not an abundance of academic literature on the role of first responders in treating bio-threats and risks. this lack of evidence makes it difficult to assess accurately what first responders can do to treat bio-threats and risks, what the challenges are and what the intelligence community can learn from these important stakeholders. there is, however, some research available that can increase the intelligence communities' understanding of first responder capabilities to treat bio-threats and risks, as well as illuminate some of the challenges in doing so. this research should provide at least a starting point for what the intelligence community can learn from first responders as they deploy their knowledge and practice to disrupt and treat bio-threats and risks. 9/11 and the amerithrax incident provided a catalyst for law enforcement and public health agencies to work more closely together in responding to an unfolding threat. since amerithrax, further work has been done across the 'five eyes' countries to better coordinate the work of law enforcement and public health agencies on treating bio-threats and risks. but such efforts have not routinely involved the broader spectrum of national security intelligence agencies, which have tended to play a more strategic and ad hoc role compared to their law enforcement counterparts.
overall, policy, coordination and legislative efforts to bring first responders and members of the intelligence and law enforcement community together have had only mixed success, for a number of reasons. a study of how law enforcement and public health agencies in the us, canada, uk and ireland work together on bio-threat incidents identified several common barriers to improving multi-agency responses (strom and eyerman). these included cultural, legal, structural, communication and leadership barriers (ibid.). ten years on from strom and eyerman's research, other researchers have made similar observations about the ability of first responders to manage a bio-threat incident effectively and to work with the law enforcement and intelligence communities on such tasks. but it is not just the capability issues raised above; other research points to technical challenges in treating the impact of bio-threats and risks in the physical environment. for example, research by chemists and environmental engineers shows that, given the varying nature and strains of bacteria, the science of exposure assessment may not be able to provide a fully accurate picture of a building's vulnerability or resilience to a bio-attack, nor, in some cases, of whether first responders have effectively 'cleaned the environment up after exposure' (canter; taylor et al.). a lack of effectiveness in responding to a bio-threat incident in a local area can obviously have broader public sector implications for both the treatment and the preparedness of bio-risks. for example, gerstein, citing a study by the advocacy group trust for america's health, reported that many us states and the district of columbia scored poorly on its preparedness scale. additionally, since 9/11 major disease outbreaks such as sars and ebola have demonstrated the fragility of public health response capability in parts of the world, including in some 'five eyes' countries, which would remain a concern if there were a major bio-terrorist event. the blue ribbon study project report raised similar concerns about the capability of certain responders, including those local, state and federal agencies that might be involved in decontaminating sites following a bio-incident. in the us, the report raised coordination questions about which first responder agency would take the lead in decontaminating and remediating environments, and how other agencies would get involved to ensure the attack site was deemed safe for people to return (blue ribbon study project). one underlying theme arising from the studies mentioned on first responders' roles in treating bio-threats and risks is that the intelligence community must share more information with emergency services on the nature of the threat they are meant to respond to. this is not to suggest that no sharing is going on in the 'five eyes' countries; my selected interviews with law enforcement and intelligence officials in each country did not give that impression. however, it is clear that if local fire officers or emergency staff in a hospital are to respond better to a bio-incident, they will need regular, consistent, reliable, real-time information and intelligence. this is vital to them safely securing the scene, or rapidly diagnosing and treating infected patients, while also keeping themselves safe.
importantly too, the more intelligence first responders receive, the better placed they will be to preserve any relevant evidence from the scene that might be needed by either the law enforcement or intelligence communities. gerstein makes a valuable point about improving bio-preparedness and response activities when he suggests that first responders need to be seen as part of a complex system rather than as representing a series of separate programs (gerstein). in addition to the range of knowledge and practice the intelligence community can learn from first responders, arguably the biggest lesson it can learn is to seek to better understand the 'linkages among disparate disciplines (biodefense, public health, emergency management), government, industry, the scientific community and themselves to better support first responders' (ibid.). if the 'five eyes' intelligence communities were able to create the necessary national health security coordination arrangements suggested in an earlier chapter, such as the health security coordination council and the national health security strategy, then through these institutions further intelligence sharing mechanisms could be established to improve information flow between the intelligence communities and first responders at federal, state and local levels. first, however, further research is required into how law enforcement and intelligence communities currently work with first responders, in order to identify and, as far as possible, ameliorate the cultural, legal, communication and leadership barriers that persist. a second cluster of knowledge and stakeholders for treating bio-threats and risks could be loosely described as 'science and technology' stakeholders. in earlier sections, under the relevant headings (prevention and disruption), significant space was devoted to how our intelligence communities can learn from a range of stakeholders working across a diverse array of disciplines (including bio-surveillance, public health, biosafety, criminology, counter terrorism and cyber). in each of these disciplines, the discussion included exploration of relevant science, technology and knowledge useful for the intelligence community in preventing and disrupting bio-threats and risks. some of that discussion, for example on bio-surveillance, biosafety and strengthening global health, is also relevant to our focus here on treating bio-incidents. however, in this section the focus is not what the intelligence community can learn from stakeholders working in the above disciplines, but rather what it can learn from disciplines more removed from the biological sciences or the relevant social sciences (e.g. engineering or security studies). what can the intelligence community learn from physical, mechanical or environmental engineering? there are multiple roles engineering specialties could play, and are playing, in preventing, disrupting and treating bio-threats and risks. for one, historically the us dod has relied on engineers and microbiologists for advice on the weaponisation of biological agents under a range of scenarios and conditions (state actor and terrorist threats). for example, even before 9/11, dtra funded project bacchus to see whether a team of scientists and engineers who allegedly did not have extensive experience in bio-weapons could build a bio-weapon facility using just commercially available items.
the objective was to see whether the team could successfully make anthrax without detection by the intelligence community, though it was later revealed that the team did have substantive technical knowledge and support throughout the project (vogel). engineers have also long been engaged in studying aerosolisation dynamics, which has increasingly become a multi-disciplinary collaboration of environmental engineers, biomedical engineers, microbiologists, chemists and epidemiologists (xu et al.). related to aerosolisation studies has been the work of hardware and software engineers, many of whom came from the aerospace and automotive industries, who have brought their skills to modelling bio-terrorism attacks to help first responders predict how airborne particles might move through sections of a city under certain weather and windflow conditions (thilmany). other engineering studies, sometimes referred to as bio-protection studies, have been important in the design of heating, ventilation and air conditioning (hvac) systems used to resist biological contaminants. much of this research was activated by the amerithrax incident and is aimed at reducing the health consequences of airborne contaminants by augmenting heating and air conditioning systems (ginsberg and bui). another focus of engineering-led research is improving the portability, speed and reliability of bioaerosol monitors for pathogens. one recent study has been working on a device that would be fully portable and automated, capable of detecting selected airborne microorganisms on the spot within minutes, depending on the genome and particular strain of the organism (agranovski et al.).
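to give a feel for the dispersion modelling mentioned above, the sketch below implements the textbook steady-state gaussian plume estimate of downwind concentration. it is an illustrative simplification (constant wind, flat terrain, hand-supplied dispersion coefficients), not the multi-physics software such teams actually use, and all numbers in it are assumed placeholders.

```python
import math

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """steady-state gaussian plume concentration (g/m^3) at a receptor
    at crosswind offset y and height z, downwind of a continuous point source.
    q: emission rate (g/s); u: mean wind speed (m/s); h: release height (m);
    sigma_y, sigma_z: dispersion coefficients (m) at the receptor's downwind
    distance (real models derive these from atmospheric stability classes)."""
    lateral = math.exp(-y ** 2 / (2 * sigma_y ** 2))
    # the second exponential is the ground-reflection term: material that
    # would diffuse below ground level is folded back into the plume
    vertical = (math.exp(-((z - h) ** 2) / (2 * sigma_z ** 2))
                + math.exp(-((z + h) ** 2) / (2 * sigma_z ** 2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# illustrative numbers only: a 1 g/s release at 10 m height, 3 m/s wind,
# receptor at head height on the plume centreline; the dispersion
# coefficients are placeholder values, not taken from real stability tables
c = plume_concentration(q=1.0, u=3.0, y=0.0, z=1.5,
                        h=10.0, sigma_y=35.0, sigma_z=18.0)
print(f"estimated concentration: {c:.2e} g/m^3")
```

real tools layer urban geometry, time-varying weather and particle behaviour on top of this kind of kernel, which is why aerospace and automotive modelling skills transfer so readily to the bio-terrorism context.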
in this last sub-section of our exploration of which other stakeholders may be useful in treating bio-threats and risks, we turn our attention to the role of security officers. i am conscious that in the discussion above on prevention and biosafety much was said about the role of security officers and managers in promoting biosecurity and biosafety across all sectors of the biosciences enterprise (e.g. research centres, hospitals, biotechnology companies, public and private labs). in this section, we focus instead on the role of security officers and managers across the broader economy, beyond the biosciences. as argued in previous chapters, in addition to taking a one health perspective on bio-threats and risks, 'five eyes' intelligence communities and their law enforcement colleagues also need to understand the potential development of bio-threats and risks beyond the technical world of biotechnology and labs, in their wider social, economic and community contexts. hence in this section we are referring to the role of security officers and companies that work across the international, national, state and local economies in each 'five eyes' country. given that the trajectory of most (if not all) future bio-threats is unknown, our intelligence communities need to forge more formalised (less ad hoc) relationships with security officers in a range of non-biotechnology industries (banking, mining, food supply, agriculture, critical infrastructure). as nalla and wakefield argue, several factors have increased the role of private security since the second world war. increased economic wealth, enhanced security technology (alarms, access control and cctv) and growing control of publicly accessible places by private sector companies have, amongst other factors, all contributed to the growth in private sector security (ibid.). while it is difficult to generalise, 'as the functions of security officers/agencies are as varied as the organisations that employ them' (ibid.), their functions and roles cut across many facets of each 'five eyes' nation, including office buildings, warehouses, shopping malls, education establishments, residential complexes and critical infrastructure. one often thinks of the classic scenario of a security guard standing in front of a physical gate, but this is one role among many; depending on the organisation, security functions might also include traffic control, surveillance, responding to emergencies and security vetting. in large complex organisations such as major companies, airports and electricity plants, security officers are likely to have a deep understanding of their physical and virtual security environments, and this kind of expert knowledge would be integral to them and the intelligence community achieving threat awareness, prevention, surveillance, disruption, treatment and recovery in relation to bio-threats and risks that may manifest in their operating environment. historically, however, the relationship between intelligence communities (including law enforcement) and private sector security has not been optimal, partly because of a lack of trust between the two (ibid.). several studies on private and public sector security do, however, show areas of improvement across each 'five eyes' country. some of these improvements have been initiated by governments, such as in the uk, where significant cuts to policing led the government to ask the private security sector to pick up more cheaply what were considered less core policing functions, such as offender management and prisoner transfers. in other cases, governments were interested in engaging with the private sector to extend their own security and intelligence collection capabilities against terrorism. connors et al., wakefield and rigakos provide more detailed analyses of the factors involved in building partnerships with private sector security companies in the us, uk and canada respectively. 9/11, and of course subsequent terrorist attacks in many western countries, has seen a more focused attempt by 'five eyes' countries to reach out to the private sector, including private sector security, given that many attacks occur in public places owned or managed by the private sector. threats to publicly and privately owned critical infrastructure (aviation, power, water and telecommunications) have also pushed 'five eyes' governments towards closer liaison with the private sector. for example, in the us, dhs has established a private sector office to provide government advice on relevant security issues to the private sector as well as promoting public-private partnerships. in australia, since 9/11 parts of the intelligence community, particularly asio, have developed closer links with the private sector. australia's attorney-general's department created the business-government advisory group on national security to provide a vehicle for the government to discuss a range of national security issues and initiatives with ceos and senior business leaders (dpm&c).
the group later evolved into the australian government's industry consultation on national security (ibid.). more recently, the australian government released its strategy for protecting crowded places from terrorism. this significant policy document was developed in close partnership with federal, state and local governments, the intelligence community and the private sector, the key objective being to assist owners and operators to increase the safety, protection and resilience of crowded places across australia (anzctc). an interesting aspect of this strategy is that it places the primary responsibility for protecting sites and people on private sector businesses. similar policy articulations have been declared in the uk's counter-terrorism strategy (hmg) and canada's approach to counter-terrorism (canadian government). in summary, it is clear that various agencies of the 'five eyes' intelligence communities and their broader law enforcement counterparts have increased their liaison and implemented various initiatives with private sector industry. what is less clear is the nature and extent of these as they relate to the prevention, disruption and treatment of potential bio-threats and risks. much is unknown, for example, about whether intelligence and law enforcement communities are actively working in partnership with the private sector beyond the classical threat typologies of basic terrorist tactics, improvised explosive devices or vehicle-borne attacks. given the low-probability, high-impact nature of the evolving bio-threat environment, it is likely that many private sector companies (banking, shopping malls, mining, hotels) see little need to include bio-threats in their security risk management plans, or indeed to consult with intelligence and law enforcement communities on them. while it is important not to be alarmist about low-probability threats that are, on balance, more likely to affect the biosciences community than the broader private sector economy, it seems unwise for the latter not to consider the impact of such bio-threats on its operations and at least have formalised dialogues about them with the intelligence community. but such a dialogue will rely on several factors already identified by researchers as underpinning more effective public-private crime prevention strategies. prenzler and sarre list several, including a common interest in reducing a specific crime, leadership, mutual respect, information sharing based on high levels of trust in confidentiality, and formalised mechanisms for consultation and communication (prenzler and sarre). this chapter has surveyed the role of stakeholders external to the 'five eyes' intelligence communities in preventing, disrupting and treating bio-threats and risks. depending on the particular bio-threat, a diverse array of stakeholders could provide knowledge, skills and capabilities to the intelligence community. the large number of disciplines and stakeholders with relevant technical knowledge suggests that they will continue to play a critical role in the prevention, disruption and treatment of bio-threats and risks. in many cases, such as bio-surveillance, forensics and even engineering, the scientific and technical stakeholders discussed here may play a greater role than the traditional intelligence and investigative response in managing bio-threats and risks.
the chapter also highlighted that although each 'five eyes' intelligence community has a wealth of stakeholder knowledge to tap into, in most cases stakeholder groups face their own theoretical and practical limitations. analysts and investigators working on bio-threats and risks need to understand these limitations while also seeking to build deeper and more formalised partnerships with scientific, technical and cross-disciplinary stakeholders. in the final chapter, we shift the focus away from the practice and processes involved in interpreting bio-threats and risks to oversight and accountability issues. given the legislative, ethical and normative challenges modern intelligence practice faces, particularly in understanding the potential threat trajectory of synthetic biology, what role can oversight and accountability play in achieving the objectives of the intelligence communities in liberal democracies?
miniature pcr based portable bioaerosol monitor development
australia's strategy for protecting crowded places from terrorism. anzctc, australian government
biological weapons-related common control lists
biosafety and biosecurity as essential pillars of international health security and cross-cutting elements of biological non-proliferation
biosecurity in research laboratories
blue ribbon study panel on biodefense. a national blueprint for biodefense: leadership and major reform needed to optimise efforts
biodefense special focus: defense of animal agriculture
cdc working group. framework for evaluating public health surveillance systems for early detection of outbreaks: recommendations from the cdc working group
insider threats
us military says it mistakenly shipped live anthrax sample
building resilience against terrorism: canada's counter terrorism strategy. ottawa: government of canada
biosecurity imperative: an urgent case for extending the global health security agenda
addressing residual risk issues at anthrax clean up: how clean is safe?
90-day internal review of the division of select agents and toxins
report of the advisory committee to the director
bioterrorism and knowledge mapping
dark web: exploring and data mining the dark side of the web
the traditional tools of biological arms control and disarmament
cyber war
the politics of surveillance and response to disease outbreaks
operation cooperation: guidelines for partnerships between law enforcement and private security organisations
understanding the lone wolf phenomena: assessing current profiles
fda found more than smallpox vials in storage room
more deadly pathogens, toxins found improperly stored in nih and fda labs
marburg biosafety and biosecurity scale (mbbs): a framework for risk assessment and risk communication
review of australia's counter terrorism machinery. department of prime minister and cabinet
beyond the ebola battle: winning the war against future epidemics
iarpa director jason matheny advances tech tools for us espionage
high containment laboratories: national strategy for oversight is needed
biosurveillance: nonfederal capabilities should be considered in creating a national biosurveillance strategy
high containment laboratories: assessment of the nation's need is missing. testimony before the subcommittee on emergency preparedness, response and communications
biosurveillance: observations on the cancellation of biowatch gen-3 and future considerations for the program. testimony before the subcommittee on emergency preparedness, response and communications, committee on homeland security, house of representatives
gao. high containment labs: coordinated actions needed to enhance the select agent program's oversight of hazardous pathogens
predicting virus emergencies and evolutionary noise
the biological and toxin weapons convention. national security and arms control in the age of biotechnology
glaring gaps: america needs a biodefense upgrade
the neglected dimension of global security: a framework to counter infectious disease crises
bio protection of facilities
national biosafety systems
risk and benefit analysis of gain of function research: final report
cyber terrorism: electronic jihad. strategic analysis
global health security: the wider lessons from the west african ebola virus disease outbreak
contest: the uk's strategy for countering terrorism. london: her majesty's government
cyber espionage. adelphi series
from detection to disruption: intelligence and the changing logic of police crime control in the uk
a disruptive influence? preventing problems and countering violent extremism policy in practice
government biosurveillance to include social media
implementation of the international health regulations (2005) in the african region
advances in anthrax detection: overview of bioprobes and biosensors
living weapons
viral warfare: the security implications of cyber and biological weapons
biological weapon convention
ebola response impact on public health programs
cdc's response to the 2014-2016 ebola epidemic: west africa and the united states. mmwr supplement
the proliferation security initiative: a model for future international collaboration
biosafety in the balance
digital disease detection: a systematic review of event-based internet biosurveillance systems
implementing the global health security agenda: lessons from the global health and security programs
basic principles of threat assessment
the role of partnerships. security management
the research impact handbook
top us intel official calls gene editing a wmd threat
the para police
defining the threat: what cyber terrorism means today and what it could mean tomorrow
signale: early warning system
laboratory biorisk management: biosafety and biosecurity
revealed: safety breaches at uk labs handling potentially deadly diseases. the guardian
ransomware: how to face the threat
cdc probe of h5n1 cross contamination reveals protocol lapses, reporting delays
pandemic readiness review says $4.5 billion a year needed
secretary tillerson lauds global health security agenda
disruptive innovation can prevent the next pandemic
natural or deliberate outbreak in pakistan: how to prevent or detect and trace its origin: biosecurity, surveillance, forensics
army probe of anthrax scandal raises more red flags
physical elements of biosecurity
who isn't equipped for a pandemic or bioterror attack? the who
lessons from the anthrax letters
the week in fintech: fbi agent says cybersecurity practices need to change
interagency coordination in response to terrorism: promising practices and barriers identified in four countries
looking at the formulation of national biosecurity education action plans
the role of protection measures and their interaction in determining building vulnerability and resilience
harm's way: engineering software and micro technology prepare the defense against bioterrorism
phantom menace or looming danger?
selling security: the private policing of public space
intelligence and intelligence analysis
australian national security intelligence collection since 9/11: policy and legislative challenges
going dark: terrorism on the dark web
ebola virus disease in west africa: the first nine months of the epidemic and forward projections
signal recognition during the emergence of pandemic influenza type a/h1n1: a commercial disease intelligence unit's perspective. intelligence and national security
utility and potential of rapid epidemic intelligence from internet-based sources
labs cited for 'serious' security failures in research with bioterror germs
silk road: the market beyond the reach of the state
key: cord- -khhzlt y authors: jain, aditya; leka, stavroula; zwetsloot, gerard i. j. m. title: work, health, safety and well-being: current state of the art date: - - journal: managing health, safety and well-being doi: . / - - - - _ sha: doc_id: cord_uid: khhzlt y
this introductory chapter will present a review of the current state of the art in relation to employee health, safety and well-being (hsw). the work environment and the nature of work itself are both important influences on hsw. a substantial part of the general morbidity of the population is related to work. it is estimated that workers suffer millions of occupational accidents and millions of cases of occupational disease each year. the chapter will first define hsw. it will then review the current state of the art by outlining key hsw issues in the contemporary world of work, identifying key needs. it will then discuss the evolution of key theoretical perspectives in this area by linking theory to practice and highlighting the need for aligning perspectives and integrating approaches to managing hsw in the workplace. this chapter focuses on the relationship between work, health, safety and well-being. the work environment and the nature of work itself are both important influences on health, safety and well-being (hsw). as a result, workplace health and safety, or occupational health and safety, have been key areas of concern for many years. traditionally, more focus has been placed on safety concerns in the workplace, while health concerns became more prominent with the changing nature of work. well-being, on the other hand, has increasingly been considered in relation to work and the workplace in recent years. a good starting point in understanding this evolution in focus and thinking is definitions. according to the oxford dictionary, safety is defined as the condition of being safe; freedom from danger, risk, or injury. safety can also refer to the control of recognized hazards in order to achieve an acceptable level of risk. in terms of work, this mainly concerns physical aspects of the work environment. however, the changing nature of work was associated with the emergence of new types of risk relating to psychological and social aspects of the work environment. this brought about greater focus on health at work.
a very influential definition that shaped thinking and action in subsequent years was the world health organization definition of health as a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity (world health organization [who]). this definition promoted a more holistic view of health, away from a mere focus on physical aspects towards considering social and mental health aspects. although the who definition already referred to a state of well-being, definitions of well-being include additional dimensions beyond health, such as the social, economic, psychological and spiritual. well-being refers to a good or satisfactory condition of existence; a state characterized by health, happiness and prosperity. obviously, achieving this state does not depend on the workplace or work alone but rather reflects an overall evaluation of one's life across many areas. as such, actions to improve hsw can be taken within the work context and outside of it. actions taken in the workplace represent workplace interventions that are implemented in the work setting and consider the characteristics of work environments and workers. on the other hand, actions taken outside the workplace represent public health interventions that are implemented in various settings (for example, in schools, communities or countries) and take into consideration the characteristics of particular populations. a key question in terms of hsw interventions when it comes to the workplace concerns responsibility. while every individual is responsible for their own actions in various contexts of life, in a specific setting like the work environment additional responsibility lies with the employer, since the work environment will expose workers to particular work characteristics that might in turn pose a certain level of risk to their hsw. while employer responsibility might be formalized under law, this is not the case across all countries or in relation to all possible types of risks to workers' hsw, in particular new and emerging risks, those that are either new or gaining in prevalence with the changing nature of work. accordingly, it is important to consider not only the legal duties that employers have towards their workforce but also ethical duties that extend beyond legal compliance. in addition, while employers bear a legal responsibility towards their workforce, they also bear responsibility towards society. this has meant that enterprises have increasingly been held accountable to society, and that interventions in the workplace, whether legally required or not, are now increasingly considered in terms of their impact not on the workforce alone but on society as a whole (see later chapters). this represents a blurring of boundaries between traditional occupational safety and health and public health initiatives, which has also resulted in greater emphasis on the concept of well-being in addition to health and safety. at its first session, the joint international labour organization (ilo)/world health organization (who) committee on occupational health defined the purpose of occupational health.
it revised the definition at a later session to read as follows: occupational safety and health should aim at: the promotion and maintenance of the highest degree of physical, mental and social well-being of workers in all occupations; the prevention amongst workers of departures from health caused by their working conditions; the protection of workers in their employment from risks resulting from factors adverse to health; the placing and maintenance of the worker in an occupational environment adapted to his physiological and psychological capabilities; and, to summarize, the adaptation of work to man and of each man to his job. decades later, the target set through this declaration still seems ambitious in many parts of the world, in both developed and developing countries. to understand why, it is worth understanding the context underpinning developments in this area as well as current priorities and needs. in recent years, the globalization of the world's economies and its repercussions have been perceived as the greatest force for change in the world of work, and consequently in the scope of occupational safety and health, in both positive and negative ways. liberalization of world trade, rapid technological progress, significant developments in transport and communication, shifting patterns of employment, changes in work organization practices, the different employment patterns of men and women, and the size, structure and life cycles of enterprises and of new technologies can all generate new types and patterns of hazards, exposures and risks. demographic changes and population movements, and the consequent pressures on the global environment, can also affect safety and health in the world of work. let us first consider key impacts on the changing nature of the work environment. different types of products and services, organizational structures and work processes, and tools and resources are used in the modern workplace. three main drivers have been proposed in relation to these changes. the first is globalization, a term which refers to the integration of national and regional economies, which has become more prevalent since the nineteenth century. according to the organization for economic co-operation and development (oecd), the rapid integration into world markets of six economies (brazil, russia, india, indonesia, china and south africa) was an important component of globalization during the past decades. globalization has led to increased competition across organizations, to a shift in the type of business operations in which companies are engaged, and to extensive outsourcing of activities, primarily to low-wage countries. flanagan examined the effects of globalization on working conditions (hours, remuneration and safety) and concluded that globalization has led to greater flexibility of the work process, with more part-time employment, temporary employment and independent contracting of staff (european agency for safety & health at work [eu-osha]; kawachi). houtman and van den bossche confirmed these conclusions on the basis of eurostat data, reporting that more employees in europe hold temporary employment contracts and yet more people work 'on call'. oecd reports also confirm these trends. they further highlight that average wage growth has not kept pace with growth in labour productivity, which is also an outcome of the erosion of workers' bargaining power in the process of globalization (oecd).
organizational restructuring, which has been on the increase due to economic crises in different parts of the world, may partly account for this. organizational restructuring is accompanied by job insecurity and can result in unemployment, with subsequent negative impacts on hsw. however, restructuring should not only be considered a serious threat to the hsw of those who lose their job (the 'direct victims') but also to their immediate environment (e.g. kieselbach et al.). in addition, evidence from the past two decades showcases the impact of restructuring on the so-called 'survivors' as regards health, well-being, productivity and organizational commitment (kieselbach et al.). the second key development is the tertiarization of the labour market, manifested in increased demand for staff in the services sector and reduced employment opportunities in industry and agriculture. this became apparent in the early years of the twentieth century but in recent decades may have been reinforced by globalization, since the outsourcing of manual labour to low-wage countries left predominantly the service economy elsewhere (eu-osha; peña-casas & pochet). the third key development relates to technological advancement and the emergence of the internet, which has led to many changes and innovations in work processes. many forms of manual work have become obsolete and staff must offer different skills and qualifications (joling & kraan). moreover, 'new work', a term which, amongst other things, refers to telework, i.e. working from home or a location other than the traditional office, is now more widespread. this can blur the borders between working and private life. work can take place outside traditional working hours as well as at home or when travelling. hence, it may impinge on the need for rest and recuperation, or interfere with personal commitments. new working methods have also been introduced, such as lean production (a production practice according to which the expenditure of resources other than for the creation of value for the end customer is wasteful and should be eliminated; womack & jones) and just-in-time production (a production strategy that strives to improve a business's return on investment by reducing in-process inventory and associated costs; womack & jones) (eu-osha; kompier). overall, there has been concern about the effects new forms of work may have on the hsw of workers, organizations and communities (e.g. benach, amable, muntaner, & benavides; benavides, benach, diez-roux, & roman; quinlan; quinlan, mayhew, & bohle; sauter et al.; virtanen et al.). it is also important to mention the prevalence of small and medium-sized enterprises (smes), which are believed to be responsible for the majority of new jobs created globally. moreover, in most developing and emerging countries, they also employ more people than large enterprises do. however, occupational safety and health (osh) is often less well managed in smes, creating working conditions that are less safe and posing greater risks to the health of workers than in larger enterprises (croucher, stumbitz, quinlan, & vickers). in particular, smes have less time to devote to providing osh training and information due to economies of scale, and have less expertise in hsw.
research also confirms a common lack of awareness of the cost implications of occupational accidents and diseases amongst sme owners and managers, as well as a tendency for smes to be reactive, rather than adopting proactive and preventive strategies towards osh (croucher et al., ). however, there are also changes in the workforce itself that are associated with hsw in the workplace. the next section considers the most important of these.

alongside the factors changing the nature of work itself, changes can also be seen in the working population, with noteworthy trends being: (a) the ageing workforce; (b) the feminization of the workforce; and (c) increased immigration (leka, cox, & zwetsloot, ). let us now consider these issues in more detail.

in industrialized countries, the share of people aged -plus has risen from % in to % and is expected to reach % ( million) by . in developing countries, the share of people aged -plus has risen from % in to % and is expected to reach % ( . billion) by (world economic forum [wef], ). the global population is projected to increase . times from to , but the number of -plus will increase by nearly %, and the -plus by about %. women have a life expectancy of . years more than men and account for about % of the -plus group, rising to % of the -plus group, and % of the -plus group (wef, ). in response to these global trends, four strategies have been proposed: raising the normal legal retirement age; using international migration to ameliorate the economic effects of population ageing; reforming health systems to place more emphasis on disease prevention and health promotion; and rethinking business practices, encouraging businesses to employ more older workers, even on a part-time basis (wef, ). according to the oecd ( ) most countries will have a retirement age for both men and women of at least years by , and this has already been implemented in many countries. this represents an increase from current levels of around . years on average for men and . years on average for women. the same report stresses that high levels of youth unemployment will lead to widespread poverty in old age as young people struggle to save for retirement.

since population ageing in industrialized nations has been a prevalent trend in the past decades (ilmarinen, ), lessons can be learned from it in relation to the workforce. most reviews and meta-analyses in the scientific literature make clear that there is no consistent effect of age on work performance (e.g., benjamin & wilson, ; griffiths, ; salthouse & maurer, ). overall, older workers perform as well as younger workers. furthermore, there are many positive findings with regard to older workers. for example, older workers demonstrate less turnover and more positive work values than younger workers (warr, ). they also exhibit more positive attitudes to safety and fewer occupational injuries (siu, phillips, & leung, ), although there is some evidence that it is tenure (time on the job) that should be examined rather than age per se (breslin & smith, ). however, the evidence from epidemiological and laboratory-based studies paints a less favourable picture of older people's performance. such studies reveal age-related declines in cognitive abilities such as working memory capacity, attention capacity, novel problem-solving, and information processing speed.
age-related deterioration is also documented in motor-response generation, selecting target information from complex displays, visual and auditory abilities, balance, joint mobility, aerobic capacity and endurance (kowalski-trakofler, steiner, & schwerha, ). as workers get older, they suffer from more musculoskeletal disorders (eurostat, ), and they are more likely to report work-related stress (griffiths, ).

recent models of ageing and work propose that certain mediating factors underpin the relationship between chronological age, work performance and behaviour and might function at three levels: individual, organizational and societal. at the individual level, for example, experience, job knowledge, abilities, skills, disposition, and motivation may operate (kanfer & ackerman, ). other mediating variables may reflect organizational policies and practices: for example, age awareness programmes, supervisor and peer attitudes, management style, the physical work environment and equipment, health promotion, workplace adjustments, and learning and development opportunities (griffiths, ). however, policies and systems implemented so far have, in most countries, not been adequately successful in keeping people healthier and in employment for longer (oecd, ).

a further level of exploration for the relationship between age and work performance might be provided by examining global markets, the wider employment context and worker protection (johnstone, quinlan, & walters, ; quinlan, ). as discussed, in developed countries there has been a decline in manufacturing and a recent export of some service sector work to developing countries. the way work is designed and organized has changed substantially, with a growth in contingent or 'precarious' work and an increase in part-time work, home-based work, telework, multiple job-holding and unpaid overtime. these changes might make it increasingly difficult for older workers to gain or maintain employment, and such employment may entail inferior and unhealthy working conditions. these changes in work design and management have also been accompanied by changes in worker protection; for example, a decline in union density and collective bargaining, some erosion in workers' compensation and public health infrastructure, and cutbacks in both disability and unemployment benefits; again, these are contexts which are unlikely to favour vulnerable workers, such as older workers. as such, older workers may be affected by increased exposure to certain occupational hazards; decreased opportunities to gain new knowledge and develop new skills; less support from supervisors; and discrimination in terms of selection, career development, learning opportunities and redundancy (chiu, chan, snape, & redman, ; maurer, ; molinie, ).

pronounced gender differences in employment patterns can be observed as a result of a highly segregated labour market based on gender (burchell, fagan, o'brien, & smith, ; fagan & burchell, ; vogel, ). gender segregation refers to the pattern in which one gender is under-represented in some jobs and over-represented in others, relative to their percentage share of total employment (fagan & burchell, ). a growing body of evidence indicates that a high level of gender segregation is a persistent feature of the employment structure globally (e.g. anker, ; burchell et al., ; rubery, smith, & fagan, ).
some scholars have argued that gender segregation in the labour market is so pervasive that, in order to rectify this imbalance, approximately % of women would have to change jobs or professions (messing, ). considering differences in employment patterns according to gender (and without taking into account sectors where both genders are represented, e.g. agriculture), women's jobs typically involve caring, nurturing and service activities for people, whilst men tend to be concentrated in managerial positions and in manual and technical jobs associated with machinery or physical products. since men and women are differently concentrated in certain occupations and sectors, with different aspects of job content and associated tasks, they are exposed to a different profile of work-related risks (burchell et al., ; eu-osha, ). for example, women are more frequently exposed to emotionally demanding work, and work in low-status occupations with often restricted autonomy, as compared to men. this differential exposure can result in differential impacts on occupational ill health for men and women (eu-osha, ; oecd, ). furthermore, due to the gender division of labour, women and men play different roles in relation to children, families and communities, with implications for their health (premji, ). even though women are increasingly joining the paid workforce, in most societies they continue to be mainly responsible for domestic, unpaid work such as cooking, cleaning and caring for children, and so they carry a triple burden (e.g. loewenson, ). women are also largely represented among unpaid contributing family workers, those who work in a business establishment for a relative who lives in the same household as they do (ilo, ). balancing responsibilities for paid and unpaid work often leads to stress, depression and fatigue (duxbury & higgins, ; manuh, ), and can be particularly problematic when income is low and social services and support are lacking. the lack of availability of child care may also mean that women must take their children to work, where they may be exposed to hazardous environments.

increased migration of workers from developing countries to developed countries, or from poorer to more affluent developed countries, remains the norm and is increasing. migrant workers can be divided into highly-educated and skilled workers, both from developing and industrialized countries, and unskilled workers from developing countries (takala & hämäläinen, ). they can also be classified as legal and illegal (or regular and irregular) migrant workers, who have different statuses and, therefore, varying levels of access to basic social services (who, ). often low-skilled and seasonal workers are concentrated in sectors and occupations with a high level of occupational health and safety risks (who, ). ethnic minority migrants have been found to have different conditions in comparison to other migrants, and to report lower levels of psychological well-being (shields & price, ). women migrants represent nearly half of the total migrants in the world and their proportion is growing, especially in asia. they often work as domestic workers or caregivers while men often work as agricultural or construction workers (ilo, ).
in general, migrant workers tend to be employed in high risk sectors, receive little work-related training and information, face language and cultural barriers, lack protection under the destination country's labour laws and experience difficulties in adequately accessing and using health services. common stressors include being away from friends and family, rigid work demands, unpredictable work and having to put up with existing conditions (magana & hovey, ). in addition, migrant workers' cultural background, anthropometrics and training may differ from those of nationals of host countries, which may have implications in relation to their understanding and use of equipment (kogi, ; o'neill, ).

as can be seen so far, both the nature of work and of workplaces as well as workforce characteristics depend on wider socioeconomic and political influences. a large body of literature has summarized and examined these influences under the area of the social determinants of health. the following section briefly considers these determinants.

new forms of work organization and employment have to be considered within the wider picture of employment and working conditions across the world. labour markets and social policies determine employment conditions such as precarious or informal jobs, child labour or slavery, or problems such as high insecurity, low-paid jobs, or working in hazardous conditions, all of which heavily influence health inequalities. figure . shows various interrelationships between employment, working conditions and health inequalities.

let us consider unemployment and associated job insecurity as social determinants of health. in the ilo estimated that there were almost million unemployed people in the eu, million of whom were from eu- countries. overall, million people were unemployed in , with a quarter of the increase of four million in global unemployment being in the advanced economies, and three quarters in other regions, with marked effects in east asia, south asia and sub-saharan africa (ilo, a). the same report also highlighted that in those regions where unemployment did not increase further, job quality worsened as vulnerable employment and the number of workers living below or very near the poverty line increased. in the eu, the financial crisis resulted in unprecedented levels of youth unemployment, averaging % for the eu as a whole. the rates for young people (aged - ) not in employment, education or training are . % in the south and peripheral eu countries, and . % in the north and core of the eu (european commission [ec], ). in a pattern intensified by the financial crisis, structural unemployment has been growing, and unemployment varies from . % in the south of the eu and peripheries in , to . % in the north and central countries (ec, ). a large proportion of the jobs destroyed were in mid-paid manufacturing and construction occupations (european foundation for the improvement of living & working conditions [eurofound], ). as a consequence of reduced employment opportunities, poverty has increased in the eu since . household incomes are declining and . % of the eu population is now at risk of poverty or exclusion. children are particularly affected as unemployment and jobless households have increased, together with in-work poverty (ec, ). this has implications for quality of life and general population health beyond workplace health and safety due to the impact on personal finances.
an ilo report summarized the potential impact of financial crises on organizations and health and safety as shown in table . . the surge of unemployment creates tension and negatively impacts public perceptions of social welfare, job security, and financial stability. increased job insecurity reflects the fear of job loss or the loss of the benefits associated with the job (e.g. health insurance benefits, salary reductions, not being promoted, changes in workload or work schedule). it is one of the major consequences of today's turbulent economies and is common across occupations and among both private and public-sector employees (ashford, lee, & bobko, ; ferrie et al., ; sverke, hellgren, & naswall, ). several studies have shown that job insecurity has detrimental effects on the physical and mental health of employees, and on many organizational outcomes, including performance, job satisfaction, counterproductive behaviours, and commitment (e.g. ferrie et al., ; sverke et al., ).

increased unemployment has given rise to different forms of flexible and temporary employment, also through the introduction of relevant policies such as flexicurity. flexicurity is an integrated strategy for enhancing flexibility and security in the labour market. it attempts to reconcile employers' need for a flexible workforce with workers' need for security (ec, ). however, several studies have warned of the possible negative outcomes of new types of work arrangements, highlighting that they could be as dangerous as unemployment for workers' health (benach & muntaner, ). for example, workers on fixed-term contracts are commonly found to have inadequate working conditions by comparison with permanent employees.

new forms of work organization and patterns of employment can be summarized in terms of flexible working practices including temporary and part-time employment, tele-working, precarious employment, and home working. although these new practices can result in positive outcomes such as more flexibility, a better work-life balance, and increased productivity, research has also identified several potential negative outcomes. for example, teleworkers may feel isolated, lacking support and career progression (e.g. ertel, pech, & ullsperger, ; schultz & edington, ). in addition, temporary, part-time and precarious employment can result in higher job demands, job insecurity, lower control and an increased likelihood of labour force exit (benach et al., ; quinlan, ; quinlan et al., ). workers engaged in insecure and flexible contracts with unpredictable hours and volumes of work are more likely to suffer occupational injuries (ilo, a, b). although awareness and evidence in developing countries lag far behind those in the industrialized world, evidence has started to accumulate showing similar findings in developing countries (kortum, leka, & cox, ).

these various complex relationships between the wider socio-economic context, employment and working conditions have resulted in a more complex profile of risk factors that may affect hsw in the workplace. new forms of work organization and the move towards a service based economy have also resulted in new and emerging risks affecting the workforce, organizations and society. these will be considered next. an 'emerging osh risk' is often defined as any occupational risk that is both new and increasing (eu-osha, ).
new means that the risk was previously unknown and is caused by new processes, new technologies, new types of workplaces, or social or organizational change; or that a long-standing issue is newly considered to be a risk due to changes in social or public perceptions; or that new scientific knowledge allows a long-standing issue to be identified as a risk. a risk is increasing if the number of hazards leading to the risk is growing; or the likelihood of exposure to the hazard leading to the risk is increasing (exposure level and/or the number of people exposed); or the effect of the hazard on workers' health is getting worse (seriousness of health effects and/or the number of people affected) (houtman, douwes, zondervan, & jongen, ). an article published on eu-osha's osh wiki on new and emerging risks summarizes them as follows (houtman et al., ):

• emerging physical risks: ( ) physical inactivity and ( ) the combined exposure to a mixture of environmental stressors that increase the risks of musculoskeletal disorders (msds), the leading cause of sickness absence and work disability.

• emerging psychosocial risks: ( ) job insecurity, ( ) work intensification, high demands at work, and ( ) emotional demands, including violence, harassment and bullying.

• emerging dangerous substances due to technological innovation: ( ) chemicals, with specific attention to nanomaterials, and ( ) biological agents.

the growing use of computers and automated systems, aimed at optimizing productivity, has caused an increase in sedentary work or prolonged standing at work, resulting in an increase in physical inactivity. work demands are also commonly cited as reasons for physical inactivity (e.g. trost, owen, bauman, sallis, & brown, ), as is an increase in travelling time to work (houtman et al., ). physical inactivity is associated with increased health risks such as coronary heart disease, type ii diabetes, and certain types of cancers and psychological disorders (depression and anxiety) (department of health, ; who, ; zhang, xie, lee, & binns, ). another important result of inactivity is obesity, which can lead to several adverse health effects, such as back pain, high blood pressure, cardiovascular disorders, and diabetes (houtman et al., ). in addition, sedentary jobs are associated with an increased prevalence of musculoskeletal complaints or disorders, e.g. neck and shoulder disorders (e.g. korhonen et al., ), and upper and lower back disorders (e.g. chen, mcdonald, & cherry, ). such disorders may lead to sick leave and work disability (e.g. steensma, verbeek, heymans, & bongers, ). the established health risks associated with sedentary work are premature death in general, type ii diabetes and obesity (van uffelen et al., ).

as concerns msds, there is a considerable body of research indicating that biomechanical or ergonomic risks in combination with psychosocial risks can generate work-related msds (e.g. bongers, ijmker, & van den heuvel, ; briggs, bragge, smith, govil, & straker, ; eu-osha, ). psychosocial risk factors at work have a greater effect on the prevalence of musculoskeletal complaints when exposure to physical risk factors at work is high rather than when it is low. in addition, factors such as low job control, high job demands, poor management support or little support from colleagues, as well as restructuring, job redesign, outsourcing and downsizing, have been shown to be causally related to an increased risk of msds (houtman et al., ).
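returning to the eu-osha definition given above, the 'new and increasing' test is essentially a conjunction of two disjunctions: a risk is emerging only if at least one 'new' criterion and at least one 'increasing' criterion both apply. the following minimal python sketch makes that logic explicit; the flag names and the example risk are illustrative assumptions, not an official eu-osha schema.

    # hedged sketch: classifying a risk as 'emerging' in the eu-osha sense,
    # i.e. both new AND increasing, where each property holds if ANY of its
    # criteria applies. flag names are illustrative, not an official schema.

    from dataclasses import dataclass

    @dataclass
    class Risk:
        name: str
        # criteria under which a risk counts as 'new'
        caused_by_new_processes_or_technology: bool = False
        newly_perceived_as_risk: bool = False
        newly_identified_by_science: bool = False
        # criteria under which a risk counts as 'increasing'
        hazards_growing: bool = False
        exposure_increasing: bool = False
        health_effects_worsening: bool = False

    def is_emerging(r: Risk) -> bool:
        new = (r.caused_by_new_processes_or_technology
               or r.newly_perceived_as_risk
               or r.newly_identified_by_science)
        increasing = (r.hazards_growing
                      or r.exposure_increasing
                      or r.health_effects_worsening)
        return new and increasing

    # example with illustrative values: ict-driven work intensification
    ict_intensification = Risk("ict-driven work intensification",
                               caused_by_new_processes_or_technology=True,
                               exposure_increasing=True)
    print(is_emerging(ict_intensification))  # True: both new and increasing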
job insecurity has been discussed earlier and is an important stressor resulting in reduced well-being (psychological distress, anxiety, depression, and burnout), reduced job satisfaction (e.g. withdrawal from the job and the organization) and increased psychosomatic complaints as well as physical strains (e.g. wagenaar et al., ). all these effects are negatively related to personal growth as well as to recognition and participation in social life (de cuyper et al., ). additionally, decreased well-being and reduced job satisfaction of employees negatively affect the effectiveness of the organization (houtman et al., ).

there are several increasing demands workers are exposed to in the modern workplace, including: quantitative demands (high speed, no time to finish work in regular working hours), qualitative demands (increased complexity), emotional demands (emotional load due to direct contact with customers, i.e. service relationship situations), and often physical loads as well (houtman et al., ). the widespread use of information and communication technology (ict) has led to work intensification. developments in technology use in terms of mechanization, automation, and computerization have led to the substitution of human activities by machines. on the other hand, the use of computers and smart phones with internet access provides easy access to all kinds of information but may also lead to the expectation from colleagues, supervisors and clients that one is always available and can be contacted (e.g. by email). ict work may then lead to stress symptoms due to excessive working hours, workload and increasing complexity of tasks, or isolation in home workers; information overload; the pressure of having to constantly upgrade skills; human relationships being replaced by virtual contacts; and physical impairments such as repetitive strain injuries and other msds due to using inadequate or ergonomically unadapted equipment (houtman et al., ).

psychosocial hazards such as high job demands and low control have been systematically found to be causally linked to cardiovascular disease (e.g. backé, seidler, latza, rossnagel, & schumann, ; eller et al., ), msds (e.g. da costa & vieira, ) as well as mental health problems such as depression and anxiety (e.g. bonde, ; netterstrom et al., ). in addition, long-term absence and disability are causally related to these types of risks (e.g. duijts, kant, swaen, brandt, & van den zeegers, ). furthermore, as the labour market shifts towards the service industry, emotional demands at work increase, with harassment or bullying and violence contributing to this increase (houtman et al., ). those affected by violence and harassment in the workplace tend to report higher levels of work-related ill health. the proportion of workers reporting symptoms such as sleeping problems, anxiety and irritability is nearly four times greater among those who have experienced violence, bullying and harassment than amongst those who have not (houtman et al., ).

nanotechnology has been defined as the design, characterization, production and application of structures, devices and systems by controlling shape and size at the nanometre scale (eu-osha, ). due to their small size, engineered nanomaterials (enms) have unique properties that improve the performance of many products.
nanomaterials have applications in many industrial sectors (currently the main areas are the materials and manufacturing industry, including the automotive, construction and chemical industries, electronics and it, health and life sciences, and energy and environment). a key issue with enms is the unknown human risk posed by the applied nanomaterials during their life cycle, especially for workers exposed to enms at the workplace. workers in nanotechnology may be exposed to novel properties of materials and products causing health effects that have not yet been fully explored. the manufacture, use, maintenance and disposal of nanomaterials may have potential adverse effects on internal organs (eu-osha, ). although there is a considerable lack of knowledge, there are indications that, because of their size, enms can enter the body via the digestive system, respiratory system or the skin. once in the body, enms can translocate to organs or tissue distant from the portal of entry. such translocation is facilitated by the propensity of the nanoparticles to enter cells, to cross membranes and to move along the nerves (iavicoli & boccuni, ). the enms may accumulate in the body, particularly in the lungs, the brain and the liver. their toxicity appears to be expressed primarily through an ability to cause inflammation and to raise the potential for autoimmune deficits, and they may induce diseases such as cancer (houtman et al., ).

other concerns regarding dangerous substances include diesel exposure and its link to lung cancer and non-cancer damage to the lung; and man-made mineral fibre exposure (classified as being siliceous or non-siliceous) and the link between fibre structure and inflammatory, cytotoxic and carcinogenic potential (houtman et al., ). another three chemical risks have been identified as emerging with regard to allergies and sensitizing effects: epoxy resins, isocyanates and dermal exposure (eu-osha, ). epoxy resins have become one of the main causes of occupational allergic contact dermatitis. skin sensitization of the hands, arms, face, and throat as well as photosensitization have also been reported. isocyanates are powerful irritants to the mucous membranes of the eyes and of the gastrointestinal and respiratory tracts. direct skin contact can cause serious inflammation and dermatitis. isocyanates are also powerful asthmatic sensitizing agents (houtman et al., ).

finally, risks related to global epidemics are the most important biological risk issue. pathogens such as the severe acute respiratory syndrome (sars), ebola, and marburg viruses are new or newly recognized. in addition, new outbreaks of well-characterized outbreak-prone diseases such as cholera, dengue, measles, meningitis, and yellow fever still emerge (houtman et al., ). it should be stressed that the profile of risks in the workplace constantly changes and there are additive effects that exacerbate negative impacts. the following section provides an overview of key challenges in relation to hsw in the modern workplace, while also acknowledging the lack of research in relation to some of the new and emerging risks identified earlier.

the ilo has published global estimates of fatal and non-fatal occupational accidents (ilo, ) and fatal work-related diseases (ilo, b). . million deaths occur annually across countries for reasons attributed to work. over , are caused by occupational accidents, while the biggest mortality burden comes from work-related diseases, accounting for about million deaths.
globally, cardiovascular and circulatory diseases at % and cancers at % were the top illnesses, responsible for / of deaths from work-related diseases, followed by occupational injuries at % and infectious diseases at %. as a result, approximately people die every day due to these causes: occupational accidents kill nearly people every day and work-related diseases provoke the death of approximately more individuals. there were also over million non-fatal occupational accidents (requiring at least four days of absence from work) in , meaning that occupational accidents provoke injury or ill health for approximately , people every day (ilo, b).

major industrial accidents are stark reminders of the unsafe conditions still faced by many. for example, the april collapse of the rana plaza building in bangladesh resulted in the death of individuals and injured more, mostly factory workers making garments for overseas retail chains. the international community has since expressed concerns about market pressures which strive to keep basic production costs low, the role of national authorities, and the responsibilities of multinational enterprises and other stakeholders in supply chains towards the health and safety of workers. hazardous sectors such as mining, construction, shipping, and in particular fishing continue to take a heavy toll on human lives and health. meanwhile, the nuclear industry continues to pose serious problems regarding the radiological protection of site workers and the environment. in particular, the protection of emergency workers at the fukushima daiichi power plant in japan has become a focus of international attention since the east japan earthquake.

occupational health has recently become a much higher priority, in light of the growing evidence of the enormous loss and suffering caused by occupational diseases and ill health across many different employment sectors. even though it is estimated that fatal diseases account for about % of all work-related fatalities, more than half of all countries do not provide official statistics for occupational diseases (ilo, b). these therefore remain largely invisible, compared to fatal accidents. moreover, and as discussed previously, the nature of occupational diseases is changing rapidly, as new technologies and global social changes aggravate existing health risks and create new ones. for example, long-latency diseases include illnesses such as silicosis and other pneumoconioses, asbestos-related diseases and occupational cancers that may take decades to manifest. such diseases remain widespread, as they are often undiagnosed until they result in permanent disability or premature death. pneumoconioses account for a high percentage of all occupational diseases. for example, in latin america, there is a % prevalence rate of silicosis amongst miners, and this figure reaches % among miners over the age of . in vietnam, pneumoconioses account for . % of all compensated occupational diseases (ilo, b). the use of asbestos has been banned in more than countries, including all eu member states, but the number of deaths from asbestos-related diseases is increasing in many industrialized countries because of exposure that occurred during the s and later. in germany and the uk, for example, the number of deaths from asbestos-induced mesothelioma has been increasing for some years and was expected to peak in - (health & safety executive, ). the number of cases of work-related stress, violence and psychosocial disorders has also been increasing.
these have often been attributed, at least in part, to recession-driven enterprise restructuring and redundancies, which can be very damaging psychologically. european studies have shown that a large and rapid rise in unemployment has been associated with a significant increase in suicide rates (e.g. lundin & hemmingsson, ). meanwhile, a review of mortality studies in countries across the world has also shown an increase in cardiovascular mortality rates by an average of . % in periods of crisis (falagas, vouloumanou, mavros, & karageorgopoulos, ). the impact of the issues discussed in this section is presented in chapter .

on the basis of the available evidence, it is now recognized that a new paradigm of prevention is required, one that focuses on work-related diseases and not only on occupational injuries. recognition, prevention and treatment of both occupational diseases and accidents, as well as the improvement of recording and notification systems, are high priorities for improving the health of individuals and the societies they live in. several perspectives and associated approaches have been taken to promote hsw in the workplace over the years as priorities change and new issues and knowledge emerge. the following section will provide an overview of some key perspectives that have led to the development of modern holistic models to promote hsw in the workplace.

the field of occupational health and safety has been defined as the science of the anticipation, recognition, evaluation and control of hazards arising in or from the workplace that could impair the hsw of workers, taking into account the possible impact on the surrounding communities and the general environment (alli, ). given the broad scope of this definition, several disciplines are relevant to osh in relation to the control of the multitude of hazards in the workplace. furthermore, since social, political, technological and economic changes are constantly impacting upon the workplace, the field of osh has been evolving to address new and emerging issues in line with different perspectives. some disciplines of relevance to osh include engineering, ergonomics, toxicology, hygiene, medicine, epidemiology, psychology, sociology, education, and policy. these disciplines often diverge in terms of theoretical foundation and as a result emphasize different aspects in terms of understanding and dealing with osh issues. however, in recent years there has been convergence in thinking about the work environment and a trend towards more holistic perspectives and approaches when considering hsw. indeed, while hsw issues were in the past approached from a mono-disciplinary perspective, multi-disciplinarity is now advocated as the necessary way forward. however, in practice osh professionals often still employ mono-disciplinary perspectives in dealing with accidents and diseases in the workplace, seeking to protect individual workers rather than preventing negative impacts of the work environment and promoting positive outcomes. solely focusing on ameliorating harm rather than promoting hsw has also been criticized in recent years by scholars emphasizing a salutogenic (health promoting) instead of a pathogenic (disease preventing) perspective. let us now consider some of these approaches further in relation to safety, health and well-being.

it has been argued that occupational safety has developed and evolved through three ages: a technical age, a human factors age, and
a management and culture age (hale & hovden, ) (or, as hudson ( ) described them, a technical wave, a systems wave and a culture wave). several authors have since suggested new ages in safety science.

the first age of safety concerned itself with the technical measures to guard machinery, stop explosions and prevent structures collapsing. it lasted from the nineteenth century until after the second world war and was interested in accidents having technical causes (hale & hovden, ). the period between the world wars saw the development of research into personnel selection, training and motivation as prevention measures, often based on theories of accident proneness (see hale & glendon, for a review; burnham, for the accident-prone theory). this brought about the second age of safety, which developed separately from technical measures until the period of the s and s, when developments in probabilistic risk analysis and the rise and influence of ergonomics led to a merger of the two approaches in health and safety. there was a move away from an exclusive dominance of the technical view of safety in risk analysis and prevention, and the study of human error and human recovery or prevention came into its own (hale & hovden, ). just as the second age of human factors was ushered in by increasing realizations that technical risk assessment and prevention measures could not solve all problems, so were the s characterized by an increasing dissatisfaction with the idea that health and safety could be captured simply by matching the individual to technology.

in the s, management and culture were the focus of development and research, based on many influential thinkers such as heinrich, who published his ground-breaking safety management textbook (heinrich, ), the sociotechnical management literature (e.g. elden, ; thorsrud, ; trist & bamforth, ), the social organizational theory of lewin ( ), the loss prevention approach (bird, ), and the introduction of participative management in safety (e.g. simard & marchand, ). however, reason ( ) contended that an over-reliance on osh management systems and an insufficient understanding of, and insufficient emphasis on, workplace culture can lead to failure because "it is the latter that ultimately determines the success or failure of such systems" (p. ). criticism of over-reliance on systems was also influenced by the resilience engineering school, which posited that instead of focusing on failures, error counting and decomposition, we should address the capabilities to cope with the unforeseen. the ambition is to 'engineer' tools or processes that help organizations to increase their ability to operate in a robust and flexible way.

hopkins ( ) views safety culture as one aspect of organizational culture, or more particularly an organizational culture that is focused on safety. further, culture is viewed as a group, not an individual, phenomenon; efforts to change culture should, in the first instance, focus on changing collective practices (the practices of both managers and workers), and the dominant source of culture is what leaders pay attention to. much of hopkins' work draws on reason's ( ) notion that a safe culture is an informed culture and sutcliffe's ( , ) principles of collective mindfulness and high reliability organizations (i.e. organizations that are able to manage and sustain almost error-free performance despite operating in hazardous conditions where the consequences of errors could be catastrophic).
collective mindfulness is based on the premise that variability in human performance enhances safety whilst unvarying performance can undermine safety, particularly in complex socio-technical systems. glendon, clarke, and mckenna ( ) argued that each of the first three periods of development builds on the others, and refer to this process of development as the fourth age of safety or the integration age, where previous ways of thinking are not lost but remain available to be reflected upon as multiple, more complex perspectives develop and evolve. however, as the limitations of osh management systems and safety rules that attempt to control behaviour have become evident, it has also been proposed that a fifth age of safety has emerged, the adaptive age; an age which transcends the other ages of safety. the adaptive age challenges the view of an organizational safety culture and instead recognizes the existence of socially constructed sub-cultures. the adaptive age embraces adaptive cultures and resilience engineering and requires a change in perspective from human variability as a liability in need of control, to human variability as an asset important for safety (borys, else, & leggett, ). resilience engineering is similar to collective mindfulness since it also focuses on the importance of performance variability for safety. however, what sets resilience engineering apart from collective mindfulness is the focus on learning from successful performance (hollnagel, ), i.e. why things go right as well as why things go wrong (also called the safety-ii approach (hollnagel, )).

one major development in the evolution of safety was the move towards managing risks in the work environment. this implied a recognition that it is impossible to completely control all aspects of work to avoid negative outcomes; risks always remain. in an ever-changing work environment, a continuous assessment of risks is needed that will point to key risks that may pose a threat to workers' hsw. these then need to be managed through appropriate actions at various levels, with the focus being on prevention. the risk management paradigm has been hugely influential not only in terms of managing safety but also managing health, as will be discussed in the following sections. let us then consider it further next.

in the wake of the chernobyl disaster in , sociologist ulrich beck published 'risikogesellschaft', later published in english as 'risk society: towards a new modernity' in . beck argued that environmental risks had become the predominant product of industrial society. he defined a risk society as "a systematic way of dealing with hazards and insecurities induced and introduced by modernization itself" (beck, , p. ). according to british sociologist anthony giddens ( ), a risk society is a society that is increasingly preoccupied with the future (and also with safety), which generates the notion of risk. giddens ( ) defined two types of risks: external risks (for example, natural disasters) and manufactured risks (for example, those derived from industrial processes). as manufactured risks are the product of human activity, authors like giddens and beck argue that it is possible for societies to assess the level of risk that is being produced, or that is about to be produced, in order to mitigate negative outcomes (i.e. responsibility for managing these risks lies with society and, more precisely, with experts able to do so). one such area is osh risk management.
a hazard, something that can cause harm if not controlled, is a key term in osh risk management. the outcome is the harm that results from an uncontrolled hazard. in the context of osh, harm describes the direct or indirect degradation, temporary or permanent, of the physical, mental, or social well-being of workers. a risk is a combination of the probability that a particular outcome will occur and the severity of the harm involved (nunes, ). hazard identification or assessment is an important step in the overall risk assessment and risk management process. through this, hazards are identified, assessed and controlled/eliminated as close to the source as reasonably possible. as technology, resources, social expectations or regulatory requirements change, hazard analysis focuses control measures more closely towards the source of the hazard, aiming at prevention. hazard-based programmes may not be able to eliminate all risks to hsw, but they avoid implying that there are 'acceptable risks' in the workplace (nunes, ).

a risk assessment needs to be carried out prior to making an intervention. this assessment should identify hazards, identify all those affected by the hazard and how, evaluate the risk, and identify and prioritize appropriate control measures. the calculation of risk is based on the likelihood or probability of the harm being realized and the severity of the consequences. the assessment should be recorded and reviewed periodically and whenever there is a significant change to work practices. the assessment should include practical recommendations to control the risk. once recommended controls are implemented, the risk should be re-calculated to determine whether it has been lowered to an acceptable level (nunes, ). risk assessment and calculation is usually easier as regards physical risks but more complex as regards biological, and even more so psychosocial, risks. despite this, the risk management paradigm has been applied to all these types of risks to hsw, and is used extensively both as concerns occupational injury and occupational health. it also represents the cornerstone of osh legislation across countries. osh management systems are based on this paradigm (see chapter for more details).

following the pdca (plan-do-check-act) cycle methodology (deming, ), risk management is a systematic process that includes the examination of all characteristics of the work system where the worker operates, namely the workplace, the equipment/machinery, materials, work methods/practices and work environment. the main goal of risk management is to eliminate risks or, where they cannot be avoided or eliminated, to reduce them to an acceptable level. risk management measures should follow the hierarchy of control principles of prevention, protection and mitigation. worker participation is key in the process of risk management. the risk management process should be reviewed and updated regularly, for instance every year, to ensure that the measures implemented are adequate and effective. additional measures might be necessary if the improvements do not show the expected results (nunes, ). periodic risk management is also important since workplaces are dynamic due to changes in equipment, substances or work procedures, and new hazards might emerge. another reason is that new knowledge regarding risks can become available, either leading to the need for an intervention or offering new ways of controlling the risk.
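to make the likelihood-times-severity calculation and the re-calculation step concrete, here is a minimal python sketch of a semi-quantitative risk matrix. the 5-point scales, the action-band cut-offs and the names are illustrative assumptions for this sketch, not drawn from any particular standard or from nunes.

    # minimal sketch of a semi-quantitative risk matrix, assuming 5-point
    # ordinal scales for likelihood and severity; scales, thresholds and
    # names are illustrative, not taken from any particular standard.

    from dataclasses import dataclass

    @dataclass
    class Hazard:
        description: str
        likelihood: int  # 1 (rare) .. 5 (almost certain)
        severity: int    # 1 (negligible harm) .. 5 (fatality)

    def risk_score(h: Hazard) -> int:
        """combine likelihood and severity into a single ordinal score."""
        if not (1 <= h.likelihood <= 5 and 1 <= h.severity <= 5):
            raise ValueError("likelihood and severity must be on a 1-5 scale")
        return h.likelihood * h.severity

    def risk_level(score: int) -> str:
        """map the score onto action bands (illustrative cut-offs)."""
        if score >= 15:
            return "high: stop work, act immediately"
        if score >= 8:
            return "medium: plan and implement controls"
        return "low: monitor and review periodically"

    # example: re-calculate after a control measure lowers the likelihood,
    # mirroring the review step of the risk management process.
    manual_handling = Hazard("manual lifting of heavy loads", likelihood=4, severity=3)
    print(risk_level(risk_score(manual_handling)))  # medium: plan and implement controls

    manual_handling.likelihood = 2                  # after lifting aids are introduced
    print(risk_level(risk_score(manual_handling)))  # low: monitor and review periodically

the multiplication and the banding here stand in for whatever scoring scheme an organization adopts; the important point, as the text notes, is that the score is re-derived after controls are implemented and compared against an explicit acceptability threshold.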
the review of the risk management process should consider a variety of types of information and draw them from a number of relevant perspectives (e.g. staff, management, stakeholders). however, risk management has been criticized for focusing too heavily on avoiding (controlling) possible negative outcomes and not on promoting positive and healthy work environments. this development in thinking has stemmed from a parallel move from pathogenic to salutogenic approaches in health and its management. this evolution in thinking about health and well-being will be considered next.

approaches in occupational health and occupational hygiene have evolved in line with developments in several disciplines, including safety engineering, medicine and psychology. the risk management perspective is the cornerstone of occupational hygiene, as is evident from its definition. the international occupational hygiene association (ioha, n.d.) refers to occupational hygiene as the discipline of anticipating, recognizing, evaluating and controlling health hazards in the working environment with the objective of protecting worker health and well-being and safeguarding the community at large. although occupational health definitions similarly place great focus on managing risk factors, they overall refer to the promotion and maintenance of the health and well-being of employees. similarly to the evolution of perspectives in safety, these definitions have been influenced by the evolution of thinking on health and well-being over the years (schulte & vainio, ).

perspectives on health and illness started with a focus on pathogenesis, the study of disease origins and causes, as pioneered and developed by williamson and pearse ( ). pathogenesis starts by considering disease and infirmity and then works retrospectively to determine how individuals can avoid, manage, and/or eliminate that disease or infirmity. the dose-response relationship, the change in effect on an organism caused by differing levels of exposure (or doses) to a stressor after a certain exposure time, was influential in treating disease and illness (as it was in chemical safety). this leads professionals using pathogenesis to be reactive, because they respond to situations that are currently causing or threatening to cause disease or infirmity (becker, glascoff, & felts, ).

a major shift came in with antonovsky's concept of salutogenesis, the study of health origins and causes, which starts by considering health and looks prospectively at how to create, enhance, and improve physical, mental and social well-being (antonovsky, ). the assumption of salutogenesis that action needs to occur to move the individual towards optimum health prompts professionals to be proactive, because their focus is on creating a new, higher state of health than is currently being experienced (antonovsky, ). the difference between the biomedical model (based on pathogenesis) and health promotion, which is now the cornerstone of public health (based on salutogenesis), is a move away from risk and disease towards resources for health and life (eriksson & lindström, ), initiating processes not only for health but also for well-being and quality of life. perceived good health is a determinant of quality of life. according to breslow ( ), the first era of public health involved combating communicable diseases while the second dealt with chronic diseases.
their focus was on developing and maintaining health, since health provides a person the potential to have the opportunity and ability to move towards the life they want. to facilitate the management of health in the first two eras, measurement of the signs, symptoms and associated risks of disease and infirmity was of paramount importance. in the third era of public health, most people expect a state of health that enables them to do what they want in life. to facilitate management of an evolved health status, it is necessary to develop new health measures that go beyond detecting pathogenesis and its precursors to measuring those qualities associated with better health (breslow, ). however, salutogenesis also presumes that disease and infirmity are not only possible but likely, because humans are flawed and subject to entropy (antonovsky, ). according to a salutogenic perspective, each person should engage in health promoting actions to cause health, while secondarily benefiting from the prevention of disease and infirmity. pathogenesis, on the other hand, in a complementary fashion primarily focuses on prevention of disease and infirmity, with a secondary benefit of health promotion. both approaches are needed to facilitate the goal of better health and a safer and more health enhancing environment. pathogenesis improves health by decreasing disease and infirmity, and salutogenesis enhances health by improving physical, mental, and social well-being. together, these strategies work to create an environment that nurtures, supports, and facilitates optimal well-being (becker et al., ).

around the same time that salutogenesis was introduced, in an article in science, psychiatrist george l. engel introduced a new medical model, the biopsychosocial model. the biopsychosocial model is a broad view that attributes disease outcome to the intricate, variable interaction of biological factors (genetic, biochemical, etc.), psychological factors (mood, personality, behaviour, etc.), and social factors (cultural, familial, socioeconomic, medical, etc.). it holds to the idea that biological, psychological, and social processes are integrally and interactively involved in physical health and illness. it was pioneering in advocating the premise that people's psychological experiences and social behaviours are reciprocally related to biological processes. as a result, interventions should address all these dimensions and not narrowly focus on limited perspectives (such as only the biological perspective, for example). more focus was now placed on psychological and social factors in the understanding of health and illness.

indeed, the traditional medical model of ill health was increasingly recognized as having achieved limited success in tackling occupational health conditions such as stress, anxiety, depression and msds (white, ). these challenges, which have been shown to have an increasing prevalence in the workplace (as discussed earlier), do not have a clear underlying physical basis, nor do they demonstrate a linear relationship between injury, pain and disability. instead, they appear to be strongly mediated by psychological and social factors. accordingly, waddell ( ) categorized such conditions as 'common health problems'. the challenges presented by common health problems contrast with the past success of occupational medicine in dealing with conditions that have an identifiable cause and a clear relationship between dose and response (waddell & burton, ).
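the dose-response relationship invoked here, and earlier in the discussion of pathogenesis, is ordinarily modelled quantitatively. as a minimal illustration, the following python sketch uses the hill equation, a common sigmoidal form; the choice of equation and all parameter values are assumptions made for illustration and are not drawn from waddell and burton or any other source cited in this chapter.

    # hedged sketch of a sigmoidal dose-response curve using the hill
    # equation; equation choice and parameter values are illustrative.

    def hill_response(dose: float, ec50: float = 10.0, hill_coeff: float = 2.0,
                      max_effect: float = 1.0) -> float:
        """fraction of the maximum effect produced at a given dose.

        ec50 is the dose producing half the maximum effect; the hill
        coefficient controls how steeply effect rises with dose.
        """
        if dose < 0:
            raise ValueError("dose must be non-negative")
        return max_effect * dose**hill_coeff / (ec50**hill_coeff + dose**hill_coeff)

    for dose in (0, 5, 10, 20, 40):
        print(f"dose {dose:>4}: effect {hill_response(dose):.2f}")

conditions with a clear dose-response of this kind were tractable for classical occupational medicine; the 'common health problems' described above are precisely those for which no such monotonic exposure-effect curve can be drawn.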
the psychological models that were developed within the fields of occupational, and occupational health, psychology, mainly to make sense of the concept of stress, were similarly influenced by conceptualizations of health, illness and safety. early models viewed stress either as a noxious stimulus in the environment (engineering models, derived from engineering) or as a response to exposure to aversive or noxious characteristics of the environment (physiological models, derived from medicine). contemporary models focus on the interaction between the environment and the individual and emphasize, either explicitly or implicitly, the role of psychological processes such as perception, cognition, and emotion (psychological models). these appear to determine how the individual recognizes, experiences, and responds to stressful situations, how they attempt to cope with that experience and how it might affect their physical, psychological, and social health (cox & griffiths, ).

the risk management paradigm remains an influential perspective in dealing with new and emerging risks in the psychosocial work environment. however, while we are a long way from the challenge of work-related stress being tackled effectively, a shift has started towards promoting well-being at work and not only preventing stress and its associated negative outcomes in terms of both health and safety. this shift has followed trends in public health (discussed earlier) and also in psychology towards more positive concepts. the positive psychology movement, championed by seligman and csikszentmihalyi ( ), is an attempt to shift the emphasis in psychology away from a preoccupation with the pathological, adverse and abnormal aspects of human behaviour and experience. the positive psychology literature offers a number of perspectives that help with understanding how well-being can arise in work situations (lunt et al., ). for example, csikszentmihalyi ( ) introduced the concept of flow, which can be defined as a subjective condition where an individual is fully absorbed in, and engaged with, the task he or she is carrying out, promoting an experience of competence and fulfillment.

as is evident from our discussion of perspectives on hsw so far, several useful models have been proposed from various disciplines, and parallel developments can be observed across these disciplines. however, it should also be noted that scholars and practitioners often operate in silos, ignoring the interplay among the various approaches and the lessons that can be learned from one another. the recent focus on well-being has brought about the question of whether approaches in the workplace should focus only on factors influencing the individual's experience in the work environment or on wider influences, considering more the social determinants of health discussed at the beginning of this chapter. in line with this thinking, some holistic models have emerged that recognize the interplay between workplace and non-workplace factors in determining hsw; these will be discussed next.

the starting point in the development of holistic models of hsw is the recognition that safety and health are different to well-being. as discussed at the beginning of this chapter, well-being refers to a good or satisfactory condition of existence; a state characterized by health, happiness, and prosperity. in particular, three key concepts have been discussed as relevant to well-being: happiness, quality of life and resilience (lunt et al., ).
layard ( ) defined happiness as feeling good; its inverse is feeling bad and wishing for a different experience. factors that affect our levels of happiness include, among others, family relationships, our financial situation, work, community and friends, our health, personal freedom and personal values. quality of life overlaps with contemporary interpretations of happiness. quality of life is a subjective state that encompasses physical, psychological, and social functioning. a defining feature of quality of life is its basis in the perceived gap between actual and desired living standards. resilience of individuals has been described as partly a context-dependent characteristic, in that what enables resilience in one environment may be less adaptive in another (lunt et al., ). increasingly it is recognized that resilience is important at different organizational levels (teams, organizations) and that these different levels are to some degree interacting (e.g. schelvis, zwetsloot, bos, & wiezer, ).

it is also important to recognize that even though well-being at work may be primarily an employer's responsibility (as well as the worker's), the well-being of the worker or workforce is also the responsibility of others in society (e.g. governments, insurance companies, unions, faith-based and non-profit organizations) or may be affected by non-work domains (schulte et al., ; see also chapter ). indeed, the well-being of the workforce extends beyond the workplace, and public policy should consider social, economic, and political contexts. schulte et al. ( ) also provide examples of holistic policy models aiming at the promotion of well-being in the workplace, which include the who healthy workplace model and the niosh total worker health model (discussed in the next chapter). to promote hsw holistically, there needs to be synergy and integration among the various perspectives. to achieve this, these perspectives need to be aligned considering current knowledge and existing needs, developing capabilities, and mainstreaming a strategic approach in policy and practice. the following chapter considers key policy approaches to managing hsw at the macro level (international, regional, national), meso level (sectoral), and micro level (organizational). subsequent chapters further consider how alignment across perspectives can be achieved in policy and practice.

this chapter has provided an overview of the current state of the art in relation to hsw in the workplace as regards key determinants, outcomes and perspectives. with the changing nature of work and new characteristics of the workforce, new challenges are emerging in the workplace. perspectives on how to address these challenges have changed in line with these developments as well as with the evolution of knowledge and the impact of wider socio-economic and political factors. emerging issues such as psychosocial factors, the increasing prevalence of non-communicable diseases, and the shift towards well-being (and not merely safety and health) demand new ways of thinking in addressing hsw in the workplace. continuing to work in silos and adopting mono-disciplinary perspectives will not allow us to move forward in this complex landscape. a strategic alignment of perspectives and integrated approaches are needed. this book aims to promote a way forward by outlining and critically evaluating developments in hsw in the workplace, and providing a framework for action in policy and practice.
fundamental principles of occupational health and safety
gender and jobs: sex segregation of occupations in the world
health, stress and coping
the salutogenic model as a theory to guide health promotion
content, cause, and consequences of job insecurity: a theory-based measure and substantive test
the role of psychosocial stress at work for the development of cardiovascular diseases: a systematic review
risk society: towards a new modernity
salutogenesis years later: where do we go from here?
the consequences of flexible work for health: are we looking at the right place
precarious employment and health: developing a research agenda
employment, work and health inequalities: a global perspective
how do types of employment relate to health indicators? findings from the second european survey on working conditions
facts and misconceptions about age, health status and employability
management guide to loss control
psychosocial factors at work and risk of depression: a systematic review of the epidemiological evidence
epidemiology of work related neck and upper limb problems: psychosocial and personal risk factors (part i) and effective interventions from a bio behavioural perspective (part ii)
the fifth age of safety: the adaptive age
trial by fire: a multivariate examination of the relation between job tenure and work injuries
health measurement in the third era of health
prevalence and associated factors for thoracic spine pain in the adult working population: a literature review
working conditions in the european union: the gender perspective
accident prone: a history of technology, psychology, and misfits of the machine age
incidence and suspected cause of work-related musculoskeletal disorders
age stereotypes and discriminatory attitudes towards older workers: an east-west comparison
work-related stress: a theoretical perspective
can better working conditions improve the performance of smes? an international literature review
flow: the psychology of optimal experience
risk factors for work-related musculoskeletal disorders: a systematic review of recent longitudinal studies
literature review of theory and research on the psychological impact of temporary employment: towards a conceptual model
at least five a week: evidence on the impact of physical activity and its relationship to health (a report from the chief medical officer)
a meta-analysis of observational studies identifies predictors of sickness absence
work-life balance in the new millennium: where are we? where do we need to go?
democratization and participative research in the developing of local theory
work-related psychosocial factors and the development of ischemic heart disease: a systematic review
the need for a new medical model: a challenge for biomedicine
a salutogenic interpretation of the ottawa charter
working hours and health in flexible work arrangements
women, men and working conditions in europe
how to tackle psychosocial issues and reduce work-related stress
priorities for occupational safety and health research in the eu-
expert forecast on emerging psychosocial risks related to occupational safety and health. european agency for safety and health at work (eu-osha)
priorities for occupational safety and health research in europe
communication from the commission to the european parliament, the council, the european economic and social committee and the committee of the regions - towards common principles of flexicurity: more and better jobs through flexibility and security
report on the current situation in relation to occupational diseases' systems in eu member states and efta/eea countries, in particular relative to commission recommendation / /ec concerning the european schedule of occupational diseases and gathering of data on relevant related aspects
health and safety at work in europe
gender, jobs and working conditions in europe
economic crises and mortality: a review of the literature
employment status and health after privatisation in white collar civil servants: prospective cohort study
globalization and labor conditions: working conditions and worker rights in a global economy
consequences of modernity
risk and responsibility
human safety and risk management
ageing, health and productivity: a challenge for the new millennium
healthy work for older workers: work design and management factors
individual behaviour and the control of danger
management and culture: the third age of safety. a review of approaches to organizational aspects of safety, health and environment
rr - projection of mesothelioma mortality in great britain
industrial accident prevention: a scientific approach
resilience: the challenge of the unstable
safety i and safety ii: the past and future of safety management
lessons from gretley: mindful leadership and the law
monitoring new and emerging risks
trends in quality of work in the netherlands
implementing safety culture in a major multi-national
challenges and perspectives of occupational health and safety research in nanotechnologies in europe
the ageing workforce - challenges for occupational health
protecting workplace safety and health in difficult economic times - the effect of the financial crisis and economic recession on occupational safety and health
statutory occupational health and safety workplace arrangements for the modern labour market
use of technology and working conditions in the european union
aging, adult development, and work motivation
globalization and workers' health
health in restructuring: innovative approaches and policy recommendations
ergonomics and technology transfer into small and medium sized enterprises
new systems of work organization and workers' health
work related and individual predictors for incident neck pain among office employees working with video display units
perceptions of psychosocial hazards, work-related stress and workplace priority risks in developing countries
safety considerations for the aging workforce
happiness: lessons from a new science
the european framework for psychosocial risk management (prima-ef)
field theory in social science: selected theoretical papers
women's occupational health in globalization and development
unemployment and suicide. the lancet
applying the biopsychosocial approach to managing risks of contemporary occupational health conditions: scoping review
psychosocial stressors associated with mexican migrant farmworkers in the midwest united states
women in africa's development: overcoming obstacles, pushing for progress
career-relevant learning and development, worker age, and beliefs about self-efficacy for development
one eyed science: occupational health and women workers
age and working conditions in the european union
the relation between work-related psychosocial factors and the development of depression
occupational safety and health risk assessment methodologies
ergonomics in industrially developing countries: does its application differ from that in industrially advanced countries?
globalization and emerging economies. paris: organization for economic co-operation and development
a good life in old age? monitoring and improving quality in long-term care. paris: organization for economic co-operation and development
convergence and divergence of working conditions in europe
building healthy and equitable workplaces for women and men: a resource for employers and workers representatives
workers' compensation and the challenges posed by changing patterns of work: evidence from australia. policy and practice in health and safety
the global expansion of precarious employment, work disorganization, and consequences for occupational health: a review of recent research
managing the risks of organizational accidents
women's employment in europe: trends and prospects
aging, job performance, and career development
the changing organization of work and the safety and health of working people: knowledge gaps and research directions
exploring teacher and school resilience as a new perspective to solve persistent problems in the educational sector - a case of the netherlands. teachers and teaching: theory and practice
considerations for incorporating "well-being" in public policy for workers and workplaces
well-being at work - overview and perspective
employee health and presenteeism: a systematic review
positive psychology: an introduction
the labour market outcomes and psychological well-being of ethnic minority migrants in britain. london: home office
a multilevel analysis of organizational factors related to the taking of safety initiatives by work groups
age differences in safety attitudes and safety performance in hong kong construction workers
prognostic factors for duration of sick leave in patients sick listed with acute low back pain: a systematic review of the literature
no security: a meta-analysis and review of job insecurity and its consequences
globalization of risks
organization development from a scandinavian point of view. doct. /
some social and psychological consequences of the longwall method of coal-getting
correlates of adult participation in physical activity: review and update
occupational sitting and health risks: a systematic review
temporary employment and health: a review
the gender workplace health gap in europe
predicting long-term incapacity for work: the case of low back pain
is work good for your health and well-being? london: the stationery office
differences among demographic groups and implications for the quality of working life and work satisfaction
age and job performance
managing the unexpected
biopsychosocial medicine: an integrated approach to understanding illness
science, synthesis, and sanity: an inquiry into the nature of living
lean thinking: banish waste and create wealth in your corporation
global population ageing: peril or promise?
preamble to the constitution of the world health organization. official records of the world health organization, , .
world health organization (who): raising awareness of stress at work in developing countries: a modern hazard in a traditional working environment: advice to employers and worker representatives
world health organization (who)
sedentary behaviours and epithelial ovarian cancer risk

key: cord- -lor tfe authors: asgary, ali; ozdemir, ali ihsan; Özyürek, hale title: small and medium enterprises and global risks: evidence from manufacturing smes in turkey date: - - journal: int j disaster risk sci doi: . /s - - - sha: doc_id: cord_uid: lor tfe

this study investigated how small and medium enterprises (smes) in a country perceive major global risks. the aim was to explore how country attributes and circumstances affect sme assessments of the likelihood, impacts, and rankings of global risks, and to find out if sme risk assessment and rankings differ from the global rankings. data were gathered using an online survey of manufacturing smes in turkey.
the results show that global economic risks and geopolitical risks are of major concern for smes, and environmental risks are at the bottom of their ranking. among the economic risks, fiscal crises in key economies and high structural unemployment or underemployment were found to be the highest risks for the smes. failure of regional or global governance, failure of national governance, and interstate conflict with regional consequences were found to be among the top geopolitical risks for the smes. the smes considered the risks of large-scale cyber-attacks and massive incidents of data fraud/theft to be relatively higher than other global technological risks. profound social instability and failure of urban planning were among the top societal risks for the smes. although the global environmental and disaster risks were ranked lowest on the list, man-made environmental damage and disasters and major natural hazard-induced disasters were ranked the highest among this group of risks. overall, the results show that smes at a country level, for example turkey, perceive global risks differently than the major global players. small and medium enterprises (smes) face many small and large internal and external risks. while they can better control much of the internal risks through risk management and treatment measures, they are more vulnerable to external risks because these risks are often beyond their control, influence, radar, and capacity to manage. the world economic forum (wef) has created, assessed, and monitored global risks since , using a survey of about major global stakeholders and players. by the wef definition, a global risk is "an uncertain event or condition that, if it occurs, can cause significant negative impact for several countries or industries within the next years" (world economic forum , p. ). small and medium enterprises are playing a vital role in local, national, and global economies and are very important in job and income generation (chowdhury ; oecd ; chatterjee et al. ) . at least % of the firms in both developed and developing countries are smes (mbuyisa and leonard ) . they account for - % of gdp in developed and developing countries (igwe et al. ) and generate about % of the global industrial production and % of the world's exports (sharma and bhagwat ; mbuyisa and leonard ) . small and medium enterprises are the backbone of the european economy, with more than . % of all non-financial businesses, % of total value added, and . % of total employment (briozzo and cardone-riportella ; european commission ) . in japan, more than . % of all firms are smes, they employ more than % of the workforce, and they create more than % of all added value of the manufacturing industry (yoshino and taghizadeh-hesary ) . small and medium enterprises comprised . % of the firms in turkey in and were involved in . % of exports and . % of imports (kaya and uzay ) . considering their size and roles in the national and global economies, and the fact that the enhancement of the private sector's resilience depends on risk reduction by smes (chatterjee et al. ) , more studies are needed to better understand various aspects of sme risk management. small and medium enterprises, like large corporations, face a significant number of risks, and their survival and resilience are important for national and global economies. however, smes are less prepared to manage the risks, and the institutional supports for them are rather weak (han and nigg ) .
small and medium enterprises around the world, particularly in developing and emerging economies, do not have strong risk management, business continuity, and crisis management cultures and systems in place (asgary et al. ; yuwen et al. ; kaya and uzay ) . most smes do not have the resources and expertise to focus on these activities and therefore are more vulnerable to internal and external risks and disruptive shocks (leopoulos et al. ; marks and thomalla ) . to minimize the impacts, it is important that smes become more aware of global risks, as well as assess, monitor, and enhance their risk management and business continuity management capacities (güneş and teker ; brustbauer ; kaya and uzay ) . the goals of this study were twofold: (1) to examine whether country attributes and circumstances affect sme assessments of the likelihood, impacts, and rankings of global risks; and (2) to find out if sme risk assessment and rankings differ from global rankings. small and medium enterprises in manufacturing in an emerging economy with global footprints were selected because, unlike the wef, which takes its samples from large international players, the sample smes are small individual players in the global economy, and it is important to see how they view the global risks. the global risk report by the world economic forum (wef ) examines important global risks that are classified into five categories: economic, environmental, geopolitical, societal, and technological (table ) . these risks are evaluated annually based on global players' and stakeholders' views of the risks. according to the wef global risk report, extreme weather events, failure of climate-change mitigation and adaptation, natural disasters, data fraud or theft, cyber-attacks, man-made environmental damages and disasters, large-scale involuntary migration, biodiversity loss and ecosystem collapse, water crises, and asset bubbles in a major economy were ranked the top global risks in terms of likelihood. weapons of mass destruction, failure of climate-change mitigation and adaptation, extreme weather events, water crises, natural disasters, biodiversity loss and ecosystem collapse, cyber-attacks, critical information infrastructure breakdown, and man-made environmental damages and disasters were ranked among the top global risks in terms of impact. global economic risks have significant implications for smes, particularly those in the manufacturing sector. asset bubbles in a major economy can increase production costs through inflation, wage increases and labor shortages, and can constrain access to financial resources, which will impact the global economy (zheng et al. ) . global financial crises cause substantial downturns in the formation of new smes, their performance, and their existence in the market. the - world financial and economic crisis severely impacted smes. as interest rates started to rise, many smes were bankrupted due to the credit crunch, tight monetary policies, and declines in domestic and international demand (filardo ; wehinger ) . the number of bankrupted smes in south korea, for example, particularly in the manufacturing sector, increased by nearly % from to (gregory et al. ) . the economic crisis induced severe socioeconomic impacts worldwide and impacted smes in almost every economy, far beyond expectations, through fast domino effects that caused massive sme closures and downsizing and reduced the number of new ventures (chowdhury ; sannajust ).
small and medium enterprises were under extreme pressures and experienced devastating decreases in demand and revenues, increased lay-offs, and stressful working environments (kossyva et al. ). close to % of the smes in belgium and the netherlands, for example, experienced extended delays in their receivables (kossyva et al. ) . small and medium enterprises in the united states lost . million jobs (gagliardi et al. ) . during this global turmoil, turkish smes were also impacted heavily (karadag ) . during an economic crisis, smes are more vulnerable because of weak cash flow and financial structures, low equity reserves, limited adaptation potential and flexibility for downsizing, liquidation problems, too much dependency on external financial resources, tightened credit lines, payment delays on receivables, lack of resources, and lack of the necessary skills to adopt or make necessary strategic decisions (ates et al. ; sannajust ; wehinger ; karadag ) . failure of aging and insecure energy, transportation, and communications infrastructure can pose major short- and long-term risks for sme performance and competitiveness. high structural unemployment lowers demand for goods and services and impacts smes significantly (alegre and chiva ) . illicit trade reduces sme competitiveness in the global market. in countries with higher levels of economic risk, smes have less of a chance to flourish (mekinc et al. ) . energy is an important input for sme production and logistics. if energy prices are not manageable or controllable, smes face major uncertainties about energy costs and availability (mulhall and bryson ) . energy price shocks raise sme production costs (kilian ) and compromise their individual and collective competitiveness in the global economy. mainly because smes are usually less flexible with respect to their energy sources, and because smes in the manufacturing sector are very energy intensive, unpredicted fluctuations in energy prices impact them extensively. energy price shock events have become more frequent and a consistent feature of the energy markets in recent years (mulhall and bryson ) . as the global demand for energy increases, more shock events in energy prices are expected. finally, unmanageably high inflation rates at national and global levels pose risks to smes through higher interest rates (cefis and marsili ; gül et al. ). small and medium enterprises around the globe, particularly those that are part of global supply chains, are exposed to various types of global environmental and disaster risks that can have devastating impacts on smes (auzzir et al. ). these enterprises are highly vulnerable to and not well prepared for most of the global environmental and disaster risks (crichton ; schaefer et al. ) . they are vulnerable to environmental disaster risks on four fronts: capital, labor, logistics, and markets (ballesteros and sonny ) . environmental and disaster risk events can damage and disrupt the supply chain networks in which many smes are embedded. they can also damage sme assets, premises, and inventories, disrupt their operations, increase their production costs, and reduce their revenues and long-term growth potential (snyder and shen ; griffiths , ; asgary et al. ) . small and medium enterprises have limited capabilities to recover from these events and bring their operations, revenue, and profit back to pre-event conditions (asgary et al. ) .
considering the links that exist between climate change and extreme events, it is expected that these events will increase in the future (ipcc ). small and medium enterprises face significant climate change-related environmental and regulatory risks (schaefer et al. ) . major costly floods, severe heat and cold waves, and heavy rains and extreme storms with higher frequency and intensity are observed globally. extreme events not only cause disruptions and destruction to smes, but also create major challenges for their continuity of operations and future planning (gunawansa and kua ; gasbarro et al. ). studies show that overall about % of smes do not reopen following a major disaster (ballesteros and sonny ) . of the us companies that experience disasters, for example, % never reopen, and another % close within years (weinhofer and busch ; ballesteros and sonny ) . small and medium enterprises are worse off after disaster events compared to before because they are relatively resource constrained and less resilient, are mainly informal, sometimes do not fully comply with or are not required to follow standards and codes, lack necessary insurance, do not carry out risk assessments, and are often without business continuity plans (ye and abe ; undp ; ballesteros and sonny ; halkos et al. ) . turkey is prone to multiple natural hazards such as flooding, earthquakes, and drought, and natural hazard-induced disasters have affected smes in turkey as well. the earthquake had significant economic impacts on the enterprise sector, ranging from usd . to . billion in damages (oecd ) , most of it from losses in manufacturing (usd to million). about . % of the total manufacturing industry was damaged in five provinces, and , smes suffered heavy physical damage. ezgi ( ) reported that the vast majority of smes had little preparedness and that only % of them had invested in insurance before the earthquake. a world of geopolitical instability and uncertainty is a major concern for all sectors and businesses, but more so for smes. many of these risks are cross-border, with global consequences. while existing international political and economic agreements, such as those of the world trade organization (wto), are weakened by unilateralism, there is little evidence that new and better multilateral agreements are replacing them (pascual-ramsay ; asgary and ozdemir ). rather, these agreements are being replaced by fragmentation, bilateralism, and regionalism, as well as local and short-term interests (pascual-ramsay ; asgary and ozdemir ). the international economy and its key players, including smes, are becoming more exposed and vulnerable to existing and emerging geopolitical risks and uncertainties (pascual-ramsay ) . studies show that terrorist attacks, for example, even though they are very small in terms of direct physical impact zones, have economic impacts that are often substantial and very extensive. repeated terrorist attacks in one country not only impact the economy of that country but create spillover impacts for neighboring countries and the global economy. terrorist attacks discourage foreign investments and capital inflows and cause significant losses of economic activity and international trade (abadie and gardeazabal ; araz-takay et al. ). these risks can also increase insurance, transaction, transportation, and security costs for smes.
turkey, as an emerging economy located in a geopolitically complex region (middle east and north africa), with several potentially failing neighboring states, and as a member of various types of regional agreements, has a unique situation in terms of geopolitical risks. turkey has been suffering from terrorism and dealing with regional conflicts, both of which have had various impacts on the smes. the presence of terrorist activities has impacted the emergence and growth of smes and the overall economic performance of the country. bilgel and karahasan ( ) found that after the rise of terrorism, the per capita real gdp in eastern and southeastern anatolia declined by about . %. other studies also found that terrorism has a major negative impact on foreign direct investment in turkey (omay et al. ). global societal risks have specific implications for smes. failure of urban planning leads to declining cities, informal urban growth or sprawl, and poor and fragile infrastructure with significant social, environmental, and health issues (asgary and ozdemir ). such urban environments are not able to adequately support entrepreneurship activities that can compete at national and global levels. cities without efficient and interconnected transportation systems, with significant air pollution, and with unaffordable land and housing prices are not attractive for entrepreneurship growth (tursab and tuader ). but sme engagement in risk management and critical infrastructure protection is an effective way to reduce the impact of future disasters in urban areas (chatterjee et al. ; chatterjee et al. ) . food and water crises are other important global risks that can affect smes in several ways, particularly those in the agri-food business and those in water-intensive manufacturing sectors. social instability, as another global risk, is not healthy for sme growth and competitiveness. global pandemics such as the severe acute respiratory syndrome (sars) pandemic and the h n pandemic can have immediate direct and indirect impacts on smes. for example, sars had major impacts on smes, particularly those in the tourism and hospitality sector in heavily impacted countries such as china, canada, thailand, and hong kong (kuo et al. ) . studies have found that many smes do not recognize pandemics as a meaningful risk. although governments have tried to raise awareness and provide resources to enhance pandemic preparedness by smes, awareness or concern and actual preparedness have not changed much, and most smes do not have appropriate preparedness and continuity plans for future pandemics (watkins et al. ) . armed conflicts, interstate wars, natural hazards and disasters, and climate change are creating widespread involuntary and forced displacement around the globe. population displacements have a range of economic, social, and political impacts on both source and host countries (tumen ; salgado-gálvez ) . the impacts of forced migration on smes have not been studied yet, but such migration may have both positive and negative impacts. at the least, smes can be considered a solution for some of these problems by providing job opportunities for displaced people. turkey has received more than million displaced people from syria since the start of the conflict in (onur ). adverse consequences of technological advances could be very diverse and consequential for smes, especially those in the manufacturing sector.
robotics, autonomous vehicles and drones, automation, smart phones, artificial intelligence, 3-d printing, cloud computing and big data, and new materials are among the new technologies that can have unintended consequences and risks for manufacturing smes. these technologies have the potential to reduce outsourcing. studies predict that % of the jobs in the united states (many of them in smes) are at high risk of being automated over the next years, especially in manufacturing, logistics, and administrative support (pascual-ramsay ). these advances will possibly reduce employment opportunities for workers in manufacturing smes and will challenge sme survival. while information technology brings significant growth opportunities for smes through knowledge and information availability, business communication, cost savings and efficiency, improved decision making, responsiveness, and overall flexibility (mbuyisa and leonard ) , technology also introduces risks, including data theft, disruptions, and cyber-attacks (chacko and harris ) . like other institutions, smes are dependent on the internet and information technology, and a substantial number of their sales and orders are handled through cyberspace and networks. any major failure and disruption of the national and global information infrastructure and networks due to large-scale disaster events can have significant negative impacts on smes. such disruptions can have severe consequences for smes that are very vulnerable and without adequate protection. small and medium enterprises use these technologies in production and service delivery, distribution, sales, and marketing. data breaches, cyber security incidents, and intentional or accidental technological failures can disrupt or significantly damage the short- and long-term operation as well as the existence of smes. following the wef ( ), this study uses a qualitative risk assessment (qra) approach. this allows us to compare the results of the study with the global risk report results. qualitative risk assessment is one of the most widely used risk assessment approaches because of its low cost and ease of use, and because it is quick to perform (modarres ) . in qra, potential likelihoods and consequences are assessed using qualitative scales such as low, medium, and high. qualitative risk assessment uses subjective likelihood and consequence values collected from experts and decision makers and, as such, they are not always perfect estimates and are subject to biases and heuristics (talbot ) . assessed likelihoods and consequences for selected risks are then plotted in a two-dimensional space to generate a risk matrix. various risk matrix forms and sizes have been reported in risk assessment reports. a risk matrix is used to visualize, compare, and rank different risks based on their locations in the matrix. color coding is mostly used to show the importance of each risk. the risk matrix approach is also used for indicating possible risk control measures and for recording the inherent, current, and target levels of risk (hopkin ) . a risk matrix provides some basis for risk treatment and management. risks that are located in the top right-hand corner of the risk matrix (often colored red) have higher likelihoods and impacts. these risks are very critical and need to be controlled. risks that are in the lower (colored green) and middle parts (colored orange or yellow) of the matrix should be monitored and checked regularly.
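the scoring logic just described can be made concrete with a small sketch in python. this is a minimal illustration rather than the authors' actual computation: the likelihood labels match the questionnaire excerpt shown below, while the impact labels and the zone thresholds are assumed placeholders, since the paper does not report them.

from typing import Dict

LIKELIHOOD: Dict[str, int] = {
    "very unlikely": 1, "unlikely": 2, "somewhat likely": 3,
    "likely": 4, "very likely": 5,
}
IMPACT: Dict[str, int] = {  # impact labels are assumed, not from the paper
    "negligible": 1, "minor": 2, "moderate": 3,
    "severe": 4, "catastrophic": 5,
}

def risk_score(likelihood_label: str, impact_label: str) -> int:
    """risk = likelihood x impact, giving a score between 1 and 25."""
    return LIKELIHOOD[likelihood_label] * IMPACT[impact_label]

def matrix_zone(score: int) -> str:
    """map a score to a matrix zone; the thresholds are illustrative only."""
    if score >= 15:
        return "red: critical, needs to be controlled"
    if score >= 8:
        return "orange/yellow: monitor"
    return "green: check regularly"

print(matrix_zone(risk_score("very likely", "severe")))  # red zone (score 20)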
although the risk matrix method has been criticized by scholars and professionals (cox ; ni et al. ; bao et al. ) , it is an invaluable tool for fast, effective, and practical risk assessment (talbot ) . data were collected from a sample of manufacturing smes in turkey. small and medium enterprises in turkey are categorized into three groups of micro, small, and medium-sized enterprises based on their employee numbers and annual revenues. micro firms are those with less than employees and less than usd , annual turnover. small firms are those with less than employees and less than usd . million annual turnover, and medium-sized firms are those with less than employees and less than usd . million annual turnover (karadag ) . to assess and evaluate the risks, a questionnaire survey, including questions, was developed. several questions collected general information about the production type, years in operation, city of operation, position of the respondent in the business, percent of production for export, percent of imported production materials, and export countries. in two sets of questions, sme representatives provided their opinion about the consequences and likelihoods of global risks. samples of a risk likelihood question and a risk consequences question are:
. review the following global economic risks and give your opinion on the likelihood of these risks occurring in the manufacturing sector in turkey over the next years.
• critical infrastructure failure: very unlikely / unlikely / somewhat likely / likely / very likely
. please review the following global economic risks and give your opinion about the potential impacts/consequences of these risks on the manufacturing sector in turkey over the next years.
• critical infrastructure failure:
the questionnaire was designed and distributed using google forms. small and medium enterprises operating in the manufacturing sector (nace revision , c class through - ) were included in the population framework. these are smes in the nace classes that are registered with kosgeb (small and medium industry development organization, turkey) and had an approved kobİ (sme) certificate in . the survey link was emailed to about , smes on april . potential respondents were asked to complete the online survey by may . by the deadline, completed responses had been received. the sample covers smes in different manufacturing areas. after the unspecified "other" manufacturing subgroup ( ), smes in food products ( ), textiles ( ), machinery and equipment ( ), furniture ( ), fabricated metal ( ), basic metal ( ), wood products ( ), rubber and plastic ( ), electrical equipment ( ), and chemical products ( ) had the highest numbers of participants in this study. the questionnaire was completed by various individuals within each sample business, including managers ( ), owners ( ), accounting managers ( ), financial managers ( ), business partners ( ), board members ( ), engineers ( ), and other employees ( ). the sample smes operate in different cities and geographic regions of turkey, including marmara ( ), central anatolia ( ), aegean ( ), black sea ( ), mediterranean ( ), eastern anatolia ( ), and southeastern anatolia ( ). the majority of the sample businesses ( ) have been in operation for less than years, only have been in operation for to years, between and years, and the rest ( ) have been in business for more than years. about . % of the sample businesses were micro businesses, . % small businesses, and about . % medium-sized enterprises.
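the three-way size classification that frames the sample (see the start of this section) can likewise be expressed as a simple function. the employee and turnover cut-offs are elided in the text above, so the values below are hypothetical placeholders rather than the official thresholds cited from karadag.

def classify_sme(employees: int, annual_turnover_usd: float) -> str:
    """return the size class of a firm from its employee count and
    annual turnover; the (max_employees, max_turnover_usd) cut-offs
    below are placeholders, not the official turkish thresholds."""
    cutoffs = [
        ("micro", 10, 700_000),        # placeholder values
        ("small", 50, 8_000_000),      # placeholder values
        ("medium", 250, 40_000_000),   # placeholder values
    ]
    for label, max_emp, max_turnover in cutoffs:
        if employees < max_emp and annual_turnover_usd < max_turnover:
            return label
    return "large"

print(classify_sme(35, 2_500_000))  # -> 'small' under these placeholder cut-offs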
more than % of the smes export their products to varying degrees. they export to a large list of neighboring and, in particular, european countries. the sample smes also import some of their raw materials and equipment, and about % use imported products in their production. applying this methodology to the collected data, the respondents' perceived likelihoods of the global risks and their impacts were identified, risk values were calculated, and a risk matrix was generated using the risk values. this section presents the key findings. almost all global economic risks are perceived to have very high and high likelihoods by the sample turkish smes (fig. a). however, fiscal crises in key economies, high structural unemployment or underemployment, and severe energy price shock are among the most likely risks according to the sample enterprises. a majority of the smes thought that catastrophic and severe impacts could be expected from the economic risks, particularly unmanageable inflation, high structural unemployment or underemployment, and fiscal crises in key economies (fig. b). global environmental risks seem to have relatively lower likelihoods for the sample smes, compared with the global economic risks (fig. a). man-made environmental damages and major natural hazards and disasters show higher perceived likelihoods. the perceived impacts from these risks were scored lower as well. among these risks, environmental damages caused by humans are perceived to have slightly higher impacts for the sample businesses (fig. b). figure a and b show the sample sme respondents' opinions about the likelihoods and the consequences of the global geopolitical risks. failure of national governance, failure of regional or global governance, and large-scale terrorist attacks have the highest average perceived likelihoods in this risk category. however, the impacts are assessed to be higher for interstate conflicts with regional consequences, followed by the failure of national governance and failure of regional or global governance. among global societal risks, failure of urban planning and profound social instability were perceived to have the highest likelihood and impact averages among the sample businesses (fig. a, b). figure a and b present the stated likelihoods and consequences of the global technological risks. while the likelihood of all these risks is perceived to be high, large-scale cyber-attacks and large-scale data fraud are among the top in this risk group. although the means of the impacts are lower for most of these risks, except for the negative consequences of technological developments, more smes stated that the consequences of large-scale cyber-attacks and large-scale data fraud are expected to be severe and catastrophic. risks can be calculated as the multiplication of likelihood by impact (table ). using the mean values of each risk category, economic and technological risks are perceived to have the highest likelihood levels, followed by geopolitical risks (fig. a). in terms of impacts, however, economic risks and geopolitical risks take the first and second ranks, followed by technological risks. societal and environmental risks are considered to have lower impacts (fig. b). in terms of overall risk, the results show that economic risks and geopolitical risks take first and second place, followed by technological risks (fig. c). using the qualitative risk analysis methodology and the perceived likelihood and impact data, a risk matrix was generated.
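the risk values behind the matrix can be reproduced with a short sketch, continuing the 1-5 coding assumed earlier: for each risk, the mean perceived likelihood and mean perceived impact are taken over respondents, and their product gives the risk value used for ranking and for locating the risk in the matrix. the risk names and responses below are invented for illustration.

from statistics import mean

# responses[risk] = list of (likelihood, impact) pairs, one per sme
responses = {
    "fiscal crises in key economies": [(5, 5), (4, 5), (5, 4)],
    "man-made environmental damage": [(3, 3), (2, 4), (3, 2)],
}

for risk, pairs in responses.items():
    l = mean(p[0] for p in pairs)  # mean perceived likelihood
    i = mean(p[1] for p in pairs)  # mean perceived impact
    print(f"{risk}: likelihood={l:.2f}, impact={i:.2f}, risk={l * i:.2f}")

# each (l, i) pair gives the risk's coordinates in the matrix;
# sorting by l * i reproduces the category and risk rankings.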
although the horizontal and vertical axes take values from to , the matrix axes have been rescaled for better visualization. this risk matrix displays the means of stated likelihoods and consequences for each risk. risks at the top right, in the red colored area, are risks with higher-than-average likelihoods and consequences. risks in the lower left part of the matrix, colored green, are considered to be low. figure presents the resulting risk matrix for the businesses and the risks. most economic risks are in the upper part of the risk matrix, followed by some geopolitical risks. large-scale data fraud is the only technological risk that falls into the same area. the majority of environmental and societal risks, although scattered diagonally in the risk matrix, are in the lower part of the matrix. this study examined the global risks from the perspective of manufacturing smes with global footprints in the emerging economy of turkey. the main aim was to understand whether and to what extent country- and industry-specific contexts and conditions affect smes' perceptions of the global risks. key findings are discussed here. first, overall the results suggest that, regardless of the ranking, the global risks are of high concern for smes in turkey. the average likelihood for all global risks is . and the average impact is . . the minimum perceived likelihood (infectious disease) is . and the minimum perceived impact (severe weather events) is . . these figures confirm that all global risks are of concern for the smes and have significant implications for them, particularly those in the manufacturing sector (zheng et al. ). second, the findings indicate that the smes' perceived risks at the country level (turkey) varied significantly from those perceived by the global companies in the global risk report (world economic forum ) (table ). while this study does not examine the underlying causes of these differences, it is evident that the smes' major concerns are global economic and geopolitical risks, both in terms of the likelihoods and the impacts. individual smes in turkey have been exposed to and impacted more by the global economic risks than by other risks. our findings are consistent with a few different but related studies conducted by gül et al. ( ) , topçu ( ) , and deloitte ( ) , which found that economic and financial risks such as devaluation of the turkish lira, interest rate risk, breakdown in cash flow or liquidity risk, credit risk, and increases in input prices were the key risks facing businesses in turkey. third, it is not surprising that smes' highest perceived risks are economic and geopolitical risks. studies demonstrate that financial and economic crises cause substantial damage to smes (gregory et al. ; zheng et al. ; chowdhury ; filardo ; kossyva et al. ). small and medium enterprises are very vulnerable to economic and financial crises, as they are forced to close, downsize, and reduce the number of new ventures due to sharp decreases in demand and revenues (ates et al. ; sannajust ; wehinger ) . in today's global economy, turkish smes are not exempt from this, and they have been frequently impacted by such risks in the past two decades as well (karadag ) . moreover, the high likelihood and high impact values given for financial crises and other economic risks can be explained by the fact that the turkish economy has a deficit in international trade (abbasoglu et al. ) and relies heavily on external energy sources such as oil and natural gas.
fourth, failure of regional or global governance and failure of national governance are the geopolitical risks that are among the top perceived risks by the smes in this study. these risks have been widely felt by turkish smes in recent years. turkey has been in close proximity to a number of regional conflicts with potential impacts on the smes (omay et al. ; bilgel and karahasan ) , and because of the smes' vulnerability (pascual-ramsay ) and awareness of these risks, such risks are rated highly both in terms of the likelihoods and the impacts. fifth, the sample smes also consider the likelihood of large-scale data fraud/theft and large-scale cyber-attacks to be high. this is possibly due to the increasing dependency of the smes on the internet and the increasing number of cyber-attacks and data thefts in recent years (mbuyisa and leonard ) . while smes do not consider the impacts of these risks to be as high as their likelihoods, these risks can still cause disruptions and severe consequences for them, particularly because they are not well equipped to manage these risks. a recent report published by allianz ( ) confirms that smes in turkey increasingly recognize their cyber vulnerability and risks. finally, the relatively lower perceived likelihoods and impacts of the global environmental risks among the sample smes can be attributed to the fact that small businesses may not be directly and highly impacted by distant environmental risks such as major natural hazard-induced disasters, and that awareness of some of the environmental risks among the smes may be lower than of other risks. moreover, turkey has not experienced a major natural hazard-induced disaster in the past years, and major weather events have been very local. the results of this study indicate the importance of addressing global risk assessments by smes. as more and more smes are connected with the national and global economies, their awareness of these risks and of the impacts that the risks could have on them will increase. this awareness can help smes to take these risks into consideration and prepare themselves for such risks. this study highlighted that smes' perceptions of the global risks are different from those of businesses that operate at large scale at the global level. it also demonstrated that a country's circumstances can affect smes' assessments of the likelihood, impacts, and rankings of global risks. it demonstrated that smes are more concerned about economic risks and risks that directly impact economic systems and variables, particularly geopolitical risks. environmental risks, while important, are not at the top of the list for smes. considering the significant role that smes play in local and national economies and the fact that they are concerned most about global economic and geopolitical risks, it can be argued that efforts towards lowering global economic and geopolitical risks can significantly benefit smes. since turkey's smes have been in a relatively unique situation in the past two decades with respect to some of the major global risks, similar studies in countries in other parts of the world may shed more light on how country contexts and the type and size of businesses impact smes' perceptions of global risks. it was beyond the scope of this study to examine the smes' risk and business continuity actions taken to manage and mitigate the risks. future studies can also investigate whether and how smes prepare themselves for global risks.
terrorism and the world economy
the turkish current account deficit
linking entrepreneurial orientation and firm performance: the role of organizational learning capability and innovation performance
allianz risk barometer sme business risks
the endogenous and non-linear relationship between terrorism and economic performance: turkish evidence
global risks and tourism industry in turkey
disaster recovery and business continuity after the flood in pakistan: case of small businesses
measuring small businesses disaster resiliency: case of small businesses impacted by the flood in pakistan
the development of sme managerial practice for effective performance management
impacts of disaster to smes in malaysia
building philippine smes resilience to natural disasters. pids discussion paper series
comparison of different methods to design risk matrices from the perspective of applicability
the economic costs of separatist terrorism in turkey
evaluating the impact of public programs of financial aid to smes during times of crisis: the spanish experience
enterprise risk management in smes: towards a structural model
survivor: the role of innovation in firms' survival
information and communication technology and small, medium, and micro enterprises in asia-pacific - size does matter
bangkok to sendai and beyond: implications for disaster risk reduction in asia
identifying priorities of asian small- and medium-scale enterprises for building disaster resilience
impact of global crisis on small and medium enterprises
what's wrong with risk matrices
climate change and its effects on small businesses in the uk. london: axa insurance uk
annual report on european smes
the role of lifeline losses in business continuity in the case of adapazari
the impact of the international financial crisis on asia and the pacific: highlighting monetary policy challenges from a negative asset price bubble perspective
annual report on european smes / : a recovery on the horizon? brussels: european commission
sustainable institutional entrepreneurship in practice
korean smes in the wake of the financial crisis: strategies, constraints, and performance in a global economy. australia: department of economics
risk and uncertainty expectations in smes: the case of karaman
a comparison of climate change mitigation and adaptation strategies for the construction industries of three coastal territories
awareness of corporate risk management in turkish energy industry
bouncing back from extreme weather events: some preliminary findings on resilience barriers facing small and medium-sized enterprises
the influences of business and decision makers' characteristics on disaster preparedness - a study on the loma prieta earthquake
fundamentals of risk management: understanding, evaluating and implementing
factors affecting the investment climate, smes productivity and entrepreneurship in nigeria
climate change : the physical science basis. contribution of working group i to the fifth assessment report of the intergovernmental panel on climate change
the role of smes and entrepreneurship on economic growth in emerging economies within the post-crisis era: an analysis from turkey
the risks that will threaten going concern and control recommendations: case study on smes
a comparison of the effects of exogenous oil supply shocks on output and inflation in the g countries
co-opetition: a business strategy for sme in times of economic crisis. south-eastern
assessing impacts of sars and avian flu on international tourism demand to asia
rm for smes: tools to use and how
beyond adaptation: resilience for business in light of climate change and weather extremes
assessing organizational resilience to climate and weather extremes: complexities and methodological pathways
responses to the floods in central thailand: perpetuating the vulnerability of small and medium enterprises?
the role of ict use in smes towards poverty reduction: a systematic literature review
the impact of corruption and organized crime on the development of sustainable tourism
risk analysis in engineering: techniques, tools, and trends
energy price risk and the sustainability of demand side supply chains
some extensions on risk matrix approach
economic effects of the turkish earthquakes: an interim report. oecd economics department working paper no .
financing smes and entrepreneurs : an oecd scoreboard
the effects of terrorist activities on foreign direct investment: nonlinear evidence from turkey
turkey: a crossroads of risk and opportunities
global risks and eu businesses
estimating the lost economic production caused by internal displacement because of disasters
impact of the world financial crisis to smes: the determinants of bank loan rejection in europe and usa
smes' construction of climate change risks: the role of networks and values
practice of information systems: evidence from select indian smes
supply chain management under the threat of disruptions
what's right with risk matrices? a great tool for risk managers. version , risk
corporate risk management in businesses
the economic impact of syrian refugees on host countries: quasi-experimental evidence from turkey
association of turkish travel agencies and turkish association of tourism academicians
small businesses: impact of disasters and building resilience: analysing the vulnerability of micro, small, and medium enterprises to natural hazards and their capacity to act as drivers of community recovery. background paper for the global assessment report on disaster risk reduction
tackle the problem when it gets here: pandemic preparedness among small and medium businesses
building up resilience of construction sector smes and their supply chains to extreme weather events
smes and the credit crunch: current financing difficulties, policy measures and a review of literature
corporate strategies for managing climate risks
the global risks report
the impacts of natural disasters on global supply chains. artnet working paper series
the role of smes in asia and their difficulties in accessing finance. asian development bank institute
controlling town industry explosion hazard in china
towards a system of open cities in china: home prices, fdi flows and air quality in major cities

acknowledgements: this research has been partially supported by the scientific and technological research council of turkey (tubitak) through the -fellowship program for visiting scientists and scientists on sabbatical leave. the project has also been supported in part by york university's advanced disaster, emergency, and rapid response simulation (adersim) funded by the ontario research fund.

key: cord- - hlwwdh authors: quarantelli, e. l.; boin, arjen; lagadec, patrick title: studying future disasters and crises: a heuristic approach date: - - journal: handbook of disaster research doi: . / - - - - _ sha: doc_id: cord_uid: hlwwdh

over time, new types of crises and disasters have emerged. we argue that new types of adversity will continue to emerge. in this chapter, we offer a framework to study and interpret new forms of crises and disasters. this framework is informed by historical insights on societal interpretations of crises and disasters. we are particularly focused here on the rise of transboundary crises – those crises that traverse boundaries between countries and policy systems. we identify the characteristics of these transboundary disruptions, sketch a few scenarios, and explore the societal vulnerabilities to this type of threat. we end by discussing some possible implications for planning and preparation practices. disasters and crises are as old as when human beings started to live in groups. through the centuries, new types have emerged. for instance, the development of synthetic chemicals in the th century and nuclear power in the th century created the possibility of toxic chemical disasters and crises from radioactive fallout. older crisis types did not disappear: ancient types such as floods and earthquakes remain with us. the newer disasters and crises are additions to older forms; they recombine elements of old threats and new vulnerabilities. the literature on crisis and disaster research suggests that we are at another important historical juncture, with the emergence of a new and distinctive class of disasters and crises not often seen before (ansell, boin, & keller, ; helsloot, boin, jacobs, & comfort, ; tierney, ) . in this chapter, we discuss the rise of transboundary crises and disasters, and we offer a heuristic approach to studying these new crises and disasters and to understanding the disasters and crises of the future. it is presented primarily as an aid or guide to looking further into the matter, hopefully stimulating more investigation on conceptions of disasters and crises in the past, the present, and the future.
unlike in some areas of scientific inquiry, where seemingly final conclusions can be reached (e.g., about the speed of light), the basic nature of the phenomena we are discussing is dynamic and subject to change through time. the answer to the question of what is a disaster or crisis has evolved and will continue to do so (see perry's chapter in this handbook). human societies have always been faced with risks and hazards. earthquakes, hostile inter- and intra-group relationships, massive floods, sudden epidemics, threats to take multiple hostages or massacre large numbers of persons, avalanches, fires and tsunamis have marked human history for centuries if not eons. disasters and crises requiring a group reaction are as old as when human beings started to live in stable communities. the earliest happenings are attested to in legends and myths, oral traditions and folk songs, religious accounts and archeological evidence from many different cultures and subcultures around the world. for example, a "great flood" story has long existed in many places (lang, ) . as human societies evolved, new threats and hazards emerged. to the old have been added new dangers and perils that have become increasingly dangerous to human groups. risky technological agents have been added to natural hazards. these involve chemical, nuclear and biological threats that can accidentally materialize as disasters. intentional conflict situations have become more damaging, at least in the sense of involving more and more victims. the last years have seen two world wars, massive air and missile attacks by the military on civilians distant from battle areas, many terrorist attacks, and widespread ethnic strife. genocide killed one million persons in rwanda; millions have become refugees and tens of thousands have died in darfur in the sudan in africa. while terrorism is not a new phenomenon, its targets have considerably expanded. some scholars and academics have argued that the very attempt to cope with increasing risks, especially of a technological nature, is indirectly generating new hazards. as the human race has increasingly been able to cope with such basic needs as food and shelter, some of the very coping mechanisms involved (such as the double-edged consequences of agricultural pesticides) have generated new risks for human societies (beck, ; perrow, ) . for example, in , toxic chemicals were successfully used to eradicate massive locust infestations affecting ten western and northern african countries. those very chemicals had other widespread negative effects on humans, animals and crops (irin, ) . implicit in this line of thinking is the argument that double-edged consequences from new innovations (such as the use of chemicals, nuclear power and genetic engineering) will continue to appear (tenner, ) . we cannot say that the future will bring more disasters, as we have no reliable statistics on prior happenings to use as a baseline for counting (quarantelli, ) . at present, it would seem safer to argue that some future events are qualitatively different, and not necessarily that there will be more of them in total (although we would argue the last is a viable hypothesis that requires a good statistical analysis). societies for the most part have not been passive in the face of these dangers to human life and well-being. this is somewhat contrary to what is implicit in much of the social science literature, especially about disasters.
in fact, some of these writings directly or indirectly state that a fatalistic attitude prevailed in the early stages of societal development (e.g., quarantelli, ). this was thought because religious beliefs attributed negative societal happenings to punishments or tests by supernatural entities (the "acts of god" notion, although this particular phrase became a common usage mostly because it served the interests of insurance companies). (early societal development of this kind seems to date to about five to six thousand years ago (see lenski, lenski, & nolan, ); however, recent archeological studies suggest that humans started to abandon nomadic wanderings and settled into permanent sites around , years ago (balter, ), so community-recognized disasters and crises might have an even longer history.) but prayers, offerings and rituals are widely seen as means to influence the supernatural. so passivity is not an automatic response to disasters and crises even by religious believers, an observation sometimes unnoticed by secular researchers.

in fact, historical studies strongly indicate that societal interpretations have been more differentiated than once believed and have shifted through the centuries, at least in the western world. in ancient greece, aristotle categorized disasters as the result of natural phenomena and not manifestations of supernatural interventions (aristotle, ). the spread of christianity about , years ago helped foster the belief that disasters were "special providences sent directly" from "god to punish sinners" (mulcahy, , p. ). in the middle ages, even scholars and educated elites "no longer questioned the holy origins of natural disasters" (massard-guilbaud, platt, & schott, , p. ). starting in the th century, however, such providential explanations started to be replaced by "ones that viewed disasters as accidental or natural events" (mulcahy, , p. ). this, of course, also reflected a strong secularization trend in western societies. perhaps this reached a climax with the lisbon earthquake, which dynes notes can be seen as the "first modern disaster" ( , p. ).

so far, our discussion has been mostly from the perspective of the educated elites in western societies. little scholarly attention seems to have been given to what developed in non-western social systems. one passing observation about the ottoman empire and fire disasters suggests that the pattern just discussed might not be universal. thus, while fire prevention measures were encouraged in cities, they were not mandated "since calamities were considered" as expressions of the will of god (yerolympos, , p. ). even as late as , an ottoman urban building code stated that according to religious writing "the will of the almighty will be done" and nothing can or should be done about that. at the same time, this code advances the idea that nevertheless there were protective measures that could be taken against fires that are "the will of allah" (quoted in yerolympos, , p. ).

of course, incompatibility between natural and supernatural views about the world is not unique to disaster and crisis phenomena, but that still leaves the distinction important. even recently, an australian disaster researcher asserted that in the southeast asian tsunami most of the population seemed to believe that the disaster was "sent either as a test of faith or punishment" (mcaneney, , p. ).
some were by: "fundamentalist christians" who tend to view all disasters "as a harbinger of the apocalypse". others were by "radical islamists" who are inclined to see any disaster that "washes the beaches clear of half-nude tourists to be divine" (neiman, , p. ) . after hurricane katrina, some leaders of evangelical groups spoke of the disaster as punishment imposed by god for "national sins" (cooperman, ) . in the absence of systematic studies, probably the best hypothesis that should be researched is that at present religious interpretations about disasters and crisis still appear to be widely held, but relative to the past probably have eroded among people in general. the orientation is almost certainly affected by sharp cross-societal difference in the importance attributed to religion as can be noted in the religious belief systems and practices as currently exist in the united states and many islamic countries, compared to japan or a highly secular western europe. apart from the varying interpretations of the phenomena, how have societies behaviorally reacted to existing and ever-changing threats and risks? as a whole, human groups have evolved a for an interesting attempt to deal with these two perspectives see the paper entitled disaster: a reality or a construct? perspective from the east, written by jigyasu ( ) an indian scholar. variety of formal and informal mechanisms to prevent and to deal with crises and disasters. but societies have followed different directions depending on the perceived sources of disasters and crises. responses tend to differ with the perception of the primary origin (the supernatural, the natural or the human sphere). for example, floods were seen long ago as a continuing problem that required a collective response involving engineering measures. stories that a chinese emperor, centuries before christ, deepened the ever-flooding yellow river by massive dredging and the building of diversion canals may be more legend than fact (waterbury, , p. ) . however, there is clear evidence that in egypt in the th century bc, the th dynasty pharaoh, amenemher ii completed southwest of cairo what was probably history's first substantial river control project (an irrigation canal and dam with sluice gates). other documentary evidence indicates that dams for flood control purposes were built as far back as b c in greece (schnitter, , p. , - ) . such mitigatory efforts indicate both the belief that there was a long-term natural risk as well as one that could be coped with by physically altering structural dimensions. later, particular in europe, there were many recurrent efforts to institute mitigation measures. for example, earthquake resistant building techniques were developed in ancient rome, although "they had been forgotten by the middle ages" (massard-guilbaud et al., , p. ) . the threats from floods and fires spurred mitigation efforts in greece. starting in the th century, developing urban areas devised many safeguards against fires, varying from regulations regarding inflammable items to storage of water for firefighting purposes. in many towns in medieval poland, dams, dikes and piles along riverbanks were built (sowina, ) . of course, actions taken were not always successful. but, if nothing else, these examples show that organized mitigation efforts have been undertaken for a long time in human history. there have been two other major behavioral trends of long duration that are really preventive in intent if not always in reality. 
one has been the routinization of responses by emergency oriented groups so as to prevent emergencies from escalating into disasters or crises. for example, in ancient rome, the first groups informally set up to fight fires were composed of untrained slaves. but when a fire in a.d. burned almost a quarter of rome, a corps of vigiles was created that had full-time personnel and specialized equipment. in more recent times, there are good examples of this routinization in the planning of public utilities that have standardized operating procedures to deal with everyday emergencies so as to prevent them from materializing into disasters. in the conflict area, there are various un and other international organizations, such as the international atomic energy agency and the european union (eu), that also try to head off the development of crises. in short, societies have continually evolved groups and procedures to try to prevent old and new risks and threats from escalating into disasters and crises. a second more recent major trend has been the development of specific organizations to deal first with wartime crises and then with peacetime disasters. societies for about a century have been creating specific organizations to deal first with new risks for civilians created by changes in warfare, and then improving on these new groups as they have been extended to peacetime situations. rooted in civil defense groups created for air raid situations, there has since been the evolvement of civilian emergency management agencies (blanchard, ) . accompanying this has been the start of the professionalization of disaster planners and crisis managers. there has been a notable shift from the involvement of amateurs to educated professionals. human societies adjusted not only to the early risks and hazards, but also to the newer ones that appeared up to the last century. the very existence of the human race is testimony to the social coping mechanisms of humans as they face such threats. here and there a few communities and groups have not been able to cope with the manifestations of contemporary risks and hazards (diamond, ) . but these have been very rare cases. neither disasters nor crises involving conflict have had that much effect on the continuing existence of cities anywhere in the world. throughout history, many cities have been destroyed. they have been: "sacked, shaken, burned, bombed, flooded, starved, irradiated and poisoned", but in almost every case they have phoenix-like been reestablished (vale & campanella, , p. ) . around the world, from the th to the th century, only cities were "permanently abandoned following destruction" (vale & campanella, , p. ) . the same analysis notes that large cities such as baghdad, moscow, aleppo, mexico city, budapest, dresden, tokyo, hiroshima and nagasaki all suffered massive physical destruction and lost huge numbers of their populations due to disasters and wartime attacks. all were rebuilt and rebounded. at the start of the th century, "such resilience became a nearly universal fact" about urban settlements around the world (vale & campanella, , p. ) . looking at these cities today as well as warsaw, berlin, hamburg and new orleans, it seems this recuperative tendency is very strong (see also schneider & susser, ) . in the hiroshima museum that now exists at the exact point where the bomb fell, there is a -degree photograph of the zone around that point, taken a few days after the attack. 
except for a few piles of ruins, there is nothing but rubble as far as the eye can see in every direction. there were statements made that this would be the scene at that location for decades. but a visitor to the museum today can see in the windows behind the circular photograph, many signs of a bustling city and its population (for a description of the museum see webb, ) . hiroshima did receive much help and aid to rebuild. but the city came back in ways that observers at the time of impact did not foresee. early efforts to understand and to cope with disasters and crises were generally of an ad hoc nature. with the strong development of science in the th century, there was the start of understanding the physical aspects of natural disasters, and these had some influence on structural mitigation measures that were undertaken. however, the systematic social science study of crises and disasters is about a half-century-old (fritz, ; kreps, ; quarantelli, quarantelli, , schorr, ; wright & rossi, ) . in short, there is currently a solid body of research-generated knowledge developed over the last half century of continuing and ever increasing studies around the world in different social science disciplines. to be sure, such accounts and reports are somewhat selective and not complete. there are now case studies and analytical reports on natural and technological disaster (and to some extent on other crises) numbering in the four figures. in addition, there are numerous impressions of specific behavioral dimensions that have been derived from field research (for summaries and inventories see alexander, ; cutter, ; dynes, demarchi, & pelanda, ; dynes & tierney, ; farazmand, ; helsloot, boin, jacobs, & comfort, ; mileti, ; oliver-smith, ; perry, lindell, & prater, ; rosenthal, boin, & comfort, ; rosenthal, charles, & 't hart, ; tierney, lindell, & perry, ; turner, ) . what are the distinctive aspects of the newer disasters and crises that are not seen in traditional ones? to answer this question, we considered what social science studies and reports had found about behavior in disasters and crises up to the present time. we then implicitly compared those observations and findings with the distinctive behavioral aspects of the newer disasters and crises. one issue that has always interested researchers and scholars is how to conceptualize disasters and crises. there is far from full agreement that all disasters and crises can be categorized together as being relatively homogeneous phenomena (quarantelli, ; perry & quarantelli, ) . this is despite the fact that there have been a number of attempts to distinguish between, among and within different kinds of disasters and crises. however, no one overall view has won anywhere near general acceptance among self-designated disaster and crisis researchers. to illustrate we will briefly note some of the major formulations advanced. for example, one attempt has been to distinguish between natural and technological disasters (erikson, ; picou & gill, ) . the basic assumption was that the inherent nature of the agent involved made a difference. implicit was the idea that technological dangers or threats present a different and more varying kind of challenge to human societies than do natural hazards or risks. most researchers have since dropped the distinction as hazards have come to be seen as less important than the social setting in which they appear. 
in recent major volumes on what is a disaster (quarantelli, ; perry & quarantelli, ), the distinction was not even mentioned by most of the two dozen scholars who addressed the basic question. other scholars have struggled with the notion that there may be some important differences between what can be called "disasters" and "crises". the assumption here is that different community-level social phenomena are involved, depending on the referent. thus, some scholars distinguish between consensus and conflict types of crises (stallings ( ) tries to reconcile the two perspectives). in some research circles, almost all natural and most technological disasters are viewed as consensus types of crises (quarantelli, ). these are contrasted with crises involving conflict, such as are exemplified by riots, terrorist attacks, ethnic cleansings and intergroup clashes. in the latter type, at least one major party is either trying to make it worse or to extend the duration of the crisis. in natural and technological disasters, no one deliberately wants to make the situation worse or create more damage or fatalities.

now, there can be disputes or serious disagreements in natural or technological disasters. it is almost inevitable that there will be some personal, organizational and community conflicts as, for example, in the recovery phase of disasters, where scapegoating is common (bucher, ; drabek & quarantelli, ; cf. boin, mcconnell, & 't hart, ). in some crises, however, the overall intent of major social actors is to deliberately attempt to generate conflict. in contrast to the unfolding sequential process of natural disasters, terrorist groups or protesting rioters not only intentionally seek to disrupt social life, they modify or delay their attacks depending on perceived countermeasures.

apart from a simple observable logical distinction between consensus and conflict types of crises, empirical studies have also established behavioral differences. for example, looting behavior is distinctively different in the two types. in the typical disaster in western societies, looting is almost always rare, covert and socially condemned, done by individuals, and involves targets of opportunity. in contrast, in many conflict crises looting is very common, overt and socially supported, undertaken by established groups of relatives or friends, and involves deliberately targeted locations (quarantelli & dynes, ). likewise, there are major differences in hospital activities in the two kinds of crises, with more variation in conflict situations. there are differences also in the extent to which both organizational and community-level changes occur as a result of consensus and conflict crises, with more changes resulting from conflict occasions (quarantelli, ). finally, it has been suggested that the mass media system operates differently in terrorism situations and in natural and technological disasters (project for excellence in journalism, , ).

both the oklahoma city bombing and the 9/11 world trade center attack led to sharp clashes between different groups of initial organizational responders. there were those who saw these happenings primarily as criminal attacks necessitating closure of the location as a crime scene, and those who saw them primarily as situations where priority ought to be on rescuing survivors. (for a contrary view, which sees terrorist occasions as more or less the same as what behaviorally appears in natural and technological disasters, see fischer ( ).)
in the 9/11 situation, the clash continued later into the issues of the handling of dead bodies and debris clearance. all this goes to show that crises and disasters are socially constructed. whether it is by theorists, researchers, operational personnel, politicians or citizens, any designation comes from the construction process and is not inherent in the phenomena themselves. this is well illustrated in an article by cunningham ( ), where he shows that a major cyanide spill into the danube river was differently defined as an incident, an accident, or a catastrophe, depending on how culpability was perceived and who was doing the defining.

still other distinctions have been made. some advocate "crisis" as the central concept in description and analysis (see the chapter of boin, kuipers and 't hart in this handbook). in this line of thinking, a crisis involves an urgent threat to the core functions of a social system. a disaster is seen as "a crisis with a bad ending" (boin, ). this is consistent with the earlier expressed idea that while there are many hazards and risks, only a few actually manifest themselves. but the crisis idea does not differentiate among the manifestations themselves as the consensus and conflict distinction does.

this is not the place to try to settle conceptual disagreements and we will not attempt to do so. anyone in these areas of study should acknowledge that there are different views, and different proponents should try to make their positions as explicit as possible so people do not continue to talk past one another. it is perhaps not amiss here to note that the very words or terms used to designate the core nature of the phenomena are etymologically very complex, with major shifts in meaning through time. we are far from having standardized terms and similar connotations and denotations for them. (see safire ( ), who struggles with past and present etymological meanings of "disaster", "catastrophe", "calamity" and "cataclysm"; also see murria ( ), who, looking outside the english language, found a bewildering set of words used, many of which had no equivalent meanings in other languages.)

a conceptual question that has come increasingly to the fore in the last decade or so is the question: have new kinds of crises and disasters begun to appear? we think it is fair to say that there are new types of risks and hazards. there are also structural changes in social settings. together, they raise the prospect of new types of disasters and crises. for example, we have seen the breakdown of modern transportation systems (think of the volcanic ash crisis that paralyzed air traffic in ; kuipers & boin, ). there have been massive information system failures either through sabotage or as a result of technical breakdowns in linked systems. there have been terrorist attacks of a magnitude and scale not seen before. we are living with the prospect of widespread illnesses and health-related difficulties that appear to be qualitatively different from traditional medical problems. we have just lived through financial and economic collapses that cut across different social systems around the world. many of these "new" disruptions have both traditional and non-traditional features: think of the heat waves in paris (lagadec, ) and chicago (klinenberg, ), the ice storms in canada (scanlon, ), but also the genocide-like violence in africa and the former yugoslavia.

the chernobyl radiation fallout ( ) led some scholars and researchers to start asking if there was not something distinctively new about that disaster. the fallout was first openly measured in sweden. officials were mystified in that they could not locate any possible radiation source in their own country.
later, radiation effects on vegetation eaten by reindeer past the arctic circle in northern sweden were linked to the nuclear plant accident in the soviet union. the mysterious origins, the crossing of national boundaries, and the emergent involvement of many european and transnational groups were not something researchers had typically seen together in other prior disasters.

looking back, it is clear that certain other disasters also should have alerted all of us to the probability that new forms of adversity were emerging. in november , water used to put out a fire in a plant involving agricultural chemicals spilled into the river rhine. the highly polluted river went through switzerland, germany, france, luxembourg and the netherlands. a series of massive fire smog episodes plagued indonesia in and . land speculations led to fire-clearing efforts that, partly because of drought conditions, resulted in forest fires that produced thick smog hazes that spread over much of southeast asia (barber & schweithelm, ). these disrupted travel, which in turn affected tourism as well as creating respiratory health problems, and led to political criticism of indonesia by other countries as multi-nation efforts to cope with the problem were not very successful. both of these occasions had characteristics that were not typically seen in traditional disasters.

in the original version of this chapter, we spoke about "trans-system social ruptures". this term was an extension of the earlier label of "social ruptures" advanced by lagadec ( , ). the term "transboundary" has since become the more conventional way to describe crises and disasters that jump across different societal boundaries, disrupting the social fabric of different social systems (ansell et al., ). the two prime and initial examples we used in the original chapter were the severe acute respiratory syndrome (sars) and the sobig.f computer virus spread, both of which appeared in . the first involved a "natural" phenomenon, whereas the second was intentionally created. since there is much descriptive literature available on both, we here provide only very brief statements about these phenomena.

the new infectious disease sars appeared in the winter of . apparently jumping from animals to humans, it originated in southern rural china, near the city of guangzhou. from there it moved through hong kong and southeast asia. it spread quickly around the world because international plane flights were shorter than its incubation period. at least infected persons died. it hit canada with outbreaks in vancouver in the west and toronto far away in the east. in time, persons died of the several hundred who got ill, and thousands of others were quarantined. the city's healthcare system virtually closed down except for the most urgent of cases, with countless procedures being delayed or cancelled. the result was widespread anxiety in the area, resulting in the closing of schools, the cancellation of many meetings and, because visitors and tourists stayed away, a considerable negative effect on the economy (commission report, , p. ).
the commission report notes a lack of coordination among the multitude of private and public sector organizations involved, a lack of consistent information on what was really happening, and jurisdictional squabbling over who should be doing what. although sars vanished worldwide after june , to this day it is still not clear why it became so virulent in the initial outbreak and why it has disappeared (yardley, ).

the sobig.f computer virus spread in august (schwartz, ). it affected many computer systems and threatened almost all computers connected to the internet. the damage was very costly. a variety of organizations around the world, public and private, attempted to deal with the problem. initially uncoordinated, there eventually emerged in an informal way a degree of informational networking on how to cope with what was happening (koerner, ).

what can we generalize from not only these two cases, but also others that we looked at later (ansell et al., )? (in may , the so-called wannacry virus affected millions of computers across the world with ransomware; many hospitals were affected.) the characteristics we depict are stated in ideal-typical terms; that is, from a social science perspective, what the phenomena would be if they existed in pure or perfect form.

first, the threat jumps across many international and national/political governmental boundaries. it crosses functional boundaries, jumping from one sector to another, and crossing from the private into public sectors (and sometimes back). there was, for example, the huge spatial leap of sars from a rural area in china to metropolitan toronto, canada.

second, a transboundary threat can spread very fast. cases of sars went around the world in less than hours, with a person who had been in china flying to canada quickly infecting persons in toronto. the spread of the sobig.f virus was called the fastest ever (thompson, ). this quick spread is accompanied by a very quick if not almost simultaneous global awareness of the risk because of mass media attention.

third, there is no known central or clear point of origin, at least initially, and the possible negative effects are at first far from clear. this stood out when sars first appeared in canada. there is much ambiguity as to what might happen. ambiguity is of course a major hallmark of disasters and crises (turner, ). it is more pervasive in transboundary crises as information about causes, characteristics and consequences is distributed across the system.

fourth, there is a potentially if not actually large number of victims, directly or indirectly. the sobig computer virus infected % of email users in china, that is, about million people, and about three-fourths of email messages around the world were infected by this virus (koerner, ). in contrast to the geographic limits of most past disasters, the potential number of victims is often open-ended in disruptions that span across boundaries.

fifth, traditional "solutions" or approaches embedded in local and/or professional institutions will not always work. this is rather contrary to the current emphasis in emergency management philosophy. the prime and first locus of planning and managing cannot be the local community as it is presently understood. international and transnational organizations must typically be involved very early in the initial response (boin, ekengren, & rhinard, ). the nation state may not even be a prime actor in the situation.

sixth, although responding organizations and groups are major players, there is an exceptional amount of emergent behavior and the development of many informal ephemeral linkages. in some respects, the informal social networks generated, involving much information networking, are not always easily identifiable from the outside, even though they are often the crucial actors at the height of the crisis.
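the speed and boundary-jumping characteristics just listed can be made concrete with a small illustration. the sketch below is our own hedged toy model, written for this discussion rather than drawn from the chapter's sources: it assumes that each undetected case (an infected traveler, or an infected computer) seeds a few new cases per generation, and that some fraction of new cases lands in a jurisdiction not yet involved. the reproduction rate, the boundary-jumping fraction, and the number of generations are all hypothetical values.

# illustrative toy model (our addition, not from the chapter or its sources):
# each undetected case seeds r new ones per generation, and a fraction of the
# new cases appears across a boundary, widening the set of authorities that
# must respond. r, jump_fraction, and the generation count are hypothetical.

def spread(generations, r=3.0, jump_fraction=0.1):
    """track total cases and jurisdictions touched, generation by generation."""
    cases, jurisdictions = 1.0, 1
    history = []
    for g in range(generations):
        new_cases = cases * r
        # some new cases land in jurisdictions not yet involved
        jurisdictions += max(1, int(new_cases * jump_fraction))
        cases += new_cases
        history.append((g + 1, round(cases), jurisdictions))
    return history

for gen, total, places in spread(6):
    print(f"generation {gen}: ~{total} cases across ~{places} jurisdictions")

with a sars-like generation time of roughly a week (again, an assumption), the toy model's point is that within a few generations the set of responders implicated grows well past what any single local plan anticipates, which is why the prime locus of planning cannot be the local community alone.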
in this section, we sketch several future scenarios that most likely would create transboundary disasters. even though some of the scenarios discussed might seem to be science fiction in nature, the possibilities we discuss are well within the realm of realistic scientific possibility.

the most obvious scenario revolves around asteroids or comets hitting planet earth (di justo, ). this has, of course, happened in the past, but even more recent impacts found no or relatively few human beings around. there are two major possibilities with respect to impact (mcguire, ; wisner, ). a landing in the ocean would trigger a tsunami-like impact in coastal areas. just thinking through how, when and where coastal population evacuations might have to be undertaken ahead of time is daunting. statistically less likely is a landing in a heavily populated area. but a terrestrial impact anywhere on land would generate very high quantities of dust in the atmosphere, which would affect food production as well as create economic disruption. this would be akin to the tambora volcanic eruption in , which led to very cold summers and crop failures (post, ). the planning and management problems for handling something like this would be enormous. the explosion of space shuttle columbia scattered debris over a large part of the united states. this relatively small disaster, compared to a comet or asteroid impact, involved massive crossing of boundaries, a large number of potential victims, and could not be managed by local community institutions. the response required that an unplanned effort, coordinating organizations that had not previously worked with one another and other unfamiliar groups, public and private (ranging from the us forest service to local red cross volunteers to regional medical groups), be informally instituted over a great part of the united states (beck & plowman, ; donahue, ).

a second scenario is the inadvertent or deliberate creation of biotechnological disasters. genetic engineering of humans or food products is currently in its infancy. the possible good outcomes and products from such activity are tremendous (morton, ) and are spreading around the world (pollack, ). but the double-edged possibilities mentioned earlier are also present. there is dispute over genetically modified crops, with many european countries resisting and preventing their use and spread in their countries. while no major disaster or crisis from this biotechnology has yet occurred, there have been many accidents and incidents that suggest this will be only a matter of time. for example, in , starlink corn, approved only for animal feed, was found in the food supply, in items such as taco shells and other groceries. the same year, farmers in europe learned that they had unknowingly been growing modified canola using mixed seed from canada. in , modified corn was found in mexico even though it was illegal to plant in that country. that same year, material from experimental corn that had been engineered to produce a pharmaceutical was found in soybeans in the state of nebraska.
in several places, organic farmers found that it was impossible for them to keep their fields uncontaminated (for further details about all these incidents and other examples, see pollack, ). noticeable are the leaping of boundaries and the uncertainty about the route of spreading. it does not take much imagination to see that a modified gene intended for restricted use could escape and create a contamination that could wreak ecological and other havoc. perhaps even more disturbing to some is genetic engineering involving human beings. the worldwide dispute over cloning, while currently perhaps more a philosophical and moral issue, does also partly involve the concern over creating flawed human-like creatures. it is possible to visualize not far-fetched worst-case scenarios that could be rather disastrous.

it should be noted that even when there is some prior knowledge of a very serious potential threat, what might happen is still likely to be as ambiguous and complex as when sars first surfaced. this can be seen in the continuing major concern expressed in to mid- about the possible pandemic spread of avian influenza, the so-called "bird flu" (nuzzo, ; thorson & ekdahl, ). knowledge of the evolution and spread of new pandemics, their effects and whether presently available protective measures would work may well be very limited. knowledge that it might occur provides very little guidance on what might actually happen. it is possible to imagine the destruction of all food supplies for human beings through either the inadvertent or the deliberate proliferation of very toxic biotechnological innovations for which no known barriers to spreading exist. these potential kinds of global disasters are of relatively recent origin and we may expect more such possibilities in the future. the human race is opening up potentially very catastrophic possibilities by innovations in nanotechnology, genetic engineering and robotics (barrat, ; joy, ; makridakis, ). a potential is not an actuality. but it would be foolish from both a research as well as a planning and managing viewpoint to simply ignore these and other doomsday possibilities.

the question might be asked if there is a built-in professional bias among disaster and crisis researchers and emergency planners to look for and to expect the worst (see mueller ( ) for numerous examples). (for example, rees ( ), a cosmologist at cambridge university, gives civilization as we know it only a 50-50 chance of surviving the 21st century.) in the disaster and crisis area, this orientation is reinforced by the strong tendency of social critics and intellectuals to stress the negative. it would pay to look at the past, see what was projected at a particular time, and then look at what actually happened. the worldwide expectations about what would happen at the turn of the century to computers are now simply remembered as the y2k fiasco. it would be a worthy study to take projections by researchers about the future of ongoing crises and disasters, and then to look at what actually happened. in the s, in the united states, scholars made rough analyses about the immediate future course of racial and university riots in the country. their initial appearances had not been forecast. moreover, there was a dismal record in predicting how such events would unfold (no one seemed to have foreseen that the riots would go from ghetto areas to university campuses), as well as that they rather abruptly stopped. we should be able to do a better job than we have so far in making projections about the future. but perhaps that is asking more of disaster and crisis researchers than is reasonable.
after all, social scientists with expertise in certain areas, to take recent examples, failed completely to predict or forecast the non-violent demise of the soviet union, the peaceful transition in south africa, or the development of a market economy in communist china (cf. tetlock, ).

a disaster or crisis always occurs in some kind of social setting. by social setting we mean social systems. these systems can and do differ in social structures and cultural frameworks. there has been a bias in disaster and crisis research towards focusing on specific agents and specific events. thus, there is the inclination of social science researchers to say they studied this or that earthquake, flood, explosion and/or radioactive fallout. at one level that is nonsense. these terms refer to geophysical, climatological or physical happenings, which are hardly the province of social scientists. instead, those focused on the social in the broad sense of the term should be studying social phenomena. our view is that what should be looked at more is not the possible agent that might be involved, but the social setting of the happening.

this becomes obvious when researchers have to look at such happenings as the southeast asia tsunami or locust infestations in africa. both of these occasions impacted a variety of social systems as well as involving social actors from outside those systems. this led in the tsunami disaster to sharp cultural clashes regarding how to handle the dead between western european organizations, who came in to look mostly for the bodies of their tourist citizens, and local groups, who had different beliefs and values with respect to dead bodies (scanlon, personal communication with first author). the residents of the andaman islands lived at a level many would consider "primitive". at the time of the tsunami in southeast asia, they had no access to modern warning systems. but prior to the tsunami, members of the tribal communities saw signs of disturbed marine life and heard unusual agitated cries of sea birds. this was interpreted as a sign of impending danger, so that part of the population got off the beaches and retreated inland to the woods and survived intact (icpac report, ).

there is a need to look at both the current social settings as well as certain social trends that influence disasters and crises. in no way are we going to address all aspects of social systems and cultural frameworks or their social evolution, either past or prospective. instead, we will selectively discuss and illustrate a few dimensions that would seem to be particularly important with respect to crises and disasters. what might these be? let us first look at existing social structures around the world. what differences are there in authority relationships, social institutions and social diversity? as examples, we might note that australia and the united states are far more governmentally decentralized than france or japan (bosner, ; schoff, ). this affects what might or might not happen at times of disasters (it is often accepted that top-down systems have more problems in responding to crises and disasters). but what does it mean for the management of transboundary disruptions, which require increased cooperation between and across systems? will decentralized systems be able to produce "emergent" transboundary cooperation?
as another example, mass media systems operate in rather different ways in china compared with western europe. this is important because to a considerable extent the mass communication system (including social media) is by far the major source of "information" about a disaster or a crisis. they play a major role in the social construction of disasters and crises. for a long time in the former soviet union, even major disasters and overt internal conflicts by way of riots were simply not openly reported (berg, ) . and only late in did chinese authorities announce that henceforth death tolls in natural disasters would be made public, but not for other kinds of crises (kahn, ) . another social structural dimension has to do with the range of social diversity in different systems (bolin & stanford, ) . social groupings and categories can be markedly different in their homogeneity or heterogeneity. the variation, for instance, can be in terms of life styles, class differences or demographic composition. the aging population in western europe and japan is in sharp contrast to the very young populations in most developing countries. this is important because the very young and the very old incur disproportionately the greatest number of fatalities in disasters. human societies also differ in terms of their cultural frameworks. as anthropologists have pointed out, they can have very different patterns of beliefs, norms, and values. as one example, there can be widely held different conceptions of what occasions disasters and crises. the source can be attributed to supernatural, natural, or human factors as indicated earlier. this can markedly affect everything from what mitigation measures might be considered to how recovery and reconstruction will be undertaken. norms indicating what course of action should be followed in different situations can vary tremendously. for example, the norm of helping others outside of one's own immediate group at times of disasters and crises ranges from full help to none. thus, although the kobe earthquake was an exception, any extensive volunteering in disasters was very rare in japan (for a comparison of the us and japan, see hayashi, ) . in societies with extreme cross-cultural ethnic or racial differences, volunteering to help others outside of one's own group at times of disasters or crisis is almost unknown. social structures and cultural frameworks of course are always changing. to understand future disasters and crises, it is necessary to identify and understand trends that may be operative with respect to both social structures and cultural frameworks. in particular, for our purposes, it is important to note trends that might be cutting across structural and cultural boundaries. globalization has been an ongoing force. leaving aside the substantive disputes about the meaning of the term, what is involved is at least the increasing appearance of new social actors at the global level. with respect to disaster relief and recovery, there is the continuing rise of transnational or international organizations such as un entities, the european union, religiously oriented groupings, and the world bank (boin et al., ) . with the decline of the importance of the nation state (guéhenno, ; mann, ) , more and new social actors, especially of an ngo nature, are to be anticipated. the rise of the information society has enabled the development of informal social networks that globally cut across political boundaries. this trend will likely increase in the future. 
such networks are creating social capital (in the social science sense) that will be increasingly important in dealing with disasters and crises. at the cultural level, we can note the greater insistence of citizens that they ought to be actively protected against disasters and crises (beck, ). this is part of a democratic ideology that has spread around the world. that same ideology carries an inherent paradox: the global citizen may not appreciate government interference in everyday life, but expects government to show up immediately when acute adversity hits.

finally, there has been the impact of the 9/11 attacks, especially on official thinking, not just in the united states but elsewhere also. this happening has clearly been a "focusing event" (as birkland ( ) uses the term) and changed, along some lines, certain values, beliefs and norms (smelser, ; tierney, ). there is a tendency, at least in the us after 9/11, to think that all future crises and disasters will be new forms of terrorism. one can see this in the creation of the us department of homeland security, which repeated errors in approach and thinking that many years of research have shown to be incorrect (e.g., an imposition of a command and control model, assuming that citizens will react inappropriately to warnings, seeing organizational improvisation as bad managing; see dynes, ). these changes were accompanied by the downgrading of fema and its emphasis on mitigation (cohn, ). valid or not, such ideas influence thinking about transboundary disasters and crises (and not just in the united states).

the ideas expressed above and the examples used were intended to make several simple points. they suggest, for instance, that an earthquake of the same magnitude will probably be reacted to differently in france than in iran. a riot in sweden will be a different phenomenon than one in myanmar. to understand and analyze such happenings requires taking into account the aspects just discussed. it is hard to believe that countries that currently have no functioning national government, such as somalia and the democratic republic of the congo, or marginally operative ones such as afghanistan, will have the same reaction to disasters and crises as societies with fully functional national governments. different kinds of disasters and crises will occur in rather different social settings. in fact, events that today are considered disasters or crises were not necessarily so viewed in the past.

in noting these cross-societal and cross-cultural differences, we are not saying that there are no universal principles of disaster and crisis behavior. there is considerable research evidence supportive of that notion. we would argue, for example, that many aspects of effective warning systems, problems of bureaucracies in responding, and the crucial importance of the family/household unit are roughly the same in all societies. to suggest the importance of cross-societal and cross-cultural differences is simply to suggest that good social science research needs to take differences into account while at the same time searching for universal principles about disasters and crises. this is consistent with those disaster researchers and scholars (e.g., oliver-smith, ) who have argued that studies in these areas have badly neglected the historical context of such happenings.
of course, this neglect of the larger and particularly historical context has characterized much social science research of any kind (wallerstein, ); it is not peculiar to disaster and crisis studies.

one trend that affects the character of modern crises and disasters is what we call the social amplification of crises and disasters. pidgeon, kasperson, and slovic ( ) described a social amplification process with respect to risk. to them, risk not only depends on the character of the dangerous agent itself but also on how it is seen in the larger context in which it appears. the idea that there can be social amplification of risk rests on the assumption that aspects relevant to hazards interact with processes of a psychological, social, institutional, and cultural nature in such a manner that they can increase or decrease perceptions of risk (kasperson & kasperson, ). it is important to note that the perceived risk could be raised or diminished depending on the factors in the larger context, which makes it different from the vulnerability paradigm, which tends to assume the factors involved will be primarily negative ones. we have taken this idea and extended it to the behaviors that appear in disasters and crises.

extreme heat waves and massive blizzards are hardly new weather phenomena (burt, ). there have recently been two heat waves, however, that have new elements in them. in , a long-lasting and very intense heat wave battered france. nearly , persons died (and perhaps , - , in all of europe). particularly noticeable was that the victims were primarily socially isolated older persons. another characteristic was that officials were very slow in accepting the fact that there was a problem, and so there was very little initial response (lagadec, ). there was a similar earlier happening in chicago, not much noticed until reported in a study seven years later (see klinenberg, ). it exhibited the same features, that is, older isolated victims, bureaucratic indifference, and mass media uncertainty.

at the other temperature extreme, in , canada experienced an accumulation of snow and ice that went considerably beyond the typical. the ice storm heavily impacted electric and transport systems, especially around montreal. the affected critical infrastructures created chain reactions that reached into banks and refineries. at least municipalities declared a state of emergency. such a very large geographic area was involved that many police were baffled that "there was no scene", no "ground zero" that could be the focus of attention (scanlon, ). there were also many emergent groups and informal network linkages (scanlon, ).

in some ways, this was similar to what happened in august , when the highly interconnected eastern north american power grid started to fail after three transmission lines in the state of ohio came into contact with trees and short-circuited (townsend & moss, ). this created a cascade of power failures that resulted in blackouts in cities from new york to toronto and eventually left around million persons without power, which, in turn, disrupted everyday community and social routines (ballman, ). it took months of investigation to establish the exact path of failure propagation through a huge, complex network. telecommunication and electrical infrastructures are entwined in complex, interconnected network systems spread over a large geographic area with multiple end users. therefore, localized disruptions can cascade into large-scale failures (for more details, see townsend & moss, ).
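to make the cascade mechanism concrete, here is a minimal sketch, our own toy model rather than anything taken from townsend and moss or the blackout investigations: nodes carry load with limited headroom, a failed node's load is split equally among its surviving neighbors, and any neighbor pushed past its capacity fails in turn. the ring topology, the loads, and the capacities are invented for illustration.

# illustrative sketch (our addition; the redistribution rule and all numbers
# are assumptions): a minimal load-redistribution cascade on a toy grid.

def cascade(neighbors, load, capacity, failed_start):
    """when a node fails, split its load equally among surviving neighbors;
    any neighbor pushed past its capacity fails in turn."""
    failed = set(failed_start)
    frontier = list(failed_start)
    while frontier:
        node = frontier.pop()
        alive = [n for n in neighbors[node] if n not in failed]
        if not alive:
            continue  # stranded load is simply shed in this toy model
        share = load[node] / len(alive)
        for n in alive:
            load[n] += share
            if load[n] > capacity[n]:
                failed.add(n)
                frontier.append(n)
    return failed

# toy five-node ring: each node carries load 1.0 with 30% headroom
neighbors = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
load = {i: 1.0 for i in range(5)}
capacity = {i: 1.3 for i in range(5)}
print(sorted(cascade(neighbors, load, capacity, failed_start=[0])))

even this crude rule reproduces the qualitative point of the ohio-initiated blackout: with tight coupling and thin margins, one local failure can keep overloading its neighbors until the whole network is dark.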
such power blackouts have occurred, among others, in auckland, new zealand in (newlove, stern, & svedin, ); in buenos aires in (ullberg, ); in stockholm in and in siberian cities in (humphrey, ); in moscow in (arvedlund, ); in brazil in (brooks, ); in bangladesh in (al-mahmood, ), and in sri lanka in (lbo, ). all of these cases initially involved accidents or software and hardware failures in complex technical systems that generated severe consequences, creating crises with major economic and often political effects. these kinds of crises should have been expected. a national research council report ( ) forecast the almost certain probability of these kinds of risks in future network linkages.

blackouts can also be deliberately created, for either good or malevolent reasons, having nothing to do with problems in network linkages. employees of the now notorious enron energy company, in order to exploit western energy markets, indirectly but deliberately took offline a perfectly functioning las vegas power plant so that rolling blackouts hit plant-dependent northern and central california, with about a million residences and businesses losing power (egan, ). in the earliest days of electricity in new york city, the mayor ordered the power cut off when poor maintenance of exposed and open wires resulted in a number of electrocutions of citizens and electrical workers (jonnes, ). one should not think of blackouts as solely the result of mechanical or physical failures creating chain-like cascades.

most disasters are still traditional ones. for example, four major hurricanes hit the state of florida in . we saw very little in what we found that required thinking of them in some major new ways, or even in planning for or managing them. the problems, individual or organizational, that surfaced were the usual ones, and how to successfully handle them is fairly well known. more important, emergent difficulties were actually somewhat better handled than in the past, perhaps reflecting that officials may have had exposure to earlier studies and reports. thus, the warnings issued and the evacuations that took place were better than in the past. looting concerns were almost non-existent and less than ten percent indicated possible mental health effects. the pre-impact organizational mobilization and placement of resources beyond the community level was also better. the efficiency and effectiveness of local emergency management offices were markedly higher than in the past.

not everything was done well. long-known problematic aspects and failures to implement measures that research had suggested a long time ago were found. there were major difficulties in interorganizational coordination. the recovery period was plagued by the usual problems. even the failures that showed up in pre-impact mitigation efforts were known. the majority of contemporary disasters in the united states are still rather similar to most of the earlier ones. what could be seen in the hurricanes in florida was rather similar to what the disaster research center (drc) had studied there in the s and the s. as the electronic age goes beyond its birth and as other social trends continue, new elements may appear creating new problems that will necessitate new planning. if and when that happens, we may have rather new kinds of hurricane disasters, but movement in that direction will be slow.
as the famous sociologist herbert blumer used to say in his class lectures a long time ago, it is sometimes useful to check whatever is theoretically proposed against personal experience. in , an extensive snowstorm led to the closing of almost all schools and government offices in the state of delaware. this was accompanied by the widespread cancellations of religious and sport events. there was across the board disruption of air, road and train services. all of this resulted in major economic losses in the millions of dollars. there were scattered interruptions of critical life systems. the governor issued a state of emergency declaration and the state as well as local emergency management offices fully mobilized. to be sure, what happened did not fully rival what surfaced in the canadian blizzard discussed earlier. but it would be difficult to argue that it did not meet criteria often used by many to categorize disasters. what happened was not that different from what others and we had experienced in the past. in short, it was a traditional disaster. finally, at the same time we were thinking about the florida hurricanes and the delaware snowstorm, we also observed other events that many would consider disasters or crises. certainly, a bp texas plant explosion in would qualify. it involved the third largest refinery in the country. more than a hundred were injured and persons died. in addition, there was major physical destruction of refinery equipment and nearby buildings were leveled. there was full mobilization of local emergency management personnel (franks, ) . at about the same time, there were landslides in the state of utah and california; a stampede with hundreds of deaths in a bombay, india temple, train and plane crashes in different places around the world, as well as large bus accidents; a dam rupture which swept away five villages, bridges and roads in pakistan; recurrent coal mine accidents and collapses in china; recurrent false reports in asia about tsunamis that greatly disrupted local routines; sinking of ferries with many deaths, and localized riots and hostage takings. at least based on press reports, it does not seem that there was anything distinctively new about these occasions. they seem to greatly resemble many such prior happenings. unless current social trends change very quickly in hypothetical directions (e.g., marked changes as a result of biotechnological advances), for the foreseeable future there will continue to be many traditional local community disasters and crises (such as localized floods and tornadoes, hostage takings or mass shootings, exploding tanker trucks or overturned trains, circumscribed landslides, disturbances if not riots at local sport venues, large plant fires, sudden discoveries of previously unknown very toxic local waste sites, most airplane crashes, stampedes and panic flights in buildings, etc.). mega-disasters and global crises will be rare in a numerical and relative sense, although they may generate much mass media attention. for example, the terrorist attacks in european cities (madrid in ; london in ; paris in ; brussels, nice, munich berlin in ; stockholm and manchester in ) were certainly major crises and symbolically very important, but numerically there are far more local train wrecks and car collisions everyday in many countries in the world. the more localized crises and disasters will continue to be the most numerous, despite the rise of transboundary crises and disasters. 
what are some of the implications for planning and managing that result from taking the perspective we have suggested about crises and disasters? if our descriptions and analyses of such happenings are valid, there would seem to be the need for new kinds of planning and preparation for the management of future crises and disasters (ansell et al., ) . non-traditional disasters and crises require some non-conventional processes and social arrangements. they demand innovative thinking "outside of the box" (boin & lagadec, ; lagadec, ) . this does not mean that everything has to be new. as said earlier, all disasters and crises share certain common dimensions or elements. for example, if early warning is possible at all, research has consistently shown that acceptable warnings have to come from a legitimately recognized source, have to be consistent, and have to indicate that the threat or risk is fairly immediate. these principles certainly pertain to the management of transboundary disruptions. actually, if traditional risks and hazards and their occasional manifestations were all we needed to be worried about, we would be in rather good shape. as already said several times, few threats actually manifest themselves in disasters. for example, in the , plus tornadoes appearing in the united states between and , there were casualties in only of them, and of these occasions accounted for almost half of the fatalities (noji, ) . similarly, it was noted in that while about . million people had been killed in earthquakes since , over % of them had died in only occurrences (jones, noji, smith, & wagner, , p. ) . we can say that risks and hazards and their relatively rare manifestations in crises and disasters are being coped with much better than they ever were even just a half-century ago. for example, there has been a remarkable reduction in certain societies of fatalities and even property destruction in some natural disaster occasions associated with hurricanes, floods and earthquakes (see scanlon, for data on north america). in the conflict area, the outcomes have been much more uneven, but even here, for example, the recurrence of world wars seems very unlikely. but transboundary crises and disasters require some type of transboundary cooperation. for example, let us assume that a health risk is involved. if international cooperation is needed, who talks with whom about what? at what time is action initiated? who takes the lead in organizing a response? what legal issues are involved (e.g., if health is the issue, can health authorities close airports?)? there might be many experts and much technical information around; if so, and they are not consistent, whose voice and ideas should be followed? what should be given priority? how could a forced quarantine be enforced? what of ethical issues? who should get limited vaccines? what should the mass media be told and by who and when? at a more general level of planning and managing, we can briefly indicate, almost in outline form, a half dozen principles that ought to be taken into account by disaster planners and crisis managers. first, a clear connection should be made between local planning and transboundary managing processes. there usually is a low correlation between planning and managing, even for traditional crises and disasters. but in newer kinds of disasters and crises, there are likely to be far more contingencies. planning processes need to be rethought and enhanced to help policymakers work across boundaries. 
second, the appearance of new emergent social phenomena (including groups and behaviors) needs to be taken into account. there are always new or emergent groups at times of major disasters and crises, but in transboundary events they appear at a much higher rate. networks and network links particularly have to be taken into account.

third, there is the need to be imaginative and creative. the response to hurricane katrina suggests how hard it can be to meet transboundary challenges. but improvisation can go a long way. a good example is found in the immediate aftermath of / in new york. in spite of the total loss of the new york city office of emergency management and its eoc facility, a completely new eoc was established elsewhere and started to operate very effectively within h after the attack. there had been no planning for such an event, yet around , persons were evacuated by water transportation from lower manhattan (kendra & wachtendorf, ; kendra, wachtendorf, & quarantelli, ).

fourth, exercises and simulations of disasters and crises must take into account transboundary contingencies. most training and educational efforts along these lines are designed to be like scripts for plays. that is a very poor model to use. realistic contingencies, unknown to most of the players in the scenarios, force the thinking through of unconventional options. even more important, policymakers need to be explicitly trained in the management of transboundary crises and disasters.

fifth, planning should be with citizens and their social groups, and not for them. there is no such thing as the "public" in the sense of some homogeneous entity (blumer, ). there are only individual citizens and the groups of which they are members. the perspective from the bottom up is crucial to getting things done. this has nothing to do with democratic ideologies; it has instead to do with effective and efficient planning and managing of disasters and crises. related to this is that openness with information, rather than secrecy, is mandatory. this runs against the norms of most bureaucracies and other organizations. the more information the mass media and citizens have, the better they will be able to react and respond. however, all this is easier said than done.

finally, there is a need to start thinking of local communities in ways different from how they have traditionally been viewed. up to now, communities have been seen as occupying some geographical space and existing in some chronological time. instead, we should recognize that many of the communities that exist today are in cyberspace. these newer communities must be thought of as existing in social space and social time. viewed this way, the newer kinds of communities can be seen as very important in planning for and managing disasters and crises that cut across national boundaries. to think this way requires moving away from the traditional view of communities. this will not be easy, given that the traditional community focus is strongly entrenched in most places around the world (see united nations, ). but "virtual reality communities" will be the social realities of the future.

assuming that what we have written has some validity, what new research should be undertaken in the future on the topic of future disasters and crises? in previous pages, we suggested some future studies on specific topics that would be worthwhile. however, in this section we want to outline research of a more general nature.
for one, practically everything we discussed ought to be looked at in different cultures and societies. as mentioned earlier, there is a bias in our perspective that reflects our greater familiarity with and awareness of examples from the west (and even more narrowly western europe, the united states, and canada). in particular, there is a need to undertake research in developing rather than only developed countries. and that includes at least some of these studies being undertaken by researchers and scholars from the very social systems that are being studied. the different cultural perspectives that would be brought to bear might be very enlightening and enable us to see things that presently we do not see, being somewhat prisoners of our own culture.

second, here and there in this chapter, we have suggested that it is important to study the conditions that generate disasters and crises. but there has to be at least some understanding of the nature of x before there can be a serious turn to ascertaining the conditions that generate x. we have taken this first step in this chapter. future work should focus more on the generating conditions. a general model would involve the following ideas. the first is to look at social systems (societal, community, and/or organizational ones) and to analyze how they have become more complex and tightly coupled. the last statement would be treated as a working hypothesis. if it turns out to be true, it could then be hypothesized that systems can break down in more ways than ever before. a secondary research thrust would be to see if systems have also developed ways to deal with or cope with threatening breakdowns. as such, it might be argued that what ensues is an uneven balance between resiliency and vulnerability. in studying contemporary trends, particular attention might be given to demographic ones. it would be difficult to find any country today where the population composition is not changing in some way. the increasing population density in high-risk areas seems particularly important. another value in doing research on this topic is that much demographic data are of a quantitative nature.

we mentioned financial and economic collapses cutting across different systems. how can financial collapse conceivably be thought of as comparable in any way to natural disasters and crises involving conflict? one simple answer is that for nearly a hundred years, one subfield of sociology has categorized, for example, panic flight in theater fires and financial panics as generic subtypes within the field of collective behavior (blumer, ; smelser, ). both happenings involve new, emergent behaviors of a non-traditional nature. in this respect, scholars long ago put both types of behavior into the same category. although disaster and crisis researchers have not looked at financial collapses, maybe it is time that they did so. these kinds of happenings seem to occur very quickly, are ambiguous as to their consequences, cut across political and sector boundaries, involve a great deal of emergent behavior, and cannot be handled at the community level. in short, what has to be looked for are genotypic characteristics, not phenotypic ones (perry, ). if whales, human beings, and bats can all be usefully categorized as mammals for scientific research purposes, maybe students of disasters should also pay less attention to phenotypic features. if so, should other disruptive phenomena like aids also be approached as disasters?
our overall point is that new research along the lines indicated might lead researchers to see phenomena in ways different from those in which they had previously been seen. finally, we have said little at all about the research methodologies that might be necessary to study transboundary ruptures. up to now, disaster and crisis researchers have argued that the methods they use in their research are indistinguishable from those used throughout the social sciences; the methods are simply applied under circumstances that are relatively unique (stallings, ). in general, we agree with that position. but two questions can be raised. first, if social scientists venture into such areas as genetic engineering, cyberspace, robotics, and complex infectious diseases, do they need to have knowledge of these phenomena to a degree that they presently do not have? this suggests the need for actual interdisciplinary research. social scientists ought to expand their knowledge base before venturing to study certain disasters and crises, especially the newer ones. there is something here that needs attention. in the sociology of science there have already been studies of how researchers from rather different disciplines studying one research question interact with one another and what problems they have. researchers in the disaster and crisis area should look at these studies.

our view is that the area of disasters and crises is changing. this might seem to be a very pessimistic outlook. that is not the case. there is reason to think, as we tried to document earlier, that human societies in the future will be able to cope with whatever new risks and hazards come into being. to be sure, given hazards and risks, there are bound to be disasters and crises. a risk-free society has never existed and will never exist. but while this general principle is undoubtedly true, it is not so with reference to any particular or specific case. in fact, the great majority of potential dangers never eventually manifest themselves in disasters and crises. finally, we should note again that the approach in this chapter has been a heuristic one. we have not pretended to have absolute and conclusive research-based knowledge or understanding about all of the issues we have discussed. this is in line with alexander ( , p. ), who wrote that scientific research is never ending in its quest for knowledge, rather than trying to reach once-for-all final conclusions, and therefore "none of us should presume to have all the answers".
key: cord- -c t bo authors: bin-hussain, ibrahim title: infections in the immunocompromised host date: journal: textbook of clinical pediatrics doi: . / - - - - _ sha: doc_id: cord_uid: c t bo

infections are considered a major cause of morbidity and mortality in immunocompromised children. the survival rate in this particular population has increased over the last decades, mainly due to advances in medical technology that have improved diagnostic capabilities as well as supportive care, including antimicrobial therapy. immunodeficiency can be divided into primary and secondary immunodeficiency disorders. primary immunodeficiency disorders include combined t-cell and b-cell immunodeficiencies, antibody deficiencies, diseases of immune dysregulation, congenital defects of phagocyte number or function or both, defects in innate immunity, autoimmunity disorders, complement deficiencies, and cytokine defects. secondary immunodeficiency disorders include human immunodeficiency virus (hiv) infection and acquired immune deficiency syndrome (aids), both of which lead to altered cellular immunity, as well as dysgammaglobulinemia, defective phagocytic function, and neutropenia. cancer leads to neutropenia, lymphopenia, humoral deficiencies, and altered physical integrity, especially with the use of chemotherapeutic agents, which disrupt barrier integrity through mucositis and allow easy access for microorganisms; solid organ transplantation leads to deficiencies in cellular and phagocytic immunity; and malnutrition leads to impaired immunity and complement activity. fever is the main manifestation, and occasionally the only sign, of infection in immunocompromised children. when approaching a patient with immunodeficiency in the context of infection, one needs to look at the net state of immunosuppression. the net state of immunosuppression can be evaluated from the host defense defects caused by the primary disease; the dose and duration of immunosuppressive therapy (the longer the duration of immunosuppressive therapy, the higher the risk of infection); the presence of neutropenia; anatomical and functional integrity, because a defect in the skin or mucosa allows easy access for microorganisms; metabolic factors; and infection with immunomodulating viruses (hiv, hbv, hcv, cmv, ebv, and hhv- ). the risk of infection can be classified as high, intermediate, or low. high risk includes hematologic malignancies, aids, hsct, splenectomized patients, and congenital immunodeficiencies, especially severe combined immunodeficiency (scid). intermediate risk includes solid tumors, hiv/aids, and solid organ transplantation. low-risk patients include those on corticosteroid therapy and those with local defects or diabetes. the pathogens in immunocompromised patients can be predicted based on the immune defect. for example, an anatomical disruption in the oral cavity can lead to infections caused by alpha-hemolytic streptococci, anaerobes, candida species, and herpes simplex virus (hsv).
patients with urinary catheters are at risk of infection caused by gram-negative bacteria including pseudomonas spp., enterococci, and possibly candida. if there is a skin defect, including the presence of a central venous catheter (cvc), the patient is at risk from staphylococcus species (both coagulase-negative staphylococci and staphylococcus aureus), bacillus species, atypical mycobacteria, and gram-negative organisms. a defect in phagocytic function, either quantitative or qualitative, predisposes to invasive diseases such as pneumonia caused by gram-positive bacteria (staphylococci, streptococci, and nocardia species), gram-negative bacilli (escherichia coli, klebsiella pneumoniae, p. aeruginosa, and other enterobacteriaceae), and fungal pathogens like candida species and aspergillus species. patients with defective cell-mediated immunity are at risk of infections caused by intracellular pathogens (viral, fungal, mycobacterial, and intracellular bacterial). these include legionella species, salmonella species, mycobacteria, and listeria species; histoplasma capsulatum, coccidioides immitis, cryptococcus neoformans, candida species, and pneumocystis jiroveci; cytomegalovirus, varicella-zoster virus, epstein-barr virus, and live viral vaccines (measles, mumps, rubella, and polio); and parasites including toxoplasma gondii, strongyloides stercoralis, cryptosporidia, microsporidia, and isospora species. patients with immunoglobulin deficiency are at risk of sinopulmonary infections caused by s. pneumoniae and haemophilus influenzae; cns infection from viruses, especially enteroviruses, leading to chronic meningoencephalitis; and gastrointestinal infection due to giardiasis. patients with complement deficiency are at risk of disease caused by s. pneumoniae, h. influenzae, and neisseria species. splenectomized patients are at risk of invasive disease (e.g., sepsis, meningitis) caused by encapsulated organisms including s. pneumoniae, h. influenzae, and neisseria meningitidis. in evaluating patients with immunodeficiency, one can predict the pathogen based on the primary immune defect, the organs involved, and the clinical presentation of the patient. for instance, staphylococcus aureus, burkholderia cepacia, serratia marcescens, pseudomonas, and aspergillus infection should be considered in a patient with chronic granulomatous disease (cgd) presenting with soft tissue infection, lymphadenitis, liver abscess, osteomyelitis, pneumonia, or sepsis. in centers dealing with immunocompromised patients, the microbiology laboratory as well as the radiology service need to be well equipped and trained for diagnosing these patients. patients with fever should be worked up with a complete blood count with differential, renal and hepatic profiles, and blood cultures from the central line (if present) and a peripheral site. chest x-rays are not done routinely unless the patient has respiratory symptoms. other investigations need to be guided by the presentation of the patient. patients with diarrhea should have stool checked by bacterial culture, ova and parasite examination, viral culture, rotavirus testing, and electron microscopy for viral studies, in addition to testing for microspora, cryptosporidium, and isospora. in addition to a chest x-ray, patients with respiratory symptoms require a nasopharyngeal aspirate for rapid viral testing and multiplex pcr, a newly developed laboratory procedure that can screen for multiple viruses and other respiratory pathogens in the same setting.
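the defect-to-pathogen associations above lend themselves to a simple lookup table. the following is a minimal python sketch; the names (pathogens_by_defect, expected_pathogens) are hypothetical, and the associations are transcribed from this chapter rather than taken from any clinical reference software.

```python
# a minimal lookup sketch, not a clinical tool: it transcribes the
# defect-to-pathogen associations listed in the text above.

PATHOGENS_BY_DEFECT = {
    "oral mucosal disruption": [
        "alpha-hemolytic streptococci", "anaerobes",
        "candida species", "herpes simplex virus",
    ],
    "urinary catheter": [
        "gram-negative bacteria including pseudomonas spp.",
        "enterococci", "candida",
    ],
    "skin defect / central venous catheter": [
        "coagulase-negative staphylococci", "staphylococcus aureus",
        "bacillus species", "atypical mycobacteria", "gram-negative organisms",
    ],
    "phagocytic defect": [
        "staphylococci", "streptococci", "nocardia species",
        "escherichia coli", "klebsiella pneumoniae", "pseudomonas aeruginosa",
        "candida species", "aspergillus species",
    ],
    "cell-mediated immunity defect": [
        "legionella, salmonella, listeria, mycobacteria",
        "histoplasma, coccidioides, cryptococcus, pneumocystis jiroveci",
        "cmv, vzv, ebv", "toxoplasma, strongyloides, cryptosporidia",
    ],
    "immunoglobulin deficiency": [
        "streptococcus pneumoniae", "haemophilus influenzae",
        "enteroviruses", "giardia",
    ],
    "complement deficiency or asplenia": [
        "streptococcus pneumoniae", "haemophilus influenzae",
        "neisseria species",
    ],
}


def expected_pathogens(defect: str) -> list[str]:
    """return the pathogens the chapter associates with a given defect."""
    return PATHOGENS_BY_DEFECT.get(defect, [])


print(expected_pathogens("complement deficiency or asplenia"))
```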
patients with skin lesions should have a skin biopsy taken from the lesion and sent for culture (bacterial, fungal, and mycobacterial) in addition to histopathology with gram stain and special stains for fungi as well as acid-fast staining (afb stain). there are several objectives in managing infections in immunocompromised patients. the first and foremost is to assure the patient's survival and prevent infectious morbidity; others are to decrease days of hospitalization, decrease exposure to multidrug-resistant organisms, and decrease the number of days of antibiotic use so as to minimize selection of resistant organisms. modification of antimicrobial therapy in immunocompromised patients is the rule rather than the exception, and timely modification of antibiotic therapy is very important to control breakthrough infection. several questions need to be addressed in order to choose effective antimicrobial therapy when evaluating these patients. in addition to the history and physical examination, it is important to determine which arm or arms of the immune system are affected; what the clinical syndrome and site of infection are (to predict the likely pathogens); which clinical specimens should be obtained (for empiric or definitive therapy); and which antimicrobial agents have predictable activity against the suspected pathogens. with these in mind, one can predict the pathogen and choose the right antimicrobial agents. patients with wiskott-aldrich syndrome are at risk of bacterial pneumonia as well as sepsis with gram-positive organisms including mrsa. in this situation, therapy should include agents active against gram-negative pathogens plus an anti-staphylococcal agent, for example, cefotaxime or ceftriaxone plus nafcillin; if mrsa or penicillin-resistant s. pneumoniae is suspected, vancomycin can be used. the pathogen in immunocompromised patients can also be predicted from the system involved at presentation. for example, the presentation and etiological agents of pneumonia in immunocompromised patients differ from those in immunocompetent persons. in evaluating pneumonia in immunocompromised patients, one needs to know that pulmonary complications are present in up to % of immunocompromised patients and that mortality reaches % among those who require mechanical ventilation. the initial evaluation requires rapid assessment of the vital signs including oxygen saturation, a complete blood count with differential, renal profile, blood culture, and imaging of the lung with either chest x-ray or ct scan. the organism can be predicted based on the primary immune defect. in taking the history, the defect in the immune system, the presence or absence of neutropenia, the history of antimicrobial exposure, the presence of potential pulmonary pathogens in previous cultures, and the presence of indwelling catheters should be reviewed. the pattern and distribution of radiological abnormalities can suggest the pathogen, as can the time and rate of progression and the time to resolution of pulmonary abnormalities. for definitive diagnosis, invasive procedures may be needed, including bronchoalveolar lavage (bal), transbronchial biopsy, needle biopsy, thoracoscopic biopsy, and open lung biopsy. it is very important to send biopsy material from these patients for histopathology with special stains for viruses, bacteria, fungi, pneumocystis, and mycobacteria, and also for viral, fungal, bacterial, and mycobacterial culture.
other laboratory tests that can help in diagnosing pneumonia are nasal washings or swabs for direct fluorescent antibody testing; pcr for respiratory viruses and atypical pneumonia pathogens; culture and staining; cmv antigenemia or cmv viral load testing; the aspergillus galactomannan assay; and beta-d-glucan. the radiological findings in immunocompromised patients can be focal (lobar or segmental infiltrate), diffuse interstitial infiltrate, or nodular (with or without cavitation). a focal infiltrate can be due to gram-positive or gram-negative bacteria, legionella, mycobacteria, or fungal infection; noninfectious etiologies include infarction, radiation, and drug-related bronchiolitis obliterans organizing pneumonia (boop). diffuse interstitial infiltrates are caused by viral infection, pneumocystis jiroveci, less likely mycobacteria, disseminated fungal infection, and atypical pneumonia pathogens including chlamydia, legionella, and mycoplasma; noninfectious etiologies causing diffuse interstitial infiltrates include edema, acute respiratory distress syndrome (ards), and drug- or radiation-related injury. for nodular infiltrates with or without cavitation, the infectious etiologies include aspergillus and other mycoses, nocardia, gram-positive or gram-negative bacteria, anaerobes, and mycobacterium tuberculosis; noninfectious etiologies include disease progression, such as metastasis, and drug toxicity. the management of immunocompromised patients with pulmonary infiltrates depends on the presentation. if the patient is acutely ill, it is very important to begin empiric therapy covering the likely pathogens, based on the presentation of the patient and the primary immune defect, while simultaneously performing a comprehensive evaluation. subsequently, therapy should be adjusted based on cultures and clinical response. in providing empirical antibiotic therapy for a patient with a pulmonary infiltrate and a defect in cell-mediated immunity, one needs to consider pneumocystis jiroveci, nocardia, legionella, and mycoplasma in addition to aerobic gram-positive cocci and gram-negative bacilli. it is therefore advisable to use trimethoprim-sulfamethoxazole; a macrolide such as erythromycin or clarithromycin; and agents active against gram-positive and gram-negative organisms, for example, a third-generation cephalosporin with or without an aminoglycoside plus anti-gram-positive coverage with either nafcillin or vancomycin, depending on the local incidence of methicillin-resistant staphylococcus aureus (mrsa) and penicillin-resistant streptococcus pneumoniae. in the context of febrile neutropenia, fever is defined as a single oral temperature of more than . °c, or a temperature of more than . °c sustained for at least h, that is not related to the administration of pyrexia-inducing agents such as blood, blood products, ivig, and pyrogenic drugs, especially ara-c. neutropenia is defined as an absolute neutrophil count (anc) of less than /mm or less than , /mm with a predicted decline to less than /mm within h. the most important risk factor is the presence of neutropenia, along with its degree and duration: the lower the neutrophil count, and the longer the duration of neutropenia, the higher the risk of infection. usually, neutropenia is considered high risk when expected to last days and low risk when expected to last < days. other risk factors include associated medical comorbidities, the primary disease, and its status (remission or relapse). low-risk patients are clinically defined by neutropenia anticipated to last less than days, clinical stability, and the absence of medical comorbid conditions.
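the pattern-based differential above can likewise be organized as a small data structure. this is an illustrative sketch only; the names are hypothetical, and the lists transcribe the text rather than any exhaustive reference.

```python
# a minimal sketch organizing the differential diagnosis of pulmonary
# infiltrates in the immunocompromised host by radiographic pattern,
# split into infectious and noninfectious causes as in the text above.

DIFFERENTIAL_BY_PATTERN = {
    "focal (lobar or segmental)": {
        "infectious": [
            "gram-positive or gram-negative bacteria", "legionella",
            "mycobacteria", "fungi",
        ],
        "noninfectious": ["infarction", "radiation injury", "drug-related boop"],
    },
    "diffuse interstitial": {
        "infectious": [
            "viruses", "pneumocystis jiroveci", "mycobacteria (less likely)",
            "disseminated fungal infection", "chlamydia, legionella, mycoplasma",
        ],
        "noninfectious": ["edema", "ards", "drug- or radiation-related injury"],
    },
    "nodular (with or without cavitation)": {
        "infectious": [
            "aspergillus and other mycoses", "nocardia",
            "gram-positive or gram-negative bacteria", "anaerobes",
            "mycobacterium tuberculosis",
        ],
        "noninfectious": ["disease progression (e.g., metastasis)", "drug toxicity"],
    },
}


def differential(pattern: str, category: str) -> list[str]:
    """look up infectious or noninfectious causes for a given pattern."""
    return DIFFERENTIAL_BY_PATTERN.get(pattern, {}).get(category, [])
```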
about % of neutropenic patients who become febrile have an established or occult infection, and about % of patients with an anc of less than cells/mm have bacteremia. the risk varies depending on the underlying disease; for example, patients after allogeneic bone marrow transplantation are at higher risk than those after autologous transplantation, while aml carries the highest risk of all. the lowest risk is in patients with cyclic neutropenia. in evaluating a patient with fever and neutropenia, it is important to keep in mind that signs and symptoms can be muted or subtle. profoundly neutropenic patients can sometimes have life-threatening infections and yet be afebrile, especially if they present with abdominal pain. careful and comprehensive physical examination is critical and should be repeated at least daily, because these patients are dynamic and their condition can change rapidly. other important points in the history include the nature of chemotherapeutic agents, steroids, or other immunosuppressive agents, because these can predict the degree of immunosuppression and the duration and severity of neutropenia. the history of antibiotic prophylaxis is also important, because an antibiotic used as prophylaxis should be avoided in treating these patients. reviewing recently documented infections and their susceptibilities can help in choosing empiric therapy; for example, if the patient has had a previous infection with a multidrug-resistant pathogen, empiric therapy can be chosen to cover that pathogen. if the patient has had a recent surgical procedure, there has been a break in the skin, with risk for certain pathogens including gram-positive cocci (coagulase-negative staphylococci and staphylococcus aureus). allergy history is an important factor in selecting empirical therapy, as drugs to which the patient is allergic need to be avoided. detailed and thorough physical examination is important, with focus on sites that can be a portal of entry for pathogens, including the periodontium, pharynx, lower esophagus, lung, skin, perineum, bone marrow aspiration sites, and catheter entry and exit sites. after the history and thorough physical examination, blood cultures from the central and peripheral lines should be done in order to identify the source of infection: for example, if the blood culture is positive from the central line but negative from the peripheral site, the likely source is the central line; if both are positive, the differential time to positivity may help determine the source of infection. routine surveillance cultures are not indicated, as they are not cost-effective and have a low predictive value. other cultures should be guided by the site of infection. for example, a patient with respiratory symptoms needs a nasopharyngeal aspirate for viral studies, multiplex pcr, and atypical pneumonia testing; in patients with gastrointestinal symptoms, for example diarrhea, the stool needs to be sent for viral studies, culture and sensitivity, and ova and parasites. a chest x-ray should not be done routinely in all patients with fever and neutropenia, because it has a low yield in patients without respiratory symptoms; it is only done in children who have respiratory symptoms, and if negative, a chest ct scan should be considered to better evaluate a patient not responding to therapy. most patients with fever and neutropenia have no identifiable site of infection and no positive culture results. bloodstream infection is documented in about % of patients with fever and neutropenia.
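the working definitions used in this discussion can be made concrete in a short sketch. the numeric cutoffs below (38.3 °c single or 38.0 °c sustained for an hour; anc below 500/mm³, or below 1,000/mm³ with a predicted fall below 500) are the commonly cited idsa-style values, supplied as assumptions because the exact figures are not reproduced above; the anc formula itself (wbc times the percentage of neutrophils plus bands) is standard. confirm all thresholds against current guidelines.

```python
# a minimal sketch of the febrile neutropenia definitions; cutoffs are
# commonly cited reference values and should be verified locally.

def absolute_neutrophil_count(wbc_per_mm3: float,
                              pct_neutrophils: float,
                              pct_bands: float = 0.0) -> float:
    """anc = wbc x (% segmented neutrophils + % bands) / 100."""
    return wbc_per_mm3 * (pct_neutrophils + pct_bands) / 100.0


def is_febrile(single_oral_temp_c: float,
               sustained_temp_c: float | None = None) -> bool:
    if single_oral_temp_c >= 38.3:          # assumed single-reading cutoff
        return True
    return sustained_temp_c is not None and sustained_temp_c >= 38.0


def is_neutropenic(anc: float, predicted_to_fall_below_500: bool = False) -> bool:
    return anc < 500 or (anc < 1000 and predicted_to_fall_below_500)


# example: wbc 1,200/mm3 with 30% neutrophils and 5% bands gives anc 420
anc = absolute_neutrophil_count(1200, 30, 5)
assert is_neutropenic(anc) and is_febrile(38.5)
```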
disruption of the skin or soft tissue, including at vascular access or catheter insertion sites, can be a point of entry. centers dealing with cancer patients should monitor their own infection rates and pathogens as well as their local resistance patterns; these local data help in selecting appropriate empirical antimicrobial therapy (table . ). there is no ideal regimen, because the choice depends on several variables, including the risk status of the patient, the local microflora and their sensitivity patterns, toxicity, indication, preference, and cost. prompt initiation of broad-spectrum therapy when neutropenic patients become febrile is the key to successful management. the mortality rate was initially as high as %, but with the introduction of empiric therapy against gram-negative organisms the mortality rate is now close to %. there is no single ideal regimen, because the likely isolates and their susceptibilities must be determined at each center; one cannot extrapolate the likely pathogens from other centers, and even within one center the pathogens and susceptibility patterns may differ between adult and pediatric populations with febrile neutropenia (table . ). monotherapy and combination therapy have equal efficacy. monotherapy needs to have antipseudomonal activity, and options include an antipseudomonal penicillin with or without a beta-lactamase inhibitor, a carbapenem, or a third- or fourth-generation antipseudomonal cephalosporin; combination therapy consists of an antipseudomonal beta-lactam with an aminoglycoside. although both approaches have equal efficacy, it is important to look at local data when choosing between them. it is worth stressing that vancomycin should not be used routinely for empiric therapy in febrile neutropenia; there are specific indications for vancomycin (see the sketch after this passage). these include hemodynamic instability or other evidence of severe sepsis; pneumonia documented radiographically; a positive blood culture for gram-positive bacteria before final identification and susceptibility testing is available; clinically suspected catheter-related infection (e.g., chills or rigors with infusion through the catheter, or cellulitis around the catheter entry/exit site); skin or soft-tissue infection at any site; colonization with methicillin-resistant staphylococcus aureus, vancomycin-resistant enterococcus, or penicillin-resistant streptococcus pneumoniae; and severe mucositis, if fluoroquinolone prophylaxis has been given and ceftazidime is employed as empirical therapy. if the patient is started empirically on vancomycin, the need for its continuation should be reassessed on a daily basis: overuse of vancomycin selects for resistant organisms and promotes the emergence of vancomycin-resistant enterococci. the factors influencing antimicrobial selection include the types of bacterial isolates found in the institution, antibiotic susceptibility patterns, drug allergies, presence of organ dysfunction, the chemotherapeutic regimen, whether the patient was receiving prophylactic antibiotics, and the condition of the patient at diagnosis, for example, the presence of signs and symptoms at initial evaluation and the presence of documented sites of infection requiring additional therapy.
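the vancomycin indications enumerated above can be encoded as a checklist. the sketch below is illustrative, with hypothetical names, and simply transcribes the criteria from the text; it is not a prescribing tool.

```python
# a checklist sketch of the empiric vancomycin indications listed above.

VANCOMYCIN_INDICATIONS = [
    "hemodynamic instability or other evidence of severe sepsis",
    "radiographically documented pneumonia",
    "blood culture growing gram-positive bacteria pending identification",
    "clinically suspected catheter-related infection",
    "skin or soft-tissue infection at any site",
    "colonization with mrsa, vre, or penicillin-resistant s. pneumoniae",
    "severe mucositis with fluoroquinolone prophylaxis and ceftazidime therapy",
]


def vancomycin_indicated(findings: set[str]) -> bool:
    """true if any listed indication is present; if started empirically,
    reassess the need for continuation daily."""
    return any(f in findings for f in VANCOMYCIN_INDICATIONS)


print(vancomycin_indicated({"skin or soft-tissue infection at any site"}))
```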
the center-specific factors include patterns of resistance, effects on microbial ecology, and a high prevalence of vancomycin-resistant enterococci (vre) or extended-spectrum beta-lactamase (esbl)-producing organisms. the patient-specific factors include recent antibiotic use, such as current prophylaxis, drug allergy, and underlying organ dysfunction, together with the signs and symptoms present at the initial evaluation. in recent years there has been more interest in outpatient therapy for patients with fever and neutropenia. the advantages of ambulatory management of febrile patients with neutropenia, especially those at low risk, include lower cost, particularly with oral outpatient therapy; fewer superinfections caused by multidrug-resistant nosocomial pathogens; improved quality of life for the patient; greater convenience for family or other caregivers; and more efficient utilization of valuable and expensive resources. the disadvantages include the potential risk of developing serious complications, such as septic shock, at home; the risk of noncompliance, particularly with oral therapy; a false sense of security or inadequate monitoring for response to therapy or toxicity; and the need to develop a team and infrastructure capable of treating substantial numbers of low-risk patients. there are several requirements for a successful outpatient treatment program for patients with febrile neutropenia: institutional infrastructure and support; a dedicated and experienced team of healthcare providers; availability of institution-specific epidemiological, susceptibility, and resistance data; a microbiologically appropriate treatment regimen; frequent follow-up monitoring of the outpatient; adequate transportation and communication capabilities; and access to the management team h a day, days a week. certain clinical events or manifestations require modifying the initial antimicrobial therapy. for example, in breakthrough bacteremia, if a gram-positive organism is isolated, add vancomycin, especially if there is a risk of mrsa or penicillin-resistant pneumococci; if a gram-negative organism is isolated, consider a resistant gram-negative pathogen and change the regimen or broaden the coverage (e.g., to a carbapenem if center data show better susceptibility to carbapenems than to cephalosporins or other beta-lactams). if the patient has a catheter-associated soft tissue infection, vancomycin should be added. patients with severe oral mucositis or necrotizing gingivitis are at risk from anaerobic bacteria as well as viruses; an agent active against beta-lactamase-producing anaerobic bacteria, such as clindamycin or metronidazole, should be added, and acyclovir should be considered. if the patient has diffuse pneumonia, continue the broad-spectrum anti-gram-negative coverage and add trimethoprim-sulfamethoxazole and a macrolide. a rising neutrophil count in a patient who develops new infiltrates while on antibiotics can be related to recovery from neutropenia; if the patient is stable, observe. if the neutrophil count is not rising, antifungal therapy should be considered, as the patient is at risk for fungal infection.
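these modification triggers can be summarized as a small rule table. the sketch below is illustrative, with hypothetical names, and transcribes the triggers from this passage; it is not a treatment algorithm.

```python
# a minimal rule sketch of the therapy-modification triggers above.

MODIFICATION_RULES = {
    "breakthrough gram-positive bacteremia":
        "add vancomycin (mrsa or penicillin-resistant pneumococcus risk)",
    "breakthrough gram-negative bacteremia":
        "suspect resistance; broaden (e.g., carbapenem if local data support)",
    "catheter-associated soft tissue infection":
        "add vancomycin",
    "severe mucositis or necrotizing gingivitis":
        "add clindamycin or metronidazole; consider acyclovir",
    "diffuse pneumonia":
        "continue anti-gram-negative cover; add tmp-smx and a macrolide",
    "stable, neutrophil count not rising":
        "consider empirical antifungal therapy",
}


def modification_for(event: str) -> str:
    return MODIFICATION_RULES.get(event, "reassess; no rule transcribed")
```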
in addition to the other evaluations, aspergillus galactomannan and beta-d-glucan (fungitell) testing should be done, along with a chest ct scan; depending on the ct scan findings, bronchoalveolar lavage or lung biopsy should be considered. a patient with prolonged fever and neutropenia should be considered for empirical antifungal therapy if recovery of the neutrophil count is not imminent. antifungal therapy can include conventional amphotericin b; a lipid formulation of amphotericin b, including liposomal amphotericin b (ambisome) or amphotericin b lipid complex (ablc); caspofungin; or voriconazole, depending on the availability of medications and the epidemiology of the institution.

key: cord- -ro nhody authors: louis, mariam; oyiengo, d. onentia; bourjeily, ghada title: pulmonary disorders in pregnancy date: - - journal: medical management of the pregnant patient doi: . / - - - - _ sha: doc_id: cord_uid: ro nhody

pregnancy is associated with some profound changes in the cardiovascular, respiratory, immune, and hematologic systems that impact the clinical presentation of respiratory disorders, their implications in pregnancy, and the decisions to treat. in addition, concerns for fetal well-being and the safety of various interventions complicate the management of these disorders. in this chapter, we review some of the common respiratory disorders that internists or obstetricians may be called upon to manage. asthma is the most common respiratory disease during pregnancy. asthma affects - % of pregnancies in the united states and up to % in the united kingdom and australia. differences in prevalence around the world may be related to reporting methods, diagnostic methods, or possibly environmental or genetic influences. pregnancy is a state of important physiological changes in the respiratory system. these physiological changes vary across the course of the pregnancy and are summarized in table . .
table . (fragment): arterial blood gas values in pregnancy
• pao (mmhg): - in first trimester and - by third trimester
• paco (mmhg): - in first trimester and - by third trimester
• ph: .
• hco (meq/l): -
abbreviations: tlc, total lung capacity; erv, expiratory reserve volume; rv, residual volume; frc, functional residual capacity; vc, vital capacity; ic, inspiratory capacity; irv, inspiratory reserve volume; fev, forced expiratory volume in s; fvc, forced vital capacity; pao, partial arterial pressure of oxygen; paco, partial arterial pressure of carbon dioxide.

the course of asthma during pregnancy is variable. the majority of patients who improve in pregnancy tend to worsen in the postpartum period, and vice versa [ ]. in general, asthma improves toward the end of the pregnancy, including labor and delivery. however, the rate of asthma exacerbations is increased between gestational weeks and [ , ]. this may in part be due to medication noncompliance during the earlier part of the pregnancy, upon discovery of the pregnancy, but may also relate to other pregnancy-related factors such as esophageal reflux, nasal congestion, hormonal factors, and alterations in immunity that may result in increased susceptibility to infections. the major predictor of disease course is the severity of asthma prior to the pregnancy, but race and obesity may also play a role. african american and hispanic women are more likely to have asthma exacerbations; poor compliance with medications and difficulties with access to medical services may be important confounders. additionally, obese women tend to have more severe asthma, as asthma and obesity share a common inflammatory pathway at the cellular level. asthma also tends to behave in a similar fashion in subsequent pregnancies. while well-controlled asthma does not appear to have adverse consequences during pregnancy, poorly controlled asthma may negatively impact some maternal and fetal outcomes. in the largest study performed to date, on over , women with asthma and over , controls, asthmatic women were more likely to have pregnancies complicated by miscarriage, antepartum and postpartum hemorrhage, anemia, and depression [ ]; however, the risk of other negative outcomes such as gestational hypertensive disorders and stillbirths was not significant in this study. in other large studies, a small but statistically significant risk of perinatal mortality, preeclampsia, and preterm deliveries has been reported [ , ]. a more recent retrospective cohort study performed in clinical centers in the united states has shown increased risk of preeclampsia, gestational diabetes, and all preterm births [ ]. secondary analysis of a recent randomized controlled trial showed that women with a perception of good asthma control had a reduced risk of planned cesarean deliveries, asthma exacerbations, and preterm birth [ ]; in the same study, women with increased anxiety had a higher risk of exacerbations. there is some evidence suggesting that poorly controlled asthma also confers an increased risk of small-for-gestational-age infants and low birth weight [ ], although growth restriction may be confounded by smoking. babies born to severe asthmatics are possibly more likely to have congenital anomalies [ ]. the treatment of asthma involves assessment and management from preconception to the postpartum period. please refer to table . and figure . for a general overview of the classification and management of chronic asthma. there are four general components of asthma care, irrespective of gestational age.
these are (1) monitoring of respiratory status, (2) avoidance of possible triggers, (3) patient education, and (4) pharmacological treatment. patients should get baseline spirometry and be instructed in how to follow their peak expiratory flow rate (pefr) at home; ideally, this should be done twice a day in patients with persistent disease. since pregnancy does not affect flow rates, reductions in these numbers usually indicate a worsening degree of airflow obstruction and should prompt quick medical evaluation (a simple zone calculation is sketched after this passage). second, it is critical that patients avoid their known asthma triggers, including tobacco, dust, extreme temperatures, and allergens such as pollen and pet dander. third, patients need to be educated about their disease. pregnancy constitutes a perfect window to educate women, given the multiple contacts with providers and increased motivation due to concerns for fetal well-being. trigger control measures, from washing bed sheets to vacuuming to rodent control, are important strategies to review, especially since in most circumstances women are more likely to be exposed to these triggers. important topics to review also include inhaler technique, early recognition of symptoms of worsening asthma, an action plan for acute asthma exacerbations, and an overview of how poorly controlled asthma can affect the pregnancy. patients should also be given the opportunity to express their concerns and ask questions. in a multi-institutional prospective study, lower forced expiratory volume in s (fev ), but not asthma symptom frequency, was shown to be associated with adverse perinatal outcomes [ ]. these data may be a reflection of the effect of asthma severity or poor asthma control on perinatal outcomes, and they emphasize the possibility of discrepancies between symptom-based assessment and more objective measurement of lung function in pregnant women with asthma. finally, women with asthma need to receive appropriate pharmacological treatment to achieve disease control. population-based data do show that well-controlled asthmatics without exacerbations have better outcomes than women with exacerbations, but for obvious reasons there are no randomized controlled trials evaluating this particular question. although most clinical practices use symptom-based, guideline-directed assessments to decide on medication use, recent data from a randomized controlled trial suggest lower rates of exacerbation, improved quality of life, and reduced neonatal hospitalization when management decisions were based on measurements of exhaled nitric oxide in pregnancy [ ]. it is likely that this improvement in outcomes is due to improved control rather than the method of assessment itself. table . provides an overview of the asthma medications that are used in pregnancy. as in the nonpregnant population, the choice of pharmacological agent depends on disease severity. a frank discussion with the expectant mother and her partner should take place to encourage them to voice their concerns regarding asthma treatment in pregnancy. most women are told to stop their inhalers at the time of pregnancy diagnosis because of fda category listing; for that reason, a good amount of time should be spent on counseling about the use of asthma drugs in pregnancy. explaining to women that asthma control is key to the health of the pregnancy and their baby is an important part of counseling and may have to be done repeatedly during the course of pregnancy.
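as a concrete illustration of the pefr monitoring described above, the sketch below classifies a reading against the personal best. the 80% and 50% zone cutoffs are the conventional asthma action-plan values, supplied here as an assumption since this chapter does not state them.

```python
# a minimal sketch of home pefr monitoring; zone cutoffs are the
# conventional action-plan values and are assumptions to verify.

def pefr_zone(measured_l_min: float, personal_best_l_min: float) -> str:
    pct = 100.0 * measured_l_min / personal_best_l_min
    if pct >= 80:
        return "green: continue usual therapy"
    if pct >= 50:
        return "yellow: worsening airflow obstruction; contact provider"
    return "red: severe obstruction; seek urgent evaluation"


# pregnancy does not change flow rates, so a fall from the personal best
# is interpreted the same way as outside pregnancy
print(pefr_zone(310, 450))  # about 69% of personal best: yellow
```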
in general, most asthma medications are justifiable in pregnancy, and some have adequate safety data. as noted in table . , many of the drug choices are category c according to the fda classification; however, these drugs are used routinely in the care of pregnant women with asthma. in addition, although leukotriene inhibitors are listed as category b, their safety data are less reassuring than those of other drugs classified as category c. omalizumab is classified as category b by the fda despite the fact that all of the initial trials excluded pregnant women; these safety data are based on animal studies, which are limited by the fact that teratogenicity may be species specific. in addition, although prednisone may be associated with a small risk of cleft palate when administered in early pregnancy, the benefit of this drug in an acute exacerbation of asthma by far outweighs the small risk of malformation. table . reviews the classification of asthma severity, which includes not only symptoms but also peak flow meter measurements. other coexisting diseases may worsen asthma and may have to be treated in order to achieve optimal control. the most common of these disorders are allergic rhinitis, gastroesophageal reflux disease (gerd), sleep apnea, and psychiatric illnesses. allergic rhinitis occurs in - % of nonpregnant asthmatics and worsens asthma symptoms; management of allergic rhinitis with drugs such as steroidal nasal sprays often improves asthma symptoms. women who are pregnant can also develop a different form of rhinitis, called rhinitis of pregnancy, which typically occurs in the latter part of pregnancy and resolves completely within weeks after delivery. the prevalence of gerd among nonpregnant asthmatics varies between and %. in pregnant women with asthma, this number is likely higher, given that gerd has been reported to be present in nearly % of all pregnant women [ ]. gerd can worsen bronchoconstriction via increased vagal tone, heightened bronchial reactivity, and microaspiration of gastric contents into the upper airway. patients who have symptoms of gerd benefit from treatment. although proton pump inhibitors are not expected to increase the risk of congenital malformation based on experimental animal studies and limited human pregnancy exposures, ranitidine constitutes a safer first choice. finally, asthma and psychiatric comorbidities may coexist; stress and mental illness can worsen asthma in pregnant women and may also complicate compliance. during labor, the general management of asthma is not significantly different from that described above. most patients with asthma do not require a labor and delivery plan; however, patients with more severe disease or those who suffered an exacerbation close to term would require a detailed plan. stress dosing with steroids during labor can be considered in patients who have been on prolonged courses of systemic steroids during the pregnancy. patients with active symptoms or more severe asthma may benefit from regional anesthesia: epidural anesthesia reduces minute volume and oxygen consumption and may help prevent hyperinflation in patients with active symptoms. if general anesthesia is to be considered, then ketamine and halogenated anesthetics are preferred. it is safe to use oxytocin and prostaglandin e .
however, ergotamine and ergot derivatives, -methyl prostaglandin f alpha, morphine, and meperidine should be avoided in pregnant women with asthma, as they may be associated with an increased risk of bronchospasm. an overview of the management of acute asthma exacerbations in the pregnant woman is detailed in fig. . ; more detailed information can be found in the national heart, lung, and blood institute guidelines on asthma and pregnancy published in . the treatment is similar to that in nonpregnant women, with a few key differences that need to be highlighted. the first is to remember that during pregnancy the normal paco is lower than in the nonpregnant state; therefore, a normal or high paco heralds worsening respiratory failure and should be acted upon quickly. second, hypoxia during asthma exacerbations can lead to fetal distress and decelerations; therefore, immediate bronchodilators and supplemental oxygen should be administered. finally, it should be noted that while the indications for airway intubation are the same in the pregnant asthmatic as in the nonpregnant asthmatic, intubation during pregnancy, especially in the third trimester, can be more difficult. this is due to increased airway edema, low frc and oxygen reserve, and a more profound response to sedatives from decreased venous return. hence, the most experienced member of the team should perform the intubation and be familiar with difficult airway management procedures. airway intubation is discussed in more detail in the critical care chap. . pneumonia is one of the leading causes of non-obstetric maternal deaths in the united states [ ]. there are several categories of pneumonia based on the likely spectrum of pathogens: community-acquired pneumonia (cap), healthcare-associated pneumonia, hospital-acquired pneumonia, and ventilator-associated pneumonia, as well as pneumonia in the immunocompromised host. as pregnant women are usually young and healthy, cap predominates. the overall rate of cap in pregnant women is . - / , pregnancies, depending on the population being studied [ - ]. the risk of pneumonia is notably increased in gravidas with comorbid conditions such as asthma, anemia, and human immunodeficiency virus [ ]. tobacco and substance abuse have also been independently associated with an increased risk for pneumonia. influenza increases the risk for development of bacterial pneumonia by denuding the respiratory epithelium and predisposing the host to infection. in adults, the causative agents of cap are identified in - % of cases when advanced testing techniques are utilized [ , ]; the yield is much lower, in the range of - %, with regular testing. though specific studies in pregnant women are lacking, the likely pathogens are not considered to be significantly different [ ].
immune alterations in pregnancy that promote maternal tolerance to the fetus may impair optimal function of host defense mechanisms and increase the risk of infections. pregnant women have decreased lung capacity and decreased erv and rv, resulting in a reduction in functional residual capacity. a state of compensated respiratory alkalosis is established by increasing minute ventilation; this is largely secondary to an increase in tidal volume and, to a lesser extent, an increase in respiratory rate. healthy gravid subjects have increased cardiac output and decreased oncotic pressure, which peaks in the third trimester and promotes transudation of fluid into the pulmonary interstitium. these changes diminish oxygen reserve, increase the risk of pulmonary edema with fluid resuscitation, and predispose to respiratory failure and more severe disease. pneumonia may be complicated by hypoxia, respiratory failure, or death, and preterm delivery appears to be the most common obstetric complication associated with maternal pneumonia. while intrauterine infection is known to cause preterm delivery, a causal relationship between pneumonia in pregnancy and preterm delivery is not well established. it is possible that the higher levels of cytokines and other mediators such as tnf-α and prostaglandin f reported in bacterial infections may lead to preterm delivery and low birth weight. other reported complications include placental abruption, preeclampsia and eclampsia, and low apgar scores [ - ] . it is unclear, however, whether these complications are related to the actual infection or to other host factors. common causes of respiratory distress in pregnancy include infection (such as urinary tract infection), pulmonary edema, asthma, aspiration, and pulmonary embolus. the clinical spectra of pneumonia caused by different pathogens overlap considerably. a thorough history and examination, along with microscopic examination of respiratory secretions, may narrow the differential diagnosis and identify the offending pathogen. urine pneumococcal and legionella antigens may also aid in guiding antibiotic therapy and should be considered for patients requiring admission. during influenza season, a respiratory viral panel should be sent. though blood cultures are usually negative and of low yield, they may add value in the patient requiring admission to the intensive care unit (icu). arterial blood gases should be obtained for all patients with hypoxia or those requiring icu admission and interpreted in light of the pregnant state. chest x-ray should be performed in patients suspected of having pneumonia; it helps confirm the diagnosis or show evidence of a complicated pneumonia such as lung abscess or pleural effusion. computed tomography is unlikely to add value in the management of pneumonia unless empyema is suspected. ultrasound guidance likely reduces the risk of complications from thoracentesis given the cranial displacement of the diaphragm in pregnancy. bronchoscopy, though rarely needed, can be performed safely in pregnancy and should not be withheld when indicated. general supportive measures are similar in patients with various types of pneumonia. for patients with a viable fetus who require admission, the obstetric team should be consulted for fetal monitoring as well as timing of delivery in the event of fetal distress. hypoxia, acidosis, and fever should not be tolerated as they are independently associated with poor fetal outcomes.
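since the compensated respiratory alkalosis described above shifts baseline blood gas values, it can help to see the arithmetic explicitly. the following is a worked example using the henderson-hasselbalch relationship with representative (not patient-specific) values for pregnancy, chosen here purely for illustration:

\[
\mathrm{pH} = 6.1 + \log_{10}\!\left(\frac{[\mathrm{HCO_3^-}]}{0.03 \times \mathrm{PaCO_2}}\right)
\]

with an assumed maternal paco2 of 30 mmhg and serum bicarbonate of 20 meq/l:

\[
\mathrm{pH} = 6.1 + \log_{10}\!\left(\frac{20}{0.03 \times 30}\right) \approx 6.1 + 1.35 \approx 7.45
\]

this gives a mildly alkalemic ph with a low paco2 and a compensatory low bicarbonate, which is why a "normal" nonpregnant paco2 in an acutely ill pregnant patient should raise concern, as noted in the asthma discussion earlier.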
oxygen should be supplemented for goal saturations > % or pao2 above . fever should be treated aggressively with a goal temperature of less than °c. in cases of severe pneumonia associated with respiratory failure, early intubation should be considered. intubation in pregnancy has a higher failure rate than in the general surgical population (see chap. on airway intubation). attempts to maintain co2 within an acceptable range may be challenging in the event of acute respiratory distress syndrome (ards) and the use of lung-protective strategies. a low tidal volume ventilation strategy with a target tidal volume of ml/kg is recommended for ards [ ] (a worked sizing sketch follows this discussion). though pregnant women were excluded from the acute respiratory distress syndrome network studies on lung-protective strategies, low tidal volume ventilation should be attempted, initially with a higher respiratory rate to maintain ventilation, given the survival benefit observed in the nonpregnant population. however, higher tidal volumes may be required to correct acidosis that may compromise the fetus; in such instances attempts should still be made to keep the plateau pressure below cm of water, as barotrauma is thought to contribute significantly to lung injury. paco2 levels need to be watched closely, and given the mmhg gradient between fetal and maternal paco2, maternal paco2 should be kept at mmhg or lower. use of bicarbonate to correct the ph has been suggested in the nonpregnant population, though clinical studies to support this approach are limited; it is thought that the transfer of bicarbonate across the placenta is slow and may not be adequate to correct fetal acidosis. while the decision to admit patients to the icu is complex and should be individualized, clinicians should have a lower threshold when evaluating pregnant mothers. antibiotic therapy should be initiated empirically while awaiting confirmatory tests that may aid in narrowing the antimicrobial coverage. in influenza season, an antiviral (usually oseltamivir) should be started empirically as well. decisions about antibiotic choice should address the most likely pathogen and adverse effects on the mother, and should also weigh the risk of the specific drug to the fetus against the risk of inappropriately treated disease. an optimal drug would be one with maximal efficacy against the known pathogen and no risk to the fetus; however, such drugs are scarce, and in most circumstances a drug with more benefit than risk can be selected. other than concern for fetal safety, preferred antibiotics are not different from those in nonpregnant women, but dosing should take into account increased hepatic and renal clearance and increased volume of distribution. there is a theoretical concern that aminoglycosides and vancomycin may be associated with hearing and kidney dysfunction in the offspring, but this possibility has not been confirmed clinically. penicillins, clindamycin, and most macrolides except clarithromycin have a good safety profile. fluoroquinolones are usually avoided in pregnancy due to a theoretical risk of arthropathy in the offspring; however, some experts argue that this issue is not clinically significant in humans. tetracyclines should be avoided as they may cause permanent dental discoloration.
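returning to the lung-protective ventilation target noted above, tidal volume is conventionally sized to predicted, not actual, body weight. a minimal sketch, assuming the widely cited ardsnet predicted-body-weight formula and an illustrative 6 ml/kg starting target (both figures are assumptions supplied here, not drawn from this text; verify against current guidelines):

    # minimal sketch: sizing a lung-protective tidal volume from predicted body
    # weight (pbw). assumes the ardsnet pbw formula and a 6 ml/kg pbw starting
    # target, purely for illustration.

    def predicted_body_weight_kg(height_cm: float, female: bool) -> float:
        # ardsnet pbw: 45.5 (female) or 50.0 (male) + 0.91 * (height_cm - 152.4)
        base = 45.5 if female else 50.0
        return base + 0.91 * (height_cm - 152.4)

    def target_tidal_volume_ml(height_cm: float, female: bool,
                               ml_per_kg: float = 6.0) -> float:
        # tidal volume is dosed to predicted, not actual, body weight
        return ml_per_kg * predicted_body_weight_kg(height_cm, female)

    # example: a 165 cm woman -> pbw ~57 kg, starting tidal volume ~342 ml
    print(round(target_tidal_volume_ml(165.0, female=True)))

dosing to predicted rather than actual body weight matters in pregnancy, where gestational weight gain would otherwise inflate the computed volume.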
varicella (chicken pox) is caused by varicella zoster virus (vzv). varicella is predominantly a childhood illness that is usually self-limited and rarely results in severe disease. in adults, however, it is much more likely to be severe. vzv infection is not only associated with increased morbidity and mortality in pregnancy but may also be associated with congenital abnormalities and poor fetal outcomes. varicella pneumonia is among the most severe maternal complications of vzv infection [ - ] . viral particles are shed from varicella-associated vesicles and become airborne. inhalation or contact with the conjunctiva results in contraction of the infection, with entry of the virus through the respiratory mucosa. crusting over of the last crop of vesicles usually marks the end of the contagious period. patients are known to be infectious - days prior to development of the vesicular rash; for this reason, an alternative viral shedding site such as the respiratory tract is believed to exist [ ] . varicella is highly contagious with seasonal variation in incidence, being most prevalent in winter and spring. it has a very high clinical attack rate of - % among susceptible individuals following exposure [ ] . following primary infection with varicella, lifelong immunity is usually established in the majority of subjects; in a few people, however, second attacks of varicella may occur [ ] . while varicella follows a benign course in children, adults have up to times the risk of severe disease [ ] . pregnant women are at a uniquely increased risk for infection. in the united states, the incidence of primary varicella averages . - cases/ , pregnancies. varicella pneumonia complicates - % of all cases, and % of mothers with pneumonia require mechanical ventilation [ , ] . maternal mortality from varicella pneumonia was high, at - %, before the introduction of antiviral therapy but is currently estimated at less than - % [ , ] . changes in physiology and immunity associated with pregnancy may increase the risk of infection and severe outcomes in pregnant women. in an effort to promote maternal tolerance to fetal antigens, pregnancy is associated with a shift from th1 to th2 lymphocyte responses and associated cytokines at the maternal-fetal interface. macrophage- and lymphocyte-secreted th2 cytokines stimulate b lymphocytes, promoting a humoral response while suppressing cytotoxic lymphocytes. while pregnancy may not necessarily be an immune-suppressed state in the true sense, immunity against vzv infection is primarily cell mediated, and a systemic shift away from cell-mediated immunity may increase susceptibility to intracellular viral pathogens, parasites, and bacteria. primary varicella (chicken pox) is associated with several adverse effects in pregnancy such as preterm delivery and low birth weight. in one study of pregnant women with varicella compared to a similar number of noninfected controls, . % of pregnant women with chicken pox had a preterm delivery compared to . % of controls [ ] . low birth weight and intrauterine growth restriction have been described. nearly - % of cases of maternal primary vzv infection result in congenital varicella syndrome (cvs), which is associated with a mortality of up to % in the first few months of life and severe disability in survivors. primary vzv infection prior to the th week of pregnancy is associated with the highest risk for cvs [ , ] . clinical features of cvs include skin lesions in a dermatomal distribution that may lead to eventual scarring in up to % of cases, muscle and limb hypoplasia in up to % of cases, chorioretinitis and cataracts in up to % of cases, and abnormalities of the gastrointestinal, genitourinary, and cardiovascular systems in - % of cases [ , ] .
neurological abnormalities such as mental retardation, microcephaly, and hydrocephalus occur in - % of cases, resulting in learning difficulties and developmental delays [ ] . the pathobiology of cvs is thought to be in utero reactivation similar to that of herpes zoster, with a shortened latency period that is likely due to immature fetal cell-mediated immunity. while up to % of babies born to mothers with primary vzv infection have serologic evidence of infection, there is no serologic evidence of infection in babies born to mothers with herpes zoster. similarly, infants do not appear to be at risk of infection if maternal zoster occurs near delivery [ ] . unless disseminated, herpes zoster is thus not associated with a significant increase in adverse fetal outcomes [ , ] . peripartum varicella infection places the infant at risk for neonatal varicella, which is associated with a mortality rate as high as %. following a -to -week incubation period, fever, headache, malaise, anorexia, and other constitutional symptoms precede the occurrence of the rash by - days. the rash is typically vesicular, generalized, and intensely pruritic. varicella pneumonia can develop anywhere from day to day after the onset of the rash. late onset of respiratory symptoms with recurrence of fevers is suggestive of bacterial coinfection rather than primary viral pneumonia. skin superinfection with staphylococcal bacteremia and neurological involvement with encephalitis may occur. a thorough history and skin exam may strongly suggest the diagnosis of varicella. the chest radiograph pattern in varicella pneumonia is nonspecific and may be normal or show unilateral or patchy areas of consolidation or nodular opacities. ct findings include multicentric hemorrhage and necrosis centered around the airways and small nodular opacities surrounded by ground-glass opacity, which may coalesce into consolidations. healed and calcified pulmonary nodules may persist [ ] . skin lesion (rather than bronchoscopic) sampling offers a high yield and should be attempted first. the base of newly erupted vesicles has the highest yield and should be sampled. specimens can then be sent for viral culture, polymerase chain reaction (pcr), and immunofluorescence (dfa). direct fluorescent antibody testing is rapidly available in most institutions. though bronchoscopy in most cases is not necessary, varicella may be recovered from bronchial washings by viral pcr and viral culture techniques. pregnant women suspected of having varicella should be admitted for initiation of antivirals and other supportive treatment. chest imaging should be performed on admission to evaluate for pulmonary involvement. antiviral therapy is associated with a reduction in the duration of symptoms when initiated within the first h of onset of the varicella rash. due to the high risk of varicella pneumonia in pregnancy, empiric antiviral therapy should be initiated while awaiting confirmatory results. acyclovir or valacyclovir are the antivirals of choice. oral acyclovir has low bioavailability and must be administered in frequent doses to achieve therapeutic levels. valacyclovir has high oral bioavailability and less frequent dosing intervals and is an alternative oral formulation; there is, however, less experience with valacyclovir compared to acyclovir. presence of pulmonary symptoms should prompt admission to the icu and initiation of intravenous acyclovir, which has guaranteed and higher bioavailability.
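weight-based intravenous dosing, as recommended for vzv pneumonia below, is simple arithmetic. a minimal sketch, assuming for illustration the commonly cited 10 mg/kg every-8-hours regimen (this figure is an assumption supplied here, not taken from this text; confirm the dose and interval against current references):

    # minimal dosing-arithmetic sketch. the 10 mg/kg q8h regimen is assumed for
    # illustration only; confirm against current references before clinical use.

    def iv_acyclovir_single_dose_mg(weight_kg: float, mg_per_kg: float = 10.0) -> float:
        return mg_per_kg * weight_kg

    weight = 70.0  # hypothetical patient weight in kg
    dose = iv_acyclovir_single_dose_mg(weight)
    print(f"single dose: {dose:.0f} mg; daily total at q8h: {3 * dose:.0f} mg")

for a hypothetical 70 kg patient this yields a 700 mg single dose and 2,100 mg per day at every-8-hours dosing.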
antiviral therapy is associated with significantly less morbidity and mortality when initiated prior to h. late presentation with varicella pneumonia should not preclude the initiation of antiviral therapy. a dose of - mg/kg intravenously every h for - days is recommended for vzv pneumonia. pulmonary bacterial superinfection may occur. studies characterizing the bacterial pathogens likely to cause superinfection are lacking; thus, empiric broad-spectrum antibiotic coverage should be initiated in pregnant women with pneumonia. despite acyclovir crossing the placenta in significant amounts, there appears to be no reduction in congenital varicella syndrome with treatment. the neonate should be isolated from the mother in the peripartum period until the mother is deemed noncontagious. consultation with high-risk obstetrics and neonatology would be useful given the risk of preterm labor and growth restriction. immunity to varicella consists of both vzv-specific neutralizing antibodies and cell-mediated immunity. immunity against vzv can be assessed by the use of antibody serologic assays. though there are no adequate controlled trials examining the effectiveness of vzig prophylaxis, vzig is associated with more than - % reduction in the risk of contracting varicella and a significant reduction in the risk of severe disease [ ] . varicella can be prevented by vaccination. the vzv vaccine is a live attenuated vaccine and is generally not recommended in pregnancy or in immune-suppressed individuals. varicella can be contracted from herpes zoster lesions as well; family members with such lesions should minimize contact and cover their lesions to decrease the risk of transmission. healthcare workers who deal with pregnant women should be screened and vaccinated, and similarly pregnant healthcare workers should avoid contact or exposure to patients with varicella.
infection with influenza virus can result in an acute respiratory illness of varying severity. the majority of healthy individuals infected with influenza are asymptomatic or have minimal symptoms. however, adults with comorbidities, elderly subjects, and healthy pregnant women are at increased risk of severe disease and death. in addition, influenza infection during pregnancy increases the risk of adverse fetal outcomes. in a regular endemic season, influenza is estimated to result in , hospitalizations and , deaths in the united states. pregnant women are at increased risk for morbidity (including cardiorespiratory complications) and mortality from influenza compared with nonpregnant controls [ - ] , a risk that is more pronounced in the second and third trimesters of pregnancy [ ] . in , the pandemic h1n1 influenza in pregnancy working group reported on pregnant women in the united states with influenza a(h1n1). among those, died ( % of all reported influenza a(h1n1) deaths in this period). most hospitalizations and deaths occurred in the third trimester [ ] . pregnant women with comorbidities or those who smoke have an increased risk for severe disease requiring hospital admission compared to those without comorbidities [ , ] . as discussed above, these physiological changes make pregnant women more susceptible to acquiring viral infections and subsequently developing severe disease. apart from direct effects on the mother, influenza has been associated with undesirable effects on the fetus. risks of adverse fetal outcomes vary with the severity of maternal disease.
preterm delivery appears to be the most common and consistent complication associated with influenza pandemics. in the pandemics of and , higher rates of pregnancy loss and preterm delivery, as well as other adverse effects, were reported. in several reports of pregnant women requiring admission during the influenza pandemic of , preterm delivery was close to % and was even higher among mothers admitted to the icu [ , , ] . several other adverse fetal outcomes of maternal influenza have been reported, especially during pandemics, including abortion, fetal distress, and placental abruption [ , ] . symptoms of influenza in pregnancy are similar to symptoms outside of pregnancy. influenza virus-mediated leukopenia may make the host more susceptible to bacterial infections. secondary bacterial pneumonia is characterized by the appearance of a new fever and productive cough during early convalescence. radiologic findings are generally similar to other viral pneumonias, and more extensive findings are associated with more severe complications. tree-in-bud opacities may also be seen. laboratory findings may include an elevated or low white count, lymphopenia, and hyponatremia. myoglobinuria and renal failure can occur rarely. cardiac muscle damage with associated electrocardiographic changes, disturbances of rhythm, and high levels of cardiac enzymes have been reported after influenza virus infection. sputum cultures may be revealing in the event of bacterial superinfection. streptococcus pneumoniae, staphylococcus aureus, haemophilus influenzae, and group a hemolytic streptococci are the bacterial pathogens most commonly isolated in adults with influenza. a definitive diagnosis of influenza requires laboratory confirmation. diagnostic tests for influenza fall into four broad categories: virus isolation (culture), detection of viral proteins, detection of viral nucleic acid, and serological diagnosis. detection of viral nucleic acid allows for typing and subtyping of the specific virus strain. treatment of influenza consists of supportive management and specific antiviral therapy. optimizing supportive treatment is central to the management of influenza and probably of more benefit than specific antiviral therapy. supportive therapy is similar to other types of pneumonia as discussed above. as with most drugs, information about the safety and effectiveness of anti-influenza drugs during pregnancy is scarce. in view of potential severe maternal disease from influenza and adverse fetal outcomes, the benefits of treatment with antivirals likely outweigh the potential risks to the fetus. there are two classes of antiviral drugs currently in general clinical use: adamantanes (examples of which include amantadine and rimantadine) and neuraminidase inhibitors such as oseltamivir, zanamivir, and peramivir. adamantanes are active against influenza a only, face increasing resistance among influenza a strains, and are associated with embryotoxicity in animal studies; as such, they are not recommended in pregnancy. neuraminidase inhibitors are active against influenza a and b viruses and are preferred in all adults and in pregnancy. though studies in pregnancy are inadequate, extensive use of oseltamivir in pregnancy during the h1n1 pandemic was not associated with adverse effects specific to the drug. neuraminidase inhibitors reduce the duration and severity of symptoms and the duration of viral shedding when initiated within h of symptom onset [ , , , ] .
there is also evidence supporting reductions in complication rates, duration of hospitalization, and mortality in adults. observational studies published during the pandemic demonstrated that, among pregnant women hospitalized with pandemic h1n1 infection, treatment with oseltamivir was associated with fewer intensive care unit admissions, less use of mechanical ventilation, and decreased mortality [ , ] . empiric treatment should always be initiated in the gravid woman when influenza is suspected, while awaiting confirmatory results, as delay in initiation of treatment is associated with an increased risk of severe outcomes, icu admission, and death [ , , ] . pregnant mothers presenting after h of symptom onset should still be initiated on therapy, as there is evidence of benefit even when treatment is initiated after days of symptom onset. initiation of antiviral therapy within the first h is associated with the most benefit [ , , , , ] . there is less experience with zanamivir, which is administered by the inhalation route. zanamivir is also contraindicated in patients with asthma, as it has the potential to worsen respiratory symptoms [ ] . for patients requiring admission to the icu for influenza pneumonia, or in cases of suspected secondary bacterial infection, empiric antibiotic therapy should be initiated. sputum culture may be helpful in the case of isolation of resistant bacteria that may warrant changes or broadening of antibiotic coverage. in pregnant women, influenza vaccination induces an antibody response similar to that in nonpregnant women. the cdc and who recommend that pregnant women, or women who will be pregnant during the winter or peak influenza season, be prioritized for vaccination. in addition to protecting the mother, influenza vaccination may offer protection to the neonate as well as contribute to herd immunity in other family members. pregnant mothers who have not been vaccinated, or those with comorbidities such as asthma who have been exposed to influenza, may benefit from antiviral prophylaxis. oseltamivir is preferred for prophylaxis due to its ease of administration.
sleep-disordered breathing (sdb) is a spectrum of disorders that encompasses snoring and upper airway resistance, obstructive sleep apnea (osa), and other disorders. osa is a disorder characterized by periodic and recurrent collapse of the upper airway during sleep. obesity, age, and upper airway and facial abnormalities are the most recognized risk factors for the disorder. osa is prevalent in patients with chronic hypertension, cardiovascular disease, and metabolic disorders such as diabetes mellitus. the pregnant population appears to be at risk for the disorder given the anatomic upper airway changes that occur in pregnancy as well as physiological and hormonal changes. snoring occurs in close to % of pregnant women [ ] . the prevalence of osa in pregnancy is not well known, but preliminary data suggest that close to % of loud snorers in pregnancy have at least mild osa. the natural history of snoring around pregnancy is, however, unclear. there are some data suggesting that osa actually improves in untreated postpartum women around months after delivery. data on osa predating pregnancy are missing, and pregestational and gestational osa may have different clinical consequences. according to a recent study, screening for the disorder by obstetric providers is significantly lacking, even in obese patients [ ] .
notably, the berlin questionnaire, a widely used screening tool in the nonpregnant population, appears to have poor positive and negative predictive values in pregnancy [ ] . snoring and excessive daytime sleepiness may be important predictors [ ] . chronic hypertension, age, obesity, and snoring appear to have good predictive value for osa in high-risk populations [ ] . further validation of this potential predictive model in different pregnant populations is needed. snoring and osa have been shown to be associated with a variety of adverse pregnancy outcomes including gestational hypertension, gestational diabetes, and cesarean deliveries. gestational hypertension is the most studied link, with numerous studies of snoring as well as osa showing a two- to threefold increased risk of gestational hypertension in snorers, even after adjusting for confounders such as body mass index [ ] . mechanistic studies are lacking and the directionality of the association is not well clarified, but it is possible that intermittent hypoxia, flow limitation, poor sleep, and arousals may play a role in causing the endothelial dysfunction, inflammation, and hypercoagulability that are common to the two disorders. a few studies to date have also shown worse abnormalities in glucose metabolism and a higher prevalence of gestational diabetes in women complaining of loud snoring and poor sleep [ , ] . gestational diabetes has been associated with a fivefold increase in the risk of type ii diabetes at years and a ninefold risk at years [ ] . snoring, poor sleep, and osa have all been associated with a higher risk of unplanned cesarean deliveries. this association may be harder to explain and may depend on the actual reason leading to unplanned cesarean delivery, such as obstetric, fetal, or medical causes. the impact of sdb on fetal and neonatal outcomes has also been studied, but the results of such studies have been more conflicting. growth restriction has been reported to be associated with snoring in some studies but not in others. the effect on apgar scores also appears to be controversial. there are some case reports and case series suggesting fetal decelerations secondary to sleep apnea, but a recent study evaluating synchronized limited sleep studies and fetal monitoring has failed to show a significantly higher prevalence of late decelerations [ ] . once diagnosed, treatment of osa is indicated in patients with an apnea-hypopnea index (ahi) > or those with ahi > who have symptoms that are known to respond to therapy, such as daytime sleepiness. there are no specific guidelines on therapy initiation in pregnancy yet, for various reasons. as stated above, the natural history of the disorder around the perinatal period is not well known; thus, it is possible that, with weight loss and reversal of pregnancy physiology, the disorder may resolve or at least improve in the postpartum period. in addition, there have been no trials to date showing that treatment of osa in pregnancy would improve pregnancy or fetal outcomes. this likely contributes to the fact that the disorder remains underscreened and underdiagnosed [ ] . based on current data, weight loss is unlikely to be an option in pregnancy because of concern that it may affect the nutritional status of the mother and therefore fetal well-being. alcohol and cigarette smoking avoidance is another therapeutic strategy in pregnancy that carries additional pregnancy-specific benefits.
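for reference, the apnea-hypopnea index (ahi) used in the treatment thresholds above is the conventional event-rate measure from a sleep study (in nonpregnant adults, ahi values of 5, 15, and 30 events/h are the commonly used mild, moderate, and severe boundaries):

\[
\mathrm{AHI} = \frac{\text{number of apneas} + \text{number of hypopneas}}{\text{total sleep time in hours}}
\]

for example, 40 apneas and 80 hypopneas over 6 h of recorded sleep give ahi = 120/6 = 20 events/h, which would fall in the conventional moderate range.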
outside of pregnancy, cpap therapy has been shown to improve quality of life and daytime sleepiness, with some data suggesting improvement in cardiovascular outcomes such as hypertension. it is likely that these effects of cpap also hold in pregnancy. observational studies have shown improvement in daytime fatigue and daytime somnolence in pregnant women with osa treated with cpap and re-titrated around midpregnancy [ ] . in women with preeclampsia, small randomized trials have shown that in-laboratory positive airway pressure therapy improves hemodynamics and cardiac output and lowers uric acid compared to untreated women [ , ] . until future studies of cpap therapy in pregnancy are available, indications for therapy are likely the same as in the nonpregnant population. we are awaiting trials evaluating the effect of pap therapy on pregnancy-specific outcomes to be able to determine the "urgency" of starting pap therapy in pregnancy. the type of pap therapy that is most beneficial in pregnancy is unknown; however, auto-titrating pap therapy has the advantage of avoiding repeat re-titration of pressure requirements. in summary, pregnant women with the above disorders need to be managed with pregnancy physiology and the fetal effects of the disease and the therapy in mind.
the course of asthma during pregnancy, post partum, and with successive pregnancies: a prospective analysis
acute asthma during pregnancy
a comprehensive analysis of adverse obstetric and pediatric complications in women with asthma
asthma during pregnancy-a population based study
infant and maternal outcomes in the pregnancies of asthmatic women
obstetric complications among us women with asthma
psychosocial variables are related to future exacerbation risk and perinatal outcomes in pregnant women with asthma
effects of asthma severity, exacerbations and oral corticosteroids on perinatal outcomes
spirometry is related to perinatal outcomes in pregnant women with asthma
management of asthma in pregnancy guided by measurement of fraction of exhaled nitric oxide: a double-blind, randomised controlled trial
predictors of gastroesophageal reflux symptoms in pregnant women screened for sleep disordered breathing: a secondary analysis
causes of maternal mortality in the united states
pneumonia during pregnancy
an appraisal of treatment guidelines for antepartum community-acquired pneumonia
epidemiology of community-acquired pneumonia in edmonton, alberta: an emergency department-based study
pneumonia as a complication of pregnancy
microbial aetiology of community-acquired pneumonia and its relation to severity
etiology of community-acquired pneumonia: increased microbiological yield with new diagnostic methods
pneumonia in pregnancy
pneumonia and pregnancy outcomes: a nationwide population-based study
pneumonia during pregnancy: radiological characteristics, predisposing factors and pregnancy outcomes
acute and chronic respiratory diseases in pregnancy: associations with placental abruption
the acute respiratory distress syndrome network. ventilation with lower tidal volumes as compared with traditional tidal volumes for acute lung injury and the acute respiratory distress syndrome
varicella-zoster virus (chickenpox) infection in pregnancy
varicella pneumonia in adults
consequences of varicella and herpes zoster in pregnancy: prospective study of cases
modification of chicken pox in family contacts by administration of gamma globulin
second varicella infections: are they more common than previously thought?
epidemiology of herpes zoster in children and adolescents: a population-based study
varicella-related deaths among adults--united states
managing varicella zoster infection in pregnancy
treatment with acyclovir of varicella pneumonia in pregnancy
use of acyclovir for varicella pneumonia during pregnancy
outcome after maternal varicella infection in the first weeks of pregnancy
varicella and herpes zoster in pregnancy and the newborn
neurodevelopmental follow-up of children of women infected with varicella during pregnancy: a prospective study
congenital varicella syndrome: the evidence for secondary prevention with varicella-zoster immune globulin
intrauterine infection with varicella-zoster virus after maternal varicella
high-resolution ct findings of varicella-zoster pneumonia
risk factors for severe illness with pandemic influenza a (h n ) virus infection in china
pandemic influenza a (h n ) virus infection in postpartum women in california
maternal morbidity and perinatal outcomes among pregnant women with respiratory hospitalizations during influenza season
deaths from asian influenza associated with pregnancy
pandemic influenza a(h n ) virus illness among pregnant women in the united states
influenza virus infection during pregnancy in the usa
pandemic influenza a (h n ) in pregnancy: a systematic review of the literature
pandemic influenza a (h n ) in critically ill pregnant women in california
influenza a/h n v in pregnancy: an investigation of the characteristics and management of affected women and the relationship to pregnancy outcomes for mother and infant
novel influenza a(h n ) virus among gravid admissions
california pandemic working group. severe h n influenza in pregnant and postpartum women in california
antiviral agents for the treatment and chemoprophylaxis of influenza--recommendations of the advisory committee on immunization practices (acip)
severe, critical and fatal cases of h n influenza in china
severity of pandemic influenza a (h n ) virus infection in pregnant women
product information: relenza(r) oral inhalation powder, zanamivir oral inhalation powder. glaxosmithkline (per fda)
pregnancy and fetal outcomes of symptoms of sleep-disordered breathing
patient and provider perceptions of sleep disordered breathing assessment during prenatal care: a survey-based observational study
prospective trial on obstructive sleep apnea in pregnancy and fetal heart rate monitoring
excessive daytime sleepiness in late pregnancy may not always be normal: results from a cross sectional study
development of a pregnancy-specific screening tool for sleep apnea
sleep-disordered breathing in pregnancy
glucose intolerance and gestational diabetes risk in relation to sleep duration and snoring during pregnancy: a pilot study
type diabetes mellitus after gestational diabetes: a systematic review and meta-analysis
pregnancy, sleep disordered breathing and treatment with nasal continuous positive airway pressure
reduced nocturnal cardiac output associated with preeclampsia is minimized with the use of nocturnal nasal cpap
nasal continuous positive airway pressure reduces sleep-induced blood pressure increments in preeclampsia
key: cord- -z bjkl g authors: brossman, charles title: planning for known and unknown risks date: - - journal: building a travel risk management program doi: . /b - - - - .
- sha: doc_id: cord_uid: z bjkl g this chapter covers standard definitions of duty of care, example case law where employer duty of care was applicable, and a variety of sample risks and concerns that employers and travelers should be aware of, in the context of a travel risk management program.
legal duty of care-definition
"duty of care" stands for the principle that directors and officers of a corporation, in making all decisions in their capacities as corporate fiduciaries, must act in the same manner as would a reasonably prudent person in their position. courts will generally judge lawsuits challenging director and officer actions against the duty of care under the business judgment rule. the business judgment rule stands for the principle that courts will not second-guess the business judgment of corporate managers and will find the duty of care has been met so long as the fiduciary executed a reasonably informed, good-faith, rational judgment without the presence of a conflict of interest. the burden of proof lies with the plaintiff to prove that this standard has not been met. if the plaintiff meets the burden, the defendant fiduciary can still meet the duty of care by showing entire fairness, meaning that both a fair process was used to reach the decision and the decision produced a substantively fair outcome for the corporation's shareholders. ijet international defines "duty of care" specific to trm as follows:
duty of care: this is the legal responsibility of an organization to do everything "reasonably practical" to protect the health and safety of employees. though interpretation of this language will likely vary with the degree of risk, this obligation exposes an organization to liability if a traveler suffers harm. some of the specific elements encompassed by duty of care include:
• a safe working environment-this extends to hotels, airlines, rental cars, etc.
• providing information and instruction on potential hazards, and supervision in safe work (in this case, travel)
• monitoring the health and safety of employees and keeping good records
• employment of qualified persons to provide health and safety advice
relative to "duty of care" is the "standard of care" against which companies are compared in defending what constitutes "reasonable best efforts" or "reasonably practical," based upon the resources and programs put into place by an organization's peers to keep travelers safe. prior to 2001, business travelers thought nothing of being able to walk into an airport and meet their loved ones at their arrival gate. no security barriers, no cause for concern: at the time, air travel was something our collective psyche felt was generally safe, the occasional hijacking aside. fast forward to a post-9/11 world, and consider what the world's airports look like now and how the processes surrounding airport security have changed the way that we travel, whether for business or pleasure. why would any of us believe that the need for added security, particularly around those traveling for business, begins and ends at the airport? for companies who have been paying attention since 9/11, the ones who, outside of the public eye, have had to deal with critical incidents that carried the potential for loss of lives, corporate liability, and damage to their company's reputation, having a structured trm program not only reduced the potential for risk but also heightened their travelers' awareness of risk.
their definition of "travelers" extended beyond employees (transient travelers to expatriates) to contractors, subcontractors, and dependents. keeping travelers aware of imminent dangers takes effort and planning, and isn't something that employers can any longer react to after the fact. in some countries, a lack of planning or resources to support business travelers has the potential to be grounds for claims of negligence in a company's duty of care responsibilities, and can lead to a criminal offense, such as under the united kingdom's (uk) corporate manslaughter and corporate homicide act of 2007. what the "business judgment rule" in the above duty of care definition means in layman's terms is that a company must be able to prove that it put forth reasonable best efforts to keep its travelers safe. how this applies in different circumstances, jurisdictions, and countries will vary. most countries' duty of care requirements fall under their occupational safety and health laws. for a comprehensive list of occupational health and safety legislation by country, an updated global database is maintained by the international labour organization (www.ilo.org). simply put, companies can no longer afford to forgo a proactive trm program and simply react after an incident takes place; the end result could reflect negligence on behalf of the company. for extensive detail on the uk's definition of duty of care in relation to the corporate manslaughter and corporate homicide act of 2007, visit http://www.legislation.gov.uk/ukpga/ / . because each of the u.s. states is a separate sovereign free to develop its own tort law under the tenth amendment, there are several tests to consider for finding a duty of care under u.s. tort law, in the absence of a federal law. tests include:
• foreseeability-in some states, the only test is whether the harm to the plaintiff that resulted from the defendant's actions was foreseeable.
• multifactor test-california has developed a complex balancing test consisting of multiple factors that must be carefully weighed against one another to determine whether a duty of care exists in a negligence action. california civil code section 1714 imposes a general duty of ordinary care, which by default requires all persons to take "reasonable measures" to prevent harm to others. in the case of rowland v. christian (after and based on this case, the majority of states adopted this or similar standards), the court held that judicial exceptions to this general duty of care should only be created if clearly justified based on the following public-policy factors:
• the foreseeability of harm to the injured party;
• the degree of certainty that he or she suffered injury;
• the closeness of the connection between the defendant's conduct and the injury suffered;
• the moral blame attached to the defendant's conduct;
• the policy of preventing future harm;
• the extent of the burden to the defendant and the consequences to the community of imposing a duty of care with resulting liability for breach, and the availability, cost, and prevalence of insurance for the risk involved;
• the social utility of the defendant's conduct from which the injury arose.
pioneering companies (often in the energy services sector or government contractors), who were some of the first to adopt and implement forward-thinking programs, recognized early on that a critical incident or "crisis" isn't usually defined as an event impacting large numbers of people.
they found that the largest percentage of incidents requiring support involved individual travelers or small groups. so while policies, plans, and readiness exercises are good to have in place for those highly visible incidents impacting large numbers of people, the smaller incidents, if handled improperly, can cost companies considerably in damages and litigation costs, should their travelers or their travelers' surviving families prove that the companies in question weren't properly prepared to handle such incidents as they arise.
case study-u.s. workers compensation and arbitration
khan v. parsons global services, ltd, united states court of appeals, district of columbia circuit-decided april , (https://www.cadc.uscourts.gov/internet/opinions.nsf/ dd d dd bce f d/$file/ - - .pdf)
• during the course of employment in the philippines, on a day off, mr. khan was kidnapped and subsequently tortured.
• the employment contract included a broadly worded arbitration clause and a separate clause specifying "workers compensation insurance" as "full and exclusive compensation for any compensable bodily injury" should damages be sought.
• the khans alleged that the employer's disregard for mr. khan's safety, in favor of minimizing future corporate kidnappings, and the way parsons handled the situation provoked mr. khan's kidnappers to torture him, cutting off a piece of his ear and sending a videotape of the incident to the employer, causing the khans severe mental distress.
• mrs. khan alleged that the employer's efforts to prevent her from privately paying the ransom, despite threats of torture, may have exposed her to the guilt of knowing that she could have prevented mr. khan's suffering had the employer not withheld the ransom details from her.
• mr. and mrs. khan filed a lawsuit for parsons' alleged mishandling of the ransom demands by the kidnappers, also alleging negligence and intentional infliction of emotional distress, in d.c. superior court in . the employer removed the case to the federal district court, arguing on the merits of the new york convention on the recognition and enforcement of foreign arbitral awards, and then filed a single motion to dismiss or, in the alternative, for summary judgment to compel arbitration. the employer initially received a summary judgment compelling arbitration.
• upon appeal, this judgment was reversed. the court found that recovery on the khans' tort claims was not limited by mr. khan's contract to workers' compensation insurance.
• an additional appeal contended that the initial summary judgment granted by the court denied the khans' discovery requests and dismissed mrs. khan's claim for intentional infliction of emotional distress.
• through the appeals process, the court found that the employer had in effect waived its right to arbitration.
this case study calls into question legal jurisdiction, u.s. workers' compensation liability limitations for employers, and the value of being prepared for an incident such as kidnapping. this chapter outlines at a high level the general categories that all companies must take into consideration when developing a trm program. very often the question is asked, "do i really need to do any of this, because our company hasn't been sued to date?" if you have employees or contractors traveling on your behalf (especially internationally), where your company is paying for their time and/or expenses, then the answer is absolutely yes.
the level of investment and complexity may vary between companies, but in general, all companies must have a plan for how to address the issues provided herein, among others. duty of care is never finite in its definition, because companies must consider how laws from one country to the next will apply to travelers, contractors, potential subcontractors, and expatriates and their dependents, as well as any potential conflict of law. also, as shown in the khan v. parsons global services, ltd. case study earlier in this chapter, employer remedies such as workers' compensation insurance in the u.s. aren't absolute, which therefore warrants additional efforts and protections. consider the following incident types and risk exposures, which in some instances can impact large numbers of travelers but more commonly impact only one person. according to the u.s. department of commerce international trade administration, only percent of international business travelers receive pretravel health care. pretravel health care can include, but is not limited to, new or updated vaccinations or inoculations, general health exams, medical treatment or procedures for a condition that may be risky to travel with, or prescription medicine planning for travel lasting extended periods (longer than days). the chief operating officer at ijet, john rose, comments that "a percentage of calls into our crisis response center are for minor, individual medical issues." however, callers may not always know that the situation is minor until they reach someone for support, which is why having an easy-to-identify, easy-to-access, single contact number or hotline for medical and security support is so important for all companies. a contracted crisis support service will know, based upon predetermined protocols, which providers will support the traveler for medical issues in the part of the world where they are traveling, and will ensure that the traveler gets the immediate advice they need from a vetted medical professional. sometimes a brief conversation with a nurse identifies a minor treatment that the traveler can facilitate; in other circumstances, a referral to a more senior medical official or emergency medical resource may be necessary based upon the initial consultation by the first-level medical support personnel contracted by the traveler's company. as discussed later in the book, who provides the crisis response case management and who provides the medical or security services specific to the traveler in question are not necessarily mutually exclusive; there could be different providers in different parts of the world, used for different reasons that are outlined in company policies and protocols. the consequences of mistakes resulting from a lack of preparation or resources can be costly, ranging from financial loss and lost traveler productivity for the company, to a serious health issue for the traveler, or simply a ruined trip. while clarity via training and policies on who supports traveler medical issues should be very clear to everyone within an organization, the following common medical mistakes should be avoided where possible, as recommended by dr. sarah kohl, md, of travelreadymd (http://www.travelreadymd.com): statistically, most medical problems you are likely to experience while traveling overseas cannot be prevented with a vaccine. for example, there are no vaccines for jet lag, diarrhea, blood clots, malaria, or viral infections such as dengue.
before you travel overseas, make sure you are educated about these potential problems; most can be prevented with simple measures. information from different sources on the internet can be conflicting and can lead you to believe you need more interventions than are actually necessary. as travelers prepare to depart, employers should provide them with access to resources that can advise on medical concerns relative to their destinations. of course, travelers should also discuss any personal medical condition concerns with their own or other qualified medical professionals, in addition to receiving employer-provided risk intelligence regarding their trip. unfortunately, travelers regularly suffer needless medical complications because they fail to take simple steps to avoid predictable issues. simple precautions can save you a lot of discomfort and make your trip safer and more enjoyable. here are some examples: medical compression stockings, if properly fitted, can protect you from a life-threatening blood clot. knowing the right insect spray to choose, from the multitude of choices available, can protect you from insect-borne disease. avoiding seemingly harmless activities in certain locations (ones that a hotel concierge might even recommend) can protect you from parasites, respiratory illness, or malaria. travelers often fail to recognize how a common illness such as diarrhea or a respiratory infection can cause a flare-up of an underlying condition. travelers who are good at managing food allergies, asthma, and diabetes at home may experience difficulty finding the resources they need overseas; in addition, these individuals may find themselves looking to a non-english-speaking doctor for help. measles, tuberculosis, and other infections are gaining a foothold in some european countries. low immunization rates within these communities are thought to be the root cause. don't risk becoming ill or bringing an infection home; check with your health care provider before you travel to discuss preventive measures. if you have a chronic health problem that is well under control, you will want to be prepared to self-treat under certain conditions. you may also want to be prepared to access a network of doctors who speak your native language, if needed. lastly, travelers should never assume that a pre-existing condition is covered by corporate- or consumer-based travel insurance or medical membership programs. when in doubt, always ask your human resources department or trm program administrator. companies commonly expect that corporate insurance policies or business travel accident (bta) policies provide enough coverage for travelers, when sometimes they may not. this is why protocols and regular training exercises for internal risk program stakeholders take place: to understand what is covered and what is not, as well as how to handle each situation. whether insured or not, consider the value and cost savings of prevention-based treatment, as shown in the examples provided below. consider that anything an employee or representative comes in contact with during the course of a business trip (during or after hours) that can potentially make them ill or kill them is a liability to the employer. biological hazards or biohazards are pathogens that pose a threat to the health of a living organism, and can include medical waste, microorganisms, viruses, or toxins. toxicity is the degree to which a substance can damage an organism (not exclusively biological, as it could be chemical).
brett vollus, a former qantas airline employee of years, filed suit against the airline claiming that his spraying of government-mandated insecticides on planes, intended to prevent the spread of insect-related diseases like malaria, caused him to develop parkinson disease after years of administering the chemicals in flight cabins. it was also discovered, from a brain scan after a tripping incident, that vollus had a malignant brain tumor. considering this was a government mandate, it will be interesting to see if the question becomes: what did the government know about the risks of these chemicals? if a precedent is set in this suit, will liability extend to other airlines using, or that have used, such chemicals for extended periods, in claims by repeat business travelers who regularly flew or fly in markets where such spraying was or is common practice over a long period of time? epidemics are outbreaks of disease that far exceed the expected number of cases in a population during a defined period of time. epidemics are usually restricted to a specific area, as opposed to pandemics, which cover multiple countries or continents. mature trm programs monitor these more visible outbreaks, recommend vaccinations for travelers going to impacted areas, and provide access to emergency medical resources when necessary, but they also place a large focus on education, training, and prevention. however, employers should always be mindful of other environmental factors in the traveler's workplace, both at home and abroad, such as urban or rural environmental factors. examples may include prolonged exposure to pollution or a lack of sanitation (particularly when it comes to expat communities). employers should work toward limiting those exposures or changing the environment through continuous process improvement reviews. according to major medical and security evacuation suppliers, corporate-sponsored evacuations involving one or more travelers happen almost every day when you include both medical and security-related evacuations. it is a mistake to think that just because a case study or example is slightly dated, the instances they represent occur infrequently; it's quite the opposite. however, most incidents are not publicly documented to the degree that they can be reported upon. the five primary things that companies must be concerned with when facing a pandemic situation are:
1. the potential impact on personnel.
2. the pandemic crisis response plan.
3. the potential impact on business operations.
4. the potential impact on the business supply chain.
5. the potential impact on share value or price.
what many companies don't consider is the potential for shareholder lawsuits against executives for business losses resulting from a lack of planning for situations such as pandemics. is your organization pandemic ready? harvard's school of public health recently released survey data showing how deeply concerned u.s. businesses are about the possibility of widespread employee absenteeism that might follow an outbreak of the swine flu (h1n1). researchers from the school questioned more than businesses across the country. two-thirds of companies said they couldn't operate normally if more than half of their workers were out for weeks, and four of five organizations predicted severe operating problems if half of their workers missed a month of work.
from shared sick-time policies to work-at-home policies during a crisis, being able to quickly communicate a position or a plan, and to answer questions in the event of such an emergency, can not only save money and productivity but also garner employee confidence and calm nerves. chapter elaborates on the relationship between travel risk management (trm) and other aspects of risk management across the enterprise (erm-enterprise risk management). according to the new zealand herald, the country's largest company, fonterra, could lose $ million because of the ebola epidemic. fonterra ceo theo spierings noted that as african countries locked down their borders to control the disease, demand for fonterra's products dropped. he commented, "so…movements in west africa become more and more difficult, so that limits movement of food as well, movement of people-people going to the market, doing their groceries-so you see demand really dropping pretty fast." "if the market in west africa slowed down or dropped off that would affect , tonnes of powder," mr. spierings said. "that's about percent, percent of our exports. so you talk…$ million or something like that." the harvard survey results above should encourage all organizations to prepare for the worst by developing a crisis management plan. in addition to ample warning, senior management has ample reason to prepare, and no excuse not to. an organization's executives won't be blamed for the outbreak, but they do risk censure if they fail to prepare, respond, and communicate with internal and external stakeholders. this white paper tells how. to help organizations and their leaders prepare for a possible h1n1 pandemic, certain key issues must be addressed to keep operations running as smoothly as possible:
• human resource (hr) issues that drive pandemic planning.
• planning for the steps necessary to keep an organization operating during the pandemic period.
• implementing the steps needed to create an enterprise-wide crisis management plan.
• internal and external issues that crisis communications must address.
why bother planning for the h1n1 pandemic? to put it simply, companies and organizations that plan for any type of crisis demonstrate the behavior of responsible citizens. formulating a detailed crisis management plan specifically for h1n1 achieves four things:
1. protects employees' health and safety.
2. lessens the chance of a major interruption to your daily business.
3. protects your company's or your brand's reputation.
4. allows daily business activity to continue with minimal disruption if you are affected.
companies must establish open lines of communication with all audiences while dealing with the effects of the pandemic or other significant events. should one occur, these stakeholders will want to know what you are doing to manage the situation and minimize their risks. if you communicate with these stakeholders openly and promptly, you send four valuable messages:
• you are taking charge of the situation.
• you take it seriously.
• you have the best interests of your staff and customers at heart.
• you run a responsible company with nothing to hide.
pandemics have a disastrous effect on a company's optimal functioning because they prevent large numbers of critical employees from showing up for work. the resulting interruption to normal operations can have a disastrous cascading effect, affecting nearly every corner of the organization at considerable cost. employees unable to work or prevented from working become anxious and insecure.
when they start asking management questions that aren't answered sufficiently or quickly, it exposes the fact that management hasn't developed contingency plans or failed to consider what employees need to know. part of the cost of failing to prepare can be measured by the resulting loss of trust in management's capability, judgment, and credibility. we know from experience that there are certain predictable questions employees will ask, and hr departments must be prepared to answer them. hr departments should, as a matter of urgency, review attendance and sick-day policies to ensure they have made allowances for managing the larger-than-normal issues h1n1 creates. some of the policies that will need to be considered for implementation or revision include:

1. how/when to start monitoring/screening employees at the workplace to determine if they are sick or pose a risk.
2. how/when sick employees should be sent home to protect colleagues at work, or be stopped/prevented from coming to work where they could infect colleagues.
3. how/when the company should be temporarily closed due to the number of sick employees.
4. how/when to implement steps to minimize face-to-face contact at work.
5. how/when to allow certain employees, including senior management, to work remotely from home or another branch/office.
6. how/when employees should be allowed to stay at home to look after sick family members.
7. how/when the company's travel policies should be changed/suspended.
8. how/when to stop employees from coming into contact with suppliers and customers.
9. how/when to implement and enforce a "wash your hands" and "cover your mouth and nose when coughing and sneezing" policy; this must include making face masks and the use of hand sanitizers mandatory across the company.
10. how/when to change the company payroll policy so that all employees receive electronic payments into their accounts; consider establishing an emergency "employee help" fund.
11. any and all extensions/additions to your existing payroll and work-hours policies.

at the core of your h1n1 crisis plan, your hr department must be fully prepared to explain and communicate any new policies or changes to employees on an ongoing basis in all offices. this includes offices and employees that may not be affected by the pandemic at all. international and regional offices must also be briefed, as they, too, could be directly impacted if there is an h1n1 outbreak. employees should also be asked for input and ideas. this may help to highlight potential management or operating aspects that have not been considered. it will also make employees feel part of the pandemic planning process and thus more accepting of, and cooperative with, the final plan. if appropriate to your workplace and organizational culture, additional steps can be taken to protect employees by putting up educational posters, using training materials, and even arranging for annual flu shots (under a doctor's supervision) to be provided in the workplace for convenience. employees should also be encouraged to learn and do more on their own and away from work. all of these actions send a message to employees that you are looking out for them, their jobs, and the company's well-being. in return, employees are much more likely to "go the extra mile" to lessen the business impact of widespread absences. communicating during a crisis is important, but what businesses do is always more important than what they say.
making good decisions and providing straightforward, honest, and factual information to all employees, with frequent updates, is one of the most critical actions management can take. ideally, all companies and organizations would have enterprise-wide crisis plans in place before a crisis breaks. but realistically, we know from multiple surveys that at least half don't. too many companies assume an "it can't happen to me" mentality or, in tough business or competitive conditions, decide not to invest in "insurance" activities. unfortunately, some find out the hard way that you cannot choose your crisis; it chooses you, and almost always at the most inconvenient time. if yours is an organization that hasn't taken the steps necessary to implement crisis preparedness, here are some interim steps that you can take quickly to address h1n1. remember, the most effective and least costly way to manage a crisis is to prevent it from happening in the first place. you cannot stop h1n1, but you can take steps to keep it from damaging your operations, your reputation, and your bottom line. here's a quick checklist of things an organization can do, even at this late date:

1. appoint a pandemic coordinator or team. this individual or team will lead the organization through the various steps to become pandemic-ready. have them first conduct a vulnerability and risk assessment. that means identifying areas in which you are at heightened risk of infection or in which your responses or ability to compensate will probably be weak. armed with this knowledge, you should be able to prepare for worst-case scenarios and begin planning accordingly.

2. get your crisis management team up to speed. a crisis management team consists of senior employees who will deal full time with a crisis while the rest of the organization runs as normally as possible. the most effective crisis teams typically consist of no more than five members who serve as the decision-making leadership. crises are not situations for committees or consensus building. they demand that swift and certain decisions and actions be made under "battlefield conditions." we strongly recommend that you have a "five-star general" heading up your team.

3. a crisis management team must possess sufficient inherent or delegated power to command unrestricted access to a full cross-section of corporate disciplines, including hr, sales, customer service, information technology (it), security, operations, facilities management, communications, and department/business unit heads, from every corner of your organization. the crisis managers must know who from these disciplines is to be brought on to support the crisis management team on an as-needed, "on-demand" basis. note that these disciplines are for advice and support, not crisis decision making; once the team makes its decisions, give it full authority to carry them out.

4. the team should also include someone who will be the company spokesperson throughout the crisis. ideally, the spokesperson should be a senior company executive. he or she should have received formal media training, and should have the stamina, self-discipline, and inner strength to convey trust and believability when speaking at a time when bad news may need to be delivered to various audiences.

5. think about including external experts on your team. these could include public health consultants, doctors, hr consultants, and business continuity experts.
no organization can hope to be crisis-ready unless it is prepared with messaging ready to be disseminated to audiences on short notice and under pressure. crisis messaging typically consists of fully or partially (fill-in-the-blanks type) prepared statements addressing a range of potential situations anticipated in advance. prepared organizations keep them in a template format. then, as a crisis develops and the actual facts of the situation become known, the relevant template can be rapidly updated with all pertinent information (a small illustrative sketch appears at the end of this passage). in a crisis, you simply do not have time to agonize for long over "what are we supposed to say?" remember, it is only during the first minutes of a crisis that you have your one chance to take control of the situation via proactive communication. in that time, messages must be disseminated internally to staff and externally to the relevant audiences, such as customers, stockholders, suppliers, and partners, and possibly the media. businesses that conduct vulnerability and risk assessments will have a better idea of the templates and draft messaging they will need for a flu outbreak. these situations range from temporarily closing a site to announcing an interruption of service. the tone of all messaging must demonstrate that management is taking the situation seriously. employees are your first priority and must receive crisis-related messaging before anyone else. the media and relevant external stakeholders can then receive the same or similar messaging soon after. department heads can communicate directly with employees. employees should also be provided with messaging that they can share with others outside the organization. in today's "always-on" instantaneous online world, whatever employees are told invariably becomes public knowledge within minutes. from time to time, someone will ask a question that cannot be answered using prepared messaging. the crisis team must be prepared to reply "i don't know," and then either explain why, honestly and plainly, or commit to providing the answer at a given time in the future. nothing destroys trust and creates anger faster than speculating or guessing at answers that may later be proven wrong. while you must respond quickly to all questions, you may not be able to answer them all. the crisis team must understand the difference. stakeholders want reassurance that you are doing everything possible to manage the situation and that you are communicating without a hidden agenda. if you intend to keep your business open and running during a significant event, say so. for credibility, communicate the steps that you are taking to ensure it is kept open. if you are asked questions and are uncertain about what will take place, acknowledge this honestly. make every effort to find the answer quickly and, when you have it, follow up as soon as possible. plan to work with third parties. adopting a go-it-alone attitude in dealing with a pandemic is needlessly dangerous. organizations are wise to work with key third-party consultants, such as the external experts listed above, to make crisis preparedness as robust as possible. don't overlook your supply chains. companies providing each other with operations-critical products, goods, or services become inextricably linked. a problem in another company may cascade to yours, affecting your ability to meet contractual obligations. steps they take to stay in business may be beneficial or disruptive to you.
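to make the "fill-in-the-blanks" template idea described earlier in this passage concrete, here is a minimal sketch in python of how a library of pre-approved statements might be stored and filled in as facts become known. the template names, fields, and wording are illustrative assumptions for this sketch, not taken from any actual crisis plan.

```python
from string import Template

# a small library of pre-approved crisis message templates, kept ready in
# advance so that only the facts need to be filled in on the day.
# the template names and fields below are invented for this sketch.
TEMPLATES = {
    "site_closure": Template(
        "Effective $effective_date, our $site_name office is temporarily "
        "closed to protect the health of our employees. $essential_staff_note "
        "We expect to provide our next update by $next_update."
    ),
    "service_interruption": Template(
        "Due to widespread employee absences, $service_name may be delayed "
        "from $start_date. We are activating our continuity plan and will "
        "update customers by $next_update."
    ),
}

def build_message(template_key: str, **facts: str) -> str:
    """Fill a pre-approved template with the facts known right now.

    safe_substitute leaves any still-unknown field visible as $placeholder,
    so an unfinished blank is caught in review rather than sent as a guess.
    """
    return TEMPLATES[template_key].safe_substitute(**facts)

if __name__ == "__main__":
    print(build_message(
        "site_closure",
        effective_date="Monday, 9 a.m.",
        site_name="Springfield",
        essential_staff_note="Essential staff will work remotely.",
        next_update="5 p.m. today",
    ))
```

using safe_substitute rather than substitute is a deliberate choice in this sketch: an unknown fact stays visible as a placeholder in the draft, consistent with the advice above to say "i don't know" rather than guess.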
knowing ahead of time what steps your suppliers and partners will take will help you make appropriate arrangements or establish alternatives. cooperating with customers, partners, suppliers, and local governments helps you become pandemic-resilient. expert legal opinion must be obtained on how to address contractual obligations should a full-scale pandemic break out. if you are prevented from delivering products or services and thus break legally binding contracts, customers/partners could hold you liable for failing to plan adequately. such legal action could expand into or precipitate a second, reputational crisis once the media report on it. during a pandemic, organizations must communicate effectively with all internal and external audiences. being ready to communicate proactively and at a moment's notice requires advance preparation. in all cases, employees are the most important communications targets during a crisis. friends and family will contact them, along with many of their external business relationships (including the media), to ask "what's really going on?" and we know from experience that poorly briefed employees tend to speculate in the absence of solid information. this could easily precipitate a secondary crisis, forcing you to deal with rumor-mongering by employees and potentially false reporting by the media. either could cause serious damage. thus, you must designate in advance your primary or "official" internal communication channels, and let everyone in your organization know what they are. while face-to-face verbal communication is the best medium for internal audiences during a crisis, it may not be possible if h1n1 strikes. depending on your specific situation, one or more channels should be designated for communicating companywide. remember: what is written and given to employees can be passed on to the media and other parties. communication with all external stakeholders must be timely and accurate, with messages consistent with what is being communicated internally. messaging differences should be determined by relevance to the receiver. but be safe: when in doubt, overcommunicate. in a crisis, everyone wants more information, not less. if you had to communicate with % of your customers within minutes, could you? do you have up-to-date, accurate contact information housed in databases that can support mass messaging such as blast e-mail or recorded voice messages with outbound autodialing? blast-fax? cell phone information for texting? nobody has time to build these contact databases once a crisis strikes. assemble them now (a sketch of such a contact store follows at the end of this passage). the best time to start communicating is when there is no crisis. a proactive information campaign could spearhead the opening of new channels of communication with your various external audiences prior to a crisis. external communication channels can be used proactively or reactively, depending on the situation. while social media tools such as twitter, facebook, youtube, and blogs can play a role in crisis communication, at this time we believe they are not the tools best suited to be your primary or "official" communication channel to the outside world. especially for business organizations, social media are not yet universally accessible. but more importantly, they are not within your complete control. you must be extremely careful about what you say via social media, as it is very difficult to change anything after it has been sent out.
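returning to the contact databases urged above ("assemble them now"), here is a minimal sketch in python of a pre-assembled emergency contact store that can drive a blast message to one audience at a time. the schema, field names, and the stubbed delivery step are assumptions for this sketch; a real system would sit on top of an e-mail or sms gateway and be kept continuously up to date.

```python
import sqlite3

# a minimal sketch of the kind of pre-assembled emergency contact store the
# text urges companies to build before a crisis; the schema and the stubbed
# send step are illustrative assumptions, not from the source.
SCHEMA = """
CREATE TABLE IF NOT EXISTS contacts (
    id          INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    audience    TEXT NOT NULL,      -- 'employee', 'customer', 'supplier', ...
    email       TEXT,
    mobile      TEXT,               -- for texting / outbound autodialing
    updated_at  TEXT NOT NULL       -- stale records are useless in a crisis
);
"""

def blast(db: sqlite3.Connection, audience: str, message: str) -> int:
    """Send `message` to every contact in `audience`; returns the count.

    Actual delivery (e-mail gateway, SMS provider, autodialer) is
    deliberately stubbed out with a print statement here.
    """
    rows = db.execute(
        "SELECT name, email, mobile FROM contacts WHERE audience = ?",
        (audience,),
    ).fetchall()
    for name, email, mobile in rows:
        # replace with real e-mail/SMS gateway calls in production
        print(f"-> {name} via {email or mobile}: {message}")
    return len(rows)

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    db.execute(
        "INSERT INTO contacts (name, audience, email, mobile, updated_at) "
        "VALUES ('A. Example', 'employee', 'a@example.com', NULL, '2026-01-01')"
    )
    blast(db, "employee", "Office closed Monday; check the intranet for updates.")
```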
it's the very nature of most crises that the situations and facts change, and change often; social media messages containing old information can too easily recirculate, causing misunderstandings and conflicts precisely at a time when they can do the most damage. a major h1n1 outbreak could devastate supply and value chains, and possibly close down entire industry sectors. this would prevent companies from providing or delivering much-needed services. customers, partners, suppliers, and employees will feel a significant impact. there will also be financial repercussions. in short, a business could be forced to close down if it is not ready for all eventualities. to be truly resilient in a crisis, the organization must have an up-to-date business continuity plan detailing how it will restore its operating functions, either totally or partially, within a certain period of time. to achieve this, key decision makers must:

• take an in-depth look at their company to identify the essential functions needed to keep the doors open. nonessential functions can be temporarily discontinued without impacting day-to-day operations. people with key skills that are important to the business during the pandemic must be identified and protected whenever possible. those with nonessential skills may be told not to report for work during the pandemic.
• consider contingency plans to switch operations to other sites, if possible.
• identify alternative suppliers that you can switch to at a moment's notice. your primary suppliers of utilities, goods, products, and services may suddenly shut down because of poor planning. you should ask current suppliers to disclose what contingency plans they have in place to ensure the provision of uninterrupted service to you. put backup plans in place to switch to other/competing suppliers and contractors if you're the least bit unsure of their preparedness.
• determine whether their it systems are sufficiently robust that critical technology-dependent business processes would still function.

even though more than one billion people travel via commercial aircraft every year, illness as a direct result of air transportation isn't common; however, there are risk exposures associated with air travel that both employers and travelers should be cognizant of in order to mitigate the risks when possible. most modern aircraft are equipped with hepa (high-efficiency particulate air) filters, which, according to the european air filter efficiency classification, can be any filter element that has between % and . % removal efficiency. according to pall corporation, for aircraft cabin recirculation systems, the definition has been tightened by the aerospace industry to a standard of . % minimum removal efficiency. most modern aircraft provide a total change of aircraft cabin air to times per hour, passing through these hepa filters, which trap dust particles, bacteria, fungi, and viruses. many airlines have an airflow mix of approximately % outside air and % recirculated, filtered air, whereby the environmental control systems circulate the air in a compartmentalized fashion, pushing air into the cabin from the ceiling area and taking it in at floor level, so air moves from side to side rather than from the front to the back of the aircraft. however, most viral respiratory infectious diseases, such as influenza and the common cold, are transmitted via droplets, most commonly spread between passengers by sneezing or coughing. these droplets can typically travel only a few feet this way.
however, it is their survival rate once they land on seats, seatbelts, tray tables, and other parts of the passenger cabin that can provide additional exposure, which is why sanitizing your personal seating area when traveling, and particularly your hands with an alcohol-based sanitizer before eating, is important. surgical masks have been shown to reduce the spread of influenza in combination with hand sanitization, particularly when worn and practiced by the infected individual. viral outbreaks in recent years of concern to business travelers have included middle east respiratory syndrome (mers), severe acute respiratory syndrome (sars), ebola, and h1n1 (swine flu), among others. the international air transport association (iata) has developed an "emergency response plan template" for air carriers during a public health emergency, which can be found at the following link: http://www.iata.org/whatwedo/safety/health/documents/airlines-erp-checklist.pdf

disinsection is the use of chemical insecticides on international flights for insect and disease control. international law allows disinsection, and the world health organization (who) and the international civil aviation organization suggest methods for aircraft disinsection, which include spraying the aircraft cabin with an aerosolized insecticide while passengers are on board, or treating aircraft interior surfaces with a residual insecticide when passengers are not on board. two jurisdictions, panama and american samoa, have adopted a third method of spraying aerosolized insecticide without passengers on board.

not specific to just air travel, blood clots, or dvt (deep vein thrombosis), can be a serious and potentially deadly health risk for any traveler with restricted mobility in an aircraft, car, bus, or train. anyone traveling for more than hours without sufficient movement can be at risk. many blood clots are not necessarily visible and can go away on their own, but when a part of one breaks off, it can travel to the lungs and create a pulmonary embolism, which could be deadly. in addition to traveler training on prevention of dvt, companies should take this threat into consideration with regard to international class-of-service policies or reimbursement for upgrades. according to the u.s. centers for disease control (cdc), the level of dvt risk depends on whether you have any other risks of blood clots in addition to immobility, as well as the length or duration of travel. the cdc also states that most people who develop blood clots have one or more other risk factors, such as:

• older age (risk increases after age years)

civil unrest generally takes place when a group of people in a specific location is angry, resulting in protests and violence. around the world, countless incidents of civil unrest erupt, which can not only cause inconvenience and safety concerns for business travelers, but can also cause mental and emotional stress; the employer is ultimately responsible for trying to limit those effects whenever possible, and for treating them as early as possible after the incident is over. within the first months of , the world saw civil unrest and protests in turkey, brazil, ukraine, thailand, venezuela, malaysia, cambodia, india, egypt, hong kong, russia, china, and the united states (excluding military acts of war or civil war). in january of , governments and private organizations from around the world began evacuating people from egypt due to civil unrest.
approximately , americans lived and worked throughout egypt at the time, and approximately requested evacuation assistance from the u.s. government. such an exercise requires massive planning and resource availability, even for much smaller groups of people. consider the number of other companies competing for the same resources to evacuate their people, as well as the general public trying to leave. companies without a plan in place, along with proper strategic crisis response resources, would have been last in line to evacuate their impacted travelers and at greater risk of someone getting hurt or killed. at one time, civil unrest may have been considered primarily politically motivated, but today there are many factors that can provide the spark that starts the fires of violence. things such as overpopulation, lack of food and resources, poverty versus wealth (income inequality), crime, lack of jobs, and religious persecution, while sometimes related to political causes, are all reasons for the increased violence we see today. with mobile technology increasingly available to the middle and lower classes of the world, it takes neither much effort nor much time to incite anger or hatred in others, who can assemble quickly, sometimes before one has a chance to react. throughout the text of this book, readers should see a common theme about the importance of quality risk intelligence. the previous statement about violence breaking out before one can react is a perfect example of how real risk intelligence (not simply recycled news) can often predict these events as they are starting to come together and warn people in advance, so that companies and individuals can take steps to mitigate their exposure. in such examples, would employers and travelers want "cheap information" from a provider that primarily scrapes news wires on the internet, or qualified, vetted security analysts with thousands of sources? if a life depended on it, i'm confident that people would choose vetted intelligence. another way to understand the value of news versus intelligence is that "intelligence" is in effect "analysis + news + context + advice." experienced security analysts specializing in specific geographic areas and subject matter produce quality intelligence. climate change can also drive civil unrest, with sea-level rise, damage to property, water shortages, and increased costs associated with lost productivity or infrastructure collapse. people simply go where the goods and the work are provided. when that is lost for various reasons over a large area, there can be mass migrations, which sometimes see the intervention of military units to prevent border crossings and an unanticipated drain on other populations' resources. property damage and serious violence in vietnam in may , as a result of anti-chinese protests, were experienced not only by chinese businesses but also by assets owned by companies from other countries. some manufacturers experienced an interruption to production, causing between percent and percent decreases in company share prices. these figures and insights are intended to support the business case for companies to invest not just in products and programs to avoid business disruptions caused by civil unrest and other factors, but also in the time required simply to have plans in place to mitigate the risk. imagine being in a foreign country on business and getting pulled over on the road in your rental car by a local police officer.
unaware of any laws that you may have broken, after a quick discussion with the officer you realize that they are extorting you for a bribe, and you simply don't have the cash or the training to respond to the situation properly. alternatively, a traveler arrives in a foreign country via a commercial flight, carrying marketing collateral and merchandise to give away at a conference they are attending. the local customs authorities take exception to part of the merchandise, because the conference is being held in a deeply religious country with harsh laws regarding morality. not only does the traveler fear for their safety, the company doesn't want to cause an international incident, which can be difficult to clean up. does your company provide resources and training to travelers regarding how to handle themselves in such situations? women from western countries may still find it hard to believe how many places there are in the world where their personal safety, and possibly their lives, can depend upon the length of their skirt and sleeves, or the time of day that they are out and about, particularly without a male escort. in , a woman from new york was found dead in turkey; a turkish man confessed to killing her after allegedly trying to kiss her. according to news reports, she was a first-time international traveler, an avid social media user, and in constant contact with friends and family. it is reported that she wasn't off the beaten path or doing anything risky, simply taking photographs. sometimes just having some awareness training about one's destination can spare female travelers potential conflict or incident, with advice such as holding one's purse in her lap or at her feet with a thick strap around her leg to secure it, or ensuring that luggage tags do not openly display addresses and have a cover that must be opened to reveal the information. according to joni morgan, director of analytic personnel at ijet international, "in some cultures, for instance, it's not appropriate for a woman to initiate a handshake." "in afghanistan, it's considered an insult to show the bottom of your shoe, so when crossing your legs, you want to be aware of that." female road warriors are learning important skills that are helpful in all destinations, though in some more than others additional care should be taken. knowing when to take that additional care is an important part of pretrip travel intelligence provided by an employer's trm program, supported by a vetted travel risk intelligence provider. some considerations for female business travelers while traveling alone, or even with peers, on business include the following:

1. always plan your route before going anywhere. never leave your hotel or office without understanding where you are going and the appropriate routes. travelers do not want to look lost in the street while studying maps or their mobile devices for directions.

2. use vetted taxis or ground transportation providers. make an attempt to prebook all transportation with providers that your company has preapproved and that have appropriate security policies and procedures in place, such as identifiable car numbers, driver identification, tracking, and electronic order confirmation. removing the potential for unfamiliar, unvetted ground transportation providers can drastically reduce the potential for assault or abduction.
3. travelers can purchase a device to block the outside view into their hotel room; assailants have devices that enable broad visibility inside hotel rooms from the outside via peepholes. in the absence of such a device, place tape or a sticker over the inside peephole opening.

4. choose your hotels carefully. make it clear to your employer that you take safety seriously and that you expect safety considerations to have been taken into account when designating preferred hotels for employees. employers should be able to articulate what kinds of safety standards go into their preferred hotel selections, which form the basis for how different incidents can be mitigated or handled should one occur.

5. never stay at hotels or motels where the room door is exposed to the open air (outside).

6. try not to accept hotel rooms on the ground floor. being on a higher floor makes it more difficult for an assailant to get away or avoid being seen on surveillance cameras.

7. never tell anyone your room number verbally. if a hotel employee asks for it, provide it in writing and personally hand it to them. do not write it on a check and leave it unattended. you don't want someone in the area to overhear you providing this information verbally or to view it on your check.

8. alcohol consumption: never leave your drink unattended or out of your sight. a momentary distraction is an opportunity for someone to place drugs into your drink. also, never drink until intoxicated while on business, and be mindful of locations where drinking alcohol may even be illegal.

9. emergency phone numbers: know the local emergency services phone number and your local consulate or embassy phone numbers, and preprogram them into your mobile phone, in addition to your company's crisis response hotline. whichever number you are instructed to call first according to your company's policies (if your company provides a crisis hotline), having those numbers handy can save your life when moments count.

10. never tell anyone that you are traveling alone. avoid solitary situations. try to remain in social situations where plenty of people are around. if you feel uncomfortable, leave.

11. leave a tv or radio on when you leave your hotel room to create the perception that someone is in the room.

12. never hesitate to ask security or someone else to escort you to your room, and avoid exiting an elevator on your hotel room's floor when sharing the elevator with a man. if necessary, go back to the lobby level until more people get on the elevator or you can ride it alone.

13. use valet parking. self-parking can often put individuals at risk of assault in unsupervised car parks or garages.

14. upon arrival at your hotel, take a hotel business card or postcard and keep it with you at all times. if you are ever away and need to return, and you either don't remember the address, or your driver doesn't know where it is, or you don't have a signal on your mobile device, you can use the card to provide address details (usually in the local language).

15. do not use door-hanging room service order forms (typically for breakfast), as they often note how many guests you are ordering for.

16. make sure you have adequate insurance. just because you are on a business trip doesn't mean that your employer has obtained enough insurance or services to support you in the event that a crisis occurs.
hopefully, employer-provided insurance and support services are adequate and have been effectively communicated, but don't travel for business without a thorough understanding of what kind of coverage and support you have. in particular, any medical coverage should guarantee advance payment to local service providers and not require travelers to pay for services and file for reimbursement upon their return home. most people don't have access to the many thousands of dollars that might be necessary to procure sufficient treatment and support.

17. travel with smart travel accessories, such as a small, high-powered flashlight and one or more rubber door stops for the inside of your hotel room door (be aware of the downside of using them in case of a fire).

18. leave copies of your passport with someone at home who can easily get a copy to you if you need it. having a copy can expedite the replacement of a lost or stolen passport.

an honor killing is a homicide of a family member, typically by another family member, based upon the premise that the victim has brought dishonor or shame to the family in a way that violates religious and/or cultural beliefs. again, as with religious or cultural restrictions on modest clothing, honor killings are not exclusive to women; but within the cultures and countries where honor killings are more generally accepted, men are more commonly the perpetrators of the revenge or honor killings, very often charged by the family to watch over and police female family members' behavior, restricting or prohibiting things such as adultery, refusal to accept an arranged marriage, drinking alcohol, or homosexuality. honor killings are not exclusive to any one country or religious faith; they are found in a broad scope of cultures, religions, and countries. although more common in places such as the middle east and asia, there have been documented cases of honor killings in the united states and europe. if honor killings are based largely on the premise of family honor, why would nonfamily members or business travelers need to be concerned? honor killings have been known to happen to nonfamily members in strict, culturally conservative countries. perceived inappropriate behavior, typically with a female member of a conservative family, could result in the killing of both the female family member and the nonfamily suspect. such killings can even take place in broad daylight. in lahore, pakistan in , one such incident occurred, involving multiple participants, while the police looked on; the victim was killed for marrying a man she loved without family consent. often these crimes are hard to document or record because they are disguised as suicides or, in some latin american countries, as "crimes of passion." the united nations fund for population activities (unfpa) estimates that as many as women fall victim to honor killings each year. article of qatar's constitution states that it is a "duty of all" who reside in or enter the country to "abide by public order and morality, observe national traditions and established customs." this means that wearing clothing considered indecent, or engaging in public behavior that is considered obscene, is prohibited for all, including visitors. in qatar, the punishment could be a fine and up to months in prison.
with kissing or any kind of physical intimacy in public, as well as homosexuality, outlawed under sharia law, all travelers to or via the middle east for business or tourism purposes (e.g., to attend the world cup) should take heed. the qatar islamic cultural centre has launched the "reflect your respect" social media campaign to promote and preserve qatar's culture and values. posters and leaflets advise visitors, "if you are in qatar, you are one of us. help preserve qatar's culture and values, please dress modestly in public places." while research finds no definition of modest clothing in qatar's article, campaigns such as this suggest that people cover up from their shoulders to their knees and avoid wearing leggings, which are not considered pants or modest dress. an example of the campaign leaflet can be found in "qatar launches campaign for 'modest' dress code for tourists," published by the independent (uk newspaper). modest dress applies to both men and women. of course, strict laws, preferences, or rules regarding dress expectations for women are not exclusive to any one country; see http://www.pewresearch.org/fact-tank/ / / /what-is-appropriate-attire-for-women-in-muslim-countries/.

while each employer may have specific approaches to handling an incident such as sexual assault, there must be a defined process for reporting such an event that involves crisis response resources that can intervene and provide advice on how to handle the situation with local authorities, perhaps first by contacting diplomatic contacts before contacting the police. facing local authorities alone in a foreign country over an issue as sensitive as sexual assault can be daunting and intimidating without a company or diplomatic representative there to assist. crisis response suppliers should be equipped with the necessary contacts, recommended protocols, and resources to help the victim and employer address the situation and get help as soon as possible. this is another good example of why employers should have a single global crisis response hotline for any crisis that a traveler may encounter while on business travel. sexual harassment can happen anywhere. what happens if you require a traveler to use a supplier per the company's travel policy, and a representative of that supplier sexually harasses the traveler? in addition to standard protocols within the workplace, considerations must be given to business travel, which from many perspectives today is an extension of the workplace. a hate crime is a criminal act of violence targeting people or property that is motivated by hatred or prejudice toward victims, typically as part of a group, based upon creed, race, gender, or sexual orientation. a critical component of any trm program is disclosure of potential risks to the traveler prior to taking a trip to a destination.

sources: nbc news, "family stones pakistani woman to death in 'honor killing' outside court," may , , http://www.nbcnews.com/news/world/family-stones-pakistani-woman-death-honor-killing-outside-court-n . united nations, resources for speakers on global issues, "violence against women and girls: ending violence against women and girls," http://www.un.org/en/globalissues/briefingpapers/endviol/. lizzie dearden, "qatar launches campaign for 'modest' dress code for tourists," independent, may , , http://www.independent.co.uk/news/world/middle-east/qatar-launches-campaign-for-modest-dress-code-for-tourists- .html.
in consideration of laws and cultural beliefs in select countries or regions that sanction the persecution, imprisonment, or killing of members of the lgbt (lesbian, gay, bisexual, and transgender) community, specific races, religions, or sexes (mainly women), travelers must be prepared with information and training on acceptable behavior when traveling to these destinations, and must understand how to get help should they find themselves in a difficult position or a potential victim of a hate crime. saying the wrong thing, at the wrong time, in the wrong place, wearing something inappropriate, or acting in a way that isn't culturally acceptable in some parts of the world can put travelers in real danger. how does your company prepare your travelers for facing these challenges as they travel? while some laws that promote discrimination that can lead to hate crimes are more notable in the press, such as the antigay propaganda law put in place in russia prior to the sochi olympics, some are less obvious to the average business traveler, such as up to years in prison in nigeria for simply being gay, india's supreme court ban on gay sex, or the execution of homosexuals in saudi arabia. in april , an -year-old man wearing islamic dress was attacked and killed while walking home from his mosque in birmingham, uk, by a -year-old ukrainian student who told police that he murdered the victim because he hated "nonwhites." according to "one in six gay or bisexual people has suffered hate crimes, poll reveals," an article in the guardian (uk), some , gay and bisexual people in the uk have been victims of hate crimes in the previous years, prompting police to take the problem more seriously. such examples continue to support the notion that a crisis doesn't need to be an incident that impacts large numbers of people at once. quite often crises involve one person at a time, and they don't need to take place in a high-risk destination, discounting the argument by some companies that trm isn't necessary for those who don't travel to high-risk destinations. a crisis can happen anywhere, for many different reasons, affecting as few as one person at a time.

the following case illustrates the supplier-harassment question raised above. a female business traveler, over the course of several months on a project, traveled during the week, returning home on weekends. over time, a car rental clerk at the location she rented from weekly began making comments about her appearance each time she checked in or returned a car. eventually, the clerk began calling her mobile phone to share how he liked what she was wearing, and began sending her text messages while she was in town, using the mobile number she had provided at check-in. not responding, and scared, the traveler canceled all future reservations and booked rental cars with another provider. shortly thereafter, the clerk began calling and texting her, asking why she had canceled and when she would be coming back. a concerned colleague of the traveler brought the situation to the company's travel manager, who intervened with the human resources and legal departments to proactively address the situation with the authorities and the supplier, and to provide appropriate support for the traveler as best they could. the end result, after much investigation, was the issuance of restraining orders against the clerk and the termination of his employment. it turned out that the supplier hadn't done sufficient background checks on its employees, and the clerk in question had a history of similar behavior.
although privacy laws generally prohibit companies from asking employees about sexual orientation, making sure that all employees (of any sexual orientation) understand the dangers that face lgbt travelers can help to mitigate risks for themselves (if lgbt, traveling with an lgbt person, or if perceived as lgbt) or their fellow travelers, considering that there are many countries in the world where homosexuality is still a crime.

• in mauritania, sudan, northern nigeria, and southern somalia, individuals found guilty of "homosexuality" face the death penalty. the last five years have witnessed attempts to further criminalize homosexuality in uganda, south sudan, burundi, liberia, and nigeria.
• south africa has seen at least seven people murdered between june and november in what appears to be targeted violence related to their sexual orientation or gender identity. five of them were lesbian women and the other two were non-gender-conforming gay men.
• in cameroon, jean-claude roger mbede was sentenced to three years in prison for 'homosexuality' on the basis of a text message he sent to a male acquaintance.
• in cameroon, people arrested on suspicion of being gay can be subjected to forced anal exams in an attempt to obtain 'proof' of same-sex sexual conduct.
• in most countries, laws criminalizing same-sex conduct are a legacy of colonialism, but this has not stopped some national leaders from framing homosexuality as alien to african culture.
• a cave painting in zimbabwe depicting male-male sex is over years old.
• historically, woman-woman marriages have been documented in more than ethnic groups in africa, including in nigeria, kenya, and south sudan.
• in some african countries, conservative leaders openly and falsely accuse lgbti (lesbian, gay, bisexual, transgender, and intersex) individuals of spreading human immunodeficiency virus (hiv)/acquired immune deficiency syndrome (aids) and of "converting" children to homosexuality, thus increasing levels of hatred and hostility toward lgbti people within the broader population. lgbti individuals are more likely to experience discrimination when accessing health services. this makes them less likely to seek medical care when needed, making it harder to undertake hiv prevention work and to deliver treatment where it is available. in many government programs they are not identified as an "at risk" group.

kidnapping and ransom activities, targeting military enemies and employees of multinational companies from countries considered to be enemies of terrorist causes, are among the primary fundraising strategies of organized terrorist groups. even for companies that do not routinely visit high-risk locations, having some sort of policy in place for proof of life, the means of verifying that a captive is in fact who the captors say they are and is still alive (such as by providing information that only the alleged victim would know), can save valuable time in a sensitive situation, and perhaps someone's life. additionally, a kidnap and ransom insurance policy is something for all companies to consider, with the understanding that kidnappings happen at any time around the world and largely go unreported. according to guardian news and media (uk), approximately % of fortune companies have kidnap and ransom (k&r) insurance.
k&r insurance originated in , when it was first offered by insurance provider lloyd's of london after the kidnapping and murder of american aviator charles lindbergh's infant son. in , the uk's home secretary, theresa may, supported the passage of the uk's "counter-terrorism and security act of ," which prohibits insurers from paying claims used to finance payments to terrorist groups. the uk is where many of the world's k&r insurers operate. many insurers insist that this shouldn't matter, because they claim not to pay or finance ransoms, but instead pay claims for services and expenses related to negotiating the release of the captives in question and for medical and counseling treatment, along with things such as employee salaries while in captivity. it's difficult to obtain information from clients who hold such policies, because most policies have strict cancelation provisions to prevent a company from disclosing the fact that it has such a policy. details specific to restrictions on insurance-related payments associated with terrorist-related ransoms in the uk's counter-terrorism act can be found at http://www.legislation.gov.uk/ukpga/ / /section/ /enacted. companies with any travel to high-risk destinations have a responsibility to provide some kind of survival training for those travelers, in addition to access to resources and the provision of current intelligence before, during, and sometimes after their travel is complete. to complicate matters, at a g summit an agreement was made not to pay ransoms to kidnappers, for fear that the money was directly funding terrorist organizations; therefore, some countries, such as the uk, are enacting laws to prohibit the transfer of funds for hostages in certain circumstances or locations. senior foreign and commonwealth office (fco) officials in the uk estimate that over $ million was paid in ransoms to terrorists during the years leading up to the report. it isn't safe to assume that your government will help bankroll a hostage's release if you find yourself in such a situation, and you may face criminal prosecution if you offer a ransom to specific groups. people who commit kidnappings do so for a variety of reasons, including political or religious views, but most often they are purely financially motivated. perception is everything, so making traveling employees identifiable as belonging to large or multinational companies makes them easy targets; this is the reason for using code names on arriving ground transportation greeting signs. of course, how one dresses and where one goes also have an impact on how victims are targeted (e.g., wearing expensive jewelry, standing out from the crowd in expensive clothing, or making it clear that you work for a large multinational company through clothing with logos or meeting drivers holding placards with company names). later in this book, kidnappings are explored in greater detail. some statistics will be presented that both companies and travelers should find serious enough to change their perception about the possibility of kidnapping happening to them. kidnapping incidents should be accounted for in all corporate crisis response plans. while some medical emergencies may require evacuation, it is more common to receive calls for assistance involving acute or preexisting conditions that can be diagnosed and treated locally. lost or stolen medication, allergic reactions to food or the environment, and unexpected illnesses are common reasons for calls to a corporate crisis response hotline.
however, in some instances individuals must be quickly assessed to determine whether adequate medical care can be obtained locally; if not, a decision must be made to evacuate the person to the closest logical facility capable of treating them. many domestic health insurance plans do not provide coverage for individuals traveling abroad, and often when they do, they require out-of-pocket expenditures for services; in other words, upfront payment by the patient, leaving the patient to file for reimbursement upon their return. more often than not, in these circumstances this equates to thousands of dollars that most people do not have immediate access to, especially on short notice. the cdc recommends that if domestic u.s. coverage applies and supplemental coverage is being considered, the following characteristics should be examined for planned trips:

• exclusions for treating exacerbations of preexisting medical conditions.
• the company's policy for "out-of-network" services.
• coverage for complications of pregnancy (or for a neonate, especially if the newborn requires intensive care).
• exclusions for high-risk activities such as skydiving, scuba diving, and mountain climbing.
• exclusions regarding psychiatric emergencies or injuries related to terrorist attacks or acts of war.
• whether preauthorization is needed for treatment, hospital admission, or other services.
• whether a second opinion is required before obtaining emergency treatment.
• whether there is a 24-hour physician-backed support center.

additionally, one should have coverage for repatriation of mortal remains, should someone covered unfortunately die while away from their home country. because so many domestic healthcare plans do not provide for international coverage and evacuation services, companies must provide comprehensive coverage for their employees globally, and employees should be fully aware of what is included in said coverage. employees may decide that what the company offers is not enough by their personal standards and consider purchasing additional coverage to supplement what the company provides. when purchasing different types of travel-related insurance, it's important to understand the differences between the products offered in the marketplace, especially the differences between consumer and business travel products. options can include:

1. travel insurance, which provides trip cancellation coverage for the cost of the trip, delays or interruptions, and lost luggage coverage. it can, and often does, provide some amount of emergency medical and evacuation coverage, but often requires payment of medical expenses by the insured in the country where services are rendered (versus direct payment by the insurer) and the filing of paperwork for reimbursement upon the insured's return home. buyers should be mindful of whether the policy provides guaranteed payment directly to the suppliers in question.

2. travel health insurance, some consumer-based forms of which pay for specified or covered emergency medical expenses while abroad; however, such insurance (and others) may require that the individual pay any medical expenses in the country where services are rendered and file for reimbursement upon returning home. insured parties should always check whether guaranteed payment to providers is included in the coverage.
3. medical evacuation coverage, which pays for medical transport to either the closest available treatment facility or the insured's home country for medical attention, depending upon the policy and the situation or medical condition. the cost of a medical evacuation varies greatly with the distance and the services required for the transport, but it can be very high. it is recommended that policies have greater than us$ , in coverage (some provide up to us$ , or more) and include transportation support for an accompanying loved one or family member. policies with less than us$ , in coverage should be reconsidered as possibly not providing enough coverage. buyers should note that these products cover primarily just the evacuation, not medical services or treatments.

4. medical membership programs, which can cater to individual travelers on a per-trip or annual basis, or on a companywide basis. these programs vary widely by provider and membership type, but can potentially provide access to network service resources with separate liability for payment, or network access with some coverage for payment of specified services rendered, based upon premiums and policy guidelines.

the legal information institute (lii) at cornell university law school provides a third-party overview of workers' compensation. variable forms of this type of coverage are provided at both the state and federal levels in the united states, with similar forms of workers' compensation laws also in place in select countries around the world. these laws are typically intended to provide some form of medical benefits and wage replacement for employees who are injured on the job. this coverage is often provided to employees in exchange for releasing their right to sue their employer for negligence, sometimes with fixed limits on the payment of damages. employers need to understand whether the workers' compensation coverage that is applicable and in place for their and their employees' protection covers international travel. in some cases, additional policies or riders will be required to provide coverage for travel outside of the traveler's home country or state. additional considerations include "when" and "where" the coverage is in effect outside of a company office or facility (e.g., during business travel). in some cases this may limit employer liability, but whether it does varies by jurisdiction and circumstance. considering how workers' compensation benefits have been reduced in recent years, especially in the united states, much thought needs to be given to assessing what coverage is needed for traveling employees above and beyond workers' compensation, coordinated with crisis response protocols and risk management support providers for efficient case management, claims, and documentation. all of these considerations provide a strong business case for why employers should have unique and specific programs in place for medical services and evacuations for employees and contractors traveling abroad, in addition to their standard domestic health care plans and workers' compensation plans. no traveler should embark on a business trip without complete confidence that their employer is providing medical coverage and resources that do not require personal, out-of-pocket expenditure.
a study of disclosures from institutional investors representing us$ trillion in assets, compiled by a sustainable-economy nonprofit, stated that in addition to the increased physical risks being caused by climate change, climate change is already impacting their bottom line. one major uk retailer has stated that percent of its global fresh produce is already at risk from global warming. according to the french foreign minister, commenting at a un conference in japan, two-thirds of disasters stem from climate change. the comments came days after the -year anniversary of the fukushima nuclear disaster, which stemmed from an earthquake and tsunami that killed approximately , people. margareta wahlstrom, the head of the un disaster risk reduction agency, stated that preventative measures provide a very good return compared to reconstruction. un secretary general ban ki-moon asked world nations to spend us$ billion a year on prevention. an important aspect of both a company's trm and business continuity plans is to determine the unique dangers or risks associated with where your offices or facilities are located, as well as where you travel on a regular basis, and to make emergency evacuation and safety plans in the event that an unusual incident occurs, such as in the following case study related to the japanese earthquake and tsunami. it is important to know what resources local governments have made available in close proximity to your travelers' or expats' locations, or what your company itself may provide, such as "vertical evacuation points" to escape rising tsunami flood waters. these vertical evacuation points may be in a building that is tall enough to support large numbers of the local population above a high water level, with ample support systems and supplies. not understanding and communicating these plans to your people when appropriate could exact a cost in lives, money, and corporate reputation.

on march , , a . magnitude earthquake created a -foot tsunami. more than , people died or were presumed dead, with more than , people evacuated and more than . million people impacted across the country.*

* american red cross, "japan earthquake and tsunami: one year update, march ," http://www.redcross.org/images/media_customproductcatalog/m _japanearthquaketsunami_oneyear.pdf.

for the first time in more than years, iceland's eyjafjallajökull volcano erupted on march , , with massive lava flows and ash clouds that closed most of europe's commercial air space for several days; the ash cloud then spread to other parts of the world, stranding millions of air travel passengers. based upon the composite map from the london volcanic ash advisory centre for the period april to , , one can clearly see the massive geographic scale of this incident, and why almost all commercial and private air transportation was prohibited and severe shortages of lodging and emergency shelters occurred. whether or not you believe in climate change and the reasons behind it, the statistics demonstrating the depletion of the world's ice sheets and glaciers, warmer ocean waters, and consistent year-over-year sea-level increases will touch most multinational companies profoundly in the 21st century. the new york times states that sea levels worldwide are expected to rise to feet by the year , but the rise is not occurring evenly worldwide.
the times' referenced study states that sea levels along the atlantic seaboard could rise by up to feet, with boston, new york, and norfolk, virginia, named as the three most vulnerable areas. if current warming trends and rising sea levels continue, cities such as london, bangkok, new york, shanghai, and mumbai could eventually end up under water, according to greenpeace, displacing millions of people and causing massive economic damage. consider a weather event the size of 2012's hurricane sandy, which tips the scales of expected water levels in a low-lying urban city and results in the displacement of thousands or millions of people, with your travelers or expatriates stuck in the middle of it. when evacuation is not an immediate option, questions regarding the availability of safe accommodation, power, food, and water become priorities, as demand far outweighs supply under such circumstances. these occurrences are much more common now than in our recent past. whether working in their local office or manufacturing facility, or traveling for business, many companies have employees with disabilities. although building or facility laws and rules may require designated escape routes, ramps, and elevators/lifts in the event of an emergency such as a fire, what about plans for when a disabled traveler is in transit or at a hotel? special considerations need to be made for disabled travelers in the event of a medical or security-related evacuation (see the checklist of considerations later in this chapter). the need to relocate travelers can be caused by any number of factors, but before the decision to evacuate is made (usually at considerably more expense than traditional commercial air travel), someone with access to quality intelligence has to make the call as to whether to "shelter in place," assuming safe shelter is available, or to evacuate to the closest safe location. nonmedical causes for evacuation could include biohazards (e.g., the fukushima nuclear facility damage in japan), civil unrest, or incoming natural disasters. deciding whether or not to evacuate requires thoughtful planning and resources, in order to ensure that companies aren't competing with the rest of the world in a reactive situation where many others were caught off guard as well. ijet case study: ijet and the south sudan evacuations. in december 2013, ijet international provided continuous monitoring, intelligence, and analysis of the situation involving heavy ethnic fighting in south sudan to existing clients with operations in the country. support included providing real-time situational updates, establishing direct lines of communications with client personnel, and arranging for safe havens and security evacuations. on december , 2013, the situation worsened to include the closure of the juba international airport. during the first days of fighting, prior to the airport closure, more than people were killed and more than wounded in the violence. during this time, several client personnel traveled across the country's borders to safe havens, but soon after the airport closure, with mounting concerns about large numbers of refugees, those borders quickly closed. ijet successfully evacuated its clients within the first hours of the airport's reopening, bringing in a -seat light-passenger aircraft from nairobi, kenya, and performing some of the first successful group evacuations from this incident without injury. the ijet case study excerpt is an example of why a company's trm program cannot consist of technology alone and discounted news marketed as intelligence.
in situations like these, quality intelligence is what saves people's lives. in this instance, quality intelligence was critical to the coordination of ijet's incident management team's on-the-ground services and support, which led not only to evacuating its clients, but to knowing the right time to move them to the airport and into the air. some medical evacuation services do not provide security-based evacuations, while others can offer both. companies should consider that one provider for both medical and security services and support, intelligence, and insurance might not always be the best solution. some companies select one provider for their terms and coverage for medical services, support, and evacuations, but another provider for security-related intelligence, services, and evacuations. there are even companies with multiple providers for each medical and security service in different parts of the world, working with completely separate insurance providers to pay for the services rendered. each company must consider the coverage and resources currently available to them via their existing insurance relationship, and then solicit proposals for coverage based upon a clear outline of the company's needs, informed by claims history. ultimately, companies need a program that can coordinate with all contracted services and insurers, providing a seamless experience for travelers and administrators, and consolidated documentation. the term "open booking" refers to a booking made by a traveler outside of their managed corporate travel program, avoiding usage of any contracted travel management company (tmc). technical advances have found ways to incorporate reservations data from multiple websites or suppliers for a traveler's trips into one place for reporting and calendar population. however, to properly capture this data, there are two primary methods available. the first is to allow the applications to scan the traveler's inbox for travel-related e-mails and import the data accordingly. the second is having travelers or independent suppliers e-mail reservation confirmations to an application or "parser," which can parse the data into a standardized database. with some major travel suppliers (such as airlines) there are "direct connections" from their websites to some of these applications. however, in the absence of a direct connection, and if you cannot get beyond the security concerns of a third-party application scanning your inbox, one cannot guarantee the automatic capture of 100 percent of open booking data because of human error. for that reason and many others relating to policy and program management, open booking should not be promoted as a primary booking method within a managed travel program if trm is to be effective. however, there is a place for open booking technology within a managed travel program: to help capture travel data normally considered "leakage," which is often not collected for reporting. such data can originate from conference- or meeting-based bookings made via housing authorities or meeting planners, or perhaps from travel that is booked and paid for by a client. companies that allow open booking for all travel struggle to effectively locate travelers in a crisis, disclose potential risks or alerts, or provide services to travelers when a crisis occurs.
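a minimal python sketch of the "parser into a standardized database" idea described above; the class and field names are hypothetical stand-ins (real itinerary parsers and their schemas differ by vendor), and the usage lines at the end preview the cancellation-sync problem discussed next.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    traveler_id: str       # hypothetical schema; vendors define their own fields
    record_locator: str    # supplier confirmation number parsed from the e-mail
    supplier: str
    status: str            # "active" or "canceled"

class ItineraryStore:
    """toy standardized database fed by parsed confirmation e-mails."""

    def __init__(self):
        self._trips = {}   # keyed by (traveler_id, record_locator)

    def upsert(self, trip: Trip):
        # every parsed confirmation inserts or updates one trip record
        self._trips[(trip.traveler_id, trip.record_locator)] = trip

    def cancel(self, traveler_id: str, record_locator: str):
        # only runs if the cancellation is forwarded, scanned,
        # or delivered via a supplier direct connection
        key = (traveler_id, record_locator)
        if key in self._trips:
            self._trips[key].status = "canceled"

    def active_trips(self, traveler_id: str):
        return [t for t in self._trips.values()
                if t.traveler_id == traveler_id and t.status == "active"]

store = ItineraryStore()
store.upsert(Trip("emp042", "ABC123", "airline one", "active"))  # original booking
store.upsert(Trip("emp042", "XYZ789", "airline two", "active"))  # rebooked elsewhere
# the traveler never reports the first cancellation, so a crisis report
# now shows two concurrent "active" trips for the same employee:
print(len(store.active_trips("emp042")))  # -> 2
```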
outside of suppliers with direct connections to open booking applications or parsers, even when your travelers are trained to e-mail open booking itineraries to the required application for data capture, employers have no control over when they do this. within a managed program (via most tmcs), all new bookings, modifications, and cancelations are usually updated in the database in real time or close to it, providing employers with ample opportunities to mitigate risk in a number of ways when time is of the essence. some well-known companies offering travel-related solutions claim that open bookings equate to more traveler choice and that their solutions can bridge the gap for any potentially missing data. when using an open booking application's itinerary data for security purposes, changes and cancelations can be a major issue. some applications require user intervention to manually delete trips that have been canceled, or to resubmit trips for changes, unless an update can be e-mailed or picked up by an e-mail scan. consider a situation where a trip is booked and ticketed via an airline website and the itinerary is e-mailed to the traveler, who either allows their inbox to be scanned or forwards the e-mail to the open booking application. some days later, the traveler needs to cancel that booking and rebook with another airline to travel with someone else from the company. the arrangements are made with the new airline, but the traveler forgets to delete the original trip in the open booking application. now there are two trips in the system for the traveler. imagine the confusion this could cause for employers if similar circumstances impacted multiple employees at the same time. a good managed travel program can still provide a variety of options, including easy methods of making reservations, yet still capture the critical reservations data needed to effectively manage risk for business travelers. trying to manage risk with a completely unmanaged booking process for the sake of open booking, even if it did offer more traveler choice, is not worth it, considering that in a crisis you have a higher likelihood of inaccurate data than if the traveler had booked via your managed program (via a contracted tmc working in conjunction with your trm provider). does that mean that managed program data is perfect? no, but if implemented properly, reservations data can be more tightly controlled. on january 15, 2009, when us airways flight 1549 went down in the hudson river in new york city, a regional office for an employer received a phone call from an employee's relative who was hysterical, insisting that his family member was on that plane. the office in question contacted their tmc but was unable to obtain any information on the traveler, so they then turned to the travel manager. by this time, the inquiring family member had intentions of coming into the office because he wanted "some answers," for which there were none at the time. human resources suggested that the relative contact the crisis response hotline, while dispatching security to the office in question to protect the facility and its personnel. human resources also advised the person to stay home to receive any communications, and for their own safety, considering how upset the person was.
it turns out that the traveler in question was on a legitimate business trip, but had purchased the trip online (outside of the employer's managed program), with a personal credit card, and without using an open booking application for itinerary data capture. because of this, it was difficult or nearly impossible to get helpful intelligence to the traveler or the traveler's family or to provide adequate resources and support, and had there been a death or severe bodily injury involved, the traveler wouldn't have been eligible for their corporate credit card's accidental death and dismemberment (ad&d) coverage. consider the personal losses of a business traveler whose hotel room was just broken into. what if, as a result of such a theft, the traveler's identity was stolen? will your company support the needs of the traveler to ensure that the traveler's assets and identity are preserved? the traveler wouldn't have been there if it weren't for the business trip! identity theft has reached epidemic proportions globally, with plenty of statistics published by consumer advocacy groups and government agencies such as the u.s. federal trade commission, whose consumer sentinel network data book listed identity theft as the top reported complaint by consumers for the th year in a row, with approximately , complaints. the act of traveling for business presents many opportunities for a traveler to be exposed to scam artists looking to steal the traveler's identity. while taking precautions may be inconvenient and time consuming, there are many things that business travelers can do to reduce their chances of having their personal information stolen, such as:
• keep a copy of all account numbers and relevant account information in a safe place that is separate from where debit and credit cards are kept.
• put mail and newspaper delivery on hold. this can prevent mail theft and avoid signaling that the person is away, which can lead to the person's home being robbed.
• don't travel with a checkbook; use only credit cards and cash.
• don't use debit cards, as pins (personal identification numbers) can be stored in some card reader devices, and if the information is stolen, criminals could steal all of the cash available in the account(s) linked to the debit card.
• notify credit card issuers prior to travel, especially if traveling internationally, so that they can authorize legitimate charges and notify the card holder promptly if activity on the account doesn't match their records.
• use vpns (virtual private networks) when using the internet. if the traveler's company doesn't provide one, the traveler should purchase their own annual subscription.
what if your employee had prescription medicine that may have black market value and it got taken as well? now a theft has turned into a potential medical issue. ask yourself the following:
• some medicines cannot be refilled before their due date, and other medicines are not easily refilled before their due dates. do you have the resources and support available globally (24 × 7 × 365) to get those medicines replaced?
• do you have the means to get the traveler replacement medicine before the traveler experiences any serious medical issues?
• what kind of medical support do you have available, particularly outside of the traveler's home country, should the traveler need immediate medical attention?
having someone steal property from your hotel room or safe is bad enough, but when a theft has happened, the event itself ends quickly. if your computer is hacked, however, the problem could linger in many ways. hotels are ideal places for business travelers to fall victim to hackers, who may want access not only to some of your intellectual property but to your identity as well. subsequent chapters offer tips about using hotel and public-access wi-fi, if you must use them. however, by whatever means you access the internet while on business travel (e.g., personal hotspot, or wi-fi with vpn, or other tools), try not to conduct financial transactions or log into financial websites while traveling. losing personal passwords to e-mail accounts or other personal-use websites can not only be financially damaging to the individual, but can occasionally be humiliating when private information is made public. the most important thing to remember when faced with a mugging or pickpocketing incident is to not resist in the event of any confrontation, and to not pursue assailants. things can be replaced, but not your life or well-being. your first priority should be to get away to a safe place, typically a business or well-lit public place with lots of people, where you can contact the authorities. according to the united states cdc (centers for disease control and prevention), for adults aged - during the years to :
• percent of persons using - prescription drugs in the past days: . %
• percent of persons using five or more prescription drugs in the past days: . %
source: http://www.cdc.gov/nchs/data/hus/hus .pdf#
according to a report by cbs news atlanta, approximately in americans use prescription drugs. with such a large percentage of the working population taking prescription medications regularly, people taking medications need a basic understanding and awareness, and should always do their research prior to international travel about bringing their drugs with them into another country. in general, most countries allow up to a -day supply of legitimately prescribed medications, in their original bottle. more than days of prescription medication on a traveler can be considered a violation of many countries' laws, particularly when it comes to controlled substances such as narcotic pain medication or psychotropic drugs. in some cases, it simply isn't enough to carry the original prescription bottles with medication in them; travelers may be required to carry additional documentation along with having filed advance approval forms to be in compliance with the jurisdiction in question. in particular, narcotics or psychotropic drugs must have extensive paperwork prepared by your doctor and submitted to the government of the country that you are visiting well in advance of travel, in order for approval to be processed. employers must consider providing this kind of information to travelers with their pretrip briefings or risk reports, where applicable. the possibility of medicine being confiscated and/or criminal charges being filed against someone for lack of approval to transport controlled substances into some countries is very real, and could cost someone their life if stranded on international travel without their medicine. tclara, a travel data analytics firm, has developed a scoring system to track how much wear and tear each traveler accumulates from his or her travels.
the goal is to predict which road warriors are at the highest risk of burnout, so that management can intervene in a timely manner. the system uses a company's managed travel data to score a dozen factors found in each traveler's itineraries. trip friction points are assigned to factors such as the length of the flight, the cabin, the number of connections and time zones crossed, the time and day of week of each flight, and so on. this allows for traveler-specific and company-specific benchmarking, which in turn helps senior executives to influence travel policy, procurement strategy, and traveler behavior to optimize a managed travel program. push travelers through too many pain points, and they may soon find reasons not to take the next trip. for example, think about flying coach from chicago to singapore, or taking a short-haul connection for a lower fare. tighten the travel policy too much, and you could have recruiting and retention problems, which could have serious cost or business implications. companies shouldn't focus solely on minimizing the transaction cost of their trips; instead, they should focus on minimizing the total cost of traveling: the sum of the trip's transaction cost plus the cost of traveler friction, or the "total cost paradigm." to put trip friction into perspective, tclara provides two trip examples showing a low level of trip friction in "trip a" versus a higher level in "trip b." according to tclara, their data shows that trip friction is clearly correlated with higher road warrior and frequent traveler turnover. while strong travel policies under managed corporate travel programs are critical to successful trm (versus unmanaged, open booking allowances), there is a delicate balance between cost savings, safety, traveler satisfaction, and, very importantly, business continuity. trip friction and traveler friction are good examples of the link between trm and operational risk management (see chapter ), showing how losses of productivity or of employees under the guise of trm can impact company production and/or success. the personal well-being of travelers might be the most surprising topic for consideration, but it certainly is relevant in the context of trm programs today. believe it or not, employers must be as cognizant of their employees' or contractors' mental well-being as of their physical safety. stressed out, tired, or even unhappy employees can represent lower productivity and a higher threat of risk. something as simple as knowingly requiring someone to work in a stressful environment without trying to make it better, or just working them to excess, can cause an employee to suffer various forms of posttraumatic stress or depression. it can also be as extreme as requiring employees to work in a stressful situation without being properly trained or counseled, as was the case with some flight attendants who may have been forced to fly again out of new york immediately after witnessing the 9/11 attacks, when commercial flights began operating again, without consideration of stress or trauma, proper treatment, and counseling. to the extent that employers monitor and evaluate the physical safety of employees or contractors in the workplace, they must now also take notice of the level of employee/contractor stress and contribute to overall happiness.
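a minimal python sketch of this kind of itinerary-based friction scoring; the factor list and point values here are invented stand-ins, since tclara's actual scoring model and weights are proprietary.

```python
# hypothetical factor weights; the real model's point values are proprietary
FRICTION_POINTS = {
    "long_haul_economy": 40,   # e.g., flying coach from chicago to singapore
    "connection": 10,          # per connection
    "timezone": 3,             # per time zone crossed
    "red_eye": 15,
    "weekend_flight": 8,
}

def trip_friction(itin: dict) -> int:
    """score one itinerary from managed-travel data."""
    score = 0
    if itin.get("cabin") == "economy" and itin.get("flight_hours", 0) >= 8:
        score += FRICTION_POINTS["long_haul_economy"]
    score += FRICTION_POINTS["connection"] * itin.get("connections", 0)
    score += FRICTION_POINTS["timezone"] * itin.get("timezones_crossed", 0)
    if itin.get("red_eye"):
        score += FRICTION_POINTS["red_eye"]
    if itin.get("weekend"):
        score += FRICTION_POINTS["weekend_flight"]
    return score

def burnout_candidates(trips_by_traveler: dict, threshold: int = 300) -> list:
    """flag road warriors whose accumulated friction suggests burnout risk."""
    return [traveler for traveler, trips in trips_by_traveler.items()
            if sum(trip_friction(t) for t in trips) > threshold]

# example: one harsh itinerary scores 40 + 10 + 39 + 15 = 104 points
print(trip_friction({"cabin": "economy", "flight_hours": 17,
                     "connections": 1, "timezones_crossed": 13, "red_eye": True}))
```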
it turns out that employees with high states of well-being have lower health care costs. it's unfortunate that employers must usually see a financial benefit associated with such things before implementing them, but in addition to health care costs, if people are happier and healthier, it stands to reason that they are also more productive. the cwt solutions group conducted a study to shed light on the hidden costs of business travel caused by travel-related stress. their aim was to understand and measure how and to what extent traveler stress accumulates during regular business trips. they defined a methodology and a set of key performance indicators (kpis) to estimate the impact that this travel-induced stress has on an organization (see "the carlson wagonlit travel solutions group study"). the scope of the study includes data from million business trips booked and recorded by carlson wagonlit travel (cwt) over a -year period. they followed a divide-and-conquer approach: each trip was conceptually broken down into potentially stressful activities covering the pretrip, during-trip (transportation- and destination-related elements), and posttrip phases. associated stress was measured based on the duration and the perceived stress intensity of each activity. in essence, each step of the trip was viewed as having two components: stress-free time and lost time. to quantify the effects of stress, the study introduced the following kpis (key performance indicators): the travel stress index (tsi) across all trips booked through cwt is %. the results show that the actual lost time is . hours per trip, on average. the largest contributions to this lost time arise from flying economy class on medium- and long-haul flights ( . hours) and getting to the airport/train station ( . hours). the financial equivalent of this . hours is us$ . the lost time greatly depends on the type of trip taken: an increase in transportation time typically generates an increase in lost time, and average actual lost time varies accordingly by trip type. finally, the study indicates that the impact of stress can be reduced, but not entirely eliminated. the authors analyzed the tsi on a client-by-client basis and found that companies can expect to control, on average, percent of the actual lost time. in a previous publication [ref . ], cwt solutions group presented the perceived stress reported for activities related to a typical business trip. the current study incorporates of these factors (table . ), including nine of the top -scoring ones (those with scores above / ); the remaining factors are either challenging to quantify (e.g., "eating healthily at destination") or require certain data that was not available at this time. several stress factors, such as flight delays, mishandled baggage, and traveling to a high-risk destination, require the usage of external data. according to cornell university law school, in general terms, intellectual property is any product of the human intellect that the law protects from unauthorized use by others. the ownership of intellectual property inherently creates a limited monopoly in the protected property. intellectual property is traditionally comprised of four categories: patent, copyright, trademark, and trade secrets. in summary, if you are in business, you likely have some intellectual property to protect. it could be an idea, or simply a process that you use, which gives you a competitive edge.
most people think of a stolen laptop or mobile phone as the vehicle for stolen intellectual property, but a far more common vehicle is the flash drive, which many business travelers carry with them on business trips today and which isn't monitored or regulated in the same manner as phones, computers, or tablets. companies should limit the use of flash drives to those drives that have some level of fips (u.s. federal information processing standard) validation to encrypt the data and/or destroy the data should the drive be physically tampered with in an attempt to access its contents. information on current fips standards (fips 140-2) and announcements regarding the upcoming fips 140-3 standard can be found by visiting http://csrc.nist.gov/groups/stm/cmvp/standards.html# . many companies have policies specific to certain countries whereby, when travelers intend to visit the countries in question, they either cannot take laptops or standard mobile devices with them, or they must take "clean machines," hardware designed specifically for travel to countries with high rates of intellectual property theft. some of this hardware may have special configurations or software to add layers of protection, in addition to not storing important files locally (i.e., cloud computing); alternatively, valuable files are transported via one-time-use usb flash drives. because there are times when identifying intellectual property thieves can be nearly impossible, one might not have the opportunity to take advantage of any legislation or treaties. however, it is good to know that programs are developing and in place to try to protect intellectual property owners, such as the trips (trade-related aspects of intellectual property rights) agreement from the wto (world trade organization). trips was designed to set standards for how intellectual property rights are protected around the world under common international rules. these trade rules are seen as a way to provide more predictability and order, and a system for dispute resolution, providing a minimum level of protection for all wto member governments. for more details on the trips agreement, see https://www.wto.org/english/thewto_e/whatis_e/tif_e/agrm _e.htm. as of may , countries place various forms of restrictions on the entry, stay, and/or residence of people who are hiv-positive. in , the united states removed its entry restrictions for people living with hiv, which received considerable media coverage and is believed to have had an influence on many other countries' legislation on the matter, as the number of countries with such restrictions has declined from in to in . restrictions vary from country to country, but fall into several categories. reminder: although this text provides various reference materials found on the internet, there is no substitute for or comparison to the quality of medical and security intelligence created, monitored, and provided by qualified risk intelligence providers, which are at the core of employer-managed trm programs. one specific reason for the importance of risk intelligence providers is that guidelines, laws, and requirements regularly change. what is surprising to realize is that some of the countries from which an hiv-positive traveler could be deported if the traveler's hiv status were known are common destinations for many business travelers today.
imagine a security check that uncovers prescription hiv treatment medication in a country with entry restrictions. this is a difficult position for employers because of the privacy concerns of employees or travelers and their medical records, which are not typically the kinds of records or information that a person shares with employers. however, just as with prescription medications that people can travel with, employers need to provide appropriate training and information to travelers going to places where hiv concerns may be an issue. while adding this kind of information on top of standard risk and policy disclosures may be an extensive and painfully large amount of information to read and understand prior to travel, employers have a duty to provide it, and travelers have a duty to understand it and act accordingly if one or more of any disclosed travel restrictions apply to them. in some of the stricter countries with legislation that allows deportation of hiv-positive travelers, deportation often doesn't apply to travelers connecting or in transit only. however, employers and travelers have to decide whether or not they want to take such a chance. some countries require medical exams for those who intend to stay longer than days, and if hiv is discovered, doctors are required to report it to the government, and the law will be administered relative to the country in question. at the time of this publishing, a number of countries maintain strict regulations for travelers with restricted medications (see the full list in the incb "yellow list"). measuring traveler wear and tear: too much travel can burn many a road warrior out. the costs of this burnout are well known: lost productivity, increased safety risks, poor health, increased stress at work and home, unwillingness to travel, and, ultimately, increased attrition. sources cited for stress triggers in business travel include a leading publisher of flight information to travelers and businesses around the world, sita (www.sita.aero), and ijet (www.ijet.com), an intelligence-driven provider of operational risk management solutions working with more than multinational corporations and government organizations. special considerations for evacuating disabled travelers include:
1. having adequate medical supplies available during and after evacuation transportation.
2. an accessible method of handicap transport.
3. addressing any additional criteria needed to determine whether the disabled traveler should be transported or be sheltered in place.
a. deciding who makes the call about whether it is safer to "stand by for assistance."
4. determining whether the transport destination is handicap accessible.
5. determining whether the transport destination has adequate food, shelter, and supplies for any special needs.
6. determining whether employers are prepared to incur any additional costs relative to evacuating disabled travelers.
a. determining whether adequate resources are available.
b. identifying the risks or costs of a lack of planning.
the adoption of the single convention on narcotic drugs, 1961, is regarded as a milestone in the history of international drug control. the single convention codified all existing multilateral treaties on drug control and extended the existing control systems to include the cultivation of plants that were grown as the raw material of narcotic drugs. the principal objectives of the convention are to limit the possession, use, trade in, distribution, import, export, manufacture, and production of drugs exclusively to medical and scientific purposes, and to address drug trafficking through international cooperation to deter and discourage drug traffickers. the convention also established the international narcotics control board, merging the permanent central board and the drug supervisory board. article 36, penal provisions, of the single convention on narcotic drugs, 1961, as amended by the 1972 protocol amending the single convention on narcotic drugs, provides: 1. a. subject to its constitutional limitations, each party shall adopt such measures as will ensure that cultivation, production, manufacture, extraction, preparation, possession, offering, offering for sale, distribution, purchase, sale, delivery on any terms whatsoever, brokerage, dispatch, dispatch in transit, transport, importation and exportation of drugs contrary to the provisions of this convention, and any other action which in the opinion of such party may be contrary to the provisions of this convention, shall be punishable offences when committed intentionally, and that serious offences shall be liable to adequate punishment particularly by imprisonment or other penalties of deprivation of liberty. b. notwithstanding the preceding subparagraph, when abusers of drugs have committed such offences, the parties may provide, either as an alternative to conviction or punishment or in addition to conviction or punishment, that such abusers shall undergo measures of treatment, education, after-care, rehabilitation and social reintegration in conformity with paragraph 1 of article 38. unfortunately, people sometimes die while away from home on business. making arrangements to transport their remains across international borders can be complicated and expensive, as legislation and protocols vary greatly from country to country, as do the suppliers who provide such services. don't assume that your tmc will or can handle this for you; usually these situations are handled by medical emergency or insurance providers. the following items should be covered in repatriation-of-mortal-remains insurance:
• if passing takes place outside of a medical facility, adequate transportation (ambulance, airplane, or helicopter) equipped with proper storage and handling capabilities for the body during transport to the closest appropriate medical facility prior to international transport.
• treatment costs incurred (including embalming).
• a legally approved container for shipment of the remains.
• transportation costs for the deceased and an accompanying adult to the country of residence.
• cremation if legally required (conditional).
other coverage may be included for things such as hotel accommodations pre- or posttreatment prior to the passing of the insured, but coverage will vary widely between providers.
under such stressful circumstances, it is very important for the insured's family to understand the claims process and coverage, such as whether payment will be provided directly to suppliers for services as needed, or whether prepayment will be required by the family or loved ones, who must then request reimbursement later. such understanding can reduce the stress associated with paperwork, authorizations, and payment.
key: cord- -toevn u authors: venkatesan, sudhir; carias, cristina; biggerstaff, matthew; campbell, angela p; nguyen-van-tam, jonathan s; kahn, emily; myles, puja r; meltzer, martin i title: antiviral treatment for outpatient use during an influenza pandemic: a decision tree model of outcomes averted and cost-effectiveness date: - - journal: j public health (oxf) doi: . /pubmed/fdy sha: doc_id: cord_uid: toevn u
background: many countries have acquired antiviral stockpiles for pandemic influenza mitigation, and a significant part of the stockpile may be focussed towards community-based treatment. methods: we developed a spreadsheet-based, decision tree model to assess outcomes averted and cost-effectiveness of antiviral treatment for outpatient use from the perspective of the healthcare payer in the uk. we defined five pandemic scenarios: one based on the a(h n ) pandemic and four hypothetical scenarios varying in measures of transmissibility and severity. results: community-based antiviral treatment was estimated to avert between and % of hospitalizations in an overall population of . million. higher proportions of averted outcomes were seen in patients with high-risk conditions, when compared to non-high-risk patients. we found that antiviral treatment was cost-saving across pandemic scenarios for high-risk population groups, and cost-saving for the overall population in higher-severity influenza pandemics. antiviral effectiveness had the greatest influence on both the number of hospitalizations averted and on cost-effectiveness. conclusions: this analysis shows that across pandemic scenarios, antiviral treatment can be cost-saving for population groups at high risk of influenza-related complications. influenza pandemics are rare, unpredictable events with potentially serious consequences. they are considered important public health emergencies by the world health organization and a number of countries, many of which have specific pandemic preparedness plans. [ ] [ ] [ ] neuraminidase inhibitors (nai) often feature prominently in pandemic influenza preparedness plans, and several high-income countries have acquired nai stockpiles because pandemic-specific vaccines may not be widely available for up to months. clinical trials show nai effectiveness in modestly reducing the duration of symptomatic illness in patients with uncomplicated seasonal influenza. [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] [ ] however, these trials were under-powered to assess nai impact on secondary outcomes such as hospitalizations. [ ] [ ] [ ] two meta-analyses of the extant clinical trial data, examining outcomes based on the intention-to-treat-influenza-infected (itti) approach, found that early nai treatment (≤48 h of symptom onset) was associated with risk reductions of and % for hospital admission in otherwise healthy patients with influenza. other meta-analyses of trial data that evaluated all outpatients with influenza-like illness (ili) using the intention-to-treat (itt) approach did not find a reduction in hospitalizations in those treated with nais. if a future pandemic is severe, hospital capacity may be exhausted and therefore reserved for the severely ill who are most likely to benefit. countries may decide to focus a significant part of their pandemic response plan towards community treatment aimed at averting hospitalizations.
policy makers considering nai stockpiling for a future pandemic of unknown severity will have to consider both the number of hospitalizations averted and the cost-effectiveness of such an intervention. nai treatment for pandemic influenza has generally been estimated to be cost-effective for higher-income countries. [ ] [ ] [ ] however, a review identified that previous health economic evaluations often neglected pandemic uncertainty by only evaluating singular, fixed pandemic scenarios. moreover, few models have incorporated the increased risks of adverse pandemic influenza-related outcomes for patients with at-risk conditions. we present a spreadsheet-based decision tree model that evaluates the impact of community-based nai treatment in terms of averted influenza-related hospitalizations and the associated cost-effectiveness in a range of pandemic scenarios. we built a decision tree model (fig. ) to calculate the impact of community-based nai treatment for five pandemic scenarios. the first scenario is based on the uk's a(h n )pdm experience, with a clinical attack rate (car) of % and a case hospitalization risk (chr) of . and . % among non-high-risk and high-risk patients, respectively (table ). the other four scenarios were based on hypothetical pandemics that varied the car ( and %) and the chr ( . - . % for non-high-risk patients; - % for high-risk patients) (table ). the hypothetical scenarios are based on a risk assessment framework developed by the cdc. a standardized risk space was defined based on previous influenza pandemics, and hypothetical pandemic scenarios were identified from this risk space to allow easy comparisons to future economic evaluations. the chrs for the high-risk groups in these four hypothetical pandemics were assumed to be five times the chr for the non-high-risk group of patients, based on estimates from the a(h n ) pandemic. we also assumed that the percentage of patients seeking outpatient/ambulatory care would increase with the chr of the pandemic, ranging from % among non-high-risk patients in a 2009-type pandemic to ~ % among high-risk patients when the chr is % (table ). we estimated the number of deaths averted through averting hospitalizations by multiplying the number of hospitalizations averted by an in-hospital mortality risk that was constant across the scenarios. we did not differentiate between oseltamivir and zanamivir in the definition of nais in our model; however, we based our cost and treatment effectiveness estimates on data specific to oseltamivir. we focus on community-based treatment and do not consider nai prophylaxis. we used nai effectiveness estimates from an individual participant data (ipd) meta-analysis of clinical trial data on otherwise healthy patients with seasonal influenza, based on itti analysis (relative risk: . ; 95% confidence interval: . - . ), since nais are not active against non-influenza respiratory infections. to account for nai prescriptions to patients with non-influenza ili, we assumed a 'wastage factor' of %, i.e. patients with non-influenza ili would be prescribed % of the number of regimens prescribed to patients with influenza. we assumed that all patients would start nai treatment ≤48 h of symptom onset in our main model and then performed a sensitivity analysis varying the promptness of care-seeking within 48 h of symptom onset from to % (the percentage of all care-seeking patients who do so ≤48 h of symptom onset).
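a minimal python sketch of the decision-tree arithmetic described above, using point estimates only and entirely invented inputs (the unit costs anticipate the cost parameters described next; none of the values are the paper's):

```python
def outcomes_and_costs(pop, car, chr_, seek, prescribed, comply, rr, wastage,
                       cost_consult, cost_nai, cost_hosp):
    """single pass of the decision-tree arithmetic with point estimates only;
    rr is the relative risk of hospitalization given early nai treatment."""
    cases = pop * car                                  # symptomatic influenza cases
    baseline_hosp = cases * chr_                       # no-treatment arm
    treated = cases * seek * prescribed * comply       # effectively treated cases
    averted = treated * chr_ * (1.0 - rr)              # hospitalizations averted
    # wastage: extra regimens prescribed to non-influenza ili patients
    regimens = cases * seek * prescribed * (1.0 + wastage)
    net_cost = (cases * seek * cost_consult            # consultations for care-seekers
                + regimens * cost_nai                  # drug plus delivery
                - averted * cost_hosp)                 # hospital costs saved
    return baseline_hosp, averted, regimens, net_cost / averted

# invented inputs for a mild scenario (not the paper's values)
print(outcomes_and_costs(pop=65e6, car=0.30, chr_=0.002, seek=0.5, prescribed=0.7,
                         comply=0.8, rr=0.4, wastage=1.0,
                         cost_consult=25.0, cost_nai=15.0, cost_hosp=1500.0))
```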
based on estimates from 2009, we also assumed that % of patients would be compliant with the prescribed regimen. unit cost data for our model were obtained from secondary sources, including the british national formulary and uk-based reports on the cost of health and social care (table ). briefly, we used a weighted average cost of physician-based consultation of £ . . this cost was calculated as a weighted average of the cost of either a conventional primary care consultation or a phone-based consultation with the national pandemic flu service (npfs). the weighting of the costs was done using the proportion of assessments routed through each consultation service in 2009. we used a cost of £ for an nai prescription, which included the cost of delivery. costs of hospitalizations ranged from £ for non-high-risk patients to £ for high-risk patients (table ). all costs were inflated to british pounds sterling (£) of a common base year using the hospital and community health services (hchs) index. the overall population of . million was based on the uk population. we performed the analyses from the perspective of the healthcare payer, the uk national health service (nhs). given that we did not undertake a full cost-utility analysis, we chose to measure our outcomes in natural units (deaths and hospitalizations) rather than in standardized units (qalys). we considered a time horizon of less than one year (one pandemic event); therefore, a discounting rate would not apply. in each pandemic scenario, we compared the number of outcomes averted (hospitalizations and deaths) and the total costs associated with nai treatment versus no nai treatment. we assessed the cost-effectiveness of community-based nai treatment by estimating the cost per averted hospitalization. our primary analysis was performed using the middle values of our input parameters, using the formulas provided in appendix . to account for uncertainty in parameter estimates, we performed sensitivity analyses by probabilistically varying input parameters along pre-defined probability distributions (table ) and using monte carlo simulations ( iterations using latin hypercube sampling) to calculate mean output values and 95% confidence intervals for different combinations of input parameters. the sensitivity analyses were performed using the software @risk version . (palisade corporation). further, we also performed a two-way sensitivity analysis to assess the impact of varying nai effectiveness and patient compliance on the outcome (hospitalizations averted). in a 2009-like pandemic scenario, we estimated that in our base-case model (no nai treatment) there would be hospitalizations in the overall population. we estimated that . million regimens of nais would be dispensed for outpatient treatment. nai treatment would have averted ( %) hospitalizations in a population of . million ( hospitalizations averted/million population) at a cost of £ per hospitalization averted (table ). the cost to avert one hospitalization was £ in high-risk populations and £ in the non-high-risk population (table ). in the % car-severity scenario (chr: non-high-risk = . %; high-risk = . %), we estimated that hospitalizations would occur. the . million regimens of nais would be dispensed, averting ( . %) hospitalizations at a cost per averted hospitalization of £ in the overall population and £ in the non-high-risk population. nai treatment was seen to be cost-saving in the high-risk population. in the % car-severity scenario (chr: non-high-risk = %; high-risk = %), we estimated that over .
million hospitalizations would occur. the . million nai regimens would be dispensed, averting ( . %) hospitalizations in the total population at a cost per averted hospitalization of £ in the non-high-risk population. nai treatment was seen to be cost-saving in the overall population and in the high-risk population. in the % car-severity scenario (chr: non-high-risk = . %; high-risk = . %), we estimated that over hospitalizations would occur. the . million nai regimens would be dispensed, averting ( . %) hospitalizations at a cost per averted hospitalization of £ in the overall population and £ in the non-high-risk population. nai treatment was seen to be cost-saving in the high-risk population. in the fourth pandemic scenario (chr: non-high-risk = %; high-risk = %), we estimated that over . million hospitalizations would occur. the . million nai regimens would be dispensed, averting ( . %) hospitalizations in the overall population at a cost per averted hospitalization of £ in the non-high-risk population. nai treatment was seen to be cost-saving in the overall population and in the high-risk population. we found that varying the proportion of care-seeking patients who do so within 48 h of symptom onset, while keeping all other variables constant, lowered the percentage of averted hospitalizations in the overall population from . % (assuming %) to . % (assuming %) in the 2009-like pandemic scenario (table , supplemental table s ). our sensitivity analyses revealed that using just the middle values of input parameters in a simple multiplicative model without probability distributions was likely to overestimate the number of hospitalizations averted and underestimate the cost per averted hospitalization. for the 2009-like pandemic scenario, multiplying the middle values of input parameters (table ) overestimated the overall number of averted hospitalizations by % and underestimated the overall cost per averted hospitalization by % when compared to the mean estimated from the monte carlo simulation (supplemental table s ). similar differences in estimates were observed in the other scenarios as well. the sensitivity analyses, based on a 2009-like pandemic scenario, indicated that nai effectiveness had the greatest impact both on the total number of hospitalizations averted and on the cost per hospitalization averted (fig. ). when nai effectiveness was varied from to %, the resulting overall proportion of averted hospitalizations ranged between and %, at a cost per averted hospitalization of £ -£ . the percentage of care-seeking patients who were prescribed nais, the proportion of nai prescriptions going to non-influenza patients, and nai treatment compliance were among the top three most influential parameters for one or both outcomes (fig. ). in our two-way sensitivity analysis, we varied the treatment compliance level along with nai effectiveness beyond the 95% confidence intervals of our input parameter (from % to % effectiveness). increased compliance levels were consistently associated with an increased number of averted hospitalizations across nai effectiveness estimates (fig. ). prescribing nais to non-influenza ili patients had a considerable effect on the cost per averted hospitalization: for the 2009-like pandemic scenario, this ranged from £ per averted hospitalization (wastage factor = %) to £ per averted hospitalization (wastage factor = %).
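the paper's probabilistic analysis was run in @risk, which is spreadsheet-based; as a rough illustration of the same latin hypercube monte carlo idea, here is a python sketch with invented parameter bounds (not the paper's distributions):

```python
import numpy as np
from scipy.stats import qmc  # scipy >= 1.7

# latin hypercube sample over three uncertain inputs; bounds are invented
sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=10_000)
lower = np.array([0.2, 0.6, 0.4])   # rr, compliance, care-seeking
upper = np.array([0.6, 0.9, 0.7])
rr, comply, seek = qmc.scale(unit, lower, upper).T

cases = 65e6 * 0.30                 # population x clinical attack rate (illustrative)
averted = cases * seek * 0.7 * comply * 0.002 * (1.0 - rr)

print(f"mean hospitalizations averted: {averted.mean():,.0f}")
print(f"95% interval: {np.percentile(averted, [2.5, 97.5]).round(0)}")
```

as the paper notes, the mean of such a simulation can differ noticeably from the result of simply multiplying the middle values of each input, because the output is a product of several skewed quantities.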
main finding of this study: we found that community-based nai treatment would avert a significant proportion of hospitalizations and deaths, particularly in high-risk patients, across the pandemic scenarios we explored in this analysis. however, a substantial number of hospitalizations and deaths would continue to occur even with community-based nai treatment. the proportion of hospitalizations averted by nais could be an important consideration when planning for conditions in which hospital capacity could be exceeded. community-based nai treatment was seen to be cost-saving for the overall population in a pandemic with a high car and high severity, and cost-saving for patients at high risk of complications from influenza across all the pandemic influenza scenarios tested. the value of nai treatment for population groups not at high risk and for milder pandemic scenarios will have to be determined by careful review under country-specific willingness-to-pay thresholds and the desire to reduce the number of hospitalizations and potential hospital capacity issues. what is already known on this topic: nai treatment for pandemic influenza has generally been shown to be cost-effective when compared to no nai treatment. [ ] [ ] [ ] previous studies have found that nai effectiveness is, by far, the most influential factor affecting the numbers of outcomes averted and the associated cost-effectiveness. results from our sensitivity analysis support this finding. a study based in the united states that used a similar model showed slightly lower proportions of hospitalizations averted due to nai treatment when compared to ours, but the difference could be because of the lower level of treatment effectiveness assumed in the us study. the us study further found that while nai treatment averted many hospitalizations, large numbers of hospitalizations would remain, which is similar to what we have found. we found that variations in the nai prescription rate, treatment compliance, and healthcare-seeking behaviour (including the choice to seek care and the promptness of care-seeking) impacted considerably on the outcomes, suggesting that even with a drug of fixed effectiveness, factors relating to healthcare-seeking and healthcare delivery could significantly influence the total number of hospitalizations and deaths averted. these data indicate that a successful pandemic stockpiling strategy must be linked to operational procedures which optimize timely access to antivirals, widespread treatment implementation, and high levels of compliance in targeted groups. one recognized limitation of some previous economic analyses of nai treatment has been that entire populations have been modelled homogeneously, without accounting for the increased likelihood of influenza-related care-seeking and complications in patients with underlying at-risk conditions. in our model, we vary the propensity to seek care and the chr by patients' at-risk status. the significance of this is that countries with limited resources could consider obtaining smaller antiviral stockpiles to target at-risk population groups and avert a higher number of hospitalizations and deaths for each antiviral course dispensed than if they adopted a treat-all approach. the car was an important factor in determining the number of nai regimens that would be needed for community-based treatment.
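to make the targeting argument concrete, a small python sketch comparing courses dispensed per averted hospitalization under a high-risk-only versus a treat-all strategy; the case counts are invented and merely assume, as in the paper's scenarios, that high-risk patients have a fivefold chr:

```python
def courses_per_averted(cases, chr_, rr=0.4, seek=0.5, prescribed=0.7,
                        comply=0.8, wastage=1.0):
    """nai courses dispensed for each hospitalization averted."""
    averted = cases * seek * prescribed * comply * chr_ * (1.0 - rr)
    regimens = cases * seek * prescribed * (1.0 + wastage)
    return regimens / averted

# invented case counts; high-risk chr assumed five times the non-high-risk chr
high_risk_only = courses_per_averted(cases=3.0e6, chr_=0.010)
treat_all = courses_per_averted(cases=19.5e6, chr_=0.0032)  # risk-weighted mean chr
print(f"high-risk only: {high_risk_only:,.0f} courses per averted hospitalization")
print(f"treat-all:      {treat_all:,.0f} courses per averted hospitalization")
```

with these invented inputs, the high-risk-only strategy averts a hospitalization for roughly a third of the courses the treat-all strategy needs, which is the intuition behind targeting a smaller stockpile.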
our model showed that a highly transmissible but low-severity pandemic would require a larger nai stockpile than a pandemic with lower transmissibility and higher severity. however, across all pandemic scenarios, the number of nai regimens dispensed for outpatient treatment was well below the uk's published national nai stockpile size of almost million courses of the drug. we have adopted a simple and transparent approach to model building, in which we account for important epidemiological factors, population healthcare-seeking behaviour, and service utilization rates in a range of pandemic scenarios. our analyses are uk-focussed, but the spreadsheet tool is easily adaptable to represent other healthcare systems. while the epidemiological parameters are unlikely to change drastically by country, input parameters relating to healthcare utilization and costs will need to be replaced with country-specific ones. we provide the simple version of the spreadsheet tool (without the sensitivity analysis) in appendix . we used updated nai effectiveness estimates from seasonal influenza data, although observational data from the a(h n ) pandemic in a high-severity (high risk of hospitalization) population suggest similar estimates of nai effectiveness (≤48 h from symptom onset). we assumed nai effectiveness is the same in patients with and without at-risk conditions. while there is some evidence to suggest that the level of effectiveness against hospitalization is similar for both groups, there is also evidence that suggests a reduction in nai effectiveness in patients with at-risk conditions. this study is subject to limitations. we used a decision tree model (not a transmission dynamic model) and assumed no effect of nai treatment on transmission. there is evidence to suggest that nai treatment, at a population level, is likely to have minimal impact on influenza transmission. however, decision tree models are known to be limited, especially in their ability to describe the change in influenza attack rates in different risk groups over the course of a pandemic. a comparison of static and dynamic models of nai treatment for pandemic influenza concluded that nai treatment was cost-effective with both modelling paradigms, although the associated cost-effectiveness ratios were seen to differ. due to a lack of evidence specific to hospitalization, we did not consider benefits of nai treatment started >48 h after symptom onset. nai treatment has, however, been shown to be beneficial even when started beyond 48 h from symptom onset. the use of nais may be associated with additional costs to the healthcare system due to possible adverse effects, but we have not considered these costs in our model, since most side effects are known to be minor. finally, we have assumed that the multiplier for high-risk patients remains constant between severity scenarios, resulting in a chr as high as %. chrs of %, even for high-risk patients, may be unlikely. our analyses show that nai treatment in outpatients can be cost-saving, particularly for population groups at high risk of influenza-related complications. model-based estimates like these of the potential hospitalizations, deaths, and costs associated with different pandemic scenarios can help countries consider different treatment options and inform stockpiling decisions while developing pandemic preparedness plans. nai stockpiling decisions are also influenced by other costs to the healthcare system related to storage and maintenance of the nai stockpile.
currently, the shelf-life of the mg hard capsules of oseltamivir phosphate that comprise most of the nai stockpile is estimated to be years if stored as per instructions. however, influenza pandemics cannot be predicted, and nai stockpiles could remain unused at the end of their shelf-life, or they may be rendered ineffective or less relevant by the development of antiviral drug resistance or by newer, more effective influenza antiviral therapies. additionally, evidence suggests that in-hospital nai treatment may also be associated with protective effects, and nai treatment has been shown to be cost-effective even if the benefits of nai usage are confined only to those treated in hospital. if a pandemic treatment policy was pursued which combined community use of nais to prevent hospital admission and nai treatment of hospitalized patients to reduce mortality, then cost-effectiveness and stockpile strategies across both scenarios would need to be considered. future research in optimizing nai distribution to risk groups during a pandemic will further inform the cost-effectiveness of stockpiling. supplementary data are available at the journal of public health online.
references:
dh pandemic influenza preparedness team. uk influenza pandemic preparedness strategy.
pandemic influenza preparedness and response: a who guidance document.
department of health and human services.
selecting viruses for the seasonal influenza vaccine.
efficacy and safety of the neuraminidase inhibitor zanamivir in the treatment of influenza a and b virus infections.
randomized, placebo-controlled studies of inhaled zanamivir in the treatment of influenza a and b: pooled efficacy analysis.
efficacy and safety of oseltamivir in treatment of acute influenza: a randomised controlled trial.
efficacy and safety of the oral neuraminidase inhibitor oseltamivir in treating acute influenza: a randomized controlled trial.
zanamivir for treatment of symptomatic influenza a and b infection in children five to twelve years of age: a randomized controlled trial.
efficacy of oseltamivir treatment started within days of symptom onset to reduce influenza illness duration and virus shedding in an urban setting in bangladesh: a randomised placebo-controlled trial.
intravenous peramivir for treatment of influenza a and b virus infection in high-risk patients.
single dose peramivir for the treatment of acute seasonal influenza: integrated analysis of efficacy and safety from two placebo-controlled trials.
evidence and policy for influenza control.
antivirals for influenza: where now for clinical practice and pandemic preparedness?
we would like to thank anita patel from cdc, atlanta, for reviewing this manuscript and offering helpful comments.

key: cord- - y b authors: madanoglu, melih title: state-of-the-art cost of capital in hospitality strategic management journal: handbook of hospitality strategic management cord_uid: y b

making well-informed and effective capital investment decisions lies at the heart of any successful business organization. prior to investing in a project, however, an executive/manager should make three key estimates to ensure the viability of the project: the economic useful life of the asset, the future cash flows that the project will generate, and the discount rate that properly accounts for the time value of the capital invested and compensates investors for the risk they bear by investing in that project (olsen et al.). although the first two items are fairly challenging to estimate, the last one is even more so. in their book on the cost of capital, ogier et al. provided an excellent example which i would like to use as a practical introduction to this chapter, taking the liberty of modifying the story in accordance with the needs of this chapter. imagine yourself at the edge of a river, where your goal is to cross while getting minimally wet in the least possible time. before making your move, you need to turn to a local inhabitant who knows which stepping stones are safe, what the velocity and viscosity of the water are, what the turning moments are, and what the probability of loose stones on the stream bed is. this situation is similar to the world of today's business investments: executives need to make informed decisions about their investments and find out the minimum acceptable rate of return their shareholders expect as compensation for the risks those investors undertake. in addition, when an investment consists of both debt and equity, the executives need to estimate the total cost of capital employed in the project in order to be able to pay their debt holders.
this chapter is intended to serve as a field guide to cost of capital estimation for hospitality executives and practitioners. before getting into the practical aspects of cost of capital, however, some relevant concepts will be discussed from a theoretical perspective to provide background, and it is useful first to define what risk is and describe the role it plays in investment decisions.

in the hospitality field, risk is often defined as the variation in returns (probable outcomes) over the life of an investment project (choi; olsen et al.). the concept of risk is at the foundation of every firm as it seeks to compete in its business environment. financial theory states that shareholders face two types of risk: systematic and unsystematic. examples of systematic risk include changes in monetary and fiscal policies, the cost of energy, tax laws, and the demographics of the marketplace. finance scholars refer to the variability of a firm's stock returns that moves in unison with these macroeconomic influences as systematic, or stockholder, risk (lubatkin and chatterjee). stated differently, the level of a firm's systematic risk is determined by the degree of uncertainty associated with general economic forces and the responsiveness, or sensitivity, of the firm's returns to those forces (helfat and teece). in other words, these types of risk are external to the company and outside its control. by contrast, the loss of a major customer as a result of its bankruptcy represents one source of unsystematic, or firm-specific, risk (idiosyncratic or stakeholder risk); other sources include the death of a high-ranking executive, a fire at a production facility, and the sudden obsolescence of a critical product technology (lubatkin and chatterjee). unsystematic risk is a type of risk that an individual investor can eliminate by investing his/her funds in multiple companies' stocks. the same rule may not apply to company executives, since the success of a single project can determine their tenure within their firms.

traditional financial theory looks at investment in securities from a portfolio perspective, assuming that investors are risk-averse and can eliminate the unsystematic risks (variance) associated with investing in any particular firm by holding a diversified portfolio of stocks (markowitz). markowitz pioneered the application of decision theory to investments by contending that portfolio optimization is characterized by a trade-off between the reward (expected return) of an individual security and portfolio risk. since the key aspect of that theory is the notion that a security's risk is its contribution to portfolio risk, rather than its own risk, it presumes that the only risks that matter to investors are those systematically associated with market-wide variance in returns (lubatkin and schulze; rosenberg). investors, it argues, should be concerned only about the impact that an alternative investment might have on the risk-return properties of their portfolio. however, the capital asset pricing model (capm) (lintner; sharpe), to be discussed in detail later, does not explicitly explain what criteria investors should use to select alternative investments or how they should assess the risk features of these investments.
moreover, the capm assumes that because investors can eliminate the risks they do not wish to bear, at relatively low cost to them, through diversification and other financial strategies, there is little need for managers to engage in risk-management activities (lubatkin and schulze). in contrast, the field of strategic management is based on the premise that to gain competitive advantage, firms must make strategic, or hard-to-reverse, investments in competitive methods (portfolios of products and services) that create value for their shareholders, employees, and customers in ways that rivals will have difficulty imitating (olsen et al.). these investments enable firms to protect their earnings from competitive pressure and to increase the level of their future cash flows while simultaneously reducing the uncertainty associated with them. the management of firm-specific risk lies at the heart of strategic management theories (bettis; lubatkin and schulze), and, from this perspective, management must work hard at avoiding investments that create additional levels of risk for the firm. bettis further affirms that the capm's emphasis on the equilibration of returns across firms (i.e., systematic risk) relegates to a secondary role strategy's central concern with managerial actions that seek to delay that equilibration of returns (i.e., unsystematic risks).

thus, the claim that systematic risk is paramount to the firm is undermined by two arguable assumptions from portfolio theory: that stockholders are fully diversified, and that the capital markets operate without such imperfections as transaction costs and taxes. some stockholders, however, are not fully diversified, particularly corporate managers, who have invested heavily, both financially and personally, in a single company (vancil). also, transaction costs, such as brokerage fees, act as a minor impediment, inhibiting other stockholders from completely eliminating unsystematic risk (constantinides). finally, taxes make all stockholders somewhat concerned with unsystematic risk (amit and wernerfelt; hayn), because interest on debt financing is tax deductible, thereby allowing firms to pass a portion of the cost of capital from their stockholders to the government. thus, firms can create value for their stockholders, within limits, by financing investments with debt rather than equity (kaplan; smith). the limits are determined in part by the amount a firm is allowed to borrow and by the terms of such debt, both of which are contingent upon the unsystematic variation in the firm's income streams. lubatkin and chatterjee contend that the debt markets favour firms with low unsystematic risk because they are less likely to default on their loans (this is particularly the case for hospitality industry firms). in summary, the discussion of partially diversified stockholders, transaction costs, and leverage suggests that some stockholders may be concerned with unsystematic risk and factor it, along with market risk, into the value of a firm's stock (amit and wernerfelt; aron; lubatkin and schulze; marshall et al.).

cost of capital is defined as the rate of return a firm must earn on its investment projects in order to maintain its market value and continue attracting needed funds for its operations (fields and kwansa; gitman).
consequently, a firm adds shareholder wealth when it undertakes projects that generate a return higher than the project's cost of capital. cost of capital is an anchor in firm valuation, project valuation, and capital investment decisions. cost of capital is generally expressed as the weighted average cost of capital (wacc):

wacc = (e/v) r_e + (d/v) r_d (1 − t_c)

where e is the market value of equity, d the market value of debt (and thus v = e + d), t_c the corporate tax rate, r_e the cost of equity, and r_d the cost of debt (copeland et al.). both r_d and r_e are difficult to estimate and require careful deliberation.

the cost of debt is relatively simple to calculate when a firm issues bonds that are rated by the major bond-rating agencies such as standard & poor's and moody's, since these ratings may be used as a guide in computing the cost of debt. in addition, an investor may use the bond's yield to maturity, or the rate of return that is in congruence with the rating of the bond. averaging the interest rates of a firm's long-term obligations is another method of calculating the cost of debt. the estimation becomes difficult when a given firm has no bonds and no outstanding long-term debt.

the cost of equity is difficult to estimate in its own right. first, it is generally estimated using historical data, which may be confounded by business cycles and by abnormal events affecting firm stock returns (e.g., a fire in a hotel property) or industry returns (e.g., the terrorism events of september ). second, although several methods have been developed over the years, no single method produces consistent and reliable estimates. last, a hypothetical executive/entrepreneur faces greater challenges still when he/she needs to estimate the required rate of return of a single restaurant/hotel unit. cost of equity can be defined as the rate of return a firm must deliver to shareholders who have foregone other investment opportunities and elected to invest in this particular company. it is a complex concept because firms do not promise to pay a certain level of dividends or to deliver a certain level of stock returns; since there is no contractual agreement between the shareholders and the firm, the expected rate of return on invested equity is extremely challenging to estimate. fortunately, there are models that can help in tackling this task. the next section covers the major cost of equity models that have gained prominence among practitioners and researchers in the last four decades.
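as a minimal sketch of the wacc formula just given, the snippet below implements it directly; the input values are hypothetical placeholders chosen only for illustration.

```python
def wacc(equity_value, debt_value, cost_of_equity, cost_of_debt, tax_rate):
    """weighted average cost of capital: (e/v)*r_e + (d/v)*r_d*(1 - t_c)."""
    v = equity_value + debt_value
    return (equity_value / v) * cost_of_equity \
        + (debt_value / v) * cost_of_debt * (1 - tax_rate)

# hypothetical firm: 70 equity, 30 debt (market values), 12% cost of
# equity, 7% pre-tax cost of debt, 35% corporate tax rate
print(round(wacc(70, 30, 0.12, 0.07, 0.35), 4))  # ~0.0977
```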
one of the early forward-looking methodologies is the dividend growth model (dgm) originally developed by gordon. it offers a very parsimonious method for estimating the discount rate and thus accounting for risk. the dividend growth approach states that the cost of equity is

k_e = (dps / p) + g

where k_e is the cost of common equity, dps the projected dividend per share, p the current market price per share, and g the projected dividend growth rate. the model assumes that, over time, successful reinvestment of retained earnings will lead to growth and growing dividends. the approach suffers from oversimplification, because firms vary greatly in their rate of dividend payout (helfert): common stockholders are the residual owners of all earnings not reserved for other obligations, and dividends paid are usually only a portion of the earnings accruing to common shares. the other major difficulty in applying this model lies in determining the specific dividend growth rate, which is based on future performance tempered by past experience. another key issue is that the model becomes unusable when a firm does not pay dividends.

the capm (lintner; sharpe) is based on the assumption of a positive risk-return trade-off and asserts that the expected return of an asset is determined by three variables: β (a function of the stock's responsiveness to overall movements in the market), the risk-free rate of return, and the expected market return (fama and french). the model assumes that investors are risk-averse and, when choosing among portfolios, are concerned only about the mean and variance of their one-period investment return. this argument is, in essence, the cornerstone of the capm. the model can be stated as

e(r_i) = r_f + β (r_m − r_f)

where r_m is the market return on stocks and securities, r_f the risk-free rate, β the coefficient that measures the covariance of the risky asset with the market portfolio, and e(r_i) the expected return of stock i. although the capm is touted for its relatively simple application, several studies (lakonishok and shapiro; reinganum) present evidence that the positive relationship between β and returns could not be demonstrated for the period of . particularly over the last two decades, even stronger evidence has been developed against the capm by fama and french and by roll and ross. these researchers challenged the model by contending that it is difficult to find the right proxy for the market portfolio, that the capm does not appear to accurately reflect firm size in the cost of equity calculation, and that not all systematic risk factors are reflected in the returns of the market portfolio.

from the strategic management perspective, business executives face the following issues. implicit in the capm is the recommendation that managers should focus on managing their firm's overall market risk by focusing on β, the firm's systematic risk, and not be concerned with what strategists may focus on: firm-specific (unsystematic) risk. chatterjee et al. claim that herein lie two dilemmas: first, decreasing β requires managers to reduce investors' exposure to macroeconomic uncertainties at a cost lower than what investors could transact on their own by diversifying their own portfolios; and second, downplaying the importance of firm-specific risk not only is contrary to the strategic management field but also tempts corporate bankruptcy (bettis). therefore, an executive of a given company has to take into account the total risk of the project because, unlike investors holding stocks of multiple companies, the executive may not be able to diversify the risk of his/her company's investment by investing in multiple projects.
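the capm line above reduces to a one-line function; a minimal sketch follows, with hypothetical inputs that are not drawn from this chapter.

```python
def capm_cost_of_equity(risk_free_rate, beta, market_return):
    """capm: e(r_i) = r_f + beta * (r_m - r_f)."""
    return risk_free_rate + beta * (market_return - risk_free_rate)

# hypothetical inputs: 4% risk-free rate, beta of 1.2, 10% market return
print(round(capm_cost_of_equity(0.04, 1.2, 0.10), 4))  # 0.112
```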
another prominent cost of equity model is the arbitrage pricing theory (apt) developed by ross. the model states that factors other than β affect systematic risk. the apt is based on the assumption that there are some major macroeconomic factors that influence security returns and that, no matter how thoroughly investors diversify, they cannot avoid these factors. thus, investors will "price" these factors precisely because they are sources of risk that cannot be diversified away; that is, they will demand compensation in terms of expected return for holding securities exposed to these risks (goetzmann). just like the capm, this exposure is measured by a factor β. although the model does not explicitly specify the risk factors, the apt depicts a world with many possible sources of risk and uncertainty, instead of seeking an equilibrium in which all investors hold the same portfolio.

chen et al. identified five macroeconomic factors that, in their view, explain expected asset returns: the industrial production index, a measure of the state of the economy based on actual physical output; the short-term interest rate, measured by the difference between the yield on treasury bills (tb) and the consumer price index (cpi); short-term inflation, measured by unexpected changes in the cpi; long-term inflation, measured as the difference between the yield to maturity on long- and short-term u.s. government bonds; and default risk, measured by the difference between the yield to maturity on aaa- and baa-rated long-term corporate bonds (chen et al.; copeland et al.).

the apt describes a world in which investors behave intelligently by diversifying, but they may choose their own systematic profile of risk and return by selecting a portfolio with its own peculiar array of βs. the apt allows a world where occasional mispricings occur: investors constantly seek information about these mispricings and exploit them as they find them. in other words, the apt somewhat realistically reflects the world in which we live (goetzmann). these benefits, however, come with some drawbacks. the apt demands that investors perceive the risk sources and reasonably estimate factor sensitivities; in fact, even professionals and academics have yet to agree on the identity of the risk factors, and the more βs investors have to estimate, the more statistical noise they must put up with. last, this model does not offer much guidance to business executives, as it focuses primarily on investors.
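although the apt leaves the identity of the factors open, its pricing equation is just a sum over factor exposures, e(r_i) = r_f + Σ_k β_k λ_k. the sketch below assumes five hypothetical chen-et-al.-style factor sensitivities and premiums; none of the numbers come from the studies cited.

```python
def apt_expected_return(risk_free_rate, betas, factor_premiums):
    """apt: e(r_i) = r_f + sum over k of beta_k * lambda_k."""
    return risk_free_rate + sum(b * lam for b, lam in zip(betas, factor_premiums))

# hypothetical sensitivities and premiums for five macroeconomic factors
# (industrial production, short-term rate, short- and long-term inflation,
# default risk), for illustration only
print(round(apt_expected_return(0.04,
                                betas=[0.8, 0.3, -0.2, -0.1, 0.5],
                                factor_premiums=[0.05, 0.01, 0.02, 0.01, 0.02]),
            4))  # 0.088
```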
fama and french, once among the major proponents of the capm, found that the relationship between average returns and β was flat and that there was a strong size effect on stock returns. as a result, they developed a model that has gained popularity in recent years among scholars and practitioners in the hospitality industry. the fama-french (ff) model is a multifactor model which argues that factors other than the movement of the market and the risk-free rate impact security prices; it is a multiple regression model that incorporates both size and financial distress in the regression equation. the ff model is typically stated as

e(r_i) − r_f = β (r_m − r_f) + s (smb) + h (hml)

where β is the coefficient that measures the covariance of the risky asset with the market portfolio, r_m the market return, r_f the risk-free rate, s the slope coefficient on small minus big (smb), the difference between the returns on portfolios of small and big company stocks (below or above the nyse median), and h the slope coefficient on high minus low (hml), the difference between the returns on portfolios of high- and low-be/me (book equity/market equity) stocks (above and below the . and . fractiles of be/me) (fama and french).

the size factor is denoted by the smb premium, where size is measured by market capitalization: smb is the average return on three small portfolios minus the average return on three big portfolios, as described by fama and french. hml is the average return on two value portfolios minus the average return on two growth portfolios (fama and french). high-be/me (value) stocks are associated with distress that produces persistently low earnings on book equity, which result in low stock prices. in practice, the ff model shows that investors holding stocks of small-capitalization companies and of firms with high book-to-market value ratios (annin) need to be compensated for the additional risk they are bearing. the size argument is supported by barad, who reports that small stocks have outperformed their larger counterparts by an average of . % over the last years. fama and french find that the book-to-market factor (hml) produces an average premium of . % per month (t = . ) for the - period, which, in the authors' view, is large both in practical and statistical terms.
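the ff equation above can likewise be written as a short function; the loadings and premiums below are hypothetical placeholders rather than the estimates reported in the studies discussed here.

```python
def ff_cost_of_equity(risk_free_rate, beta, s, h,
                      market_premium, smb_premium, hml_premium):
    """fama-french three-factor model:
    e(r_i) = r_f + beta*(r_m - r_f) + s*smb + h*hml."""
    return (risk_free_rate + beta * market_premium
            + s * smb_premium + h * hml_premium)

# hypothetical loadings and premiums for illustration only
print(round(ff_cost_of_equity(0.04, beta=1.1, s=0.4, h=0.3,
                              market_premium=0.06, smb_premium=0.02,
                              hml_premium=0.04), 4))  # 0.126
```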
the starting point for selecting the best method of estimating the cost of equity is a review of the relevant studies undertaken in the fields of hospitality and tourism. fields and kwansa conducted the first study that directly looked into the cost of equity and suggested the use of the pure-play technique for estimating the cost of equity for the divisions of a diversified firm. later, several studies investigated how macroeconomic variables affect security returns in the hospitality industry (hotels and restaurants). the first study was conducted by barrows and naka. their study encompassed the -year period between and and employed five factors that were slightly different from the five factors of chen et al. barrows and naka postulated that stock returns are a function of five factors,

r = f(einf, m, conn, term, ip)

where einf is expected inflation, m the money supply, conn domestic consumption, term the term structure of interest rates, and ip industrial production. the results revealed that none of the macroeconomic factors was significant in explaining the variance of u.s. hotel stocks at the . level, and the factors accounted for . % of the variance in the lodging stocks. however, einf, m, and conn had significant effects on the variation of stock returns in the u.s. restaurant industry. in terms of the signs of the β coefficients, einf had a negative relationship with restaurant stock returns, whereas m and conn had positive relationships. the postulated model explained % of the variance in the restaurant stocks. the authors cautioned that the results should be interpreted with care due to the small sample size of both the restaurant and hotel portfolios, which were represented by five and three stocks, respectively.

the second study was undertaken by chen et al., who used hotel stocks listed on the taiwan stock exchange. the macroeconomic variables included in their study were ip, cpi, the unemployment rate (uep), the money supply (m), the -year government bond yield (lgb), and the -month tb rate. these variables were used in the following way: cpi was utilized to estimate einf, while lgb and tb were used to compute the yield spread (spd). based on the six time series, the authors arrived at the five macroeconomic variables predominantly used in the literature, namely ip (change in industrial production), einf, uep (change in unemployment rate), m (change in money supply), and spd (the yield spread). these five variables explained merely % of the variation in hotel stock returns, and only two of them were significant at the . level (m and uep). the regression coefficient of the change in money supply had a positive relationship with hotel stock returns, whereas the relationship between the change in uep and lodging returns was negative.

madanoglu and olsen proposed a conceptual framework that called for the inclusion of certain intangible variables in cost of equity estimation in the lodging industry, among them human capital, brand, technology, and safety and security. these variables are clearly relevant for the lodging industry; however, no time-series data exist with which to include them in cost of equity estimations. publicly traded multinational lodging companies also tend to differ on some key points regarding how assets are treated on their balance sheets. many of these companies do not actually own assets and instead produce their future cash flows from management contracts or franchise agreements. in many cases, they may also lease hotels or restaurants, and the leases do not appear on their balance sheets; instead, these firms hold an equity position in a different company that holds the leases. therefore, it is almost unfeasible to properly assess the book value of hospitality firms, which confounds the application of the ff model.

sheel was the first researcher in the hospitality industry to point out that the capm does not seem to meet the industry's needs, and he called for further research into industry-specific factors. in the mainstream financial economics literature, downe argued that in a world of increasing returns, risk cannot be considered a function of only systematic factors, and thus of β alone. he pointed out that the position of the firm in the industry, as well as the nature of the industry itself, becomes a risk factor. thus, firms with a dominant position in the industry that succeed in adapting to the complexities of the business environment will have a different risk profile than their competitors. this argument fits particularly well in the context of the hospitality industry, where companies such as mcdonald's and marriott may demonstrate a different risk profile based on their market share in their segments. as for the ff factors, professionals in the lodging industry are sceptical about measures such as the book-to-market value ratio (hml). some hospitality industry experts argue that hml is an inappropriate measure for the industry, because the difference between firms whose value is captured by the assets they own and firms whose value is derived from their intangible assets is not as distinct as in some manufacturing firms.
while jagannathan and wang's study added a human capital variable to their cost of equity capital model, it measured human capital effects from the macroeconomic perspective, as opposed to the micro level at which most hotel firms operate. in other words, an overall labour index may not properly reflect the state of human capital in the hospitality industry. as fama and french themselves stated, the ff model leaves many open questions. the most important missing piece of the puzzle is that fama and french have not shown how the size and book-to-market factors in security returns are driven by the stochastic behaviour of firm earnings. this implies that it is not yet known how firm fundamentals such as profitability or growth produce common variation in returns associated with the size and be/me factors that is not captured by the market return itself. these authors further query whether specific fundamentals can be identified as state variables (variables that describe variation in the investment opportunity set) that are independent of the market and carry a premium different from the general market risk premium. this question is of utmost importance for lodging industry executives who are aiming to identify the major drivers of their companies' stock returns in their effort to create value for their stockholders.

in their current state, the cost of equity models are far from satisfying the needs of the hospitality industry. as fama and french pointed out, the cost of equity estimates yielded by these models are distressingly imprecise: standard errors of more than % per year were typical when the capm and ff models were used to estimate industry costs of equity in their study, and they stated that large standard errors are driven primarily by uncertainty about true factor risk premiums. since the hospitality industry is really an aggregate of individual units that all have their own unique business environments and return on equity structures, the standard errors, and thus the cost of equity capital estimated on a per-company or single-unit (a hotel property or a restaurant) basis, or for a new project, will be even more imprecise. the risk determinants of the cost of equity and the risk factor loadings for individual operating units will therefore be even more difficult to estimate. thus, it is very important to consider the purpose for which the cost of equity is estimated (e.g., a single project, a business division, or an entire corporation). particularly in the case of single-project cost of equity estimations, several factors need to be considered before arriving at the proper discount rate, among them the location of the project, local/regional competition, political risk, credit risk, and other risks idiosyncratic to the project. consequently, as ogier et al. suggest, when estimating the cost of equity for a given project, the risk of the project will be much more important than the risk level of the corporation making the investment. in other words, when marriott corporation makes a capital investment decision in nairobi, kenya, marriott's executives will be much more concerned with the risks surrounding that project. unlike cost of equity, cost of debt does not require the use of sophisticated theoretical models.
rather, cost of debt is simply the rate at which a given company can borrow capital from a lender (e.g., a bank) or the rate at which it can issue bonds. some experts caution that the promised and the expected yields of debt are two different concepts. when a firm makes its contracted debt payments on time, it meets "the promised yield" to its lender. in reality, however, there is always a possibility of default, and thus the promised yield adjusted for the probability of default equals the expected yield. the expected yield can be regarded as the true cost of debt, since it is more realistic. although many textbooks calculate the cost of debt as the promised yield, it should be noted that the expected yield is more meaningful, since it includes not only the systematic risk of the market but also the firm-specific risk of a given firm.

another challenge in calculating the cost of debt arises when a firm uses multiple debt instruments (e.g., bank loans, commercial paper, bonds). in this case, it may be fruitful to average the rates of these instruments based on their weights in the debt portfolio. an easier and more simplistic approach, however, is to use the "generic long-term debt" rate, which can be calculated from the current rate of a company's bond or the current rate at which the company can borrow a long-term loan (ogier et al.). last, to estimate the cost of debt, the issue of the tax shield should be given close consideration. for instance, although the majority of finance textbooks use or % as an average corporate tax rate in the united states, it is a common occurrence to observe companies whose effective corporate tax rate is lower than the statutory rate. here, an executive should assess the situation and decide whether the effective tax rate is expected to remain below the statutory corporate tax rate in the long term. if that is the case, then he/she should use the effective tax rate in calculating the cost of debt; however, if a low effective tax rate is a short-term occurrence, then the firm should use the statutory corporate tax rate instead (ogier et al.).

the hospitality industry is part of the overall service sector and is dependent on human capital to maintain and grow its operations. in an increasingly competitive environment, the human factor becomes one of the keys to creating sustainable competitive advantage. murphy therefore stated that the hospitality industry should learn to view its employees through a new paradigm in which human capital is a strategic intangible asset (knowledge, experience, skills, etc.). this implies that, like other assets, it is an important determinant of firm value. however, studies have concluded that research on human resources expenditures is in its infancy and is seriously hampered by the absence of publicly disclosed corporate data on human resources (lev). caroll and sikich argued that keeping track of at least a -year history of labour costs would serve to identify the dollar value of "premium" labour-related costs, which can be thought of as all labour/benefit costs above the federally mandated minimum wage.
other techniques proposed by the authors were (1) to design a scoring system that illustrates productivity versus both baseline and premium labour/benefit costs by department, and (2) to establish metrics to determine a productivity level for guest experience standards, facilities standards, and targeted revenue improvements on a department-by-department basis. bloxham advocated adjusting certain human resource expenditures to capitalize them over the life of the investment. in that approach, one-time human resources costs are capitalized and amortized in the value creation equation, in an effort to demonstrate that human capital investments go beyond being a cost item in a firm's operations. these costs can include recruiting, interviewing, and hiring costs; one-time hiring bonuses and relocation expenses; and training costs. the costs are capitalized and amortized over the average employee tenure with the company: if employee turnover is high, these costs are amortized over a shorter time period (and are thus higher per period), whereas a longer-tenured workforce enables the firm to spread the costs over a longer period of time.

kalafut and low reported that in a study of the airline industry conducted by cap gemini ernst & young's center for business innovation (cbi), the employee category was the single greatest value driver affecting a firm's market value; the employee factor had a positive correlation of . with firm value. kalafut and low thus conclude that, in the aggregate, the quality and talent of the workforce, the quality of labour-management relations, and diversity are critically important in the value creation process of airline companies. these arguments can be justified on the grounds that higher-quality human resources decrease labour turnover and increase employee productivity. this results in better organizational performance, which stabilizes cash flows and in turn decreases the uncertainty of a firm's stock returns. one would therefore expect hospitality firms that have institutionalized quality human resource management practices to achieve more realistic cost of equity estimates, reflecting the lower risk associated with these practices.

although definitions of the concept of brand differ across the professional and trade literature, the underlying notion is that of a distinctive name with which the customer has a higher level of awareness and a willingness to pay a higher-than-otherwise average price or make a higher-than-otherwise purchase frequency (barth et al.). a brand is the product or service of a particular supplier which is differentiated by its name and by perceived expectations on the part of the consumer. brands are important and valuable because they provide a "certainty" as to future cash flows (murphy). however, since reliably estimating brand value remains out of reach, brand value is not specifically reflected on the company's balance sheet. the lodging industry has made much of the importance of brand value but has not been able to unequivocally substantiate the role of the brand in reducing the variance in firm cash flows, and thus in contributing to a lower cost of capital for the firm.
srivastava et al. provided an analytical example of how successful market-based assets (the term the authors use in lieu of intangibles) lower costs by building superior relationships with customers, enable firms to attain price premiums, and generate competitive barriers (via customer loyalty and switching costs). all these factors lead to the conclusion that a strong brand reduces the uncertainty pertaining to future cash flows, which in turn decreases the return investors require for the risk they bear by investing in a particular firm.

in attempts to value brands in the manufacturing industries, murphy cites the use of the following methods:

• valuation based on the aggregate cost of all marketing, advertising, and research and development expenditures devoted to the brand over a stipulated period.
• valuation based on premium pricing of a branded product over a non-branded product.
• valuation at market value.
• valuation based on various consumer-related factors such as esteem, recognition, or awareness.
• valuation based on future earning potential discounted to present-day value.

in further analysis, the investigators rejected these methods because, if brand value were indeed a function of its development cost, then failed brands would be attributed high values. in addition, brand valuation based solely on consumer esteem or awareness would bear no relationship to commercial reality (murphy). in an effort to link a firm's security returns with brand value, simon and sullivan proposed a technique to estimate the firm's brand equity based on its market value. this was done by estimating the cost of tangible assets and then subtracting it from the market capitalization of the firm to obtain the value of intangible assets. as a second step, the researchers tried to break down the intangible assets into brand-value and non-brand-value components. the authors utilized the aaker and jacobson equitrend brand quality measure to evaluate the quality of major brands; they examined associations between measures of brand quality and stock returns and reported that the relationship is positive.

according to murphy, the only logical and consistent way to develop a multiple for brand profit is through the brand strength concept. brand strength is a composite of six weighted factors: leadership, stability, market, trend, support, and protection. the brand is scored on each of these factors according to different weightings, and the resultant total is known as the "brand strength score." a further addition to the brand strength concept came from prasad and dev, who developed a hypothetical brand equity index via customer ratings of the brand, using five key brand attributes in two sets of indicators: brand performance and brand awareness. brand performance was measured by overall satisfaction with the product or service, return intent, price-value perception, and brand preference, while brand awareness was measured as top-of-mind brand recall. olsen proposed brand-related value drivers specific to the lodging industry, such as brand dilution and the brand sincerity ratio. brand dilution is related to the question of how many new corporate sub-brands must be introduced in order to maintain growth, whereas brand sincerity deals with what percentage of hotels in the portfolio currently meet the brand standards or promise. as a result, it is argued that hospitality companies that possess higher brand strength will be able to achieve a lower cost of equity capital.
according to connolly, one of the greatest issues plaguing the advancement of technology in the hospitality industry is the difficulty of calculating the return on investment. until recently, most technology investment decisions have been considered using a support or utility mentality that stems from a manufacturing paradigm, and current policies rely more on faith than on rational business assessment. as a result, the hotel industry is perceived to be lagging behind rival industries in the use of technology (sangster). in part, this is attributed to the fragmented nature of the hotel business itself; however, it is also believed to be closely related to hoteliers' lack of experience and understanding of technology investments (sangster). connolly further argued: "today's financial models are inadequate for estimating the financial benefits for most of the technology projects under consideration. while the hospitality industry has disciplined models and sufficient history to determine the financial gains or success of opening a new property in a given city, it lacks the same rigorous models and historical data for technology, especially since each technology project is unique. although this problem is not specific to the hospitality industry, it is particularly problematic since the industry tends to be technologically conservative and unwilling to adopt new technology applications based on the promises of their long-term merits, especially if it cannot quantify the results and calculate a defined payback period. when uncertainty surrounds the investment, when the timing of the cash flows is unpredictable, and when the investment is perceived as risky, owners and investors will most likely channel their investment capital to projects with more certain returns and minimal risk. thus, under this thinking, technology will always take a back seat to other organizational priorities and initiatives. efforts must be made to change this thinking and to develop financial models that can accurately predict and capture the financial benefits derived from technology" (connolly, p. iii).

although there are no hard and fast rules to facilitate the valuation of technology investments, technology is clearly transforming the way business is conducted in the lodging industry. in particular, the surge in internet usage in the early years of the new millennium brought the issue of capacity control to the fore for hotel room inventory holders. firms that are more adept at utilizing technology to market and sell their perishable product (hotel rooms) may therefore achieve lower variation in their future cash flows, since they are able to retain greater control over pricing. the author acknowledges that the body of literature does not establish a direct causal relationship between the cost of equity capital and technology utilization. based on the arguments above, however, the author contends that firms that invest in technology wisely may achieve a higher average daily rate or revpar in their properties, which will in turn decrease the variance in the firm's cash flows. better utilization of information technology can thus possibly reduce the uncertainty surrounding the future earnings of the firm, and capital markets may accordingly assign a lower risk premium to hospitality firms that successfully utilize and deploy technology in their operations.
guest safety and security topics in the lodging industry range from building safety codes and bacterial contamination of hotel whirlpools to restaurant food safety and hotel crime statistics (olsen and merna). the need for greater commitment to safety and security in the hospitality industry became evident in , after the san francisco earthquake and hurricane hugo (olsen and merna). these events sparked an effort by the hotel industry to manage the risk and liability related to guest safety and security. ray ellis, then the director of risk management and operations at the american hotel & motel association, contended after the end of the gulf war that the benefits of increased security for the industry go far beyond intangibles such as peace of mind (jesitus). ellis stressed that improved safety and security would significantly decrease properties' insurance premiums, and thus enable companies to have more resources to invest in their operations. although ellis considered the chances of terrorist attacks on the united states after the gulf war fairly remote, he warned that hotels, particularly those serving international markets, should be most wary of arson and bomb threats.

the international hotel and restaurant association identified safety and security as one of the major forces driving change in the global hospitality industry (olsen). with the destruction of the world trade center in , and the subsequent terrorist attacks in bali and kenya, it is clear that this force has now emerged as a major risk factor for all tourism-related enterprises. in february , the federal bureau of investigation (fbi) alerted its law enforcement partners that "soft targets," such as hotels, can be subject to terrorist attacks (arena et al.). this report reaffirms the argument proposed by olsen that lodging properties situated in areas exposed to terrorist attacks should factor that risk into their cost of capital estimates, and lodging property executives should apply this risk factor in their future capital investment decisions. in addition, outbreaks of food-borne disease, occurrences of infectious bacteria on cruise ships, increased crime, and the growing threats of human immunodeficiency virus (hiv) and other viral infections such as severe acute respiratory syndrome (sars) have created a significant challenge for hospitality managers worldwide. these must be considered important risk variables that will no doubt have an impact on estimates of the cost of capital. although the factors mentioned above are critical in estimating the cost of capital of a given project, there are no methods that can quantify them and apply them directly to the cost of equity models; executives are nonetheless advised to consider these industry-specific risk factors before making a capital investment decision.

the models covered thus far do not provide any guidance for estimating the cost of equity in a global setting or for multinational projects. to fill this void, academics and practitioners have developed adjustment models to account for differences in the cost of equity among markets in developing and emerging countries. the adjustment models are primarily concerned with whether the emerging markets are segmented from or integrated with the world markets. in a completely segmented market, assets will be priced based on local market return.
the local expected return is the product of the local β and the local market risk premium (mrp) (bekaert and harvey). bekaert and harvey developed a modified model after researching emerging markets for the pre- and post- periods and reported that the correlation of the emerging markets with the morgan stanley capital international (msci) world index increased noticeably. for instance, turkey is one of the countries whose market correlation with the msci world index increased from less than . to more than . . on this basis, turkey may be considered an integrated capital market in which the expected return is determined by the β with respect to the world market portfolio multiplied by the world risk premium. this is the core argument of the bekaert-harvey mixture model (bekaert and harvey).

in cases when the integrated-markets assumption does not apply, investment banks and business advisory firms use a method called "the sovereign spread model" (the goldman model). an individual stock is regressed against the standard & poor's stock price index returns to obtain the risk premium, and an additional factor, the "sovereign spread" (ss), is then added: the spread between the respective country's long-term government bonds denominated in u.s. dollars and the u.s. treasury bond yield. the bond spread serves as a tool to increase an "unreasonably low" country risk premium (harvey).
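a minimal sketch of the sovereign spread adjustment follows, assuming a capm-style base estimate; the spread and the other inputs are hypothetical placeholders.

```python
def goldman_cost_of_equity(risk_free_rate, beta, market_premium,
                           sovereign_spread):
    """sovereign spread (goldman) model: a capm-style estimate plus the
    spread between the country's usd-denominated long-term government
    bonds and u.s. treasuries."""
    return risk_free_rate + beta * market_premium + sovereign_spread

# hypothetical inputs: u.s. capm inputs plus a 3% sovereign spread
print(round(goldman_cost_of_equity(0.04, 1.0, 0.06, 0.03), 4))  # 0.13
```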
this section offers a practical example for managers estimating the wacc of their projects, and it breaks the wacc down into its respective components in order to assist executives in capital investment decisions. the major components of the wacc estimation are a firm's stock return, the market return, the risk-free rate, the regression coefficients (β, s, and h), smb, hml, the equity market risk premium (emrp, which is r_m − r_f), the capital structure (proportions of debt and equity), the corporate tax rate, and the cost of borrowed debt. if you are an executive of a company that is not publicly traded, you have two options for estimating the cost of equity: use the industry average cost of equity, or locate two or three comparable firms that compete in the same line of business and estimate their cost of equity. however, even if you are an executive of a large restaurant corporation that is traded publicly, it is still recommended that you estimate the cost of equity for the entire restaurant industry, because the standard error of the regression coefficients for a single firm is fairly high, which decreases the reliability of those coefficients. my past research experience has shown me that at times using a single firm may create a situation in which the cost of equity cannot even be estimated; more often than not, i obtained distressing results when running a regression for small- or medium-size hospitality firms. as a result, in the practical example i will estimate the restaurant industry's cost of equity. since the cost of equity calculation may be fairly complex for someone who is not familiar with data analysis, i will offer a step-by-step procedure.

step 1: obtaining -year monthly stock returns for your company/industry and the market. ideally, you need years of monthly stock return data for your firm and the -year market return. selecting the best index of all traded assets in the world is a challenging and sometimes controversial issue. based on seminal studies in financial management, the market index that yields the most reliable results in the united states is the center for research in security prices value-weighted (crspvw) index housed at the university of chicago. both your company's stock return and the market return should be used as excess returns (i.e., the return less the risk-free rate, the -month tb rate) in order to measure the cost of equity in real units (i.e., after accounting for inflation). for the reasons mentioned before, i will estimate the u.s. restaurant industry's cost of equity and leave it to restaurant industry executives to adjust this value to their specific projects at hand. in order to observe the accuracy of the cost of equity models, we estimate the restaurant industry cost of equity using both the capm and the ff model. the observation period of this example is between and ; the reason for not selecting a longer observation period is that the values of β and other variables become unstable over extended periods. the sample is developed from the nation's restaurant news (nrn) index, which entails restaurant firms. executives who are not familiar with building stock portfolios can alternatively use monthly returns of hospitality indices for the lodging and restaurant industries from data providers such as yahoo! finance, the wall street journal, or industry publications such as nrn.

step 2: estimating β and the fama-french factor coefficients. the capm's β can be computed by regressing the excess stock return of a firm on the excess market return. the monthly returns for the ff factors (smb and hml) can be retrieved from the eventus database housed in the wharton school at the university of pennsylvania or from kenneth french's website at dartmouth college. by regressing the portfolio's monthly excess returns on the market excess return, smb, and hml, you can obtain the "s" and "h" coefficients that are later inserted into the equation to estimate the cost of equity. in our practical example, the results indicate that the ff model explains more than half ( . %) of the variation in the returns of the nrn index. in addition, the ff model results in a significant r² change over the capm, which showed that the two ff variables (smb and hml) explained extra variance over and above the capm, which accounted for . % of the variation in restaurant industry stock returns. the analysis at the variable level indicates that the market index variable (β) and hml are significant at the . level (see table . ), whereas smb was not significant at the . level, meaning that the size factor does not affect restaurant industry stock returns while controlling for β and hml. in practice, this means that the restaurant industry portfolio behaves like a large-company stock, and therefore there is no size premium in the overall cost of equity for the restaurant industry. it should be remembered that if you are an executive of a small restaurant company, there is a high possibility that your stock returns will carry a size premium.
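as a sketch of steps 1 and 2, the snippet below estimates the capm β and the ff loadings by ordinary least squares on monthly excess returns. the six-observation series are hypothetical stand-ins; in practice you would use the -year monthly data described above.

```python
import numpy as np

# hypothetical monthly excess returns (stand-ins for ~120 months of data)
excess_stock = np.array([0.02, -0.01, 0.03, 0.01, -0.02, 0.04])
excess_market = np.array([0.015, -0.005, 0.02, 0.012, -0.015, 0.03])
smb = np.array([0.003, 0.001, -0.002, 0.004, 0.000, 0.002])
hml = np.array([0.005, -0.001, 0.002, 0.003, -0.002, 0.004])

# capm: regress excess stock return on excess market return
x_capm = np.column_stack([np.ones_like(excess_market), excess_market])
alpha, beta = np.linalg.lstsq(x_capm, excess_stock, rcond=None)[0]

# ff model: regress excess stock return on market, smb, and hml jointly
x_ff = np.column_stack([np.ones_like(excess_market), excess_market, smb, hml])
a, b, s, h = np.linalg.lstsq(x_ff, excess_stock, rcond=None)[0]

print(f"capm beta: {beta:.2f}; ff loadings: b={b:.2f}, s={s:.2f}, h={h:.2f}")
```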
in our practical example, the results indicate that the ff model explains more than half ( . %) of the variation in the returns of the nrn index. in addition, the ff model results in a significant r² change over the capm, which showed that the two ff variables (smb and hml) explained some extra variance over and above the capm, accounting for . % of the variation in the restaurant industry stock returns. the analysis at the variable level indicates that the market index variable (β) and the hml are significant at the . level (see table . ). however, the smb was not significant at the . level, which means that the size factor does not affect restaurant industry stock returns while controlling for β and hml. in practice, this means that the restaurant industry portfolio behaves like a large-company stock, and therefore there is no size premium when considering the overall cost of equity for the restaurant industry. it should be remembered that if you are an executive of a small restaurant company, there is a high possibility that your stock returns will carry a size premium. step : the risk-free rate, market, size and distress premiums. there are certain rules of thumb that executives should be aware of before inserting the regression coefficients into the cost of equity calculation. first, it should be pointed out that there are two risk-free rates (r_f) in the capm and ff models. the first r_f is used to demonstrate the level of risk-free return that a firm needs to exceed to compensate its investors for the risk they undertake. the second r_f should ideally match the life of the asset. in other words, if the asset in this project is expected to last at least years, then a given investor/executive should use a -year government bond as the risk-free rate used to obtain the mrp (r_m − r_f). another important issue is calculating the market, size and distress premiums. executives/investors may often face challenges when the -year mrp (which equals r_m − r_f) is negative or extremely low, or when the size premium (smb) and distress premium (hml) figures are negative. in these cases, i would recommend that executives/investors use the long-term equity premium (r_m − r_f) figure of % (siegel, ). for the size and distress premiums, i examined successive holding periods ( - , - , - , and so on) until and verified that in all instances the smb and hml premiums were positive. step : solving the cost of equity equation. since the market index (crspvw) has a very low return ( . %) for the -year period, i will use the long-term equity premium of % (siegel, ). next, using the regression coefficients obtained in table . , the regression equations yield the cost of equity estimates. as can be seen from these results, the restaurant industry cost of equity is considerably higher when estimated using the ff model. in basic terms, this means that a hypothetical investor will expect a return of % from the u.s. restaurant industry in order to invest his/her funds in the u.s. restaurant portfolio. however, if a restaurant executive believes that % is a fairly high rate of return and his/her restaurant company does not have the same risk profile as the overall u.s. restaurant industry, he/she may elect to use the average of the capm and ff estimates, which is around %. next, a restaurant executive may adjust the rate for his/her firm's project by considering whether the project will be riskier than the restaurant industry's expected return. here one should consider factors such as competition, the life of the project, and the events that may have an impact on the risk of the project by influencing the forces driving change in the firm's external (e.g., economic, political, technological) and internal (e.g., industry, local) environment.
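in general form, the two cost-of-equity expressions used above are the standard capm and fama-french forms (the chapter's numerical coefficients are not reproduced here):

\[ k_{\text{capm}} = r_f + \beta\,(r_m - r_f) \]
\[ k_{\text{ff}} = r_f + \beta\,(r_m - r_f) + s\cdot\text{smb} + h\cdot\text{hml} \]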
the next step in estimating the cost of capital is to estimate the cost of debt. unlike the cost of equity, the cost of debt does not require consideration of the average cost of debt for the hospitality industry. this is because, in simple terms, the cost of debt denotes the interest rate at which a given company can borrow. therefore, a given company can calculate the cost of debt for a given project in a relatively simple manner. the situation is a little more complex in cases when a corporation has multiple projects to invest in and has to estimate its corporate cost of debt. this is because some of the projects may be expansion projects that are already financed by loans obtained in the past. consequently, executives need to average out the interest rate of the outstanding debt related to the project and also consider the interest rate at which the company can borrow new funds. in this particular example, we will assume that a hypothetical company plans to issue bonds which mature in years and will also secure a -year loan to finance a portion of the project. in this scenario, we assume that the bond issuance and the loan will contribute equally to the funding of the project (e.g., % each). let us assume that the hypothetical company in this example issues -year bonds whose expected yield-to-maturity is %. this rate is assumed based on the present bond rating of the company. we also assume that the rate of the -year bank loan is % and that the corporate tax rate is %. thus, the cost of debt can be calculated accordingly. before entering the values from the previous sections, we assume that the current project will be financed with % equity and % debt. we use the average cost of equity estimate ( . %) and the cost of debt ( . %) obtained before. consequently, the weighted average cost of capital for this project works out to . %. it should be noted that the executive of this hypothetical firm needs to make adjustments to this project if the project carries any specific risk such as political risk, divisional risk (if the firm has multiple divisions), risk of early termination, stiff competition, and so on.
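a minimal sketch of the blended after-tax cost of debt and the wacc (python; the numerical inputs are illustrative placeholders, since the chapter's own figures are not recoverable from the text):

```python
def after_tax_cost_of_debt(bond_ytm, loan_rate, tax_rate,
                           w_bond=0.5, w_loan=0.5):
    # blend the two borrowing rates, then apply the tax shield
    # r_d * (1 - t), since interest is tax-deductible.
    return (w_bond * bond_ytm + w_loan * loan_rate) * (1.0 - tax_rate)

def wacc(cost_equity, cost_debt_after_tax, w_equity=0.5, w_debt=0.5):
    # weighted average of the two financing sources.
    return w_equity * cost_equity + w_debt * cost_debt_after_tax

# illustrative placeholder inputs only:
rd = after_tax_cost_of_debt(bond_ytm=0.07, loan_rate=0.08, tax_rate=0.35)
print(f"wacc: {wacc(cost_equity=0.12, cost_debt_after_tax=rd):.2%}")
```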
this section considers a case where the cost of equity needs to be estimated for an international project. here i use a hypothetical scenario in which a thai investor plans to make a hotel investment in turkey. in line with the suggestions made by annin ( ) and barad and mcdowell ( ), a minimum of months' stock market trading is the criterion for a hospitality firm to be included in the turkish tourism index. in addition, the crspvw index is used as the market portfolio index for the united states. this is in congruence with previous seminal studies related to asset pricing models (fama and french, ; jagannathan and wang, ). however, the imkb ulusal index (istanbul stock exchange, ise) is utilized as the market portfolio for turkey. β is computed by regressing the excess return of four seasons and of the turkish tourism index over the respective excess market return; therefore, both variables are analysed in real units (i.e., after subtracting inflation). the excess market return (mrp) for the united states is computed by subtracting the -month tb rate from the monthly crspvw index return. the mrp for turkey is calculated by subtracting the turkish government's tb rate from the monthly ise ulusal index return. the data for the five apt variables are obtained from the global insight database. the apt variables are calculated as in chen et al. ( ). einf is estimated following the method of fama and gibbons ( ). the country risk premium is adapted from aswath damodaran at new york university. damodaran ( ) explains the estimation procedure as follows: "to estimate the long term country risk premium, i start with the country rating (from moody's: www.moodys.com) and estimate the default spread for that rating (us corporate and country bonds) over the treasury bond rate. this becomes a measure of the added country risk premium for that country. i add this default spread to the historical risk premium for a mature equity market (estimated from us historical data) to estimate the total risk premium." both indirect and direct approaches are used to estimate the expected return of the investment. in the indirect approach, i first compute the expected rate of return for the u.s. stock (in this case four seasons) by using the average of the estimates for the capm and apt. then i adjust for the country risks of turkey and thailand based on moody's country risk ratings as reported by damodaran ( ). this method assumes that the turkish stock market is integrated, and thus that using the u.s. market indices to estimate the cost of equity for four seasons is equivalent to using the ulusal market index for the turkish tourism portfolio. first, i run a regression of the monthly returns of four seasons over the crspvw return for the - period. the results show that the β for four seasons is . . next, the -year annualized return for the crsp index was calculated in order to estimate the mrp. the -year historical return for crsp was . %. the risk-free rate for the - period was . %. as a result, the cost of equity estimate based on the capm for four seasons is k_e = r_f + β(r_m − r_f) = . %. in an effort to obtain less biased estimates, i also use the five apt variables (chen et al., ) to calculate the expected return for four seasons. the results reveal that, among the five apt variables, only the default risk variable (upr) is significant at the . level. however, it is not feasible to use this variable to estimate the expected return, because the regression coefficient for upr is a negative number. as a result, four seasons would likely have a negative expected return based on the apt. as a consequence, i elect not to use the apt results in the final stage of the indirect approach, since the results of the apt are in conflict with contemporary financial theories. therefore, i use the capm's estimate of . % and adjust this estimate for the country risk of turkey and thailand. according to damodaran ( ), the historical risk premium for the united states is . %. turkey's country risk premium is . % above the united states value, and that for thailand is . % above the risk premium for the united states. this means that turkey's country risk premium is . % over that of thailand. these figures result in an expected return of . % ( . + . %) for the thai entrepreneur who is undertaking an equity investment in a hotel in turkey. in the direct approach, i estimate the nominal required rate of return for the portfolio of turkish tourism and hospitality stocks. as a next step, i adjust for the sovereign spreads of turkey and thailand, as it is assumed that the thai investor will repatriate the returns from the investment to his/her home country. in this method, i regress the monthly return of the turkish tourism index over the return of the ise. the β for the tourism index was merely . . the -year average for the risk-free rate (the turkish government's tb) for the - period was . %. the annualized return of the market index (ise) for the - period was . %. the expected return for the tourism portfolio was calculated by applying the capm: k_e = r_f + β(r_m − r_f) = . %. the next step entails the addition of the sovereign spread between thailand and turkey to arrive at the estimate of the cost of equity capital for the thai investor. the sovereign spreads are obtained from fuentes and godoy ( ). the spread for turkey was . % and that of thailand . %. based on these figures, the cost of equity for the direct approach was . % ( . + . %).
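schematically, the two procedures described above can be summarized as follows (my notation, not the chapter's; Δcrp and Δss are the turkey-thailand differentials in country risk premium and sovereign spread, respectively):

\[ k_e^{\text{indirect}} = \big[r_f^{\text{us}} + \beta^{\text{us}}(r_m^{\text{us}} - r_f^{\text{us}})\big] + \Delta\text{crp} \]
\[ k_e^{\text{direct}} = \big[r_f^{\text{tr}} + \beta^{\text{tr}}(r_m^{\text{tr}} - r_f^{\text{tr}})\big] + \Delta\text{ss} \]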
as can be seen from both examples of cost of equity estimation (the united states and the international case), the expected returns (costs of equity) varied widely. in the united states example, the use of the capm resulted in a cost of equity that was fairly low (less than %). it is worth asking: would a given investor invest in a u.s. restaurant portfolio of stocks for less than % a year? the answer would probably be "no." however, if one elects to use the ff model as the main cost of equity model, then the possibility of obtaining more relevant results is likely to increase. as can be seen in this example, the cost of equity obtained using the ff model yielded a fairly logical return that far exceeds the historical equity premium for the united states. for the international example, one of the main reasons for the stark difference in cost of equity estimates between the two approaches (direct and indirect) is the high historical inflation in turkey. this is demonstrated by the gap in the tb rates for this country ( . % for and . % for ). hence, if a hypothetical investor elects to use the "going rate" ( . %), then the new expected return for the turkish tourism portfolio would be less than half the original estimate of . %. another challenge in the direct approach to international cost of equity estimation is the low β estimate for the turkish tourism portfolio ( . ). does this mean that the tourism portfolio is five times less risky than the overall ise index? what if the real risk of tourism stocks is twice that of the market? (this is quite likely, as the β for four seasons in the united states was . .) if that is the case, then the thai investor needs to require a rate of return of more than % in thai currency. how can the investor hedge his investments against the large swings in the cost of equity estimates? as the results have indicated thus far, cost of equity estimations for hospitality investments in emerging and developed markets are beset with uncertainty. the main shortcomings stem from the challenge of applying seminal models such as the capm, ff, and the apt. a second set of challenges arises because countries such as turkey tend to have high historical rates of inflation but are now entering a more stabilized period of fiscal reforms. thus, should an investor use the historical data or try to forecast future interest rates in turkey? although the practical examples provided some answers to these questions, a few more questions are left for future research. hence, i suggest two interim solutions for this cost of equity conundrum in the emerging markets: (1) investors and academics should focus solely on the future cash flows of the project, or (2) use simulations such as monte carlo in order to create multiple scenarios that approximate the investment realities of the emerging markets. otherwise, the expected return remains a "gut feeling" estimate for foreign investors in emerging markets.
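as a minimal sketch of the second interim solution, a monte carlo pass over the capm inputs (python; the sampling distributions and point values are illustrative assumptions, not estimates from the chapter):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

beta = rng.normal(loc=1.0, scale=0.3, size=n)   # uncertain beta
mrp = rng.normal(loc=0.06, scale=0.02, size=n)  # uncertain market premium
rf = 0.05                                       # assumed risk-free rate

ke = rf + beta * mrp                            # capm under each draw
print(f"median cost of equity: {np.median(ke):.1%}")
print(f"5th-95th percentile: {np.percentile(ke, 5):.1%} to "
      f"{np.percentile(ke, 95):.1%}")
```

rather than a single point estimate, the investor obtains a range of plausible costs of equity against which project cash flows can be stress-tested.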
references:
the financial information content of perceived quality
why do firms reduce business risk?
fama-french and small company cost of equity calculations
preparations for possible attacks gear up: new flight restrictions planned around washington
ability, moral hazard, firm size, and diversification
technical analysis of the size premium. business valuation alert
capturing industry risk in a buildup model
use of macroeconomic variables to evaluate selected hospitality stock returns in the u.s.
brand values and capital market valuation
research in emerging markets finance: looking into the future
modern financial theory, corporate strategy and public policy: three conundrums
economic value management: applications and techniques
what is your irr on human capital?
towards a strategic theory of risk premium: moving beyond capm
the impact of macroeconomic and non-macroeconomic forces on hotel stock returns
economic forces and the stock market
the restaurant industry, business cycles, strategies, financial practices, economic indicators, and forecasting. unpublished dissertation
understanding information technology investment decision making in the context of hotel global distribution systems: a multiple-case study. unpublished dissertation
capital market equilibrium with transaction costs
valuation: measuring and managing the value of companies
country default spreads and risk premiums
increasing returns: a theoretical explanation for the demise of beta
the cross section of expected stock returns
common risk factors in the returns on stocks and bonds
size and book-to-market factors in earnings and returns
industry costs of equity
risk, return and equilibrium: empirical tests
a comparison of inflation forecasts
analysis of pure play technique in the hospitality industry
sovereign spreads in emerging markets: a principal components analysis
principles of managerial finance
introduction to investment theory (hypertextbook). retrieved
the investment, financing, and valuation of the corporation
the theory and practice of corporate finance: evidence from the field
twelve ways to calculate the international cost of capital
tax attributes as determinants of shareholder gains in corporate acquisitions
vertical integration and risk reduction
techniques of financial analysis: a guide to value creation
conditional capm and cross section of expected returns
valuation in emerging markets
march . safety and security: risk management, threat of terrorism
top hoteliers' concerns in the
value creation index: quantifying intangible value
the effects of management buyouts on operations and value
systematic risk, total risk and size as determinants of stock market returns
intangibles: management, measurement, and reporting
the valuation of risk assets and the selection of risky investments in stock portfolios and capital budgets
extending modern portfolio theory into the domain of corporate diversification: does it apply?
risk, strategy, and finance: unifying two world views (editorial). long range planning
cost of equity conundrum in the lodging industry: a conceptual framework
portfolio selection
portfolio selection: efficient diversification of investments
incentives for diversification and the structure of the conglomerate firm
assessing the value of brands
a proposed structure for obtaining human resource intangible value in restaurant organizations using economic value added
the real cost of capital: a business field guide to better financial decisions
into the new millennium: the iha white paper on the global hospitality industry: events shaping the future of the industry
global hotel finance--the future. one-day program co-sponsored by hong kong shanghai bank corporation, deloitte & touche consulting group and richard ellis international property consultants
leading hospitality into the age of excellence: competition and vision in the multinational hotel industry
march . trends in safety & security. hotel and motel management
strategic management in the hospitality industry
managing brand equity: a customer-centric framework for assessing performance
model estimates financial impact of guest satisfaction efforts
cost of capital: estimation and applications
a new empirical perspective on the capm
on the cross-sectional relation between expected returns and betas
the capital asset pricing model and the market model
persuasive evidence of market inefficiency
the arbitrage theory of capital asset pricing
technology: the importance of technology in the hotel industry
capital asset prices: a theory of market equilibrium under conditions of risk
an empirical analysis of anomalies in the relationship between earnings' yield and returns of common stocks: the case of lodging and hotel firms
stocks for the long run
the measurement and determinants of brand equity: a financial approach
corporate ownership structure and performance: the case of management buyouts
market-based assets and shareholder value: a framework for analysis
passing the baton: managing the process of ceo succession
key: cord- -ofwdzu t authors: tan, wei-jiat; enderwick, peter title: managing threats in the global era: the impact and response to sars date: - - journal: nan doi: . /tie. sha: doc_id: cord_uid: ofwdzu t in early , the sars virus brought disruption of public and business activities in many areas of the world, particularly asia. as a result of its impact, sars quickly established itself as a new kind of global uncertainty and posed challenges for traditional methods of risk management. this article examines the impact that sars has had by means of a case study and builds on this to provide recommendations for how uncertainty may be managed in an increasingly globalized world. reconsideration of strategic and risk-management approaches has become necessary. supply-chain management and corporate strategy require a fundamental rethink to balance the pursuit of efficiency with increased responsiveness and flexibility. unpredictability and turbulence in the international business environment suggest that traditional planning approaches that assume linear growth may give way to more scenario-based planning. this will encourage firms to contemplate a variety of possible futures and better prepare them for unanticipated events. similarly, contingent-based continuity plans help businesses continue running even during a crisis. © wiley periodicals, inc. managing threats in the global era: the impact and response to sars. wei-jiat tan ■ peter enderwick.
sars is believed to have first emerged in guangdong province in november (horstman, ). largely due to the failure of the chinese authorities to recognize the seriousness of the problem or provide international notification, sars quickly spread throughout china (enderwick, ) and, in february , reached hong kong, which was to provide the global accelerant from which sars quickly spread, particularly to neighboring asian countries, including vietnam, singapore, and taiwan. the high incidence of travel between toronto and asia also saw the outbreak of sars in canada (enderwick, ). over the course of the outbreak, sars infected more than , people and left more than dead in countries, with of those fatalities recorded in mainland china. while infectious epidemics are by no means a new phenomenon, there is little doubt that sars had a greater impact on the international business environment than its predecessors, largely because countries and economies are now more interconnected than before, allowing for the easy transmission of a virus like sars. while literature does exist on the management of risk, sars is indicative of a new kind of uncertainty, the impact and management of which must be analyzed in the context of a world that has become increasingly globalized. this article examines the impact of sars on the international business environment and considers how managers can incorporate events such as sars into an ongoing risk-management framework. the discussion comprises four substantive sections. the first section provides contextual background on the international business environment at the time of the sars outbreak. the second section provides a case-study discussion of the impact of sars on international business operations. drawing on this case study, the third section examines some strategic implications for firms seeking to cope with new types of uncertainty such as that created by sars. concluding thoughts are provided in the final section. there is little doubt that businesses and firms operate in an increasingly globalized and integrated environment. globalization manifests itself as an increase in cross-border movements of goods and services, capital and technology flows, tourist and business travel, and the migration of people (craig & douglas, ). this integration has been made possible by declining trade and investment barriers, the growth of free trade agreements, and regional integration. another driver of globalization has been technological advancement in communications and transportation. the use of satellite links, company intranets, and the internet has improved communication networks and linkages across borders, thereby lowering the costs of coordinating and controlling a global organization. modern communication systems have also enabled the rapid dissemination of information, leading to some convergence of consumer tastes and preferences. in addition, developments in transportation have allowed the rapid supply of people, goods, and services from distant locations. globalization has provided significant opportunities for firms to reconfigure their supply chains and globalize their production processes, thereby reaping economies of scale and taking advantage of national differences in the cost and quality of factors of production.
however, globalization also presents some very real challenges. the interconnectedness that is characteristic of globalization also means that local conditions are no longer the result of purely domestic influences (thomas, ). indeed, crises in one country now have the ability to affect other countries around the world. this was evident from the asian financial crisis, september , and sars, where such crises had a "ripple effect" so that their direct and residual impacts were felt far from their epicenters (enderwick, ). the international business environment has never been predictable or certain. however, the scale of investments in today's globalized world, coupled with rapid technological change, shortening product life cycles, and the increasing aggressiveness of competitors (volberda, ), has increased the uncertainty and complexity of operating in such an environment. indeed, it has been stated that "globalisation and technology are sweeping away the market and industry structures that have historically defined the nature of competition. the variables that can profoundly influence success and failure are too numerous to count. that makes it impossible to predict, with any confidence, which markets a company will be serving or how its industry will be structured, even a few years hence" (bryan, p. ). accordingly, unlike past decades that exhibited long, stable periods in which firms could achieve sustainable competitive advantage, competition is increasingly characterized by short periods of advantage, marked by frequent disruptions (volberda, ). in such hypercompetitive environments, risk is not so much predicted as responded to. accordingly, strategies that focus solely on efficiency and pay close attention to cost structures must now be reassessed in light of the inflexibilities they exhibit in a changing and uncertain environment. further, the exploitation of core competencies that was once seen as a precondition to success is now viewed as presenting the risk of core rigidities (volberda, ). accordingly, a higher premium is now being placed on considerations such as flexibility, responsiveness, and adaptiveness. one area in which this need for flexibility has recently been espoused is that of supply-chain management. until recently, companies focused on developing tightly controlled supply chains, with the emphasis on efficiencies in operations and distribution. while tightly controlled supply chains work well in stable environments with minimal disruptions, we are now experiencing an environment of increasing unpredictability, where disruptions are more common. accordingly, the ability to respond to the resulting fluctuations in demand is paramount, and considerations such as flexibility and responsiveness are now considered as important as efficiency (morton, ). globalization has also seen the emergence of a new type of risk, with a nature quite different from what was traditionally regarded as risk in international business. in the s, s, and even s, risk was generally equated to financial, exchange rate, and inflationary risks and, in particular, "political risk," which was reflective of host-government hostility toward foreign investment during much of this time.
political risk was country-specific and could be summed up as the likelihood that a multinational enterprise's foreign operations could be constrained by host-government policies through measures such as forced divestment, unwelcome regulation, and interference with operations. accordingly, risk management was also country-specific and involved assessing the riskiness of a particular country through a variety of predictive approaches. where a country was deemed too risky, the firm would avoid investing or withdraw its current investment. other risk-management devices also involved responding to risk that largely emanated from host governments. indeed, defensive political risk-management strategies involved locating crucial aspects of the company's operations beyond the reach of the host, while integrative strategies aimed to make the firm an integral part of the host society, thereby minimizing the risk of government intervention. however, as the world economy has become increasingly global, political risk, while still present, is arguably not as pressing as before. this is largely because of a change in attitude toward trade and investment, with most countries now encouraging foreign direct investment (fdi). indeed, between and , more than countries made , changes in legislation governing fdi, with percent of these changes involving the liberalization of fdi regulations. this has also been supported by a dramatic increase in the number of bilateral investment treaties, as well as regional and global free-trade agreements (united nations conference on trade and development [unctad], ). at the same time, we have witnessed the emergence of a new type of environmental business threat that has manifested itself in incidents such as global terrorism, sars, financial crises, and computer viruses, all of which have the ability to disrupt a firm's operations. enderwick ( ) describes such threats as being sudden, unexpected, and unpredictable, with the ability to spread quickly through global processes and forces, thus having a widespread impact but with a disproportionate impact on particular regions, sectors, and industries. clearly, risk is no longer country-specific, nor is it limited to threats from host-government actions. instead, it is global and systemic, and capable of being perpetrated by individuals or small groups. further, such threats do not simply affect a firm's operating conditions, but also its overall viability, as they can cause severe disruptions, threatening the very survival of the firm. accordingly, new strategies for managing this type of threat are required; they cannot be avoided by simply deciding not to invest in a particular country, or by using strategies centered on host governments. however, while in the past risk was largely seen as negative, it should be noted that these environmental uncertainties provide both challenges and opportunities for those businesses that have the ability to respond quickly and effectively (enderwick, ). it is useful to clarify the exact nature of a disruption such as sars in terms of risk and uncertainty. while these terms are often used interchangeably, they have distinct meanings (knight, ). according to knight's analysis, risk is considered the variation in potential outcomes to which an associated probability can be assigned. in statistical terms, while the distribution of the variable is known, the particular value that will be realized is not.
uncertainty exists when there is no understanding of even the distribution of the variable. for decision making, uncertainty is a greater problem than risk. because probabilities can be attached to risks, options to mitigate risks through insurance or hedging are possible. because probability cannot be assigned to uncertainty, instruments to reduce uncertainty are not available. we suggest that sars (and similar recent environmental disruptions such as global terrorism, computer viruses, and avian bird flu) are uncertainties, not risks. these types of disruptions share a number of characteristics. first, they can be considered as "jolts" (meyer, ) that occur randomly. no one anticipated the emergence of sars, or any similar virus. because such events are not continuous or even regular, it is not possible to assign probabilities to them. second, the nature of these jolts is such that they evolve, changing their forms, and do not simply recur. for example, viruses such as sars and avian bird flu are capable of mutating and assuming different forms with differing impacts. in the case of avian bird flu, there have been recent reports of the first full case of human-to-human transmission, and a recurring fear is that it could mutate into a human pandemic with devastating effects. similarly, global terrorism assumes a variety of forms, including car bombs, suicide bombers, aircraft as weapons of destruction, and chemical attacks. this makes it difficult to use historical experience as a predictor of future occurrences and impacts. third, the impact of these uncertainties tends to be concentrated, either by sector or by geographical location. as the next section makes clear, the primary effects of sars were experienced in asia and disproportionately affected the transport, tourism, and medical industries. the impacts of natural disasters such as extreme weather events or financial and political problems appear to be more widely and randomly distributed. this is not to suggest that sars did not become a global issue; however, its global spread was clearly traceable to well-established patterns of personal and business contact. to understand the impact that sars had, it is useful to employ a "concentric band" framework (enderwick, ), which sees a crisis like sars as having a "ripple effect," as illustrated in figure . the band closest to the center represents the primary or immediate impacts of sars. moving outward, the next band represents secondary impacts that are likely to develop over the short to medium term, followed by those impacts that result from the various responses to sars. finally, the outermost band represents the longer-term issues that are likely to arise out of the sars crisis.
as fewer tourists arrived and locals chose to stay home to avoid public places, stores and restaurants in singapore and hong kong were almost empty at peak hours (engardio, shari, weintraub, arnst, & roberts, ) . sars also had a significant impact on medical facilities and staff. rapid increases in the number of cases quickly exposed inadequate surge capacities in hospitals and public health systems and a lack of protective gear, with the problem exacerbated by health workers falling victim to the disease (world health organization [who], ) . in beijing, a shortage of bed space in hospitals meant suspected sars cases could not be hospitalized and quarantined quickly, contributing to the spread of the illness (hutzler, ) . to reduce this heavy burden on existing hospitals, governments invested sub- (who, ) . indeed, hong kong spent hk$ million to create , hospital beds and a further hk$ million to train medical staff (fowler & dolven, ) . food industry. sars also led to secondary impacts in the food industry. food prices in asia plummeted as restaurants cut down on purchase orders, thereby affecting the region's farmers and fishing fleets (carmichael, mcginn, & theil, ) . supermarket sales in key markets such as singapore, taiwan, japan, and china also fell due to a loss in consumer confidence (tso, ) , although increased food preparation at home-and, in some cases, panic buying-had a positive impact on supermarkets. manufacturing. there was widespread belief that a major disruption like sars could paralyze just-in-time supply chains by holding up production and the flow of goods and services between countries due to port closures, travel restrictions, and forced closures of manufacturing plants if employees got infected. despite such media hysteria, the impact on the manufacturing sector was not that pronounced. this was largely because asian companies took preemptive steps as soon as the epidemic became known and increased production in the anticipation that there could be a problem, building up their buffer levels in inventory and safety stock. the result was very few plant shutdowns in the far east (morton, ) . investment. investment in asia was also affected, as international firms postponed plans to begin or expand operations in asia. real estate sales fell drastically as buyers refused to travel to hong kong or china to look at building sites (bodamer, ) . similarly, the cancellation of trade fairs affected manufacturers, particularly in china, who rely on such fairs to sell their goods (ben-ami, ) . the capital markets did not emerge unscathed, and it is estimated that overall fundraising in asia fell - % in due to sars (hamlin, smith, meyer, kirk, & horn, ) . stock prices of those companies with extensive operations in asia also fell (bolger, ) . unemployment. given that the tourism and hospitality industries that were hit hardest by sars were labor-intensive, there was also a corresponding rise in unemployment in sars-affected countries, mainly concentrated in these industries. in the worst-hit countries like china, hong kong, singapore, taiwan, and vietnam, the tourist industry faced losses of % of travel and tourism employment. the global impact of sars was also expected to bring a % loss in the tourism workforce in indonesia and oceania, and % in the rest of the world (bita, ) . economy and growth. the impact of sars on regional economies and projected economic growth was also substantial. 
indeed, economists estimate that china and south korea each suffered $ billion in losses in tourism, retail sales, and productivity due to sars, with japan, hong kong, taiwan, and singapore each estimated to lose approximately $ billion. toronto, severely affected by sars, was losing $ million a day at the height of the crisis. the global cost of sars is estimated to have reached $ billion. positive effects. while sars negatively affected a number of industries, others were able to capitalize on the opportunities it provided. the outbreak of sars saw a worldwide surge in demand for facemasks, given that sars is largely transmitted by coughing and sneezing. demand outstripped supply, forcing large manufacturers like m to switch to -hour production (hopkins, ). video conferencing was another industry to benefit, as asian employers sent their workers home and cancelled overseas conferences, meetings, and visits. while no industrywide traffic figures are available, many video-conferencing services reported spikes in usage in asia after the sars epidemic began. indeed, intercall, a hong kong teleconferencing company, doubled its business in march and april and saw a % increase in users signing up for the service in hong kong in april and a % rise in new customers worldwide (flynn, ). individual firms. in response to the sars crisis, businesses undertook a number of measures to minimize the impacts of sars. business travel bans to sars-affected areas were a common risk-management device, as were temporary quarantine measures for those who had recently traveled to such areas (e.g., working from home, segregating them from other employees). less common was the repatriation of employees; according to one survey, less than % of firms had brought employees home from sars-affected regions or placed them in another country (minihan, ). many firms implemented business continuity plans, with some firms setting up operations at parallel sites or shifting operations altogether to other office complexes (hamlin et al., ). information technology played a major role in all continuity plans, with firms issuing notebooks capable of accessing the firm's intranet so employees could work from home and employing technology such as video conferencing to ensure business continued as usual (lim, ). government spending. in response to sars, governments in asia also took action and increased government spending on sars-affected industries. in china, the central government launched its largest-ever tax relief package to help the aviation, tourism, and retail sectors recover from the sars epidemic, estimated to cost several billion yuan (pun, ). the hong kong government similarly offered a $ . billion relief package for local businesses (carmichael et al., ) and invested $ million to revitalize the tourism industry, with part of this money to be spent on a worldwide campaign to reassure visitors (coulter, ). domestic measures. the outbreak of sars prompted governments to take decisive and often drastic action to curb its spread. the singapore government authorized measures such as the closure of schools and universities, temperature checks twice daily (at home and in the workplace), home quarantine for those exposed to sars, and triage centers at the entrances of hospitals to identify and separate sars patients (yew, ). in taiwan, remote video monitors were installed in quarantine households to ensure against any quarantine violations (chinese government information office, ).
in china, far more draconian measures were taken, arguably to compensate for the government's previous lack of responsiveness and reluctance to report the seriousness of the sars crisis in china. public entertainment spots in beijing were closed down, as were public schools and universities (kaufman & chen, ). stricter border control. governments also responded to the rapid spread of sars by implementing stricter border control measures and the collection of detailed health and contact information. at one extreme, several countries, such as taiwan, banned individuals who had traveled to sars-affected regions (kaufman & chen, ). other strict measures included requirements that travelers from sars-affected areas wear facemasks for two weeks after arrival, and special powers of quarantine. greater international cooperation. in recognition of the fact that sars is a global problem, governments have also been more willing to cooperate to prevent its further spread. one such example was the "special asean-china leaders meeting on sars" held on april , , in bangkok, where ten association of southeast asian nations leaders and the chinese premier held crisis talks on how to fight the virus. this was consistent with the way the international community rallied together to understand and treat the sars virus. indeed, the global response was unprecedented and saw laboratories around the world that were previously strong competitors sharing information freely. importance of the state. as illustrated by the response-generated impacts, it is clear that the role of the state in international business is still important, as the sars crisis saw the state take on a major crisis-management role. governments were responsible for mobilizing resources such as hospitals and other medical facilities, as well as coordinating public health care. quarantine measures also had to be introduced, monitored, and enforced, coupled with surveillance capacity to monitor and report quickly on disease outbreaks and their progress. finally, governments were responsible for handling the economic slowdown caused by sars and providing assistance to severely affected industry sectors through increased public spending. these tasks are not amenable to market forces and highlight the unlikelihood of globalization leading to the elimination of the nation-state (enderwick, ). open and transparent government. connected to the idea of the growing importance of the state is the need for open and transparent government. the reasons for this are twofold. first, transparency is paramount if a crisis is to be contained. china's initial understating of the number of confirmed cases, refusal to give daily reports, and blocking of who specialists from visiting guangdong (the origin of sars) allowed sars to spread rapidly (lavella, ). it was only after china began reporting the true seriousness of the situation and allowed who officials to investigate that the sars crisis slowly came under control. in contrast, vietnam was able to contain the virus relatively quickly through prompt and open reporting, an early request for who assistance, and rapid case detection, isolation, infection control, and vigorous contact tracing (who, ).
accordingly, any attempt to conceal a crisis such as sars for fear of the social and economic consequences can only be regarded as a short-term measure that ultimately risks the situation spiraling out of control (who, ). second, it is in the interests of government to be open and transparent, as in today's turbulent and unpredictable environment, investors are now placing a premium on governments that can be trusted. indeed, china's behavior during the sars crisis resulted in a loss of credibility in the international community (who, ) and created fears among foreign investors about doing business in china. fear. another long-term issue is how to handle the fear and panic that accompanied sars, given that it was fear that spread faster and had a greater impact than the disease itself. indeed, as far as infectious diseases go, sars is relatively mild; it is harder to catch than the flu, with a fatality rate of only % to %. despite this, sars had a devastating effect on the tourism industry, as people became unwilling to fly to sars-affected regions. business was also affected, as foreigners cancelled conferences and meetings in countries in asia, such as south korea, that had not reported a single sars case. this was largely because asia is viewed as "one place" (hamlin et al., ), and therefore the crisis in one part of asia was extrapolated to the whole. personal behavior. sars is also likely to have a longer-term impact on personal behavior and cultures. in singapore, through massive public-education efforts to promote public-health practices, there has been a noticeable change in personal habits. people wash their hands more, and public and restaurant toilets are much cleaner. similarly, people are using serving spoons for shared dishes when eating, and sick people are more likely to see a doctor when they become ill rather than do nothing (borsuk & prystay, ). sars could also see a change in the way that business is done in asia. as mentioned earlier, many businesses turned to video conferencing at the height of the outbreak. video conferencing was found to be an effective tool to maintain communication during the crisis, without the associated travel/hotel costs and jet lag (hamlin et al., ). however, tensions remain in that asian businesspeople place a high value on personal contact and prefer to meet clients and customers face-to-face (minihan, ). the reality of globalization: the need for openness and trust. the sars crisis has illustrated the consequences of living in an interconnected world and has further clarified the nature of globalization. technology, such as e-mail, instant telephone communication, and the internet, has united people and enormously increased the number of contacts that people have. these contacts are eventually pursued through personal visits or through business meetings, conferences, and plant tours. further, advancements in air travel mean any place in the world is accessible within hours, and, coupled with the movement of commerce, this has brought china and other developing nations out of relative isolation (engardio et al., ).
the result has been a global network within which an infectious disease like sars can spread, and while diseases in the past took weeks or months to spread, sars was literally transmitted within days, setting a record for the speed of continent-to-continent transmission (borenstein, ). accordingly, while globalization has provided the world with many benefits, it also brings risks, and increased connectedness means that threats have a greater global impact. this implies that countries must understand that they can no longer insulate themselves from threats such as sars, given the open borders of a globalized world, and there must be an increasing recognition that crises like sars are not simply a regional problem, but a global one. the case study illustrates that sars represents a new kind of threat and has implications for the way uncertainty is managed in the future. risk-management strategies that were largely country-focused are no longer adequate in themselves, given that this new type of threat is global and systemic. despite the high levels of uncertainty associated with events such as sars, such uncertainty should be incorporated into decision making. a lack of precise knowledge does not preclude decision makers from further information gathering or from making judgments about the likely probabilities of events occurring. as has been recognized, traditional strategic-management approaches encourage perceptions of uncertainty in a binary fashion (courtney, kirkland, & viguerie, ). the world is seen either as sufficiently certain that precise, and usually single, predictions of the future can be made, or uncertainty is taken to render such an approach totally ineffective. in the latter case, there may be a temptation to abandon analytical approaches and to rely wholly on gut instinct. courtney et al. ( ) argue that in many cases uncertainty can be significantly reduced through a careful search for additional information; in effect, much that is unknown can be made knowable. the uncertainty that remains after the most thorough analysis they term residual uncertainty. there are a number of approaches that offer insights into how to manage uncertainty. the simplest approach is to ignore it. this can be done by developing a "most likely prediction," often based on "expert input," or by assigning a margin of error to key variables. each of these approaches yields a single unequivocal strategic option by either ignoring uncertainty or assigning it a probability. neither approach is satisfactory. ignoring an uncertain environmental event is clearly dangerous. assigning probabilities to unique events is invalid. even subjective probability derived from expert analysis is untestable and arbitrary. miller ( ) highlights a useful distinction between financial risk-management and firm-strategy approaches to managing environmental uncertainties. financial risk-management techniques such as insurance and futures contracts reduce the firm's exposure to specific risks without changing the underlying strategy. but, as noted earlier, such techniques apply only to risks, not uncertainties. in the case of an event such as sars, strategic responses, which attempt to mitigate the firm's exposure to uncertainties, are likely to be more useful. miller ( ) identifies five generic strategic responses to environmental uncertainties: avoidance, control, cooperation, imitation, and flexibility. avoiding an event such as sars through divestment, delayed entry, or a focus on low-uncertainty markets is difficult.
the irregular occurrence and variable impact of such events is unlikely to justify divestment. similarly, their unpredictable and evolving nature makes postponement or niching very difficult. uncertainty-control strategies based on political lobbying, vertical integration, or enhanced market power are not an effective counter to sars. in the same way, a cooperative strategy deals primarily with behavioral risk and is not likely to be effective; neither is an imitative strategy, which addresses competitive rivalry. of more value is the management of uncertainty through organizational flexibility. flexibility focuses on the ability of the organization to respond and adapt to significant environmental changes. high levels of flexibility imply lower costs of organizational adaptation to uncertainty. in contrast to approaches that try to increase the predictability of uncertain events, flexibility strategies emphasize internal responsiveness, irrespective of the predictability of contingencies. a widely used strategy for increasing flexibility is diversification, whether of products, markets, or sources of supply. with regard to sars, the key strategic responses are likely to occur in the areas of supply-chain management, diversification, scenario planning, and ensuring business continuity. we consider these in more detail. the need for flexibility and responsiveness is nowhere more evident than in the area of supply-chain management. while the manufacturing sector did not suffer severe disruptions, given the relatively quick manner in which sars was contained, had the crisis persisted and impeded the flow of goods and services and/or caused plant shutdowns, major disruptions to manufacturing and distribution would have occurred. indeed, potential disruptions quickly became apparent as firms contemplated the possible effects of travel bans. firms recognized that problems could arise if a factory needed repair help to continue manufacturing but engineers could not be sent due to travel bans (wonacott, chang, & dolven, ). in combination, these issues highlight the need for flexible supply chains that can respond quickly to changes in demand and cope with major disruptions. to develop this responsiveness, firms can do a number of things. first, in handling a crisis like sars, every moment of delay is critical, and the earlier you can get the supply-chain network to respond, the easier it is to manage (mcclenahen, ). accordingly, to ensure prompt action, firms must ensure quicker access to, and action on, information, preferably at the source, that may provide timely warnings. this certainly reiterates the importance of management basics such as environmental scanning and monitoring, and the need for this to be an ongoing activity. however, such environmental scanning will no longer simply involve monitoring the local political environment, as often happened in the past. instead, it will need to encompass the larger regional and global environment.
further, while host-country managers previously played a vital role in conveying information about the political environment back to higher management, what will become increasingly important is the ability to channel this information to the firm's affiliates in other parts of the world and to share any lessons learned from the crisis so that these affiliates may benefit from them. this further reinforces the value of establishing an integrated global network and facilitating intracompany learning. the need to be responsive also has implications when choosing manufacturing locations. china's initially unresponsive and surreptitious approach to the sars crisis illustrates that while cost of production and a low-cost labor force have been, and will remain, dominant considerations in the investment decision, stability, reliability, and predictability are likely to be given a higher premium. given the unexpected and sudden nature of threats such as sars, management is also likely to add to its investment criteria how well various parts of the world are equipped to deal with crises (mcclenahen, ).
the move toward responsiveness may also necessitate less of a focus on cost-efficiency and a loosening of the tight control that is currently held over supply chains. after sars, firms may have to reexamine their supply chains to identify potential problems and bottlenecks and allow for enough slack to accommodate delays and potential problems that can arise. such readjustments may include keeping buffer inventory and safety stock to hedge against uncertainties. while such measures incur costs, the costs of disruptions to an unresponsive supply chain may prove more severe: extended lead times, lost service contracts, and higher emergency logistics costs. another lesson from the sars crisis may be to illustrate the risks of having too focused a corporate strategy and the potential benefits of diversification. in the same way that financiers diversify their investment portfolios to decrease variability in their rate of return, a portfolio approach to corporate strategy ensures that even if some of the firm's corporate initiatives fail, the success of other initiatives achieves an overall favorable outcome for the firm (bryan, ). this is especially so where the impacts of events like sars and terrorism are disproportionately borne by certain sectors or locations (enderwick, ). accordingly, corporate strategies may now require this 'portfolio approach' so that a firm is not overly focused on one sector or location. for example, the sars crisis, coupled with a more global world market, is likely to see exporters increase diversification in both products and geographical markets. on a larger scale, economies may also look to become more diversified, as sars revealed that many asian countries were heavily dependent on the services sector (shanmugaratnam, ). for businesses, related diversification appears to be superior to unrelated diversification (rumelt, ).
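the portfolio argument above can be illustrated with a short simulation. the sketch below (python; the return mean, volatility and portfolio sizes are invented) shows how spreading activity across independent initiatives lowers the variability of the overall outcome, falling roughly as one over the square root of the number of initiatives.

```python
import random

# a sketch of the portfolio logic behind diversification: spreading activity
# across independent initiatives lowers the variability of the overall return.
# the mean, volatility and portfolio sizes below are invented for illustration.
random.seed(1)

def overall_return_sd(n_initiatives: int, trials: int = 20_000) -> float:
    """standard deviation of the average return across n independent initiatives."""
    outcomes = []
    for _ in range(trials):
        rets = [random.gauss(0.05, 0.20) for _ in range(n_initiatives)]
        outcomes.append(sum(rets) / n_initiatives)
    mean = sum(outcomes) / trials
    return (sum((x - mean) ** 2 for x in outcomes) / trials) ** 0.5

for n in (1, 4, 16):
    print(f"{n:2d} initiatives: sd of overall return ~ {overall_return_sd(n):.3f}")
# the sd falls roughly as 1/sqrt(n), from about 0.20 to about 0.05
```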
as noted earlier, the nature of environmental threats is changing and is increasingly difficult to anticipate. indeed, sars illustrated the difficulty of trying to predict where the next threat will come from and has called into question traditional linear planning and forecasting. such planning techniques work on the assumption that the environment in the future will be very much like today, and that extrapolation is meaningful. however, in today's turbulent and disruptive environment, this assumption is no longer valid and what are needed are plans that are flexible enough to adapt to the circumstances (pritchard, ). accordingly, the sars crisis is likely to accelerate the current trend toward the adoption of scenario planning. rather than forecasting a specific future or 'most likely outcome', scenario planning builds on existing knowledge to develop several plausible future scenarios and then necessitates constructing robust strategies that will provide competitive advantage no matter what specific events unfold. as such, it encourages firms to think about 'worst-case' scenarios, which may include technological, economic, political, or environmental calamities. schnaars ( ) discusses a number of different approaches that can be adopted when designing strategy for multiple scenarios. accordingly, scenario planning forces firms to pay closer attention to the internal, external, and broader global environmental factors that may influence the firm's future. this process challenges firms to avoid complacency in their strategy formulation and encourages managers to think more broadly and unconventionally and to view events with a new perspective (lohr, ), an essential requirement in trying to prepare for unknowable shocks and crises (kennedy, perrottet, & thomas, ). arguably, sars will 'shake things up' and encourage strategists to further consider more diverse and unexpected scenarios, as prior to sars many strategic-planning scenarios had not been done with a disease in mind (hamlin et al., ). the federal emergency management agency estimated that the costs of disasters are many times greater than the costs of preparing for them (read, ). indeed, events such as september 11 and sars illustrated the value of business continuity planning, which essentially involves strategies, services, and technologies that enable firms to prevent, cope with, and recover from disasters, while at the same time ensuring the continued running of the business (read, ). while the sars crisis certainly reinforced the need for such planning, it also provided implications for the content of such continuity plans in the future. while some continuity plans actioned during the sars crisis saw the establishment of parallel operations or the shifting of work to safer regions/locations so as to create back-up locations (minihan, ), it is apparent that cost and time factors can militate against this being a feasible option for many firms (read, ). however, what sars did demonstrate, and what has been suggested by business continuity writers, is that technology may be the key and that 'telecommuting' or 'teleworking' should be a part of any company's business continuity plan (jimenez, ). accordingly, firms need to ensure they have the technological infrastructure to support the ability to work from home or from remote locations, in case an event like sars forces offices to close. at a basic level, this requires employees to have access to the firm's information/data or intranet from home, and such access must be secure. to reduce reliance on a single data source, firms are also beginning to employ 'network storage' or 'data mirroring' technologies so that key transactional data is copied in almost real time to other locations, thus creating a back-up (newing, ). the sars crisis also highlighted the usefulness of video-conferencing and teleconferencing technology, particularly given that higher bandwidth speed now makes such conferencing a more viable option. while travel bans and the reluctance to travel persisted, conferencing technology allowed continued contact with clients and overseas partners, and allowed important meetings to take place. while such technologies may not immediately become the industry norm, given that personal contact in asian countries is highly valued, their introduction as a risk-management device may secure their gradual acceptance as their long-term benefits become more obvious. indeed, anecdotal evidence suggests that firms who invested in such technology during the sars crisis will continue to use this technology in the future (neuman, ). while technology is important, it alone may not be sufficient, and the human element in continuity plans is also important. key workers must be identified and must have access to the right it equipment and training to enable them to carry on working if the office has to be shut down.
key staff should also be spread among different sites, as one organization learned the hard way when it lost its entire it recovery team, located in the world trade center, during the september 11 attacks (newing, ). finally, firms must also realize that telecommuting has a human element, as workers stuck at home often experience feelings of isolation, anxiety, and depression (wayne, ). accordingly, planning must incorporate how such psychological problems can be addressed. as mentioned in the case study, the impact from the fear of sars was greater than sars itself, and this has implications for how a crisis such as sars is managed in the future. what is glaringly obvious is the need for full disclosure of information, given that the panic about sars was fueled when information was concealed or only partially disclosed, leading to rumors and exaggeration (who, ). employees need facts from, and questions answered by, reliable and credible sources. responses that have proved effective include establishing 24-hour hotlines to communicate with staff and directing staff to other information sites, such as the who web site (aldred, ). the sars crisis occurred against the backdrop of a highly interconnected and integrated world economy and has established itself as a new kind of global threat, along with other unpredictable events such as the asian financial crisis and global terrorism. rather than having a localized impact, the impact of sars has been far-reaching, even if this was largely from the fear of the virus rather than the virus itself. in table we summarize some of the key differences between the traditional and new forms of risk. for governments, the message is clear: even in a world without borders, the state will still have a role, given that unsupported market processes are insufficient by themselves to solve the problems created by sars. however, with this responsibility comes the requirement that governments act in an open and transparent manner, something that is arguably a precondition for the effective handling of a crisis such as sars. global phenomena such as sars also emphasize the need for a collective response and more openness and cooperation among nations. for businesses, the ability of sars to significantly disrupt international business and the speed with which the disease was transmitted suggest that the nature of this new kind of event is global and systemic, and accordingly warrants a broad and encompassing risk-management approach. the implication is that firms must put a higher premium on strategies that emphasize flexibility and responsiveness. indeed, firms will find value in increasing diversification, whether this is in sourcing or in corporate strategy. planning must also become less linear and more contingency-based, and by considering a range of possible future scenarios, firms will be in a better position to handle disruptions that increasingly cannot be predicted. technology also appears to offer a possible solution as a risk-management device, and we are likely to see technologies such as video conferencing become a commonplace feature in offices of the future. further research on the role that strategies, structures, and resources play in anticipating, responding to, and adjusting to environmental disruptions is necessary (meyer, ).
in sum, while an event like sars produces considerable challenges, it also offers insights into how firms can better equip themselves to manage within an increasingly turbulent and unpredictable environment.

references:
virus challenges efficacy of risk management plans
the cost of sars
virus threat to tourism jobs. the australian
sars fears continue to weigh on investors. the times
sars is the latest in explosion of new infectious diseases
how singapore beat the virus - and still awaits it
the mckinsey quarterly: special edition - risk and resilience
chinese taipei to brief on its measure in response to the sars epidemic
hong kong spends pounds m on recovery. travel trade gazette
strategy under uncertainty
configural advantage in global markets
terrorism and the international business environment. aib newsletter, fourth quarter
responding to new environmental uncertainties: terrorism and sars in the global business environment
deadly virus - the economic toll: delayed deliveries, closed factories, and the spectre of recession
teleconference business up as sars fuels demand
our future with sars
sars test. institutional investor
sales of masks, air filters soar as sars spreads; manufacturers get busier as fliers try to protect themselves
the outbreak: the virus factories of southern china
the sars outbreak: beijing confronts crucial test in its struggle against sars - mayor cites big shortages of beds
hong kong cases relapse
technology key factor in business continuity
the sars outbreak: beijing imposes new sars curbs; city's theatres and cafes are shut down as china lists another cases
scenario planning after / : managing the impact of a catastrophic event
risk, uncertainty and profit
is the panic worse than the disease? the truth about sars. time
technology keeps asia running despite sars
'scenario planning' explores the many routes chaos could take for business in these very uncertain days
the new dynamics of global manufacturing site location
many unhappy gains. crain's
get ready for the next sars
adapting to environmental jolts
a framework for integrated risk management in international business
work goes on despite dangers
asia, sars and the supply chain
sars leaves a silver lining: companies save by videoconferencing
flexible models to face up to the unexpected: disaster recovery: business continuity has to fight it out with other areas for funding. financial times
strategy, structure and economic performance
how to develop business strategies from multiple scenarios
dealing with sars - rebuilding confidence and taking opportunities
essentials of international management: a cross-cultural perspective
seafood prices affected by sars
world investment report: fdi policies for development. national and international perspectives. united nations
toward the flexible form: how to remain vital in hypercompetitive environments
executives in singapore chafe at sars-related travel bans
china's handling of sars virus concerns investors - new leadership's image suffers amid signs beijing failed in crisis management

key: cord- -r hzazp authors: stowe, julia; andrews, nick; miller, elizabeth title: do vaccines trigger neurological diseases? epidemiological evaluation of vaccination and neurological diseases using examples of multiple sclerosis, guillain–barré syndrome and narcolepsy date: - - journal: cns drugs doi: . /s - - -y sha: doc_id: cord_uid: r hzazp
this article evaluates the epidemiological evidence for a relationship between vaccination and neurological disease, specifically multiple sclerosis, guillain–barré syndrome and narcolepsy. the statistical methods used to test vaccine safety hypotheses are described and the merits of different study designs evaluated; these include the cohort, case-control, case-coverage and the self-controlled case-series methods. for multiple sclerosis, the evidence does not support the hypothesized relationship with hepatitis b vaccine. for guillain–barré syndrome, the evidence suggests a small elevated risk after influenza vaccines, though considerably lower than after natural influenza infection, with no elevated risk after human papilloma virus vaccine. for narcolepsy, there is strong evidence of a causal association with one adjuvanted vaccine used in the / influenza pandemic. rapid investigation of vaccine safety concerns, however biologically implausible, is essential to maintain public and professional confidence in vaccination programmes. vaccination is one of the most effective public health interventions, successfully controlling many serious infectious diseases and saving millions of lives globally each year [ ]. however, as with any medical treatment or drug, vaccination can never be entirely risk free in terms of unwanted side effects. an important feature of vaccination is that, unlike most therapeutic drugs, vaccines are given prophylactically to healthy individuals, often young children. when an event occurs shortly after vaccination in an otherwise healthy individual without an obvious cause, it is tempting to attribute its occurrence to the preceding vaccination. the assumption of a causal association with a vaccine from purely a temporal association is often incorrect, as unrelated events will occur by chance irrespective of vaccination. it can be hard to disentangle these temporal associations when there is a strong perception that a temporal association is necessarily evidence of a causal association, and the onset of the condition is insidious and its timing relies on patient or parental recall [ ]. even if only based on a temporal sequence of events, it is important that such safety concerns are rapidly investigated with robust epidemiological studies to allow mitigation procedures to be put in place if an association is confirmed or, if unfounded, to have the necessary evidence to sustain public confidence in the vaccination programme, without which coverage drops and disease control is lost. in this article, which focusses on the evaluation of the relationship between vaccination and neurological diseases, the statistical approaches to causality assessment are first discussed and their relative merits evaluated, followed by an overview of a selection of vaccine safety studies involving neurological disease with differing conclusions; some of the included studies have shown a small elevated risk, others none, two lack evidence to draw any definitive conclusion and one provides robust evidence of causal association. to establish whether the signal seen is associated with the vaccine and to quantify the risk, a formal epidemiological study is usually needed. this requires a pre-specified protocol detailing the population under study, the period after vaccination for which an elevated risk is suspected, and the methods for case identification and statistical analysis.
most importantly, the ascertainment of the condition of interest must be unbiased with respect to vaccination history [ ]. the following statistical methods have been used most commonly to address vaccine safety questions and to control for the inherent biases in the population and data under study. although these methods aim to address confounding, it can be difficult to fully control for this in an observational study. an assessment of the likelihood of residual confounding/bias and its potential extent is an important consideration when weighing up the strength of a study and drawing a conclusion with regard to causality. in a cohort study, the risk of developing the condition is compared in the vaccinated and unvaccinated individuals in the study population. cohort studies need to be very large to detect rare vaccine adverse events and this often makes them impractical for a prospective study. retrospective cohort designs can use routinely collected data and cases identified by clinical coding, but this study design may be disadvantaged by the need to collect a large number of confounding variables. factors such as underlying illnesses, sociodemographic characteristics, and propensity to consult may differ between unvaccinated and vaccinated individuals and would therefore need to be adjusted for in the analysis, as they can independently determine the likelihood of the adverse event under study. the advantage is that an entire population is studied and relative and absolute incidence estimates can be reported. in addition, once the cohort is defined, several outcomes can be assessed within the same study design. when studying a vaccine that is given as part of a national schedule and high coverage is achieved, the small unvaccinated group may differ from the vaccinated group in ways that are difficult to capture and control for in an adjusted analysis. additionally, care must be taken to ensure unvaccinated cases are indeed unvaccinated and the data are not missing. this can occur when regional vaccine datasets are used and the transfer and sharing of data are not comprehensive. cohort studies are feasible for vaccine safety studies when data from a whole country or region can be used. an example of this is in denmark, where danish residents contribute to a large linked dataset consisting of demographic factors that are linked to health information including potential confounding variables [ ]. the self-controlled case-series method (sccs) was designed for rapid unbiased assessment in vaccine safety studies using available disease surveillance data that may not be amenable to cohort analysis. the method only requires information on the timing of cases during a defined observation period and their vaccination status [ ]. the cases act as their own controls, as the incidence of the event in pre-defined risk periods following vaccination is compared to the incidence outside the risk period, generating a relative incidence (ri) measure (fig. ). a significant advantage of the method is that confounding factors that do not vary over the observation period, such as co-morbidities or sociodemographic status, are automatically controlled for. adjustment for time-varying confounders such as age is also possible by dividing up the observation period further into age categories. it has been demonstrated that the power of the sccs method is nearly as good as a cohort study when uptake is high and risk intervals are short, and it is superior to that of a case-control study [ ].
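the core sccs comparison can be illustrated with a deliberately simplified calculation. the sketch below (python, with invented data) computes a crude relative incidence by comparing the event rate in each case's risk window with the rate in the remainder of the observation period; a real analysis would use conditional poisson regression with age adjustment rather than this pooled ratio.

```python
# a deliberately simplified sketch of the sccs idea described above: compare
# the event rate inside each case's post-vaccination risk window with the rate
# in the remainder of the observation period. a real analysis uses conditional
# poisson regression with age adjustment; all numbers here are invented.
cases = [
    # (observation days, risk-window days, events in window, events outside)
    (730, 42, 1, 0),
    (730, 42, 0, 1),
    (730, 42, 1, 0),
    (730, 42, 1, 0),
    (730, 42, 0, 1),
]

events_in = sum(c[2] for c in cases)
events_out = sum(c[3] for c in cases)
days_in = sum(c[1] for c in cases)
days_out = sum(c[0] - c[1] for c in cases)

relative_incidence = (events_in / days_in) / (events_out / days_out)
print(f"crude relative incidence: {relative_incidence:.1f}")
```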
the self-controlled case-series method has been used by public health england to address many pertinent vaccine safety concerns [ ] [ ] [ ] [ ]. this design has been chosen both because of its simplicity and ability to control for individual-level confounding, and also because a national cohort of cases cannot be easily defined using the national hospital data, as no national immunisation register is available. unlike a cohort study, the sccs method does not provide absolute risk estimates. however, if the number of doses given to the population from which the cases are derived is known and if ascertainment is complete, then absolute risks can be estimated and the cases attributable to vaccination estimated from the magnitude of the ri. a case-control study requires smaller numbers than a cohort study, but the same confounding and bias can occur, and it also has the added difficulty of selecting the correct controls for comparison. for vaccinations given in the short age range in the first and second year of life, or during a short calendar period to target ages, close matching of the controls on date of birth is required. prior vaccination status is then compared between cases and controls using the date of onset in cases as a reference date. to obtain enough power to assess the required risk, multiple controls per case are often needed, and defining appropriate criteria for the selection of controls can be problematic. while it is important to ensure that controls are similar to cases on characteristics such as age and geographical location that can independently affect vaccination status, overmatching is a risk if too many extraneous variables are included in the matching, resulting in loss of efficiency and potentially introducing bias. a case-control study does not provide absolute risk estimates; rather, it measures the odds of vaccination in cases compared to controls. however, as with the sccs method, if the number of doses given to the population from which the cases are derived is known and if ascertainment is complete, then absolute risks can be estimated and the cases attributable to vaccination estimated from the magnitude of the odds ratio. the case-control design has been used where controls can be selected from the same population as cases and can be readily matched on the relevant variables. as a case-control approach is more efficient than the cohort approach, it is often used on large databases that could be used for a cohort analysis. examples include the vaccine safety datalink in the usa, which accesses complete patient records from health maintenance organisations, or studies using hospital admission databases linked to national immunisation registers such as the australian childhood immunisation register [ ] [ ] [ ]. the case-coverage design has recently been used in vaccine safety studies [ , ]. it is similar to the screening method, which until recently has been primarily used for vaccine effectiveness assessment [ ], although it is more limited in terms of adjustment for possible confounders than the sccs method. each case is matched to a population coverage estimate and this is then used to see if the number of cases vaccinated is greater than expected. the method uses logistic regression on the odds of vaccination with an offset for the log-odds of the matched population coverage; it is thus similar to a case-control study with thousands of controls per individual.
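a hedged sketch of the case-coverage regression just described: each case's vaccination status is modelled with an intercept-only logistic regression, with the log-odds of the matched population coverage entered as an offset, so that the exponentiated intercept estimates the relative odds of vaccination in cases. the data below are simulated, not taken from any of the studies discussed.

```python
import numpy as np
import statsmodels.api as sm

# sketch of the case-coverage regression: regress each case's vaccination
# status on an intercept only, with the log-odds of the matched population
# coverage as an offset; exp(intercept) estimates the relative odds of
# vaccination in cases compared with the population. data are simulated.
rng = np.random.default_rng(42)
n_cases = 300
coverage = rng.uniform(0.2, 0.6, n_cases)   # matched coverage per case
true_or = 4.0                               # odds ratio to recover
odds = true_or * coverage / (1 - coverage)  # case odds = or x population odds
vaccinated = rng.binomial(1, odds / (1 + odds))

offset = np.log(coverage / (1 - coverage))  # log-odds of matched coverage
fit = sm.GLM(vaccinated, np.ones((n_cases, 1)),
             family=sm.families.Binomial(), offset=offset).fit()
print("estimated odds ratio:", round(float(np.exp(fit.params[0])), 2))
```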
this design has been used by public health england to assess the association between the as-adjuvanted h n pandemic vaccine pandemrix™ and narcolepsy. because pandemrix™ was rolled out over a short period of time in the winter season of / , targeting children of different ages according to whether they had certain co-morbid conditions, it was necessary to have detailed information on dates of vaccination and dates of birth to estimate the population coverage for each narcolepsy case by age and time period. this was available from a representative subset of general practices in england, which also provided information on co-morbidities, the only other variable considered as a potential confounder [ ]. in the first study assessing the risk of narcolepsy in children [ ], both the sccs method and the case-coverage design were used. the results from the sccs method were not clear-cut, as this method requires the incidence in a pre-specified risk period after vaccination to be compared with the baseline incidence. because the duration of the risk period had not been defined at the time, the post-vaccination interval chosen proved too short and resulted in the inclusion in the baseline period of four patients with symptom onset more than months after vaccination. the choice of study design to answer a vaccine safety question will depend on the hypothesis to be tested, the available data sources and the extent to which confounding variables are likely to bias the results. the sccs method has now become the gold standard design in vaccine safety studies, owing to the benefits highlighted above, but for each study question the methods should be adapted and potential biases considered in the context of the population under study, the dataset being utilised and the hypothesis being tested. it will inevitably be a trade-off between the ideal and the practical, and the best designs will vary according to setting. when many studies are performed to answer the same question, the key to demonstrating causality is consistency in the results from well-designed studies [ ]. neurological conditions have a long history of causal associations with vaccination being inferred from temporally related onsets. an example is the damage that was done to the uk whole-cell pertussis vaccination programme in the late s, when neurological damage was wrongly attributed to the vaccine based on case reports of infants with onset of encephalopathy shortly after vaccination. these reports of permanent brain damage following vaccination attracted intense and sustained professional and media interest, causing vaccination rates to fall from % in to % in . following this, three national epidemics of pertussis occurred, resulting in hospital admissions, cases of pneumonia and convulsions, and deaths [ , ]. neurological vaccine safety concerns can be broadly assigned to either being biologically plausible or unsubstantiated and unexpected. the biologically plausible group often reflects a direct effect of a component of the vaccine. for example, in the case of a live attenuated vaccine, the adverse reaction could mimic, at a lower frequency, what the non-attenuated wild virus would do. this is demonstrated in the rare risk of acute flaccid paralysis following the oral polio vaccine after a reversion to virulence, or with the risk of aseptic meningitis after the attenuated urabe mumps strain in the measles-mumps-rubella vaccine due to retention of some neurovirulent characteristics [ , ].
the unsubstantiated and unexpected group usually arises because the timing of the vaccine coincides with the diagnosis of the condition, with no immediate biologically plausible explanation. examples of this are measles-mumps-rubella and autism [ ], gait disturbance and measles-mumps-rubella [ ], and thiomersal and developmental delay [ ]. although a signal may not have a clear biological basis for its causation, it still needs to be fully investigated using robust epidemiological methods. neurological diseases for which a causal association with vaccination has been suspected have some common features. first, they are often serious conditions that are rare; second, their aetiology and pathophysiology are poorly understood; and third, immune stimulation is thought to play a role in the pathogenesis of the condition. because vaccines provoke an immune response, albeit targeted to a specific antigen, it can be tempting to invoke a superficially plausible causal pathway when adverse events with a suspected immune aetiology arise shortly after vaccination. universal hepatitis b vaccine was recommended by the world health organization in the early s to protect against the hepatitis b virus, which can cause chronic liver damage and cancer. following this recommendation, france carried out a mass vaccine campaign in . shortly after, cases of multiple sclerosis (ms) with onset or relapse after vaccination were reported, leading to the hypothesis that the vaccine could cause an acute autoimmune reaction in susceptible persons soon after administration. with a lack of adequate background rates of ms in the vaccinated population to put the reported cases into perspective, mistrust in the vaccine soon grew and the vaccine programme was subsequently suspended. a systematic review and meta-analysis by mouchet et al., which included studies with a control group, found no evidence of an increased risk. the overall adjusted risk ratio for ms was . ( % confidence interval [ci] . - . ) and for central demyelination was . ( % ci . - . ) [ ]. within the systematic review, there was one study that found a significant association, using a primary care database from england [ ]. this study was unable to adjust for all risk factors; additionally, no routine hepatitis b vaccination programme was in place at the time, with most of the vaccine delivered via occupational health departments whose records may not be routinely transferred to primary care databases. france continues to have suboptimal vaccine coverage [ , ] and has the lowest level of confidence in vaccine safety in europe [ , ]. this demonstrates the need to have robust methods in place to rapidly respond to such scares because, once confidence is lost in a vaccine, it is difficult to restore and may generate a more general lack of confidence in vaccine safety. guillain–barré syndrome (gbs) is the most common cause of acute neuromuscular paralysis in the developed world, resulting in muscle weakness and sometimes paralysis, which can lead to respiratory failure and death in up to % of cases [ ]. the strongest evidence of a causal link with a vaccine was obtained during the us swine influenza vaccine programme in military personnel, which was found to be associated with a risk of one case per , vaccinations and resulted in the suspension of the vaccine programme [ ].
since then, gbs has been a potential vaccine-associated adverse event of interest, particularly for vaccines given in adolescence, an age coinciding with that at which autoimmune diseases are often diagnosed. a meta-analysis of subsequent studies found that the overall relative risk for gbs after seasonal vaccine was marginally increased at . ( % ci . - . ), with a somewhat larger relative risk of . ( . - . ) for the h n pandemic vaccine, but this was not significantly higher than the relative risk for seasonal vaccine [ ]. the authors did not find any statistically significant differences by geographical region, nor between adjuvanted and unadjuvanted vaccines. an earlier meta-analysis of studies using the sccs method also found a small elevated risk of gbs after the monovalent h n pandemic vaccine, with an ri of . ( % ci . - . ) in the days following vaccination [ ]. similarly, salmon et al. found an ri of . ( % ci . - . ) in a large study in the usa [ ]. in contrast, a strong association between gbs and a preceding influenza-like illness was shown in a study in england using primary care data and the sccs method. no association was seen with influenza vaccine in the - days after administration (ri . [ % ci . - . ]), but a significantly increased risk was found in the days after influenza-like illness (ri . [ % ci . - . ]) [ ]. these studies show that a small overall risk of gbs after influenza vaccine probably does exist, with a slightly larger risk after the monovalent pandemic vaccine. the mechanism may be multifactorial, with the risk varying with the vaccine used, co-circulation of other infections and the inherent susceptibility to developing gbs. however, the small risk that exists does not outweigh the risk of developing gbs after influenza itself. human papilloma virus vaccine is given at an age when autoimmune disorders are often diagnosed. following a french study reporting a signal for gbs after human papilloma virus vaccination, a study was conducted in england identifying gbs cases in a national hospital discharge database (hospital episode statistics) [ ]. primary care practitioners were then contacted for the vaccination history and asked to confirm the gbs diagnosis, provide an onset date and send supporting documentation. in a self-controlled case-series analysis of cases with a record of human papilloma virus vaccination, there were episodes in the -to -day risk period after any dose, with no significant increased risk (ri . [ % ci . - . ]). the analysis was also stratified by manufacturer (of either the quadrivalent or bivalent product); there was no difference in the ri between products and no significant increased risk for either manufacturer.
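pooled estimates such as those cited above are conventionally obtained by inverse-variance weighting of the log relative risks. the sketch below (python) illustrates the fixed-effect calculation, recovering each study's standard error from its reported confidence interval; the study estimates shown are invented, not those of the meta-analyses discussed.

```python
import math

# a sketch of fixed-effect, inverse-variance pooling on the log scale, the
# standard way relative risks are combined in a meta-analysis. the study
# estimates and confidence intervals below are invented.
studies = [(1.4, 1.1, 1.8), (1.2, 0.9, 1.6), (1.6, 1.2, 2.1)]  # (rr, lo, hi)

weights, log_rrs = [], []
for rr, lo, hi in studies:
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # se recovered from 95% ci
    weights.append(1.0 / se ** 2)
    log_rrs.append(math.log(rr))

pooled = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
se_pooled = math.sqrt(1.0 / sum(weights))
lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
print(f"pooled rr {math.exp(pooled):.2f} "
      f"(95% ci {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```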
the pandemic influenza vaccine pandemrix™ was the most widely used vaccine in europe during the pandemic. it was a monovalent h n pdm vaccine containing as , a powerful oil-in-water adjuvant. uptake of the vaccine varied between countries, with high coverage of % in children in finland [ ] and lower coverage in england, where children in a risk group eligible for the seasonal influenza vaccine, and later all children under years of age, were targeted, with uptake being % and %, respectively. in england, pandemrix™ was also used in the influenza season / because of a shortage of seasonal influenza vaccine. in august , concerns were raised in finland and sweden, where vaccine coverage was high, about a possible association between narcolepsy and pandemrix™, when a large increase in cases of narcolepsy in vaccinated individuals was reported by sleep centres [ , ]. a subsequent cohort study in finland reported a -fold increased risk of narcolepsy following pandemrix™ in children aged - years, the majority of whom had onset within months of vaccination and almost all within months [ , ]. narcolepsy was a totally unexpected adverse event and the early reports were met with initial scepticism in the global vaccine community. the world health organization global advisory committee on vaccine safety issued a statement in april stating 'no excess of narcolepsy has been reported from several other european states where pandemrix was used' and 'it seems likely that some as yet unidentified additional factor was operating in sweden and finland'. however, it was unlikely that narcolepsy would be identified by passive surveillance systems in other countries where pandemrix™ coverage was low, given the low background incidence of the condition and the complexity and frequent delays in diagnosis. to assess this risk identified in finland, the health protection agency (now public health england) performed a study in sleep centres in england, where the majority of children with sleep disorders are seen. this study identified a -fold increased risk in those vaccinated with pandemrix™ [ ], with the attributable risk estimated to be . per , doses. this demonstrated that even in a country where vaccine coverage was low, the association could be demonstrated using robust epidemiological methods. the study of the relationship between narcolepsy and pandemrix™ has been an epidemiological challenge in terms of identifying the cases and their vaccine histories in a non-biased manner. not only can the diagnosis be lengthy and complex, but admitted patient care databases, which are widely used for non-biased ascertainment of cases in vaccine safety studies, are incomplete, as patients experiencing narcolepsy may not be admitted and, if they are admitted, the admission date is not an accurate reflection of the onset of the narcolepsy symptoms, leading to misclassification bias. an important consideration when selecting cases is the awareness of the hypothesised association. this awareness may lead to an increased reporting of cases known to be vaccinated and has two aspects: public awareness and professional awareness. first, this heightened awareness may lead to vaccinated individuals presenting to healthcare institutions and being diagnosed earlier than unvaccinated cases, leading to ascertainment bias. if a condition has an insidious onset making the recall of the first symptom difficult to determine, media attention may lead to a differential recall of the symptom-onset date in the vaccinated cases. using source documents created prior to any media attention in the country of study can address this potential recall bias. professional awareness is likely to occur even if media attention is low, as health professionals in the specialty will be aware of current topics of interest through professional bodies and the literature. differential misclassification bias will occur if cases known to have been vaccinated are more likely to be assigned a diagnosis of narcolepsy than unvaccinated cases. in the study from england, public awareness of the association was assessed by analysing google searches for 'narcolepsy' in the period of interest, which found little activity in the uk compared with sweden (fig. ) [ ].
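stepping back to the attributable risk estimate quoted above for the english study, such a figure can be derived from a relative incidence once the number of doses given is known and ascertainment is complete: the fraction of risk-window cases attributable to vaccination is (ri − 1)/ri, scaled by the doses delivered. the sketch below uses hypothetical numbers, not the actual study figures.

```python
# a sketch of converting a relative incidence into an attributable risk per
# 100,000 doses, assuming complete case ascertainment. all figures here are
# hypothetical, not those of the english narcolepsy study.
ri = 14.0              # relative incidence in the risk window (invented)
exposed_cases = 20     # cases observed in the post-vaccination risk window
doses = 1_200_000      # doses delivered to the source population

attributable_fraction = (ri - 1) / ri       # share of window cases due to vaccine
attributable_cases = exposed_cases * attributable_fraction
print(f"attributable cases: {attributable_cases:.1f}")
print(f"attributable risk: {attributable_cases / doses * 100_000:.2f} per 100,000 doses")
```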
even with these practical challenges, there has now been a consistent strong association demonstrated in countries that used pandemrix™, but no association has been seen with other pandemic or seasonal vaccines [ ]. as with all vaccine safety studies, but particularly in the case of narcolepsy and pandemrix™ where the association was completely unexpected, the key to demonstrating causality was consistency of results from well-designed studies in different settings. the answer to the question of whether vaccination can cause neurological disease is multifaceted. the evidence does not support an association between ms and the hepatitis b vaccine, while for gbs and influenza vaccines the evidence suggests a small increased risk, though it is much smaller than the risk from a natural influenza virus infection. the now established association between narcolepsy and pandemrix™ should act as a lesson for the vaccine safety community that sometimes unexpected but serious conditions can arise and need to be investigated rapidly, however biologically implausible they may seem. the neurological vaccine safety issues outlined here demonstrate that rapid assessments of safety signals are needed to ensure that public confidence is maintained in national immunisation programmes. the confirmation of a signal and estimation of the magnitude of vaccine-attributable risk will require consistent results from a number of well-designed epidemiological studies, preferably conducted in different settings. as the experience with narcolepsy has shown, not all vaccine safety concerns can be anticipated on the basis of biologically plausible and thus predictable effects. as new vaccines are introduced, the basis of discussions on vaccine safety should be the acceptance that vaccination can carry a small risk, but that this risk needs to be balanced against the enormous individual and public health benefits. funding: public health england, national infection service, immunisation and countermeasures division has provided vaccine manufacturers with post-marketing surveillance reports, which the marketing authorisation holders are required to submit to the uk licensing authority in compliance with their risk management strategy. a cost recovery charge is made for these reports.
references:
world health organization. the power of vaccines: still not fully utilized
recall bias, mmr, and autism
vaccine safety surveillance
postlicensure epidemiology of childhood vaccination: the danish experience
control without separate controls: evaluation of vaccine safety using case-only methods
statistical assessment of the association between vaccination and rare adverse events post-licensure
autism and measles, mumps, and rubella vaccine: no epidemiological evidence for a causal association
guillain-barre syndrome and h n ( ) pandemic influenza vaccination using an as adjuvanted vaccine in the united kingdom: self-controlled case series
idiopathic thrombocytopenic purpura and mmr vaccine
the risk of intussusception following monovalent rotavirus vaccination in england: a self-controlled case-series evaluation
population-based study of rotavirus vaccination and intussusception
intussusception risk and disease prevention associated with rotavirus vaccines in australia's national immunization program
mmr vaccine and idiopathic thrombocytopaenic purpura
risk of narcolepsy in children and young people receiving as adjuvanted pandemic a/h n influenza vaccine: retrospective analysis
risk of narcolepsy after as adjuvanted pandemic a/h n influenza vaccine in adults: a case-coverage study in england
estimation of vaccine effectiveness using the screening method
incidence of narcolepsy after h n influenza and vaccinations: systematic review and meta-analysis
pertussis immunisation and control in england and wales, to : a historical review
the pertussis vaccine controversy in great britain
risk of aseptic meningitis after measles, mumps, and rubella vaccine in uk children
risks of convulsion and aseptic meningitis following measles-mumps-rubella vaccination in the united kingdom
autism and mmr vaccination in north london; no causal relationship
no evidence of an association between mmr vaccine and gait disturbance
thiomerosal exposure in infants and developmental disorders: a retrospective cohort study in the united kingdom does not support a causal association
hepatitis b vaccination and the putative risk of central demyelinating diseases: a systematic review and meta-analysis. vaccine
recombinant hepatitis b vaccine and the risk of multiple sclerosis: a prospective study
estimates of national immunization coverage
european centre for disease prevention and control. measles vaccination coverage (second dose)
the state of vaccine confidence : global insights through a -country survey
vaccine hesitancy among general practitioners and its determinants during controversies: a national cross-sectional survey in france
guillain-barre syndrome
guillain-barre syndrome following vaccination in the national influenza immunization program
guillain-barre syndrome and influenza vaccines: a meta-analysis
international collaboration to assess the risk of guillain barre syndrome following influenza a (h n ) monovalent vaccines
association between guillain-barre syndrome and influenza a (h n ) monovalent inactivated vaccines in the usa: a meta-analysis
investigation of the temporal association of guillain-barre syndrome with influenza vaccine and influenza-like illness using the united kingdom general practice research database
no increased risk of guillain-barre syndrome after human papilloma virus vaccine: a self-controlled case-series study in england
increased incidence and clinical picture of childhood narcolepsy following the h n pandemic vaccination campaign in finland
risks of neurological and immune-related diseases, including narcolepsy, after vaccination with pandemrix: a population- and registry-based cohort study with over years of follow-up
as adjuvanted ah n vaccine associated with an abrupt increase in the incidence of childhood narcolepsy in finland

key: cord- -xpzx qg authors: murphy, peter e. title: risk management date: - - journal: the business of resort management doi: . /b - - - - . - sha: doc_id: cord_uid: xpzx qg

risk management has been placed at the end of this book in affirmation of its crucial and central role in resort management, and as a prime example of pulling together the external and internal elements of parts b and c. while some may think risk management is a recent phenomenon, a result of global warming and terrorism, it has been associated with resort management for a long time and in a variety of ways. in normal business, financial risk is a regular occurrence that should be recognized and managed like other factors of demand and supply. however, with the taking-in of guests comes an extra responsibility, known as 'duty of care', where management is obliged to protect their guests from harm to the best of their ability. on the demand side guests are often looking for excitement and the spectacular, which can put them at risk. those who seek excitement in adventure tourism, when they challenge themselves or look for an adrenalin rush, purposely place themselves at risk, and it is up to resorts to ensure the real risk is minimized by managing the situation. even those who have not come to a resort to exert or excite themselves regularly demand spectacular views and sunsets that often require building on risky sites and in nonconformist style. the sounds of the sea and uninterrupted tropical sunsets attract resorts to the water's edge in areas where hurricanes and cyclones occur with regularity. in the mountains, similar demands for spectacular views place buildings at crests or on steep slopes where local climatic conditions are at their extreme and avalanches can occur. on the supply side risk is present at the very start, requiring a correct interpretation of market research and feasibility studies over the - year life span of many resort investments.
risk is present in the location of many resorts on the 'edge of civilization', well removed from the regular infrastructure and services that are the basis of quality service experiences. it is present in the operation of resorts where guests come to participate in challenging activities, regular sports or simply to unwind, a process that inevitably leads some of them to leave natural caution and common sense behind at home. it is not surprising that 'risk management is not just good for business, but is absolutely necessary in order for tourism and related organizations to remain competitive, to be sustainable, and to be responsible for their collective future' (cunliffe, : ). risk not only involves both demand and supply considerations, it can range in scale from minor yet important internal issues, like a lack of staff in crucial situations and places, to overwhelming natural disasters or human external interventions like terrorism or financial crises. whatever form it takes, the element of risk is ever present for resort management and some type of management structure needs to be in place to minimize its impact on the business. if no event or business decision within resort management is risk free, a risk management framework needs to take on a statistical probability structure. tarlow ( ) has suggested a useful framework would be one that considers the probability of an event and its likely consequences. figure . provides some examples, using tarlow's suggested framework, but it should be noted that the consequences will vary according to each incident's severity and relevance to the resort product offerings. food poisoning is a serious occurrence for a resort because it means one duty of care has failed, ruining the visit of some guests and possibly closing a restaurant; but in the overall scheme of things, it has a low probability of occurrence and low consequences in a well-run establishment. the consequences are generally limited to some temporary bad publicity, financial compensation, a revision of safety procedures and possibly new equipment. this level of risk is discussed under the heading of 'security' within this chapter. accidents are presented in the form of personal injury, where the probability of occurrence can be high when a resort is associated with adventure tourism or dangerous locations. duty of care is still a major consideration, but if a guest chooses to undertake a risky activity they are expected to assume some of that risk. under these circumstances resorts are expected to minimize the level of risk by preparing the site properly, instructing the guest where appropriate, and providing warning signs or professional help where warranted. this level of risk has been assigned a low consequences ranking in that it usually applies to individuals or small groups, and through the implementation of 'risk management' these consequences can be minimized, but not eliminated. natural disasters have been a fact of life for the resort industry since its inception, with one of the earliest recorded disasters being the destruction of pompeii by the mount vesuvius volcano in ad . natural disasters have high consequences because they cause severe damage and can destroy a resort or close it down for a long period. fortunately, their probability of occurrence is generally low. this form of risk is more difficult to anticipate, so 'crisis management' is presented more as contingency planning, preparing for the worst in order to minimize its impact, especially the loss of life. the weather has been classified as a high probability and consequence risk, because so many resorts are dependent on this feature, yet it is something beyond their control. bad weather or even the threat of it can reduce visits and sales, but in this era of global warming the signs of severe weather stress are starting to have an impact. the increasing number of force hurricanes is raising the insurance rates of all tropical resorts, not just those affected directly. the long-term drought in australia's alpine areas is creating poor snow seasons and raising questions about the ski industry's viability in these areas. such events do not have the sudden impact of a site-specific natural disaster, but they can have a major influence on the long-term viability of a resort business. in this regard such risks are incorporated into the overall framework of 'sustainable management', where the evolving weather patterns are integrated into the long-term resource planning for a resort.
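the probability/consequence framework above lends itself to a simple lookup. the sketch below (python) maps each quadrant of the matrix to the management response discussed in this chapter; in practice the classification of any real incident would of course depend on its severity and its relevance to the resort's product offerings.

```python
# a sketch of the probability/consequence framework above, mapping each
# quadrant of the matrix to the management response discussed in this chapter.
RESPONSES = {
    ("low", "low"): "security procedures (e.g. food poisoning)",
    ("high", "low"): "risk management (e.g. personal-injury accidents)",
    ("low", "high"): "crisis management / contingency planning (e.g. natural disasters)",
    ("high", "high"): "sustainable management (e.g. evolving weather patterns)",
}

def response(probability: str, consequence: str) -> str:
    """return the suggested management response for a probability/consequence pair."""
    return RESPONSES[(probability, consequence)]

print(response("low", "high"))
# crisis management / contingency planning (e.g. natural disasters)
```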
at the basic level a resort's 'duty of care' requires it to ensure the safety of its guests within reasonable limits. possible threats to a guest's safety can arise from internal and external sources. internally, the design of facilities should include safety considerations along with their functional role, whether that involves a ski lift or a hotel balcony. staff should be trained to undertake their tasks safely, to keep an eye on guests and the general situation, and to note any external threats. these threats can include vandalism, theft or terrorism. the level of security will be determined by the perceived degree of threat in the local area, but some attention to this matter is necessary everywhere, for insurance and legal purposes alone. security of a resort involves three basic steps:
1. analyze and identify vulnerable areas/processes. resorts offer extensive and varied terrain with open and friendly access to welcome guests. they need to reduce the vulnerability of property and guests by identifying the weakest and most exposed elements of the property, and the riskiest regular activities of the guests.
2. establish security priorities. as figure . indicates, not all risks warrant equal attention because their probabilities of occurrence and levels of impact on the operation will vary. in terms of basic security a differentiation should be made between public areas and private accommodation, which ensures the privacy and security of guest accommodation can be maintained. in the restaurants and food outlets the storing, cooking and presentation of food must be undertaken with respect to health and safety regulations. in the most popular recreational areas like swimming pools and play areas, there must be qualified supervision. in guest rooms there should be a smoke alarm and sprinkler system to protect guests and the resort investment.
3. organization of a security system. the combination of guest enjoyment and security often requires a delicate touch in a resort environment, so as not to spoil the vacation experience. this means security should be present, but as invisible as possible. in a casino resort the presence of occasional guards can be reassuring, but most of the security surveillance is conducted by closed-circuit television (cctv). in other resorts there may be patrols for the grounds, especially at night, alarms in sensitive areas and instructions to staff and guests about possible dangers. in-room security is aided by instructions via notices or the local hotel channel on the television, and an in-room safe.
when resorts are exclusive in terms of being up-market, providing extensive facilities and attracting wealthy guests, they need to be particularly concerned with security because they can become a magnet for criminal and litigious activity. for example, the jalousie plantation resort and spa on the island of st lucia would offer a tempting target, according to the description offered by pattullo ( ). a security system is only as good as its staff and cannot operate effectively in isolation from general staff and guests. resorts can either hire their own security staff or outsource the responsibility to a professional company. regardless of the approach used, it must be integrated into the daily operations in such a way as to be effective yet undetected. 'management should view the cost of developing a security training program as an investment. a resort in which all employees are attuned to the safety and security concerns can create a safer environment for guests and employees and a more profitable operation in the long run' (gee, : ). there are several aspects to security planning:
■ security staff. in major resorts it is now common to hire a professional security company to provide coverage for key areas and assets. in addition to being skilled in their task, the individuals selected for resorts need to be presentable and able to interact with guests. just as disney theme park street sweepers are trained to know about their theme park and emergency procedures, resort security staff will find themselves called upon for directions and advice by the guests.
■ staff training. even if a resort has a professional security arm it needs to include general staff in its security planning. they should be knowledgeable about basic procedures and able to advise guests. they should learn to keep their ears and eyes open for trouble.
■ records and reports. recording what happens is vital because it can help identify danger spots and become invaluable evidence for insurance and litigation purposes. there are several types of reporting mechanisms: daily activity report, general incident report, loss report, accident report, monthly statistical report.
failure to fulfil the 'duty of care' responsibility may result in a security-related liability law suit. in a suit alleging negligence, the plaintiff (the accuser) must show the defendant (the resort) failed to provide 'reasonable care' regarding 'foreseeable' acts or situations. judges often apportion blame for an accident; that is, a certain percentage of the blame is seen as the responsibility of the operator and the remainder the responsibility of the guest. damages associated with negligence can be of two types:
1. compensatory damages: to compensate for loss of income, and for pain and suffering.
2. punitive damages: to inflict punishment for outrageous conduct and to act as a lesson for others (setting precedence).
examples of the legal consequences of insufficient attention paid to 'duty of care' abound, but in many cases involving private companies the cases are settled out of court with minimum publicity. to illustrate what can happen with regard to apportioned blame and the challenges in safeguarding tourists, the following two published accounts of australian cases are presented.
the first involves a young man who, like many before him, went to swim at a local waterhole in the murray river. he dived from a log in the waterhole and struck the riverbed, suffered permanent spinal injuries and sued the berrigan shire council and the forestry commission of nsw for a$ million in damages. 'both defendants denied liability, but agreed that damages should be assessed at a$ . million' (gregory and hewitt, : ). in the original judgement the presiding judge reduced the assessment by per cent to take account of the plaintiff's share of responsibility, through contributory negligence. for the remaining a$ . million he ordered the council to pay per cent and the commission per cent. as often happens in such cases involving large sums, this judgement was appealed. on appeal, a new judge upheld the decision but placed all the blame and financial responsibility on the council. in his summary the new judge said: council employees were aware of people diving from the log, and of the changes to the riverbed that floods could cause. he said the council had the means and opportunity to put up warning signs and, in the longer term, to remove the log. justice nettle said the council owed (the plaintiff) a specific duty (of care) to take reasonable steps to guard against the risk of harm resulting from the use of the log for diving. but he said the commission had a very different charter and purpose (responsible for managing the forest alongside the river), and arguably no actual knowledge of the use of the log (as a diving platform) (gregory and hewitt, : ). the second case involves a man who, on a warm day, went to sydney's bondi beach for a swim and, like a responsible australian, waded into the sea 'between the red-and-yellow "safe swimming" flags, (where) he dived under a foaming wave and collided with a sand bar' (feizkhah, : ). the plaintiff, who is now a quadriplegic as a result of this incident, claimed that waverley council's life-guards should have put the flags in a different spot or installed warning signs, perhaps 'a fellow diving and a cross through it or some words saying "sand banks"'. a jury agreed, ordering the council to pay (the plaintiff) a$ . million (feizkhah, : ). this judgement not only cost a local council a great deal of money, it sent shock waves through the industry because it meant standard procedures had failed to demonstrate sufficient 'duty of care' in the eyes of the law. the wider implication of this case and others like it has been an increase in claims for negligence and a rise in public liability insurance. feizkhah ( : ) reports that between and the number of public liability claims australia-wide rose by per cent, and total payouts rose by per cent to a$ million; most claims are settled out of court for less than a$ , but 'there is a jackpot mentality, where people with minor injuries see reports of big payouts and see if they can get something too'. one of the most affected tourism activities in this regard has been adventure tourism, an activity closely associated with resorts whose owners, such as international chains and public companies, are often viewed as possessing deep pockets. claims in these activities and areas have been increasing over a long period and have been associated with rising public liability insurance costs that cover not just recorded claims but also the broader costs of global insurance increases.
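the apportionment logic in judgments such as the berrigan case can be illustrated with a small worked example: the agreed damages are first reduced for the plaintiff's contributory negligence, and the remainder is divided between the defendants in the proportions set by the judge. the percentages and amounts below are hypothetical, not the figures from the actual cases.

```python
# a sketch of the apportionment arithmetic in negligence judgments: agreed
# damages are reduced for the plaintiff's contributory negligence, then the
# remainder is split between defendants. all figures here are hypothetical.
damages = 4_000_000              # agreed assessment of damages (a$)
contributory_negligence = 0.25   # share of blame borne by the plaintiff
defendant_shares = {"council": 0.70, "commission": 0.30}

payable = damages * (1 - contributory_negligence)
for defendant, share in defendant_shares.items():
    print(f"{defendant}: a${payable * share:,.0f}")
print(f"total payable: a${payable:,.0f}")
```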
given its importance as a major attraction for many resorts and as a prime source of risk and insurance claims, adventure tourism deserves special attention. depending on the level of risk and size of insurance claims, it can vary from a general security issue to a risk management issue. as can be seen in table . , which represents the insurance claims for a whole state, the number and amounts are relatively minor, although they can be crippling for small businesses with limited resources. when accidents and claims involve major incidents, with extensive pain and suffering, loss of income and possibly life, there is a need for more extensive risk management and control, as will be discussed in the next section.

adventure tourism is a very general term and hence a very inclusive subset of tourism, including a large array of activities. the term implies excitement and a change from normal daily life by pursuing an activity in a different environment. adventure tourism can take many forms because three dimensions have been linked to its structure (page et al., ). these dimensions involve the following characteristics, with an indication in brackets of who is the major player:
■ active-passive dimension (guest). the amount of physical effort a person is prepared to put into the activity is a major feature of adventure tourism. in an active situation the guest is an active participant who, with or without the help of a guide or instructor, is looking for excitement and an adrenalin rush. in a passive situation the guest is a spectator or observer, one who wishes to learn more about the world around them rather than about themselves and their personal limits.
■ hard-soft dimension (business). these are the categories applied by the industry and relate more to the degree of preparation and pampering the guest requires for the activity. a 'soft' activity is one where the guest is able to view the scenery or wildlife from a safe vantage point with low risk of injury. a 'hard' activity is more risky because the guest participates directly in the activity in order to obtain that adrenalin rush and requires more individual attention, before (preparation) and during (guidance) the activity.
■ high risk-low risk dimension (business-guest). this is where guest perception and business management create the preferred and real tourism experience. beedie ( : ) notes correctly that a paradox has been created, 'whereby the more detailed, planned and logistically smooth an adventure tourist itinerary becomes, the more removed the experience is from the notion of adventure'. this helps to explain why 'injury rates do not necessarily conform to the notion of perceived risk', with some soft activities having substantial injury and death rates while some hard activities have far fewer than commonly expected.

in bringing the guest's desires and expectations together with the resort's prepared and staged offerings, adventure tourism has been seen as a natural business opportunity by some (cloutier, ) and a commodification of the human spirit by others (beedie, ). regardless of which interpretation is correct, for resorts it provides a varied and profitable business mix (figure . ). many of the references quoted in this section have extensive lists of activities cited as adventure tourism, but most shy away from classifying them because whatever groups are selected cannot be mutually exclusive, given the three-dimensional characteristics and the varying conditions under which they operate.
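one way to make this three-dimensional framing concrete is to treat each dimension as a score on an activity record. a minimal sketch, where the field names and example values are illustrative assumptions rather than anything from page et al.:

```python
from dataclasses import dataclass

@dataclass
class AdventureActivity:
    name: str
    active: float          # 0 = passive spectator, 1 = fully active participant
    hard: float            # 0 = soft/pampered, 1 = hard/direct participation
    perceived_risk: float  # guest's perception of risk, 0-1
    actual_risk: float     # operator's managed/real risk, 0-1

    def paradox_gap(self) -> float:
        # beedie's paradox: perceived risk can far exceed the managed real risk
        return self.perceived_risk - self.actual_risk

# illustrative scores only; activities resist mutually exclusive grouping
ziplining = AdventureActivity("zipline canopy tour", 0.4, 0.3, 0.8, 0.1)
print(ziplining.paradox_gap())  # 0.7: feels far riskier than it is
```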
for example, skiing's rating would be influenced by its location, whether it be in mountain areas with steep slopes and tricky runs or on the gentler beginner slopes attached to some resorts and urban centres. it would also depend on whether we are considering downhill or cross-country skiing, and of course on the skill and experience of the individual skier. when resorts present hard adventure tourism products they face the challenge of providing an exciting, adrenalin-provoking experience in the safest way possible. in many tropical resorts scuba diving is one of the key attractions, and although not physically demanding it does require a reasonable level of health and fitness. wilks and davis ( : ) observe, however, that a review of consecutive scuba diving deaths found 'that in % of the fatalities there was a pre-existing medical contraindication to scuba diving' and that those people should not have risked a dive. in this respect most dive companies rely heavily on personal honesty when guests fill in preliminary medical questionnaires, and at times this trust is abused through bravado and peer pressure.

at other times the bravado and peer pressure can be laid at the door of the adventure tourism company. this was one accusation and explanation offered in relation to the canyoning tragedy in switzerland that claimed lives in (head, : ). according to morgan and fluker ( : ): clearly the risks associated with this incident were beyond the capabilities of participants. importantly, newspaper reports of the interlaken tragedy speculated that early warning signs of danger had been ignored by the activity's guides. expert opinions of experienced river guides were also reported. these reports expressed serious concerns that the interlaken river guides employed by the adventure company may have been under pressure to put profits before safety, this being compounded by their lack of knowledge of local conditions. the judge apparently agreed, stating the 'safety measures taken by the now defunct adventure world were totally inadequate - with no proper safety training for the employees' (bbc news, ), something that should have been undertaken beforehand, as part of a risk management process.

risk management is a way in which to prepare for the security risk and crisis issues outlined above. it is becoming a significant aspect of resort management given the adventurous nature of many resort activities, their exciting locations, the growing litigious nature of customers, and the growing threat of terrorism. risk management should incorporate the following, which involves considerable overlap with the previous security planning, the difference being that risk management is more comprehensive, including environmental concerns and financial considerations as well as human safety. the unique characteristic of adventure tourism, and of those resorts offering that type of product, is that 'participants are deliberately seeking and/or accepting the chance of sustaining physical injury' (morgan and fluker, : ). this means that for adventure clients perceived risk becomes an important part of the adventure experience, while for the commercial operator the actual and managed level of risk is the real risk, as shown in figure . . when guests pay money for the specialized knowledge, skills and equipment of the commercial provider, 'they reduce their need for risk awareness and responsibility.
this transfer of risk responsibility to an activity operator, arising from the tourist's financial consideration (contract), raises a number of legal and ethical issues' (morgan and fluker, : ). the legal issues revolve around duty of care and individual responsibility; the ethical issues include the paradox that 'accidents can add to the allure of the adventure experience through providing a valid testimony of the risk' (morgan and fluker, : ).

risk management is a rational approach to dealing with real risk. it is about managing risk rather than eliminating it because, as we have seen, some degree of perceived risk is inherent in adventure tourism and many resort locations. but 'it is important to grasp the concept that the level of risk management applied is relative to the tolerance of a specific business and its guests for risk, which can vary substantially from one operator to another' (cloutier, : ). gee ( : - ) has identified a general process that can assist resorts in their management of risk for both adventure tourism and general operations, which consists of four steps.

risk is associated with all aspects of business, and like the adventure tourist, that is part of the thrill for many entrepreneurs and business people. adventure tourism operations must be identified in terms of their real risk, and even when they are outsourced to separate organizations with their own liability insurance, their professionalism and record will still impact on a resort's reputation and business. asset risks involve the major investment in property and facilities that need to be protected. identifying areas of particular danger and hazard is an important first step, such as the build-up of undergrowth and leaf-litter in woodland areas; the presence of currents or steeply shelving beaches along the beach-front; the physical dangers associated with wastewater treatment plants and with electrical substations; and the ever-present danger of fire when people are relaxed and having fun.

income risks are a major concern for resorts, which have a high dependence on external conditions often beyond their immediate control. anderson ( : ), in the introduction to her article on crisis management in the australian tourism industry, lays out a catalogue of disasters that have befallen that country's tourism over the past years:
■ - pilot's strike
■ - gulf war
■ - asian economic crisis
■ - dot com crash
■ - collapse of hih insurance company (which was the major public liability insurer in australia; with its demise there were major increases in insurance premiums for everyone)
■ - world trade centre attacks
■ - demise of ansett airlines (which had a per cent market share of the domestic airline business at the time)
■ - bali bombings, which killed people
■ - iraq war
■ - outbreak of severe acute respiratory syndrome (sars)
as if this were not enough, some countries could add the outbreak of foot and mouth disease, avian flu epidemics, further terrorist attacks and unreliable weather. although in australia and most countries tourism has recovered from such experiences, the industry has learned valuable lessons along the way. one has been to recognize the potential loss of business that can occur through interruption or damage, and to prepare for it through contingency planning and market diversification.

legal liability risks are increasing as society becomes more litigious.
resorts as businesses are responsible to their guests, employees and shareholders, all of whom are better educated regarding their rights and are more prepared and able to exercise those rights in court if need be. liability insurance is now a major cost factor that all businesses, including resorts, must consider. loss of key personnel risks are often under-appreciated until such a situation occurs and the resort discovers how much a certain individual contributed to the business' overall attraction. a key person like a chef, an entertainer or an instructor whose skills and special qualities are hard to replace can leave a big hole in a resort's reputation. these people should be identified and retained wherever possible, and if they are lost to illness or poaching then succession plans should be in place.

to control the frequency and magnitude of losses due to risk it is essential to develop recording procedures and to create a repository of past records. detailed record keeping is a key to identifying where and when risks are occurring, and the staff who were or should have been involved. if personal injury is involved it is particularly important to obtain independent witnesses to the incident, in case there are later legal or insurance claims. such data should be recorded in a central registry on a daily and weekly basis and deposited in an appropriate computer database (a minimal sketch of such a registry is given below). this will provide important information regarding the safety record, or otherwise, of individual operations and the total resort. such data will prove useful when negotiating liability insurance or demonstrating the resort's actual duty of care record.

risk reduction. business, like life, is never risk free, so in designing and operating a resort one important emphasis is safety - for guests and staff. many of the common dangers like food, health and safety, and fire are regulated and controlled by local by-laws or ordinances. however, such statutes often involve the 'minimum acceptable' precautions, so a resort may choose to follow the walt disney world lead and select higher standards. this will mean higher initial building costs; however, it should reduce both the associated risks and annual insurance premiums. 'one of the most rewarding loss-control projects is training personnel to think in terms of accident and loss avoidance' according to gee ( : ). this becomes particularly important in the operations phase and needs to be emphasized as part of the resort's duty of care. when accidents occur they become significant 'moments of truth' for the guests, and if they are handled in an empathetic and professional manner many later difficulties can be avoided.

most businesses, including resorts, can absorb small and infrequent losses brought on by seasonal fluctuations or occasional accidents, but will need to transfer the risk of large business interruptions or liability claims to outside suppliers of such coverage, such as insurance companies or brokers. small losses and claims will still be a matter for management's discretion even when they have insurance coverage, due to the deductible or excess clause associated with their insurance premium. most personal injury and damage claims can be accommodated within these parameters and involve negotiation with the affected parties rather than court cases, but if a matter becomes a major contentious court case insurance providers will become involved. resort owners can also transfer risk to other parties via non-insurance responsibility.
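as a brief illustration of the central incident registry described above, here is a minimal sketch. it assumes a simple sqlite store, and the report categories mirror those listed earlier in the chapter; the table and field names are hypothetical:

```python
import sqlite3

REPORT_TYPES = ("daily activity", "general incident", "loss",
                "accident", "monthly statistical")

conn = sqlite3.connect("resort_registry.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS incident_reports (
        id          INTEGER PRIMARY KEY,
        reported_on TEXT NOT NULL,      -- ISO date
        report_type TEXT NOT NULL,      -- one of REPORT_TYPES
        location    TEXT,               -- e.g. 'beach-front', 'pool deck'
        staff       TEXT,               -- staff involved or on duty
        witnesses   TEXT,               -- independent witnesses, if any
        details     TEXT
    )
""")

# weekly roll-up: where and how often incidents occur, the kind of
# summary useful when negotiating liability insurance or demonstrating
# a duty of care record
for row in conn.execute("""
        SELECT location, report_type, COUNT(*) AS n
        FROM incident_reports
        GROUP BY location, report_type
        ORDER BY n DESC"""):
    print(row)
```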
in terms of major recreation equipment, like ski lifts or spas, risk can be transferred by encouraging the supplier to guarantee the safety of its equipment if it is properly installed and operated. this often involves the supplier installing the equipment and certifying the operators. in terms of more hazardous adventure tourism activities, the resort can sub-contract these activities to specialist operators who carry their own independent insurance policies.

cloutier ( : - ) provides some insights into actual risk management, as seen through the eyes of will leverette, who offers simple guidelines based on the real experience of lawsuits and consultations with adventure tourism companies. according to leverette there are six basic rules to follow:
1. develop a means to prove that guests were adequately warned and informed.
2. any guarantee of safety made in a business' literature or marketing materials is an open invitation to be sued.
3. all field staff must have current training in basic first aid.
4. the business should develop a written emergency/evacuation plan for all areas and activities to be used.
5. one good witness statement will shut down a frivolous lawsuit faster, more cheaply and less painfully than will anything else.
6. the business must use a properly drafted liability-release form. (this author's emphases)
such personal experiences are a guide as to how today's risk management is evolving into a legal discourse over duty of care, but when there is a major disruption to business through some form of disaster the emphasis changes from prevention to the rescue and recovery of crisis management.

'crisis management is an extension of risk management' according to the pacific asia travel association (pata, : ). risk management can be viewed as management initiatives designed to minimize loss through poor decision-making; but it can also be viewed as an important proactive step in reducing the dangers of catastrophic business collapse due to crisis or disaster. the pata booklet presents a 'four r' step process to crisis management:
1. reduction - detecting early warning signs;
2. readiness - preparing plans and running exercises;
3. response - executing operational and communication plans in a crisis situation;
4. recovery - returning the organization to normal after a crisis;
where risk management practices would dominate the first two steps. risk management procedures that help to identify safety and security weaknesses in an operation will not only help to minimize danger and loss, they will expose the weak points in case a crisis occurs.

crisis in a literal sense represents a moment of acute danger or difficulty, which in terms of the tourism industry has been defined as: an unwanted, unusual situation for an organization, which, due to the seriousness of the event, demands an immediate entrepreneurial response. (glaesser, : ) this approach places the emphasis of crisis management on the response and recovery phases of a crisis and brings it into line with disaster planning, which has its own four-stage process of assessment-warning-impact-recovery (foster, ). it is natural disasters which often trigger crises within the tourism industry, be they earthquakes (kyoto - ), volcanic eruptions (mount st helens - ), forest fires (yellowstone national park - ), tsunamis (phuket - ) or hurricanes (new orleans - ). one should not forget that human beings can and do create their own crises for tourism and the resort industry.
wars, terrorism and political decisions can bring about dramatic declines in visitation due to the prospect of danger or the removal of access. king and berno ( ) provide a good example of this in their analysis of the impact of fiji's two coups in and on the local tourist trade, which is heavily resort-oriented. they note that: like many other tropical island nations, fiji has long-established procedures in place to deal with emergencies such as cyclones. this meant that the overall preparedness in was relatively good, with provision to contact hotels across the country with advice on how to react (king and berno, : ). such an experience demonstrates that strong synergistic links can exist between natural disaster and human crisis management.

from past crisis experiences ritchie ( ) sets out a strategic framework for the planning and management of crises by public or private organizations. his model outlines three main stages in managing such incidents: prevention and planning; implementation; and evaluation and feedback. ritchie's three strategic management stages, along with faulkner's earlier crisis stages and their ingredients, have been combined with more resort-oriented issues and actions in table . . if the process is divided into the three phases of pre-, actual and post-crisis it is possible to determine a clear pattern of events and responsibilities for resorts.

pre-crisis for most resorts will be some form of preparation for a future possible disaster. whether that be an 'inevitable' physical disaster like an earthquake in certain regions or a possible negative political change, such major 'unthinkable' events need to be considered and prepared for. the recommended and common approach at this phase is to develop contingency plans: plans for actions that will mitigate the disaster's effects. this involves recognizing the potential scale and frequency of the expected disaster, and planning accordingly. in terms of the guests, all staff must be trained in emergency procedures and have a role to fulfil. in most cases there will be an obvious overlap with regular security measures such as fire drills, but in terms of major disaster preparation additional factors will need to be considered and plans prepared.

for human-created crises the degree of notice may be shorter than for natural disasters, but there will usually be a warning period of political unrest. in this case the experts will be political commentators and the global news services, and it will be up to owners and managers to keep abreast of their regional situations. more governments are becoming involved in this scanning process on behalf of their travelling citizens, and are offering travel advisories. for many resort destinations these advisories have both positive and negative features. on the positive side they provide up-to-date and comparable risk assessments for tourists; on the negative side they are susceptible to political influence and may not be as objective as one would hope; plus they paint the risk picture with broad strokes, so that relatively safe enclaves become included in the national summary. glaesser ( : - ) provides an example of how one crisis, the bali bombing of october , produced various levels of advisory notice in europe, ranging from low-level general security advice to warnings against travel under any circumstances. during the actual crisis there is going to be chaos and confusion, and it is the prepared and cool-headed who will prevail.
hence the key management task at this stage is to have staff able to implement the contingency plan and empowered to show initiative when conditions do not exactly follow the predicted pattern of events. a major disaster or crisis is likely to affect more than one resort or destination, so collaboration with other tourism organizations and government agencies will be essential. ideally, in the imminent stage all or most guests will have been evacuated with the help of public agencies and industry partners to safer areas, but generally a skeleton staff of key personnel will be required to stay on site as long as possible to ensure the safety of assets.

one of the biggest challenges during this period will be media relations, for in today's global village a disaster or crisis attracts the attention of the world and a media frenzy erupts. glaesser ( : ), in his discussion of communications policy with respect to a crisis, states 'the principle (sic) task is to convey information with the aim of influencing and guiding consumer behaviour, opinions and expectations'. to achieve this he advocates that the affected organization create a quick understanding of the situation and a transparency in its preparation and response, to build the credibility of the business. he recommends (glaesser, : - ) a communication process that follows the sequence of:
1. portray the dismay and responsibility of the organization.
2. describe decisions and measures introduced to cope with the crisis.
3. indicate, based on the current experience, what further measures will be introduced to avoid future repetitions.

in the case of major disasters and crises, resort destinations and businesses will need to collaborate with central and regional governments, for it is they alone who can mobilize the resources needed to handle major catastrophes. the evacuation of guests to other areas will require bus and truck transportation and possibly air evacuation. the chaos and confusion of a crisis provide the opportunity for crime and lawlessness, so governments will need to bolster regular police forces with federal police and possibly troops. in the case of the new orleans hurricane of such a government response was widely criticized for its slowness and inadequacy.

since much of the world's resort business occurs in the developing world, when a major disaster or crisis occurs the host communities often need international assistance. it is at such times that the international community has revealed its better nature, ignoring old differences to come to the aid of fellow humans suffering from the ravages of nature or the actions of a few. unfortunately, the generosity of individuals, charities and nations is not generally put to the best use. this is partly because many receiving nations have neither their own contingency plans for such disasters nor the organization and infrastructure to handle a major in-pouring of generosity; and partly because some of the recovering nations have been susceptible to pilfering and corruption, which results in funds and materials not reaching the designated destinations and sufferers. given the occurrence of irregularities and disappointments with various international aid programs, it is not surprising that the recovery period is often much longer and more difficult than many expect. the post-crisis phase occurs when the actual crisis has abated and is out of the international news headlines, and represents the business and community efforts to return to pre-crisis conditions.
as many students of disaster/crisis management indicate, this is not only the time to get back to business as quickly as possible, it is an opportunity to redevelop and learn from the mistakes of the past. an immediate concern is to take advantage of the media attention that will have placed the resort destination in the world headlines, by demonstrating that the crisis has passed and life is returning to normal. it is quite likely that the global public has been presented with an inaccurate and exaggerated picture of local devastation, which needs to be remedied as quickly as possible. for example, when hurricane iwa struck the hawaiian island of kauai in november some international news media were reporting that part of the island had sunk and thousands of people had died, which was far from the truth, although the hurricane caused a lot of physical damage (murphy and bayley, : ). such stories need to be refuted, and information about the undamaged areas and recovery put in their place wherever possible.

when it comes to repairing the damage an opportunity is presented to upgrade a resort's infrastructure and facilities. there will always be improvements (real or imagined) that the consumer society has developed since the original building of a resort destination which can and should be integrated into the new resort. there will be lessons learned from the disaster or crisis that can be incorporated into the design of the new resort which will make it more disaster-proof in the future. one example of how a disaster can be turned into a positive for tourism occurred with the mount st helens eruption. prior to the eruption mount st helens was a relatively quiet tourist attraction, appealing mostly to outdoor recreationists and offering mainly basic facilities. the publicity of the eruption, including dramatic television coverage, increased interest in the volcano to the extent that 'a national monument was created by setting off acres from the existing national forest to commemorate the eruption . . . a new visitor center was opened in december , with illustrations and other graphics that depict the events and the subsequent natural regeneration of the devastated areas' (murphy and bayley, : ). this new visitor centre has good access to the local interstate freeway and now many more visitors are drawn to the area, incorporating a wider range of tourist types and market segments than before.

as has been mentioned, life is a risk, and as we modify the earth's environment we are creating new and uncharted conditions that will bring increased risk. signs that business conditions are changing come in various guises. nature seems to be going through a period of instability, with evidence of climate change beginning to take on more significance as scientific evidence points more to a global shift in weather patterns than to normal climatic cycles. climate change is a serious and urgent issue. while climate change and climate modeling are subject to inherent uncertainties, it is clear that human activities have a powerful role in influencing the climate and the risks and scale of impacts in the future. all science implies a strong likelihood that, if emissions continue unabated, the world will experience a radical transformation of its climate.
(stern, : ) even the conservative periodical the economist has come to the conclusion that 'the chances of serious consequences are high enough to make it worth spending the "not exorbitant" sums needed to try to mitigate climate change' (the economist, b: ). resort tourism is particularly vulnerable to climatic change given that many resorts are located in high-risk areas like mountains and tropical beaches, and it uses high levels of energy to draw in its guests and of on-site water to keep them happy. war has been joined by global terrorism as a major disruption and deterrent to travel, with tourists seen as 'soft targets' bringing maximum exposure to the terrorist cause. an ever-crowded world with over-stretched medical systems appears to be waiting for the next pandemic, with the recent outbreaks of sars ( ) and avian flu ( ) revealing a certain lack of control and openness in dealing with global health crises.

under these circumstances one of the most direct business signs of change has been the dramatic increase in insurance premiums that everyone seems to have faced in this new millennium. increased liability insurance has put some single-owner peripheral tourism operations out of business; regular security insurance as well as liability insurance has risen dramatically for resorts; and resort destinations in vulnerable locations are facing either dramatic increases in premiums or the loss of direct insurance. after a disastrous , with insured losses of $ billion in the us, of which $ billion was caused by hurricane katrina, american insurers 'are cutting back their exposure in coastal areas . . . home owners who can get insurance coverage face sharply higher rates. some premiums have risen by as much as % . . . many residents cannot get private coverage at all. as a result, state-backed insurance plans, meant to provide coverage as a last resort, are being inundated' (the economist, a: ). this is only the tip of the iceberg according to figures provided by winn and kirchgeorg ( : ), who quote table . from the topics geo report, which shows 'the number of natural catastrophes rose nearly fivefold (and) economic losses nearly fold over the last five decades'.

winn and kirchgeorg use such information to suggest business in general needs to rethink its strategic approach to the environment and sustainable development. they view past and present management interest in environmental management and sustainable development as an 'inside-out' approach, 'one in which the primary perspective is to look from the firm out at the external environment' (winn and kirchgeorg, : ) that includes ecological and social considerations. but given the dramatic external forces in nature, politics and health which are leading to new levels of uncertainty in the ecological and societal realms, they see the need for 'a radical departure from the inside-out perspective of environmental management and its more systems theory-informed cousin from sustainability management' to one where sustainability management should be 'expanded and complemented, and may even need to be substituted by conceptual frameworks fairly new to organization theories, such as "resilience management" or "discontinuity management"' (winn and kirchgeorg, : ). this is because, if we are facing significant shifts in environmental and political conditions, the balancing nature of sustainable development will no longer apply in an unstable world.
rather, business will need to take on board the possibility or probability of structural shifts and the prospect of facing several global emergencies during its lifetime. 'since ecological global systems cannot be affected significantly by actors in the short-term (the inside-out approach), broader adaptive behaviors that secure the survivability of the economy and society become increasingly relevant. crisis management, risk management, and emergency responses need to be supplemented with long term management for survival' (winn and kirchgeorg, : ). to put such thoughts into practice will require guidelines that incorporate all the knowledge that has been accumulated to date on risk and crisis management, supplemented by a broader and more collaborative approach to business survival and sustainability.

given the noted exposure of tourism to the environmental, political and medical forces that seem to be in flux, it is not surprising to find some are already thinking along these lines. santana ( : ) considers that if decision-makers acknowledge that in these complex and unpredictable times in which we live and operate anything is possible, including a major crisis that may prove devastating to their organizations, management will be in the 'right frame of mind' to accept the contention that forms the basic foundation of crisis management: proper advance planning. as santana points out, with the increasing impact of external forces on tourism operations, crisis should be looked upon as an evolving process in itself, one that develops its own logic and consequences, rather than be treated as an isolated event. 'it is the degree to which management heeds the warning signals and prepares the organization (that) will determine how well it responds to the impending crisis' (santana, : ).

likewise, hall et al. ( : ) note that tourism and destinations are 'deeply affected by the perceptions of security and the management of safety, security and risk'. they think the concept of security has broadened significantly since the end of the cold war, with a dominant single political enemy being replaced by terrorism, wars of independence, indigenous rights, and religious differences. 'security ideas now tend to stress more global and people-centered perspectives with greater emphasis on the multilateral frameworks of security management' (hall et al., : ).

one of the few at this point to provide practical guidelines for sustainable crisis management in tourism is de sausmarez ( ). de sausmarez maintains that many future crises will require careful detection and collaborative efforts to minimize their impact, and she has outlined a six-step approach to tackle such major external threats. we, as students of tourism, know tourism and resorts are important socio-economic functions for many people, communities and nations, but we cannot assume others give tourism the same degree of importance in the bigger picture and more comprehensive view of events, including crises. hence, the first step in the establishment of a national or regional crisis management policy is to determine and demonstrate the relative importance of the tourism and resort sectors. until this has been done, it is impossible to prepare any sound strategy for the response to a crisis.
this was well illustrated by sharpley and craven ( ), who show that even though tourism contributes substantially more to the british economy than agriculture does, the british government's response to the foot and mouth crisis in was to slaughter rather than vaccinate animals and to close the countryside to visitors, moves which favoured the agricultural sector rather than the tourism sector and cost the taxpayer substantially more than was necessary. (de sausmarez, : ) it is only when tourism in general and the resort component in particular are shown to be significant local and regional socioeconomic activities that governments and planners will consider them seriously and integrate their needs into macro-crisis management planning.

if resorts and tourism are to integrate crisis management with their sustainable development philosophy they will need to identify the anticipated areas of greatest risk. in the literature and this chapter the emphasis has been on natural disasters, which are essentially supply-side characteristics as they change or eliminate the attractiveness of a destination. however, just as important are demand-side characteristics such as international political relations affecting visa requirements, economic conditions affecting the ability to travel, world health and safety, and competition from other destinations and leisure activities. although none of these supply and demand risks will fall under the direct control of resort management, knowledge of their existence and development will be essential for future strategic planning, and should be used to lobby government.

while it is important to scan the environment continuously, it is equally important to be able to measure trends in a relevant and timely manner. the evidence of global warming is building momentum, but it is often sending out confusing and at times conflicting information, little of which has any bearing on a single location or site. managers and owners of resort properties need to know what this impending crisis means for them specifically. de sausmarez ( : ) maintains tour operators and travel agents, along with government agencies, are in a 'strong strategic position to monitor and assess changes in the tourism status quo as they have access to data on both supply and demand'. she notes that the world tourism organisation's ( ) recommendation after the asian financial crisis of was that destinations should develop three categories of indicators to warn of impending crises:
i. short-term indicators of up to three months, which include advance bookings from key markets, or an increase in the usual length of time needed to settle accounts.
ii. medium-term indicators, with a lead time of - months, such as that needed for tour operator allocations and take-up to be recorded.
iii. long-term indicators, with a lead time of a year or more, which include planned capacity developments, international currency valuations and trends in gdp, interest rates and inflation in key markets.
to which sustainability crisis management would add a fourth category:
iv. future indicators, with a lead time of - years, which covers the life of most mortgages and leases and provides sufficient time to determine whether the current climatic experiences are long-term phenomena or cyclical aberrations.
these indicators would subjectively convert the environmental, political, business and health trends into local and more useful indices (a minimal sketch of how the short-term category might be monitored is given below).
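the sketch below shows one way the short-term, booking-based indicator could be tracked in practice; the baseline window and the warning threshold are illustrative assumptions, not part of the wto recommendation or de sausmarez's approach:

```python
from statistics import mean

def booking_warning(weekly_bookings, baseline_weeks=52, recent_weeks=4,
                    threshold=0.8):
    """Flag a short-term early-warning signal when recent advance
    bookings fall well below their historical weekly average.

    weekly_bookings -- chronological list of advance bookings per week
    threshold       -- fraction of baseline below which to warn (assumed)
    """
    baseline = mean(weekly_bookings[-(baseline_weeks + recent_weeks):-recent_weeks])
    recent = mean(weekly_bookings[-recent_weeks:])
    return recent < threshold * baseline, recent / baseline

# hypothetical data: a stable year of ~200 bookings/week, then a slump
history = [200] * 52 + [150, 140, 120, 110]
warn, ratio = booking_warning(history)
print(warn, round(ratio, 2))  # True 0.65
```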
such indices would be subjective because they would depend on local knowledge to disaggregate the global information meaningfully, and that process would be influenced by the outlook of the assessor, be they an optimist or a pessimist.

the type of global crises that tourism may be facing will be sufficiently large-scale and evolving that they will require collaboration to implement an effective management strategy. this means that responsibilities and coordination plans need to be drawn up at an early stage and should cover three essential areas: a speedy response; appropriate measures in terms of the local needs of impacted areas; and communication and coordination between different levels of jurisdiction and different sectors.

the development of a crisis plan. the development of a national crisis management plan is itself an example of macro-level proactive crisis management (de sausmarez, : ), and a considerable achievement in itself. such plans need to be flexible as we can expect a series of different crises in the future, with varying local and regional impacts within national jurisdictions. an important part of any large-scale crisis management plan will be media relations, relying on the various forms of media to distribute the relevant information as quickly and effectively as possible and being transparent about the severity of the crisis and the remedies being undertaken. in some countries there may already be plans in place to cope with anticipated natural crises such as cyclones (fiji) or earthquakes (pacific northwest of america) that can be extended to include other forms of possible crisis. such plans would need regular re-assessment by government departments and the private sector, but can build an invaluable data bank and procedural map. in an asian context de sausmarez ( : ) feels communication and inter-agency cooperation needs to overcome the perceptual association between a crisis and 'loss of face', as is claimed to have occurred with the sars outbreak in east asia in . in terms of global warming and its associated crises no single country or person is to blame; we all need to take joint responsibility.

the potential for regional cooperation. although de sausmarez focusses on the creation of national crisis management, she recognizes that global issues like climate change require an even larger operational scale. no single country can be isolated from its neighbors, as 'was clearly illustrated by the decline in tourism to southeast asia following the bali bombings in october ' (de sausmarez, : ). she points out that the combined effort of the association of southeast asian nations (asean) was able to effectively contain the sars epidemic through regional preventative action in , and that success has been repeated with the later outbreak of avian flu in east asia.

the six steps outlined by de sausmarez do not follow a natural linear sequence, but should be viewed more as a continuous dynamic process which has been divided up to permit closer examination and appreciation of its component parts. the whole process depends on continual learning and adjustment, so as to be responsive and flexible in the face of future crises. for resorts, which have survived as a separate form of tourism since the early days, it becomes imperative to embrace risk and crisis management as a central part of their business strategy. this chapter has discussed the growing importance of risk management to resorts as our business and natural environments have changed.
although financial risk has been a constant within business, it is only in recent times, with the rise of a litigious society and a less stable natural environment, that it has become a more general and important issue for management. its increased prominence in business and society now means resorts should make it a key feature of their strategic management, and possibly their central concern. risk management does make a logical central theme for resort management in that it provides a focussed context for its past, present and future directions. the past experiences and business literature (part a) provide a guide as to what management may expect today, and the general level of risk associated with most options. present management must consider both external factors (part b) and internal strategies (part c) to create the most viable and sustainable options for today's resorts. the future can be extrapolated from the present if global business conditions change slowly and in a familiar manner, as predicted in the forecast for a growing seniors' market for resorts. but what if we are experiencing major changes to the physical well-being of our planet and in human behaviour towards one another? the risk factors that we have calculated from the past and present may no longer work so well or even apply, and we will need to enlarge our risk management process to incorporate more fluid business and environmental situations (part d).

the purpose of this chapter and book is to encourage the reader to think about the wonderful legacy that has been provided by resorts; how we should strive to ensure resorts continue to delight our senses and educate us about our planet and its various cultures; and how we can achieve this through appropriate business management, even in this era of global change. a risk management focus would not only assist with the general sustainability objectives of a resort business; it can also help position resorts at the forefront of monitoring and adjusting to the predicted changes in our natural and human environments. the chapter and book close with an examination of one recent global crisis, which had a direct impact on resorts throughout a large area of the world, and one resort which learned a great deal from one hurricane season. the indian ocean tsunami of illustrates how we can still be caught unawares by a natural disaster, how such disasters can become international in scale, and, thanks to rising sea levels, may have even more significance in the future. walt disney world resort received its first direct hurricane hit in more than years in mid-august , only to have it followed by three others in the space of six weeks. in the process it learned some invaluable lessons.

the indian ocean tsunami, also known in some quarters as the boxing day tsunami, occurred on december . this tsunami was generated by an earthquake under the indian ocean near the west coast of the indonesian island of sumatra, and is estimated to have released the energy of hiroshima-type atomic bombs. by the end of that day 'more than people were dead or missing and millions more were homeless in countries, making it perhaps the most destructive tsunami in history' (national geographic news, : ). these figures were subsequently revised upward, so that now the indian ocean tsunami is estimated to have 'left people dead or missing' (guardian unlimited, : ). if this terrible natural disaster is examined using the threefold strategic action template of table . , certain key crisis management lessons emerge.
at the pre-crisis stage little formal preparation had been undertaken at either government or resort levels of responsibility. this is understandable because there had been little history of major tsunamis in the indian ocean, the last being associated with krakatoa's eruption in , but unforgivable because tsunamis remain a risk in oceans with volcanic and tectonic activity. what was missing was both an early warning system of seismic buoys and a way to convey that information to potentially threatened areas, so they could instigate evacuation plans. while the 'pacific tsunami warning centre in hawaii had sent an alert to countries, including thailand and indonesia, (it) struggled to reach the right people. television and radio alerts were not issued in thailand until a.m. local time - nearly an hour after the waves had hit' (global security, : ). in this case there had been no regional forecasting or risk analysis and there was no internationally coordinated contingency plan to deal with such a situation. the result was that even if a coastal resort had its own evacuation plan there was nothing to trigger it until the arrival of the first wave, and by then it was too late.

the actual crisis stage was viewed by millions of us around the globe, as we were able to watch tourists' video camera images of this spectacular and unusual sunday morning event on our television screens. the world press immediately brought us these graphic images to go along with the rising death and damage statistics, so that once again the selective reporting of a natural disaster convinced many that the whole region had been devastated. this was particularly the case with phuket island, where the images of destruction at patong beach on the west coast were transformed to represent the whole island in the minds of many, even though phuket is a large island with many separate resort enclaves scattered around its varied shoreline, and many of them were untouched by the tragedy. this confirms the need for control over communications, to ensure reporting remains factual and in proportion, rather than sensational and exploitive.

the post-crisis stage represents an opportunity to learn from the crisis and to rebuild. this is certainly the case with the indian ocean tsunami. the biggest weakness was the lack of information and warning, which prevented the implementation of effective contingency planning. this is now being addressed with the building of the indian ocean tsunami warning system in . this system has been coordinated by the united nations' educational, scientific and cultural organization (unesco) and consists of new seismographic stations, supplemented by three deep-ocean sensors, to provide the required early warning. but this is just the start, for the information needs to get to the areas around the indian ocean that are likely to be affected and the people in those areas who need to know what actions to take. therefore, unesco is continuing to work on international coordination and with governments to provide grassroots preparedness (terra daily, ). unesco is providing expertise to assist with mangrove, sea grass and coral reef rehabilitation; it is strengthening disaster preparation for cultural and heritage sites and integrating this into its reconstruction processes; and it is teaching tsunami awareness in schools, training decision-makers and broadcasters and staging local practice drills. the recovery is well underway around much of the indian ocean.
in phuket, where the damage was highly localized, patong beach showed no outward sign of the tsunami by october , when the author paid a visit. the local tourism industry and english-language newspaper reported that while business had been slow in the months immediately following the tsunami, things had started to pick up around june, and 'we expect it will be per cent to per cent from new year ( ) to the end of march (high season)' (phuket gazette, : c).

another example of crisis recovery is provided by the maldives. like all low-lying islands the maldives are particularly susceptible to this form of disaster; thousands of local inhabitants lost their homes and were killed in the tragedy. however, only two tourists lost their lives, and although most resorts were damaged their 'higher construction standards (meant they) withstood the waves much better than local housing did' (travel wire news, : ). consequently, it did not take most resorts long to rebuild and re-open, but in the process local businesses and government wanted to be better prepared for the future. five months after the tsunami swept across these islands in the indian ocean, the tourism sector and government agencies are cooperating to ensure that low-lying resorts and the nation's airport are better equipped to handle any type of emergency. (travel wire news, : ) among the changes proposed are improved communications through the installation of satellite telephones on each island and a centralized emergency information command. new resort regulations will require evacuation plans and emergency supplies. a higher seawall around the airport and safeguards for electrical power supplies are also being considered.

these and other accounts of the indian ocean tsunami indicate the challenges facing the resort sector with today's concerns over global warming and the negative impact of news coverage of such disasters. major tsunamis are fortunately rare events, but this case has demonstrated the need for some international warning system, so that regional and local contingency plans can be put into operation to minimize the impact. this will clearly require coordination at government levels and the will to maintain vigilance and training over long time periods between natural disaster events, something that will test human nature to the full. one also has to ask: if future tsunamis are associated with the rising sea levels of global warming, will such improvements be enough? this is the type of question that some academics and researchers are asking us to consider, and one that should certainly be examined in terms of the sustainability and risk management of many resorts and their relevance in an era of possible climatic shifts.

this case is based on an article by barbara higgins ( ), who was director of operations integration for walt disney world resort when four hurricanes impacted the resort's operations in , providing an invaluable learning opportunity for it and other resort operations. walt disney world's hurricane plans, as part of its general emergency planning, had definite priorities and procedures. priorities included (higgins, : ):
■ keep guests safe;
■ keep employees safe;
■ have a thoughtful plan for tie-down, ride-out and recovery; and
■ provide the ability to get our parks open and operating as soon as possible after the storm.
the procedures were designed to account for varying hurricane strengths, and whether the hurricane involved a direct hit or a near miss in terms of its path across central florida.
to prepare for this walt disney world has instituted a five-phase approach to its hurricane preparedness, with each phase being selected in consultation with the national hurricane centre and local authorities. in order of escalation, the phases involve:
■ reviewing hurricane plans and verifying contact numbers for employees;
■ further review of plans and the beginning of preparation for possible shutdown of long-lead-time operations;
■ delivery of predetermined emergency supplies, clearing the site of loose materials, anchoring lightweight equipment and buildings to the ground where relevant, and managers evaluating a move to the next phase;
■ guests and essential staff taking shelter in hurricane-proofed buildings or beginning evacuation;
■ all activities closed down, with only essential ride-out crews remaining in designated shelters.

despite these plans and the thoroughness of preparation, the sequence of four very different hurricanes revealed some additional factors and priorities. one major lesson from that summer's hurricane experience is that no two hurricanes are alike, so a resort can only prepare for hurricanes in general and not the specific one(s) that come its way. 'the first lesson we learned was that our rigorous plans were only guidelines that needed to be flexible enough to adjust to changes dictated by our circumstances' (higgins, : ). the most important elements in the general emergency plans turned out to be:
■ maintaining guest and employee communication, letting them know about the impending storm and providing the relevant information regarding each phase's action plan;
■ operating the food service, with the provision of hot meals being the biggest priority;
■ offering in-resort entertainment to guests who were room-bound for many hours;
■ preparing guests for confinement in their rooms over long periods, which is not what they came to the resort to do;
■ arranging for the ability to use news media to give (information on park closures and re-openings) and to get (weather details and various local government announcements regarding schools, police and emergency services).

one 'important lesson to be learned in the face of a crisis is to show compassion for your employees and the toll the situation has had upon them, their families and their loved ones' (higgins, : ). it is important to release non-essential staff from their duties as soon as possible so they can attend to the safety of their families and homes as the hurricane approaches. likewise, in the aftermath it is likely some employees will require shelter and hot meals due to the hurricane damage. 'one lesson many floridians learned in the wake of these storms was the high deductible (excess) associated with hurricane insurance claims . . . (as a consequence) we anticipate providing more than $ million to as many as ninety-five hundred employees who desperately need the funds to recover from the damage to their homes' (higgins, : ).

thus, in the end we have a reaffirmation that the business of resort management is 'to think globally but act locally'. although the driver is business and financial concerns, there needs to be an appreciation of the importance of the local environment and community to the long-term success of a resort. furthermore, if resorts are to continue to survive by adjusting to changing social and technical circumstances, they will need to become more proactive with regard to the current climate and cultural changes that face us all.
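as a recap of the five-phase escalation just described, here is a minimal sketch of how phase selection might be encoded; the hour thresholds are invented for illustration, since higgins reports only that each phase is chosen in consultation with the national hurricane centre:

```python
# hypothetical hour thresholds -- the real triggers are set in
# consultation with the national hurricane centre, not fixed in code
PHASES = [
    (72, "review hurricane plans; verify employee contact numbers"),
    (48, "further review; prepare shutdown of long-lead-time operations"),
    (24, "deliver emergency supplies; clear loose materials; anchor "
         "lightweight equipment and buildings"),
    (12, "guests and essential staff shelter or begin evacuation"),
    (0,  "all activities closed; only ride-out crews in shelters"),
]

def current_phase(hours_to_landfall: float) -> tuple[int, str]:
    """Return (phase number, action) for a given forecast lead time."""
    for i, (threshold, action) in enumerate(PHASES, start=1):
        if hours_to_landfall >= threshold and i < len(PHASES):
            return i, action
    return len(PHASES), PHASES[-1][1]

print(current_phase(30))  # (3, 'deliver emergency supplies; ...')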
crisis management in the australian tourism industry: preparedness, personnel and postscript
six guilty in swiss canyoning trial
adventure tourism
legal liability and risk management in adventure tourism
the business of adventure tourism
risk management for tourism: origins and needs
crisis management for the tourism sector: issues in policy development
towards a framework for tourism disaster management
disaster planning: the preservation of life and property
resort development and management
the business of resort management
glaesser
asian tsunami/tiger waves
dive quadriplegic keeps his millions, plans to buy a house. the age
asian nations stage tsunami drill
security and tourism: towards a new understanding
profit drive blamed for swiss canyon tragedy
the storms of summer: lessons learned in the aftermath of the hurricanes of '
tourism and civil disturbances: an evaluation of recovery strategies in fiji
accidents in the adventure tourism industry: causes, consequences and crisis management
topics geo - annual review: natural catastrophes
tourism and disaster planning
the deadliest tsunami in history
tourist safety in new zealand and scotland
crisis: it won't happen to
last resorts: the cost of tourism in the caribbean
chaos, crises and disasters: a strategic approach to crisis management in the tourism industry
crisis management and tourism: beyond the rhetoric
the foot and mouth crises - rural economy and tourism policy implications: a comment
stern review on the economics of climate change. london: h.m. treasury. www.hmtreasury.gov.uk/independent_reviews/stern_review_economics_climate
disaster management: exploring ways to mitigate disasters before they occur
indian ocean tsunami warning system up and running
the price of sunshine: hurricanes and insurance. the economist
the heat is on: a survey of climate change
maldives takes steps to improve crisis response
risk management for scuba diving operators on australia's great barrier reef
the siesta is over: a rude awakening from sustainability myopia
impacts of the financial crisis on asia's tourism sector. madrid: world tourism organisation

key: cord- -haf y authors: offit, paul a.; destefano, frank title: vaccine safety date: - - journal: vaccines doi: . /b - - - - . - sha: doc_id: cord_uid: haf y

during the past years, pharmaceutical companies have made vaccines against pertussis, polio, measles, rubella, and haemophilus influenzae type b (hib), among others (table - ). as a consequence, the number of children in the united states killed by pertussis decreased from , each year in the early th century to fewer than ; the number paralyzed by polio from , to ; the number killed by measles from , to ; the number with severe birth defects caused by rubella from , to ; and the number with meningitis and bloodstream infections caused by hib from , to fewer than . vaccines have been among the most powerful forces in determining how long we live. but the landscape of vaccines is also littered with tragedy: in the late s, starting with louis pasteur, scientists made rabies vaccines using cells from nervous tissue (such as animal brains and spinal cords); the vaccine prevented a uniformly fatal infection, but the rabies vaccine also caused seizures, paralysis, and coma in as many as of every people who used it. [ ] [ ] [ ] [ ] in , the military injected hundreds of thousands of american servicemen with a yellow fever vaccine. to stabilize the vaccine virus, scientists added human serum.
unfortunately, some of the serum came from people unknowingly infected with hepatitis b virus. as a consequence, , soldiers were infected, severe hepatitis developed in , , and died. [ ] [ ] [ ] [ ] in , five companies made jonas salk's new formaldehyde-inactivated polio vaccine. however, one company, cutter laboratories of berkeley, california, failed to completely inactivate poliovirus with formaldehyde. because of this problem, , children were inadvertently injected with live, dangerous poliovirus; in , , mild polio developed, were permanently paralyzed, and were killed. it was one of the worst biological disasters in american history. vaccines have also caused uncommon but severe adverse events not associated with production errors. for example, acute encephalopathy after whole-cell pertussis vaccine, acute arthropathy following rubella vaccine, thrombocytopenia following measles-containing vaccine, guillain-barré syndrome (gbs) after swine flu vaccine, paralytic polio following live attenuated oral polio vaccine, anaphylaxis following receipt of vaccines containing egg proteins (ie, influenza and yellow fever vaccines), severe or fatal viscerotropic disease following yellow fever vaccine, possible narcolepsy following a squalene-adjuvanted influenza vaccine, and severe allergic reactions associated with gelatin contained in the measles-mumps-rubella vaccine are problems associated with the use of vaccines, albeit rarely. as vaccine use increases and the incidence of vaccine-preventable diseases is reduced, vaccine-related adverse events become more prominent in vaccination decisions (figure - ). even unfounded safety concerns can lead to decreased vaccine acceptance and resurgence of vaccine-preventable diseases, as occurred in the s and s as a public reaction to allegations that the whole-cell pertussis vaccine caused encephalopathy and brain damage (figure - ). recent outbreaks of measles, mumps, and pertussis in the united states are important reminders of how immunization delays and refusals can result in resurgences of vaccine-preventable diseases. because vaccines are given to healthy children and adults, a higher standard of safety is generally expected of immunizations compared with other medical interventions. tolerance of adverse reactions to pharmaceutical products (eg, vaccines, contraceptives) given to healthy people, especially healthy infants and toddlers, to prevent certain conditions is substantially lower than to products (eg, antibiotics, insulin) used to treat people who are sick. this lower tolerance for risks from vaccines translates into a need to investigate the possible causes of much rarer adverse events after vaccinations than would be acceptable for other pharmaceutical products. for example, side effects are essentially universal for cancer chemotherapy, and % to % of people receiving high-dose aspirin therapy experience gastrointestinal symptoms. safety monitoring can be done before and after vaccine licensure, with slightly different goals based on the methodological strengths and weaknesses of each step. although the general principles are similar irrespective of country, the specific approaches may differ because of factors such as how immunization services are organized and the level of resources available. vaccines, similar to other pharmaceutical products, undergo extensive safety and efficacy evaluations in the laboratory, in animals, and in phased human clinical trials before licensure.
, phase trials usually include fewer than participants and can detect only extremely common adverse events. phase trials generally enroll to several hundred people. when carefully coordinated, as in the comparative infant diphtheria and tetanus toxoids and acellular pertussis (dtap) vaccine trials, important insight into the relationship between concentration of antigen, number of vaccine components, formulation, effect of successive doses, and profile of common reactions can be drawn and can affect the choice of the candidate vaccines for phase trials. , sample sizes for phase vaccine trials are based principally on efficacy considerations, with safety inferences drawn to the extent possible based on the sample size (approximately to ) and the duration of observation (often < days). typically only observations of common local and systemic reactions (eg, injection site swelling, fever, fussiness) have been feasible. the experimental design of most phase to clinical trials includes a control group (a placebo or an alternative vaccine) and detection of adverse events by researchers in a consistent manner "blinded" to which vaccine the patient received. this allows relatively straightforward inferences on the causal relationship between most adverse events and vaccination. several ways of enhancing prelicensure safety assessment of vaccines have been developed. one of these ways includes the brighton collaboration ( www.brightoncollaboration.org ), established to develop and implement globally accepted standard case definitions for assessing adverse events following immunizations in prelicensure and postlicensure settings. without such standards, it was difficult if not impossible to compare and collate safety data across trials in a valid manner. for example, in the large multisite phase infant dtap trials, definitions of high fever across trials varied by temperature ( . °c vs . °c), measurement (oral vs rectal), and time (measured at vs hours). this was unfortunate because standardized case definitions had been developed in these trials for efficacy but not for safety, even though the safety concerns provided the original impetus for the development of dtap. , the brighton case definitions for each adverse event are further arrayed by the level of evidence provided (insufficient, low, intermediate, and highest); therefore, they also can be used in settings with fewer resources (eg, studies in less developed settings or postlicensure surveillance). another of the recent advances to prelicensure safety assessments of vaccines has stemmed from the recognition of the need for much larger safety and efficacy trials before licensure. because of pragmatic limits on the sample sizes of prelicensure studies, there are inherent limitations to the extent to which they can detect very rare, yet real, adverse events related to vaccination. even if no adverse event has been observed in a trial of , vaccinees, one can only be reasonably certain that the real incidence of the adverse event is no higher than in , vaccinees. thus, to be able to detect an attributable risk of per , vaccinees (eg, such as the approximate risk found for intussusception in the postlicensure evaluation of rotashield vaccine), a prelicensure trial of at least , vaccinees and , control subjects is needed. both second-generation rotavirus vaccines (rotateq and rotarix) were subjected to phase trials that included at least , infants. 
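the sample-size arithmetic behind these statements can be sketched with the "rule of three": when zero events are observed among n vaccinees, an approximate 95% upper confidence bound on the true event rate is 3/n. the short sketch below is ours, not the chapter's, and the worked numbers are purely illustrative:

```python
import math

def rule_of_three_upper_bound(n_vaccinees: int) -> float:
    """Approximate 95% upper bound on the true adverse-event rate when
    zero events are observed among n_vaccinees (the 'rule of three')."""
    return 3.0 / n_vaccinees

def zero_event_trial_size(max_acceptable_rate: float) -> int:
    """Smallest trial in which observing zero events caps the plausible
    rate at max_acceptable_rate with ~95% confidence."""
    return math.ceil(3.0 / max_acceptable_rate)

# A zero-event trial of 10,000 vaccinees only rules out rates above
# ~3 per 10,000; to push the bound down to ~1 per 10,000 (roughly the
# order of the attributable risk later found for intussusception after
# RotaShield), on the order of 30,000 vaccinees are needed.
print(rule_of_three_upper_bound(10_000))   # 0.0003
print(zero_event_trial_size(1e-4))         # 30000
```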
while these trials were adequately powered to detect the problem with intussusception found following rotashield, in general, the cost of such large trials might limit the number of vaccine candidates that go through this process in the future. because rare reactions, reactions with delayed onset, or reactions in subpopulations may not be detected before vaccines are licensed, postlicensure evaluation of vaccine safety is critical. historically, this evaluation has relied on passive surveillance and ad hoc epidemiologic studies, but, more recently, phase trials and preestablished large linked databases have improved the methodological capabilities to study rare risks of specific immunizations. such systems may detect variation in rates of adverse events by manufacturer or specific lot. more recently, clinical centers for the study of immunization safety have emerged as another useful infrastructure to advance our knowledge about safety. in contrast with the methodological strengths of prelicensure randomized trials, however, postlicensure observational studies of vaccine safety pose a formidable set of methodological difficulties. confounding by contraindication is especially problematic for nonexperimental designs. specifically, persons who do not receive vaccine (eg, because of a chronic or transient medical contraindication or low socioeconomic status) may have a different risk for an adverse event than vaccinated persons (eg, background rates of seizures or sudden infant death syndrome (sids) may be higher in unvaccinated people). therefore, direct comparisons of vaccinated and unvaccinated children are often inherently confounded, and teasing this issue out requires understanding of the complex interactions of multiple, poorly quantified factors. informal or formal passive surveillance or spontaneous reporting systems (srss) have been the cornerstone of most postlicensure safety monitoring systems because of their relatively low cost of operations. the national reporting of adverse events following immunizations can be done through the same reporting channels as those used for other adverse drug reactions, as is the practice in france. vaccine manufacturers also maintain srss for their products, which are usually forwarded subsequently to appropriate national regulatory authorities. in the united states, the national childhood vaccine injury act of mandated that health care providers report certain adverse events after immunizations. the vaccine adverse events reporting system (vaers) was implemented jointly by the centers for disease control and prevention (cdc) and the us food and drug administration (fda) in to provide a unified national focus for collection of all reports of clinically significant adverse events, including, but not limited to, those mandated for reporting. the vaers form permits narrative descriptions of adverse events. patients and their parents, not just health care professionals, are permitted to report to vaers, and there is no restriction on the interval between vaccination and symptoms that can be reported. report forms, assistance in completing the form, and answers to other questions about vaers are available on the vaers web site (vaers.hhs.gov). web-based reporting and simple data analyses are also available. a contractor, under cdc and fda supervision, distributes, collects, codes (currently using the medical dictionary for regulatory activities, www.meddramsso.com/index.asp), and enters vaers reports in a database.
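once reports accumulate in such a database, automated screening amounts to asking whether a given vaccine-event pair is reported disproportionately often. the sketch below shows a proportional reporting ratio (prr), a simple disproportionality screen related to, but much cruder than, the empirical bayesian data mining described below; all counts are invented for illustration:

```python
def proportional_reporting_ratio(a: int, b: int, c: int, d: int) -> float:
    """PRR for one vaccine-event pair in a spontaneous reporting database.

    a: reports of the event for the vaccine of interest
    b: reports of all other events for that vaccine
    c: reports of the event for all other vaccines
    d: reports of all other events for all other vaccines
    """
    proportion_vaccine = a / (a + b)  # share of this vaccine's reports naming the event
    proportion_others = c / (c + d)   # same share among every other vaccine's reports
    return proportion_vaccine / proportion_others

# Hypothetical counts: 30 of 2,000 reports for vaccine X mention the event,
# versus 200 of 100,000 reports for all other vaccines combined.
prr = proportional_reporting_ratio(30, 1970, 200, 99_800)
print(f"PRR = {prr:.1f}")  # 7.5; a common screening rule flags PRR > 2 with a >= 3 reports
```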
reporters of selected serious events are contacted by trained clinical staff on report receipt and are sent letters at year after report receipt to provide additional information about the vaers report, including the patient's recovery. approximately , vaers reports are now received annually, and these data (without personal identifiers) are also available to the public (at vaers.hhs.gov and at wonder.cdc.gov/vaers.html). several other countries also have substantial experience with passive surveillance for immunization safety. in , canada developed the vaccine associated adverse event (vaae) reporting system, which is supplemented by an active, pediatric hospital-based surveillance system that searches all admissions for possible relationships to immunizations (immunization monitoring program-active, or impact). serious vaae reports are reviewed by the advisory committee on causality assessment, consisting of a panel of experts. the netherlands also convenes an annual panel to categorize reports, which are then published. the united kingdom and most members of the former commonwealth use the yellow card system, whereby a reporting form is attached to officially issued prescription pads. data on adverse drug (including vaccine) events from several countries are compiled by the world health organization (who) collaborating center for international drug monitoring in uppsala. with so many different passive surveillance systems that collect information on various medical events following vaccination, standardized definitions of vaccine-related adverse events are necessary. in the past, different definitions were developed in brazil, canada, india, and the netherlands. however, implementation of similar standards across national boundaries has been advanced by the international conference on harmonization and the brighton collaboration. vaers often first identifies potential new vaccine safety problems because of clusters of cases in time or space, often with unusual clinical features. for example, in , passive reports to vaers of intussusception among children vaccinated with rotashield were the first postlicensure signal of a problem, leading to epidemiologic studies that verified these findings. similarly, initial reports to vaers of a previously unrecognized serious yellow fever vaccine-associated neurotropic disease and viscerotropic disease have since been confirmed elsewhere. because of the success in detecting these signals, there have been various attempts to automate screening for signals using srss reports. new tools developed for pattern recognition in extremely large databases are beginning to be applied. these include empirical bayesian data mining to identify unexpectedly frequent vaccine-event combinations. vaers has provided some of the first safety data after the introduction of a number of vaccines. vaers has also successfully served as a source of cases for further investigation of idiopathic thrombocytopenic purpura after measles-mumps-rubella (mmr) vaccine, encephalopathy after mmr, and syncope after immunization. when denominator data on vaccine doses distributed or administered are available from other sources, vaers can be used to evaluate changes in reporting rates over time or when new vaccines replace old vaccines.
for example, vaers showed that after millions of doses had been distributed, reporting rates for serious events such as hospitalization and seizures after dtap in toddlers were one third of those after diphtheria and tetanus toxoids and whole-cell pertussis (dtp). because vaers is the only surveillance system covering the entire us population with data available on a relatively timely basis, it is the major means available currently to detect possible new, unusual, or extremely rare adverse events. despite the aforementioned uses, srss for drug and vaccine safety have a number of major methodological weaknesses. underreporting, biased reporting, and incomplete reporting are inherent to all such systems, and potential safety concerns may be missed. - aseptic meningitis associated with the urabe mumps vaccine strain, for example, was not detected by srss in most countries. , some increases in adverse events detected by vaers may not be true increases, but instead may be due to increases in reporting efficiency or vaccine coverage. for example, an increase in gbs reports after influenza vaccination during the to season was found to be largely due to improvements in vaccine coverage and increases in gbs independent of vaccination. an increased reporting rate of an adverse event after one hepatitis b vaccine compared with a second brand was likely due to differential distribution of brands in the public vs private sectors, which have differential vaers reporting rates (higher in the public sector). finally, pending litigation resulted in the filing of a large number of vaers reports claiming that vaccines caused autism. perhaps the most important methodological weakness of vaers, however, is that it does not contain the information necessary for formal epidemiologic analyses. such analyses require calculation of the rate of the adverse event after vaccination and a comparison rate among unvaccinated persons. the vaers database, however, provides data only for the number of persons who may have experienced an adverse event following immunization and, even then, only in a biased and underreported manner. vaers lacks data on the denominator of total number of people vaccinated and the corresponding data on number of cases and denominator population of unvaccinated people. sometimes reporting rates can be calculated by using vaers case reports for the numerator and, if available, doses of vaccines administered (or, if unavailable, data on vaccine doses distributed or vaccine coverage survey data) for the denominator. these rates can then be compared with the background rate of the same adverse event in the absence of vaccination, if available. because of underreporting, however, vaers reporting rates will usually be lower than the actual rates of adverse events following immunization. a higher proportion of serious events, such as seizures, that follow vaccinations are likely to be reported to vaers than milder events, such as rash, or delayed events requiring laboratory assessment, such as thrombocytopenic purpura after mmr vaccination. the reporting efficiency or sensitivity of srss can sometimes be estimated if an independent source of cases of specific adverse events following immunization is available to conduct capture-recapture analyses. such an analysis was conducted to estimate that vaers reporting completeness for intussusception following rotashield vaccine was %. 
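the capture-recapture logic mentioned above can be made concrete with the two-source lincoln-petersen estimator; the counts below are invented and are not the published rotashield data:

```python
def lincoln_petersen(n_source1: int, n_source2: int, n_both: int) -> float:
    """Two-source capture-recapture estimate of the true number of cases.

    n_source1: cases captured by the passive system (eg, VAERS)
    n_source2: cases captured by an independent source (eg, a chart-review study)
    n_both:    cases captured by both sources
    """
    return n_source1 * n_source2 / n_both

# Hypothetical: VAERS captures 60 cases, an independent review captures 80,
# and 40 appear in both lists.
estimated_total = lincoln_petersen(60, 80, 40)   # 120 estimated true cases
completeness = 60 / estimated_total              # VAERS sensitivity = 50%
print(estimated_total, completeness)
```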
formal evaluation has been limited by the quality of diagnostic information on vaers reports, especially the probability that a serious event reported to vaers has been diagnosed accurately. of cases reported to vaers in which gbs developed after influenza vaccination during the to season, and for which hospital charts were reviewed by an independent panel of neurologists blinded to immunization status, the diagnosis of gbs was confirmed in ( %). intussusception was verified in % of vaers reports filed after rotashield vaccination. clinical reviews of vaers reports submitted following h1n1 influenza vaccine were able to verify % of possible gbs reports and % of reports of possible anaphylaxis. clinical review verification rates were similar for vaers reports following human papillomavirus vaccination: % for gbs and % for anaphylaxis. these studies highlight the often crude nature of signals generated by vaers and the difficulty in ascertaining which potential vaccine safety concerns warrant further investigation. the problems with reporting efficiency and potentially biased reporting and the inherent lack of an adequate control group limit the certainty with which conclusions can be drawn. recognition of these limitations in large part has helped stimulate the creation of more population-based methods of assessing vaccine safety. vaccines may undergo clinical trials after licensure to assess the effects of changes in vaccine formulation, vaccine strain, age at vaccination, number and timing of vaccine doses, simultaneous administration, and interchangeability of vaccines from different manufacturers on vaccine safety and immunogenicity. unanticipated differential mortality among recipients of high- and regular-titered measles vaccine in developing countries (albeit lower than among unvaccinated children) led to a change in recommendations by the who for the use of such vaccines. to improve the ability to detect adverse events that are not detected during prelicensure trials, some recently licensed vaccines in developed countries have undergone formal phase surveillance studies on populations with sample sizes that have included as many as , people. these studies usually have used cohorts in managed care organizations (mcos) supplemented by diary or phone interviews. these methods were first used extensively after the licensure of polysaccharide and conjugated hib vaccines. large postlicensure studies on safety and efficacy have also been conducted for several other vaccines, including those for dtap, varicella, and herpes zoster. requirements for phase evaluation have even been extended to less frequently used vaccines, such as japanese encephalitis vaccine. historically, ad hoc epidemiologic studies have been used to assess signals of potential adverse events detected by srss, the medical literature, or other mechanisms. some examples of such studies include the investigations of poliomyelitis after inactivated and oral polio vaccines, sids after dtp vaccination, encephalopathy after dtp vaccination, meningoencephalitis after mumps vaccination, injection site abscesses after vaccination, and gbs after influenza vaccination. the institute of medicine (iom) has compiled and reviewed many of these studies. unfortunately, such ad hoc studies are often costly, time-consuming, and limited to assessment of a single event or a few events or outcomes.
given these drawbacks and the methodological limitations of passive surveillance systems (such as described for vaers), pharmacoepidemiologists began to turn to large databases linking computerized pharmacy prescription (and later immunization records) and medical outcome records. these databases derive from defined populations such as members of mcos, single-provider health care systems, and medicaid programs. such databases cover enrollee populations numbering from thousands to millions, and, because the data are generated from the routine administration of the full range of medical care, underreporting and recall bias are reduced. with denominator data on doses administered and the ready availability of appropriate comparison (ie, unvaccinated) groups, these large databases provide an economical and rapid means of conducting postlicensure studies of safety of drugs and vaccines. , [ ] [ ] [ ] [ ] the cdc initiated the vaccine safety datalink (vsd) project in to conduct postmarketing evaluations of vaccine safety and to establish an infrastructure allowing for highquality research and surveillance. selection of staff-model prepaid health plans minimized potential biases for more severe outcomes resulting from data generated from fee-for-service claims. currently, eight mcos in the united states participate in the vsd. the eight participating mcos comprise a population of more than million members. each mco prepares computerized data files using a standardized data dictionary containing demographic and medical information on their members, such as age and sex, health plan enrollment, vaccinations, hospitalizations, outpatient clinic visits, emergency department visits, urgent care visits, and mortality data, as well as additional birth information (eg, birth weight) when available. other information sources, such as medical chart review; member surveys; and pharmacy, laboratory and radiology data are often used in vsd studies to validate outcomes and vaccination data. there is rigorous attention to the maintenance of patient confidentiality, and each study undergoes institutional review board review. the vsd project's main priorities include evaluating new vaccine safety concerns that may arise from the medical literature, , from vaers, , from changes in immunization schedules, or from introduction of new vaccines. , the creation of near real-time data files has enabled the development of near real-time postmarketing surveillance for newly licensed vaccines and changes in vaccine recommendations. the size of the vsd population also permits separation of the risks associated with individual vaccines from those associated with vaccine combinations, whether given in the same syringe or simultaneously at different body sites. for example, vsd safety monitoring found that the combined mmrv vaccine carried an increased risk of febrile seizures compared with giving mmr and varicella vaccines simultaneously as separate injections. such studies are especially valuable in view of combined pediatric vaccines. more than studies have been or are being performed within the vsd project, including general screening studies of the safety of inactivated influenza vaccines among children and of thimerosal-containing vaccines. disease-or syndrome-specific investigations have been or are being performed, including studies investigating autism, multiple sclerosis, thyroid disease, acute ataxia, alopecia, rheumatoid arthritis, asthma, diabetes, and idiopathic thrombocytopenic purpura following vaccination. 
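the near real-time surveillance described above reduces, at each look, to comparing the count of events observed among recent vaccinees with the count expected from background rates. the sketch below shows a single observed-versus-expected poisson test; the actual vsd rapid cycle analyses use sequential methods (eg, maxsprt) that account for repeated weekly looks, which this simplified version ignores, and the numbers are invented:

```python
from math import exp

def poisson_tail_prob(observed: int, expected: float) -> float:
    """P(X >= observed) for X ~ Poisson(expected): the probability of seeing
    at least this many events if vaccination adds no risk."""
    term, cdf = exp(-expected), 0.0   # term starts at P(X = 0)
    for i in range(observed):
        cdf += term                   # accumulate P(X = 0 .. observed-1)
        term *= expected / (i + 1)    # P(X = i+1) from P(X = i)
    return 1.0 - cdf

# Hypothetical weekly look: 9 events observed in the risk window among
# recent vaccinees versus 3.2 expected from historical background rates.
print(f"p = {poisson_tail_prob(9, 3.2):.4f}")  # ~0.006, low enough to warrant review
```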
amid these promises, a few caveats are appropriate. although diverse, the population in the mcos currently in the vsd project is not wholly representative of the united states in terms of geography or socioeconomic status. more important, because of the high coverage attained in the mcos for most vaccines, few nonvaccinated control subjects are available. therefore, vsd studies often rely on risk-interval analyses (eg, to study the question of whether outcome "x" is more common in period "y" following vaccination compared with other periods) (table - ). this approach, although powerful for evaluating acute adverse events, has limited ability to assess associations between vaccination and adverse events with delayed or insidious onset (eg, autism). the vsd project also cannot easily assess mild adverse events (such as fever) that do not always come to medical attention. finally, because vaccines are not delivered in the context of randomized, controlled trials, the vsd project may not be able to successfully control for confounding and bias in each analysis, and inferences on causality may be limited. despite these potential shortcomings, the vsd project provides an essential, powerful, and cost-effective complement to ongoing evaluations of vaccine safety in the united states. in view of the methodological and logistic advantages offered by large linked databases, the united kingdom and canada also have developed systems linking immunization registries with medical files. because of the relatively limited number of vaccines used worldwide and the costs associated with establishing and operating these large databases, it is unlikely that all countries will be able to or need to establish their own. these countries should be able to draw on the scientific base established by the existing large linked databases for vaccine safety and, if the need arises, conduct ad hoc epidemiologic studies. more recently, there has been an increasing awareness that the usefulness of srss as potential disease registries and the immunization safety infrastructure can be usefully augmented by tertiary clinical centers. well-organized, well-identified clinical infrastructures for the study of rare vaccine safety outcomes were first developed in certain regions in italy and australia. in the united states, the cdc established the clinical immunization safety assessment (cisa) network in with the following primary goals: (1) to develop research protocols for clinical evaluation, diagnosis, and management of adverse events following immunization (aefi); (2) to improve the understanding of aefi at the individual level, including determining possible genetic and other risk factors for predisposed persons and high-risk subpopulations; (3) to develop evidence-based algorithms for vaccination of persons at risk of serious adverse events following immunization; and (4) to provide a resource of subject matter experts for clinical vaccine safety inquiries. the cisa investigators bring in-depth clinical, pathophysiologic, and epidemiologic expertise to assessing causal relationships between vaccines and adverse events and to understanding the pathogenesis of adverse events following vaccinations. the steps of the risk-interval analysis referenced above (table - ) are:

1. define a biologically plausible risk interval for the adverse event after vaccination (eg, days after each dose).
2. partition observation time for each child in the study into periods within and outside of risk intervals, and sum respectively (eg, for a child observed for days during which three doses of vaccine were received, total risk interval time = × person-days = person-days; total nonrisk interval time = − = person-days). [timeline: birth, dose 1, dose 2, dose 3 across the days of observation]
3. add (a) the total risk interval and nonrisk interval observation times for each child in the study (person-time observed; for mathematical convenience, the example uses and , person-months of observation) and (b) the adverse events occurring in each period to complete a 2 × 2 table (for illustration, the example uses and cases).
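a minimal sketch of the person-time bookkeeping and the resulting incidence rate ratio for the risk-interval design outlined in these steps; all numbers are illustrative stand-ins for the stripped values in the table:

```python
def incidence_rate_ratio(events_risk: int, persontime_risk: float,
                         events_nonrisk: int, persontime_nonrisk: float) -> float:
    """Rate ratio comparing the post-vaccination risk windows with all
    other observed person-time (step 3's 2 x 2 table)."""
    rate_risk = events_risk / persontime_risk
    rate_nonrisk = events_nonrisk / persontime_nonrisk
    return rate_risk / rate_nonrisk

# Hypothetical cohort: 10,000 children each observed for 365 days, with
# three doses x a 10-day window = 30 risk-window days per child.
n_children = 10_000
risk_pt = n_children * 30 / 365      # person-years inside risk windows
nonrisk_pt = n_children * 335 / 365  # person-years outside them
irr = incidence_rate_ratio(25, risk_pt, 140, nonrisk_pt)
print(f"IRR = {irr:.2f}")  # ~2.0; a ratio near 1 would suggest no excess risk
```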
the cisa investigators have published a standardized algorithm for evaluating and managing persons who have suspected or definite immediate hypersensitivity reactions such as urticaria, angioedema, and anaphylaxis following vaccines. some of the studies undertaken by cisa include an assessment of extensive limb swelling after dtap, a study of the usefulness of irritant skin test reactions for managing hypersensitivity to vaccines, the clinical evaluation of patients with serious adverse events following yellow fever vaccine administration, and evaluation of vaccine safety among children with inborn errors of metabolism. new understanding of the human genome, pharmacogenomics, and immunology hold promise for future cisa studies and may make it possible to elucidate the biological mechanisms of vaccine adverse reactions, which in turn could lead to the development of safer vaccines and safer vaccination practices, including revaccination when indicated. in mass immunization campaigns during which many people are vaccinated in a short time, it is critical to have a vaccine safety monitoring system in place that can detect potential safety problems early so that corrective actions can be taken as soon as possible. mass immunization campaigns pose specific safety challenges precisely because large populations are vaccinated during a short time, and often they are conducted outside the usual health care setting. mass immunization campaigns are often conducted in developing countries, which poses a particular challenge of ensuring injection safety. in any setting in which large numbers of immunizations are being administered, more adverse events will coincidentally occur following immunization. thus, it is important to have background rates of expected adverse events available to allow rapid evaluation of whether reported adverse events are occurring at a rate following immunization that is higher than would be expected by chance alone. the resources devoted to mass vaccination campaigns also provide opportunities to enhance existing immunization safety monitoring systems or to establish a system if none exists, and these may lead to long-term improvements in immunization safety monitoring beyond the specific mass immunization campaign. the response to the 2009 h1n1 influenza pandemic involved probably the largest and most intense immunization safety monitoring effort ever undertaken in the united states and internationally. the emergence of a novel influenza a (h1n1) virus prompted the development of influenza a (h1n1) monovalent vaccines. the fda licensed the first 2009-h1n1 vaccines in september 2009. with potentially hundreds of millions of people expected to be vaccinated, adverse events were anticipated to occur coincidentally in some recently vaccinated people.
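the importance of background rates can be made concrete: under the null hypothesis that vaccination adds no risk, the number of coincidental events expected in a post-vaccination window is simply the background incidence multiplied by the person-time the window contributes. the inputs below are illustrative; the gbs background rate of roughly 1 to 2 cases per 100,000 person-years is a commonly cited order of magnitude, not a figure taken from this chapter:

```python
def expected_coincidental_events(n_vaccinated: float,
                                 background_rate_per_100k_py: float,
                                 window_days: float) -> float:
    """Events expected by chance alone in a post-vaccination window,
    assuming the vaccine adds no risk."""
    person_years = n_vaccinated * window_days / 365.25
    return background_rate_per_100k_py * person_years / 100_000

# Hypothetical campaign: 10 million people vaccinated, a 42-day window,
# and a background GBS rate of ~1.5 per 100,000 person-years.
print(expected_coincidental_events(10e6, 1.5, 42))  # ~17 coincidental GBS cases
```

even a modest number of reported cases after such a campaign therefore proves nothing by itself; it must be compared against this expected count.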
to address the question of whether the vaccine could be causing the adverse events, background rates for several adverse events were developed. to rapidly detect any unforeseen safety problems, the federal government implemented enhanced postlicensure 2009-h1n1 vaccine safety monitoring. first, vaers undertook special outreach efforts to encourage providers to report, and daily reviews and follow-up of submitted reports were conducted by medical personnel to rapidly evaluate the reports and obtain any needed additional clinical or other information. second, a new web-based active surveillance system was implemented to prospectively follow tens of thousands of vaccinees for medically attended adverse events. third, large population-based systems that link computerized vaccination data with health care encounter codes were used to conduct rapid ongoing analyses to evaluate possible associations of h1n1 vaccination with selected adverse events, including potential associations suggested by vaers or other sources. such systems included the existing vsd project; a new collaboration involving additional large health plans covering several million people that also performed rapid ongoing analyses similar to vsd; and the databases of the department of defense, medicare, and the veterans administration. fourth, active case finding for gbs was conducted in areas of the united states with a combined population of about million. the findings from the various safety monitoring activities were regularly reviewed by government and other scientists and an independent vaccine safety review panel convened by the department of health and human services. initial safety data were provided by vaers, which found that the adverse event profile after 2009-h1n1 vaccine in vaers (> , reports) was consistent with that of seasonal influenza vaccines, although the reporting rate was higher after 2009-h1n1 than seasonal influenza vaccines, which may be, at least in part, a reflection of stimulated reporting; death, gbs, and anaphylaxis reports after 2009-h1n1 vaccination were rare (each < per million doses administered). preliminary results from the large special study of gbs found . excess cases of gbs per million vaccinations, which is similar to the increased risk found with some seasonal influenza vaccines. similar efforts to intensely monitor the safety of influenza a (h1n1) vaccines occurred in other countries, primarily in north america, europe, and australia, but also included the development of new immunization safety monitoring systems in countries such as taiwan. these countries collaborated in their activities and routinely shared information among themselves and with other countries that have limited vaccine safety monitoring capabilities. these extensive international safety monitoring activities and collaborations represented an unprecedented commitment to ensuring the safety of influenza a (h1n1) vaccines, as well as a model for how we might improve tracking of safety for all vaccines going forward. unfortunately, vaccine safety issues have increasingly taken on a life of their own outside of the scientific arena, arguably to society's overall detriment. liability concerns, for example, have severely limited development of maternal immunization to protect newborn infants against diseases such as those caused by group b streptococcus.
more worrisome, however, are various chronic diseases (and their advocates) in search of a simple cause, for which immunizations, as a relatively universal exposure, make all too convenient a hypothesized link. case studies of some of these fears are discussed in the following sections. in , kulenkampff and coworkers reported a series of cases of children with mental retardation and epilepsy following receipt of the whole-cell pertussis vaccine. during the next several years, fear of the pertussis vaccine generated by media coverage of this report caused a decrease in pertussis immunization rates in british children from % to % and resulted in more than , cases and deaths due to pertussis. media coverage of the kulenkampff report also caused decreased immunization rates and increased pertussis deaths in japan, sweden, and wales. however, many subsequent excellent well-controlled studies found that the incidence of mental retardation and epilepsy following whole-cell pertussis vaccine was similar in vaccinated children and in children who did not receive the vaccine, and that many of these children actually suffered from dravet's syndrome (a neuronal sodium channel transport defect caused by an scn1a mutation). in the mid- s, the antivaccine group called dissatisfied parents together raised the notion that the whole-cell pertussis vaccine could cause sids. subsequent study of children who did or did not receive dtp vaccine showed that the incidence of sids was not greater in the vaccinated group. in the early s, when the hepatitis b vaccine was recommended for routine use in newborns, a program on abc's 20/20 raised the question of whether vaccines could cause sids. again, studies failed to find any association between hepatitis b vaccine and sids. two recent reviews have confirmed the notion that vaccines do not cause sids.

vaccines cause mad-cow disease

by july , at least people in the united kingdom developed a progressive neurological disease termed variant creutzfeldt-jakob disease that likely resulted from eating meat prepared from cows with "mad-cow" disease, a disease caused by proteinaceous infectious particles (prions). some vaccines were made with serum or gelatin obtained from cows in england or from countries at risk for mad-cow disease. two products obtained from cows may be present in vaccines: trace quantities of fetal bovine serum used to provide growth factors for cell culture and gelatin used to stabilize vaccines. however, the bovine-derived products used in vaccines are not likely to contain prions for several reasons. first, fetal bovine serum and gelatin are obtained from blood and connective tissue, respectively; neither is a source that has been found to contain prions. second, fetal bovine serum is highly diluted and eventually removed from cells during the growth of vaccine viruses. third, prions are not propagated in cell cultures used to make vaccines. fourth, transmission of prions occurs from eating meat contaminated with nervous tissue obtained from infected animals or, in experimental studies, from directly inoculating preparations of brains from infected animals into the brains of experimental animals. transmission of prions has not been documented after inoculation into the muscles or under the skin (routes used to vaccinate). taken together, the chance that currently licensed vaccines contain prions is essentially zero.
the notion that the origin of aids could be traced to poliovirus vaccines that were administered in the belgian congo between and was the subject of a popular magazine article and book. the logic behind this assertion was as follows: (1) the polio vaccine used in the belgian congo was grown in chimpanzee kidney cells. (2) the chimpanzee kidney cells used at that time contained simian immunodeficiency virus (siv). (3) siv is very closely related to human immunodeficiency virus (hiv). (4) people were inadvertently inoculated with siv that then mutated to hiv and caused the aids epidemic. this reasoning is problematic and based on several false assumptions. first, the siv most closely related to hiv has been demonstrated in chimps in the cameroon, far from the chimps near stanleyville that were used to make the vaccine. second, siv and hiv are not very close genetically; mutation to hiv from siv would likely require decades, not years. third, polymerase chain reaction (pcr) analysis showed that the cell substrate used to make the vaccine was monkey, not chimp. fourth, siv and hiv are enveloped viruses that are easily disrupted by extremes in ph. if given by mouth (in a manner similar to the oral polio vaccine), both of these viruses would likely be destroyed in the acid environment of the stomach. last, and most important, original lots of the polio vaccine (including those used in africa for the polio vaccine trials) did not contain hiv or siv genomes as determined by the very sensitive reverse-transcription pcr assay. unfortunately, the notion that live attenuated polio vaccine could cause aids remains an obstacle to eliminating polio in some countries in africa. simian virus 40 (sv40) was present in monkey kidney cells used to make the inactivated polio vaccine, live attenuated polio vaccine, and inactivated adenovirus vaccines in the late s and early s. recently, investigators found sv40 dna in biopsy specimens obtained from patients with certain unusual cancers (ie, mesothelioma, osteosarcoma, and non-hodgkin lymphoma), leading some to hypothesize a link between vaccination and the subsequent development of cancer. however, genetic remnants of sv40 were present in cancers of people who had or had not received contaminated polio vaccines; people with cancers who never received sv40-contaminated vaccines were found to have evidence for sv40 in their cancerous cells; and epidemiologic studies did not show an increased risk of cancers in people who received polio vaccine between and compared with people who did not receive these vaccines. taken together, these findings do not support the hypothesis that the sv40 contained in polio vaccines administered before caused cancers. one hundred years ago, children received one vaccine: smallpox. today, young children receive vaccines routinely. although some vaccines are given in combination, infants and young children could receive more than shots and three oral doses by years of age, including as many as five shots at one time. the increase in the number of vaccines, and the consequent decline in vaccine-preventable illnesses, has focused attention by parents and health care professionals on vaccine safety. specific concerns include whether vaccines weaken, overwhelm, or in some way alter the normal balance of the immune system, paving the way for chronic diseases such as diabetes, asthma, multiple sclerosis, and allergies.
although we have witnessed a dramatic increase in the number of vaccines routinely recommended for infants and young children, the number of immunogenic proteins and polysaccharides contained in vaccines has declined (table - ). the decrease in the number of immunogenic proteins and polysaccharides contained in vaccines is attributable to discontinuation of the smallpox vaccine and advances in the field of protein purification that allowed for a switch from whole-cell to acellular pertussis vaccine. a practical way to determine the capacity of the immune system to respond to vaccines would be to consider the number of b and t cells required to generate adequate levels of binding antibodies per milliliter of blood. calculations are based on the following assumptions:

- approximately ng/ml is likely to be an effective concentration of antibody directed against a specific epitope.
- approximately b cells/ml are required to generate ng of antibody/ml.
- given a doubling time of about . days for b cells, it would take about days to generate b cells/ml from a single b-cell clone.
- because vaccine-specific humoral immune responses are first detected about days after immunization, those responses could initially be generated from a single b-cell clone per milliliter.
- one vaccine contains about immunogenic proteins or polysaccharides (table - ).
- each immunogenic protein or polysaccharide contains about epitopes (ie, epitopes per vaccine).
- approximately b cells are present per milliliter of blood.

given these assumptions, the number of vaccines to which a person could respond would be determined by dividing the number of circulating b cells (approximately /ml) by the average number of epitopes per vaccine ( ). therefore, a person could theoretically respond to about vaccines at one time. the analysis used to determine the theoretical capacity of a person to respond to as many as vaccines at one time, although consistent with the biology and kinetics of vaccine-specific immune responses, is limited by lack of consideration of several factors. first, only vaccine-specific b-cell responses are considered. however, protection against disease by vaccines may also be mediated by vaccine-specific cytotoxic t lymphocytes (ctls). for example, virus-specific ctls are important in the regulation and control of varicella infections. second, in part because of differences in the capacity of various class i or class ii glycoproteins (encoded by the mhc) to present viral or bacterial peptides to the immune system, some people are not capable of responding to certain virus-specific proteins (eg, hepatitis b surface antigen). third, some proteins are more likely to evoke an immune response than others (ie, immunodominance). fourth, although most circulating b cells in a neonate are naïve, the child very quickly develops memory b cells that are not available for response to new antigens and, therefore, should not be considered as part of the circulating naïve b-cell pool. fifth, the immune system is not static. a study of t-cell population dynamics in hiv-infected persons found that adults have the capacity to generate about × new t lymphocytes each day. although the quantity of new b and t cells generated each day in healthy people is unknown, studies of hiv-infected persons demonstrate the enormous capacity of the immune system to generate lymphocytes when needed. primarily because of this fifth reason, the assessment that people can respond to at least vaccines at one time might be low.
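the shape of this calculation can be written out explicitly. because the extraction stripped the chapter's actual figures, every numeric constant below is a placeholder taken from the published form of this argument (whose widely cited conclusion is on the order of 10,000 vaccines at one time), not from the text above:

```python
# Placeholder values standing in for the chapter's stripped figures.
B_CELLS_PER_ML = 1e7          # circulating B cells per mL of blood (assumed)
PROTEINS_PER_VACCINE = 100    # immunogenic proteins/polysaccharides per vaccine (assumed)
EPITOPES_PER_PROTEIN = 10     # epitopes per immunogenic protein (assumed)

epitopes_per_vaccine = PROTEINS_PER_VACCINE * EPITOPES_PER_PROTEIN  # ~1,000

# Per the kinetics argument in the text, responses first detectable about
# a week after immunization could initially be generated from a single
# B-cell clone per mL per epitope, so each vaccine "reserves" one clone
# per epitope out of the circulating pool.
theoretical_capacity = B_CELLS_PER_ML / epitopes_per_vaccine
print(f"~{theoretical_capacity:,.0f} vaccines at one time")  # ~10,000
```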
within hours of birth, cells of the innate and adaptive immune systems are actively engaged in responding to challenges in the environment (eg, colonizing bacterial flora). similarly, newborns and young infants are quite capable of generating protective immune responses to single and multiple vaccines. for example, children born to mothers infected with hepatitis b virus are protected against infection after inoculation with hepatitis b vaccine (given at birth and month of age). [ ] [ ] [ ] similarly, newborns inoculated with bacille calmette-guérin (bcg) vaccine are protected against severe forms of tuberculosis, presumably by activation of bacteria-specific t cells. [ ] [ ] [ ] in addition, about % to % of infants inoculated in the first months of life with multiple vaccines, including diphtheria-tetanus-pertussis, pneumococcus, hib, hepatitis b, and polio, develop protective, vaccine-specific immune responses. conjugation of bacterial polysaccharides (such as streptococcus pneumoniae and hib) to carrier molecules that elicit helper t cells circumvents the poor immunogenicity of unconjugated polysaccharide vaccines in infants and young children.

vaccines weaken the immune system

infection with wild-type viruses can cause a suppression of specific immunologic functions. for example, infection with wild-type measles virus causes a reduction in the number of circulating b and t cells during the viremic phase of infection and a delay in the development of cell-mediated immunity. downregulation of cell-mediated immunity by wild-type measles virus probably results from downregulation of the production of interleukin- by measles-infected macrophages and dendritic cells. taken together, the immunosuppressive effects of wild-type measles virus account, in part, for the increase in morbidity and mortality from measles infection. similarly, the immunosuppressive effects of infections with wild-type varicella virus or wild-type influenza virus cause an increase in the incidence of severe invasive bacterial infections. live viral vaccines replicate (albeit far less efficiently than wild-type viruses) in the host and, therefore, can weakly mimic events that occur after natural infection. for example, measles, mumps, or rubella vaccines can significantly depress reactivity to the tuberculin skin test and can cause a decrease in protective immune responses to varicella vaccine, and high-titered measles vaccine (edmonston-zagreb strain) can cause an excess of cases of invasive bacterial infections in developing countries. all of these phenomena are explained by the likely immunosuppressive effects of measles vaccine viruses. however, current vaccines (including the highly attenuated moraten strain of measles vaccine) do not seem to cause clinically relevant immunosuppression in healthy children. studies have found that the incidence of invasive bacterial infections following immunization with diphtheria, pertussis, tetanus, bcg, measles, mumps, rubella, or live attenuated poliovirus vaccines was not greater than that found in unimmunized children. [ ] [ ] [ ] [ ] [ ]

vaccines cause autoimmunity

mechanisms are present at birth to prevent the development of immune responses directed against self-antigens (autoimmunity). t- and b-cell receptors of the fetus and newborn develop with a random repertoire of specificities. in the thymus, t cells that bind strongly to self-peptide-mhc complexes die, while those that bind with a lesser affinity survive to populate the body.
this central selection process eliminates strongly self-reactive t cells, while selecting for t cells that recognize antigens in the context of self-mhc. in the fetal liver, and later in the bone marrow, b-cell receptors (ie, immunoglobulins) that bind self-antigens strongly are also eliminated. therefore, the thymus and bone marrow, by expressing antigens from many tissues of the body, enable the removal of the majority of potentially dangerous autoreactive t and b cells before they mature, a process termed central tolerance. however, it is not simply the presence of autoreactive t and b cells that results in autoimmune disease. autoreactive t and b cells are present in all people because it is not possible for every antigen from every tissue of the body to participate in the elimination of all potentially autoreactive cells. a process termed peripheral tolerance further limits the activation of autoreactive cells. mechanisms of peripheral tolerance include the following: (1) antigen sequestration (antigens of the central nervous system, eyes, and testes are not regularly exposed to the immune system unless injury or infection occurs); (2) anergy (lymphocytes partially triggered by antigen but without costimulatory signals are unable to respond to subsequent antigen exposure); (3) activation-induced cell death (a self-limiting mechanism involved in terminating immune responses after antigen is cleared); and (4) inhibition of immune responses by specific regulatory cells. therefore, the immune system anticipates that self-reactive t cells will be present and has mechanisms to control them. any theory of vaccine causation of autoimmune diseases must take into account how these controls are circumvented. as discussed subsequently, epidemiologic studies have not supported the hypothesis that vaccines cause autoimmune diseases. this is consistent with the fact that no mechanisms have been advanced to explain how vaccines could account for all of the prerequisites that would be required for the development of autoimmune disease. at least four key conditions must be met for development of autoimmune disease. first, self-antigen-specific t cells or self-antigen-specific b cells must be present. second, self-antigens must be presented in sufficient amounts to trigger autoreactive cells. third, costimulatory signals, cytokines, and other activation signals produced by antigen-presenting cells (such as dendritic cells) must be present during activation of self-reactive t cells. fourth, peripheral tolerance mechanisms must fail to control destructive autoimmune responses. if all of these conditions are not met, the activation of self-reactive lymphocytes and progression to autoimmune disease are not likely.

evidence that vaccines do not cause autoimmunity

rigorous epidemiologic studies of infant vaccines and type 1 diabetes found that measles vaccine was not associated with an increased risk for diabetes; other investigations found no association between bcg, smallpox, tetanus, pertussis, rubella, or mumps vaccine and diabetes. a study in canada found no increase in risk for diabetes as a result of receipt of bcg vaccine. in a large year follow-up study among finnish children enrolled in an hib vaccination trial, no differences in risk for diabetes were found among children vaccinated at months of age (followed later with a booster vaccine), children vaccinated at years only, or children born before the vaccine trial.
the weight of currently available epidemiologic evidence does not support a causal association between currently recommended vaccines and type 1 diabetes in humans. [ ] [ ] [ ] the hypothesis that vaccines might cause multiple sclerosis was fueled by anecdotal reports of multiple sclerosis following hepatitis b immunization and two case-control studies showing a small increase in the incidence of multiple sclerosis in vaccinated persons that was not statistically significant. however, the capacity of vaccines to cause or exacerbate multiple sclerosis has been evaluated in several excellent epidemiologic studies. two large case-control studies showed no association between hepatitis b vaccine and multiple sclerosis and found no evidence that hepatitis b, tetanus, or influenza vaccines exacerbated symptoms of multiple sclerosis. other well-controlled studies also found that influenza vaccine did not exacerbate symptoms of multiple sclerosis. indeed, in a retrospective study of patients with relapsing multiple sclerosis, infection with influenza virus was more likely than immunization with influenza vaccine to cause an exacerbation of symptoms. a recent review also showed that the novel h1n1 vaccine had an attributable risk for guillain-barré syndrome of - cases per million doses administered, not higher than that found following the - seasonal influenza vaccine. allergic symptoms are caused by soluble factors (eg, ige) that mediate immediate-type hypersensitivity; production of ige by b cells is dependent on release of cytokines such as interleukin-4 by th2 cells. two theories have been advanced to explain how vaccines could enhance ige-mediated, th2-dependent allergic responses. first, vaccines could shift immune responses to potential allergens from th1-like to th2-like. second, by preventing common prevalent infections (the "hygiene hypothesis"), vaccines could prolong the length or increase the frequency of th2-type responses. although all factors that cause changes in the balance of th1 and th2 responses are not fully known, it is clear that dendritic cells have a critical role. for example, adjuvants (eg, aluminum hydroxide or aluminum phosphate ["alum"] contained in some vaccines) promote dendritic cells to stimulate th2-type responses. adjuvants could cause allergies or asthma by stimulating bystander, allergen-specific th2 cells. however, vaccine surveillance data show no evidence for environmental allergen priming by vaccination. furthermore, local inoculation of adjuvant does not cause a global shift of immune responses to th1 or th2 type. the other hypothesis advanced to explain how vaccines could promote allergies is that by preventing several childhood infections (the hygiene hypothesis), stimuli that evolution has relied on to cause a shift from the neonatal th2-type immune response to the balanced th1-th2 response patterns of adults have been eliminated. however, the diseases that are prevented by vaccines constitute only a small fraction of the total number of illnesses to which a child is exposed, and it is unlikely that the immune system would rely on only a few infections for the development of a normal balance between th1 and th2 responses.
for example, a study of , illnesses performed in cleveland, ohio, in the s found that children experienced six to eight infections per year in the first years of life; most of these infections were caused by viruses such as coronaviruses, rhinoviruses, paramyxoviruses, and myxoviruses, diseases for which children are not routinely immunized. also at variance with the hygiene hypothesis is the fact that children in developing countries have lower rates of allergies and asthma than children in developed countries despite the fact that they are commonly infected with helminths and worms, organisms that induce strong th2-type responses. finally, the incidence of diseases that are mediated by th1-type responses, such as multiple sclerosis and type 1 diabetes, has increased in the same populations as those that experienced an increase in allergies and asthma. although some relatively small early observational studies supported the association between whole-cell pertussis vaccine and development of asthma, more recent studies have suggested otherwise. a large clinical trial performed in sweden found no increased risk, and a very large longitudinal study in the united kingdom found no association between pertussis vaccination and early- or late-onset wheezing or recurrent or intermittent wheezing. two studies from the vsd project have also lent data to this controversy. in one study of , infants with wheezing during infancy, vaccination with dtp and other vaccines was not related to the risk of wheezing in full-term infants, and, in another study of more than , children, childhood vaccinations were not associated with an increased risk for developing asthma. finally, a study from finland suggested that children with a history of natural measles were at increased risk for atopic illness. such findings run contrary to the hypothesis that the increase in atopic illnesses seen in several countries is due to the reduction in wild measles resulting from immunizations. another separate concern is whether inactivated influenza vaccination may induce asthma exacerbations in children with preexisting asthma. results of studies examining the potential associations between administration of inactivated influenza vaccine and various surrogate measures of asthma exacerbation, including decreased peak expiratory flow rate, increased use of bronchodilating drugs, and increase in asthma symptoms, have yielded mixed results. most studies, however, have not supported such an association. in fact, after controlling for asthma severity, acute asthma exacerbations were less common after inactivated influenza vaccination than before, and inactivated influenza vaccination seems to be associated with a decreased risk for asthma exacerbations throughout influenza seasons. several more recent studies have also shown a lack of correlation between receipt of vaccines and the development of asthma. autism is a chronic developmental disorder characterized by problems in social interaction, communication, and responsiveness and by repetitive interests and activities. although the causes of autism are largely unknown, family and twin studies suggest that genetics has a fundamental role. in addition, overexpression of neuropeptides and neurotrophins has been found in the immediate perinatal period among children later diagnosed with autism, suggesting that prenatal or perinatal influences or both have a more important role than postnatal insults.
however, because autistic symptoms generally first become apparent in the second year of life, some scientists and parents have focused on the role of mmr vaccine because it is first administered around this time. concern about the role of mmr vaccine was heightened in 1998 when a study based on 12 children proposed an association between the vaccine and the development of ileonodular hyperplasia, nonspecific colitis, and regressive developmental disorders (later termed by some as "autistic enterocolitis"). among the proposed mechanisms was that mmr vaccine caused bowel problems, leading to the malabsorption of essential vitamins and other nutrients and eventually to autism or other developmental disorders. concern about this issue led to a decline in measles vaccine coverage in the united kingdom and elsewhere. significant concerns about the validity of the study included the lack of an adequate control or comparison group, inconsistent timing to support causality (several of the children had autistic symptoms preceding bowel symptoms), and the lack of an accepted definition of the syndrome. subsequently, population-based studies of autistic children in the united kingdom found no association between receipt of mmr vaccine and autism onset or developmental regression. a study in the united states in the vsd project investigated whether measles-containing vaccine was associated with inflammatory bowel disease and found no relationship between receiving mmr vaccine and inflammatory bowel disease or between the timing of the vaccine and risk for disease. soon after the lancet published the article that ignited the controversy, two ecologic analyses found no evidence that mmr vaccination was the cause of apparent increased trends in autism over time, while two other studies found no evidence of a new variant form of autism associated with bowel disorders secondary to vaccination. several more recent studies have also refuted the notion that mmr vaccine caused autism. in february 2010, the lancet retracted the original article claiming an association. because of the level of concern surrounding this issue, the cdc and the national institutes of health requested an independent review by the iom. the immunization safety review committee appointed by the iom to review this issue was unable to find evidence supporting a causal relationship at the population level between autistic spectrum disorders and mmr vaccination, nor did the committee find any good evidence of biological mechanisms that would support or explain such a link.

the fda modernization act of 1997 called for the fda to review and assess the risk of all mercury-containing food and drugs. this led to an examination of the mercury content in vaccines. public health officials found that infants up to 6 months old could receive as much as 187.5 µg of ethylmercury (from thimerosal) through vaccines, a level that exceeded recommended safety guidelines for methylmercury from the environmental protection agency, but not levels recommended by the fda or the agency for toxic substances and disease registry. consequently, the routine neonatal dose of hepatitis b vaccine in infants born to hepatitis b surface antigen-negative mothers was suspended in the united states until preservative-free vaccines became available, and transitioning to a vaccine schedule free of thimerosal began as a precautionary measure.
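the chapter does not walk through the underlying arithmetic; a minimal python sketch of the comparison regulators faced in 1999, with the infant weight assumed purely for illustration, might look like this:

    # illustrative arithmetic only. the epa reference dose (0.1 ug/kg/day) is for
    # methylmercury; applying it to the ethylmercury in thimerosal, as was done
    # in 1999, was a conservative assumption. the infant weight below is an
    # assumed value for illustration, not a figure from this chapter.

    EPA_RFD_UG_PER_KG_PER_DAY = 0.1    # epa reference dose for methylmercury
    VACCINE_ETHYLMERCURY_UG = 187.5    # maximum cumulative exposure, first 6 months

    days = 182                         # roughly 6 months
    mean_infant_weight_kg = 6.0        # assumed average weight over the period

    allowable_ug = EPA_RFD_UG_PER_KG_PER_DAY * mean_infant_weight_kg * days
    print(f"epa-allowable intake over ~6 months: {allowable_ug:.0f} ug")
    print(f"maximum vaccine ethylmercury:        {VACCINE_ETHYLMERCURY_UG} ug")

    # under these assumptions the cumulative vaccine exposure (187.5 ug) exceeds
    # the epa-derived allowance (~109 ug), and single-visit bolus doses exceed
    # the daily guideline by a much larger factor.

the sketch makes concrete why the exposure was judged to exceed the epa guideline but not the fda or atsdr levels: the answer depended on which agency's reference value was used and on how bolus doses were averaged over time.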
currently, some multidose influenza vaccines contain preservative quantities (ie, 25 µg of mercury per dose) of thimerosal, although thimerosal-free vaccines are available. mercury is a naturally occurring element found in the earth's crust, air, soil, and water. since the earth's formation, volcanic eruptions, weathering of rocks, and burning of coal have caused mercury to be released into the environment. once released, certain types of bacteria in the environment can change inorganic mercury to organic mercury (methylmercury). methylmercury makes its way through the food chain in fish, animals, and humans. at high levels, it can be neurotoxic. thimerosal contains ethylmercury, not methylmercury. studies comparing ethylmercury and methylmercury suggest that they are processed differently; ethylmercury is broken down and excreted much more rapidly than methylmercury. therefore, ethylmercury is much less likely than methylmercury to accumulate in the body and cause harm.

several pieces of biological and epidemiologic evidence support the notion that thimerosal does not cause autism. first, in 1971, iraq imported grain that had been fumigated with methylmercury. farmers ate bread made from this grain. the result was one of the worst single-source mercury poisonings in history. methylmercury in the grain caused the hospitalization of thousands of iraqis and killed hundreds. pregnant women also ate the bread and delivered infants with epilepsy and mental retardation. however, there was no evidence that these infants had an increased incidence of autism. second, several large studies have now compared the risk of autism in children who received vaccines containing thimerosal with children who received vaccines without thimerosal or vaccines with lesser quantities of thimerosal; the incidence of autism was similar in all groups. the iom has reviewed these studies and concluded that the evidence favored rejection of a causal association between vaccines and autism and that autism research should shift away from vaccines. denmark, a country that abandoned thimerosal as a preservative in 1992, actually saw an increase in the disease beginning several years later. third, studies of the head size, speech patterns, vision, coordination, and sensation of children poisoned by mercury show that the symptoms of mercury poisoning are distinguishable from the symptoms of autism. fourth, methylmercury is found at low levels in water, infant formula, and breast milk. although it is clear that large quantities of mercury can damage the nervous system, there is no evidence that the small quantities contained in water, infant formula, and breast milk do. an infant who is exclusively breastfed for 6 months will ingest more than twice the quantity of mercury that was ever contained in vaccines and many times the quantity of mercury contained in the influenza vaccine.

one known and unfortunate sequela of the uncertainty surrounding the safety of thimerosal was confusion surrounding administration of the birth dose of hepatitis b vaccine. following the suspension of the routine use of hepatitis b vaccine for low-risk newborns in 1999, there was a marked increase in the number of hospitals that no longer routinely vaccinated all infants, including those at high risk of hepatitis b. as a result, there have been cases of neonatal hepatitis b that could have been prevented but were not because many hospitals suspended their routine neonatal hepatitis b vaccination programs. the hypothesis for why vaccines might cause autism has continued to shift.
in 1998, the concern was that the mmr vaccine caused autism. the following year, the concern shifted to include the fear that thimerosal in vaccines caused autism. as data continued to be generated showing that both of these concerns were ill founded, the hypothesis shifted again, this time to include the fear that too many vaccines given too soon caused autism. to address this concern, michael smith and charles woods mined data from a previous study that had been performed by cdc researchers to determine whether thimerosal in vaccines was associated with an increased risk of autism or neurodevelopmental delays. smith and woods compared children who had received vaccines according to the cdc/american academy of pediatrics schedule with children for whom a decision was made to delay, withhold, separate, or space out vaccines, noting no difference between the two groups in neurodevelopmental outcomes.

aluminum salts have safely been used to adjuvant vaccines since the 1930s. more recently, however, some parents became concerned that aluminum in vaccines might be harmful. indeed, high levels of aluminum can cause local inflammatory reactions, osteomalacia, anemia, or encephalopathy, typically in preterm infants or infants with absent or severely compromised renal function who are also receiving high doses of aluminum from other sources (eg, antacids). studies have shown, however, that children who receive aluminum-containing vaccines have serum levels of aluminum that are well below the toxic range.

another concern is that formaldehyde in vaccines is harmful. formaldehyde has been used in vaccines to detoxify bacterial toxins (ie, diphtheria toxin, tetanus toxin, pertussis toxins) and to inactivate viruses (ie, poliovirus). because formaldehyde at high concentrations can cause mutational changes in cellular dna in vitro, some parents have become concerned that formaldehyde in vaccines might be dangerous. however, because formaldehyde is a product of single-carbon metabolism, everyone has formaldehyde detectable in serum. indeed, the level of formaldehyde in the circulation is about 10-fold more than would be contained in any vaccine. also, people exposed to high levels of formaldehyde in the workplace (eg, morticians) are not at greater risk of cancer than people who are not exposed to formaldehyde. finally, the quantity of formaldehyde present in vaccines is far lower than that necessary to induce toxicity in experimental animals.

two cell lines, mrc-5 and wi-38, both derived from elective abortions performed in europe in the early 1960s, have been used as cell substrates in vaccine manufacture. four vaccines continue to require the use of these cell lines: varicella, rubella, hepatitis a, and one of the rabies vaccines. human fetal cells were valuable in vaccine research because they support the growth of many human viruses and are sterile; they were first used at around the time that researchers found that primary monkey kidney cells were contaminated with sv40 virus. some religious groups have become concerned about the use of cells originally obtained from elective abortions. however, the pontifical academy for life of the catholic church has deemed vaccines made using these cells worthy of continued use, despite their origins.

disease prevention, especially if it requires continuous near-universal compliance, is a formidable task. in the preimmunization era, vaccine-preventable diseases such as measles and pertussis were so prevalent that the risks and benefits of disease vs vaccination were readily evident.
as immunization programs successfully reduced the incidence of vaccine-preventable diseases, however, an increasing proportion of health care providers and parents have little or no personal experience with vaccine-preventable diseases. for their risk-benefit analysis, they are forced to rely on historical and other more distant descriptions of vaccine-preventable diseases in textbooks or educational brochures. in contrast, some degree of personal discomfort, pain, and worry is generally associated with each immunization. in addition, parents searching for information about vaccines on the world wide web are likely to encounter web sites that encourage vaccine refusal or emphasize the dangers of vaccines. similarly, the media may sensationalize vaccine safety issues or, in an effort to present "both sides" of an argument, fail to provide perspective. for reasons discussed earlier, there may be uncertainty about whether vaccines are associated with rare or delayed adverse reactions, if only because the scientific method does not allow for acceptance of the null hypothesis. therefore, one cannot prove that a vaccine never causes a particular adverse event, only that an adverse event is unlikely to occur, to within a certain statistical probability.
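this statistical point can be made concrete with the "rule of three," a standard epidemiologic approximation that the chapter itself does not invoke; the sample sizes below are hypothetical. a minimal python sketch:

    # a minimal sketch of the 'rule of three': if no cases of an adverse event
    # are observed among n vaccine recipients, an approximate 95% upper bound
    # on the true event rate is 3/n. the rule is a standard approximation and
    # is not taken from this chapter; the sample sizes below are hypothetical.

    def rule_of_three_upper_bound(n_observed_without_event: int) -> float:
        """approximate 95% upper confidence bound on an event rate
        when zero events were seen in n subjects."""
        return 3.0 / n_observed_without_event

    for n in (10_000, 100_000, 1_000_000):
        bound = rule_of_three_upper_bound(n)
        print(f"0 events in {n:>9,} recipients -> risk likely below ~1 in {1/bound:,.0f}")

    # even a clean study of 100,000 people cannot exclude a 1-in-50,000 risk,
    # which is one reason surveillance continues after licensure.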
the combination of these factors may have an impact on parental beliefs about immunizations. a national survey found that although the majority of parents support immunizations, a sizable minority have misconceptions that could erode their confidence in vaccines. within this context, the art of addressing vaccine safety concerns through effective risk communication has emerged as an increasingly important skill for managers of mature immunization programs and for the health care providers who administer vaccines.

the science of risk perceptions and risk communications, developed initially for the technology and environmental arenas, has only recently been formally applied to immunizations. for scientists and other experts, risk tends to be synonymous with the objective probability of morbidity and mortality resulting from exposure to a particular hazard. in contrast, research has shown that laypersons may have subjective, multidimensional, and value-laden conceptualizations of risk. among the key principles and lessons learned about public perceptions of risk are the following:

-individual people differ in their perceptions of risk depending on their personality, education, life experience, and personal values; educational materials tiered for different needs are therefore likely to be more effective than a single tier.

-perceptions of risk may differ dramatically among various stakeholders, such as members of government agencies, industry, or activist groups. the level of trust between stakeholders has an impact on all other aspects of risk communication. trust is generally reinforced by open communication about what is known and unknown about risks and by providing candid accounts of the evidence and how it was used in the decision-making process.

-certain hazard characteristics, including involuntariness, uncertainty, lack of control, high level of dread, and low level of equity, lead to higher perceived risk; only risks with similar characteristics should be compared in risk communication efforts.

-for quantitatively equivalent risk that is due to action (eg, a vaccination reaction) vs inaction (eg, vaccine-preventable disease caused by nonvaccination), many people prefer the consequences of inaction to action.

-when there is uncertainty about risks, patients frequently rely on the advice of their physician or other health care professionals; continuing education of health care professionals on vaccine risk issues is key.

-finally, different ways of presenting, or framing, the same risk information (eg, using survival rates vs mortality rates) can lead to different risk perceptions, decisions, and behaviors.

risk communication can be used for the purposes of advocacy, public education, or decision-making partnership. people care not only about the magnitude of risks, but also about how risks are managed and whether they participate in the risk-management process, especially in a democratic society. in medical decision making, this has resulted in a transition from more paternalistic models to increasing degrees of informed consent. some have argued that a similar transition to informed consent also should occur with immunizations. however, immunization is unlike most other medical procedures (eg, surgery) in that the consequences of the decision affect not only the individual person, but also others in the society. because of this important distinction, many countries have enacted public health (eg, immunization) laws that severely limit an individual person's right to infect others. without such mandates, persons may attempt to avoid the risks of vaccination while being protected by the herd immunity resulting from others being vaccinated. unfortunately, the protection provided by herd immunity may disappear if too many people avoid vaccination, resulting in outbreaks of vaccine-preventable diseases. debates in the united states have focused on whether philosophical (in addition to medical and religious) exemptions to mandatory immunizations should be allowed more universally and, if so, what standards for claims of exemption are needed. thus, vaccine risk communications should not only describe the risks and benefits of vaccines for individual people, but also should include discussion of the impact of individual immunization decisions on the larger community.

empathy, patience, scientific curiosity, and substantial resources are needed to address concerns about vaccine safety. although each evaluation of a vaccine safety concern is in some ways unique, some general principles apply to most cases. as with all investigations, the first step is objective and comprehensive data gathering. it is also important to gather and weigh evidence for causes other than vaccination. for individual cases or clusters of cases, a field investigation to gather data firsthand may be necessary. advice and review from a panel of independent experts also may be needed. causality assessment at the individual level is difficult at best; further evaluation via epidemiologic or laboratory studies may be required. even if an investigation is inconclusive, such studies can often help to maintain public trust in immunization programs. scientific investigations are only the beginning of addressing vaccine safety concerns. in many countries, people who believe they or their children have been injured by vaccines have organized and produced information highlighting the risks of and alternatives to immunizations. from the consumer activist perspective, even if vaccine risks are rare, this low risk does not reassure the person who experiences the reaction. such groups have been increasingly successful in airing their views in electronic and print media, frequently with poignant individual stories.
because the media frequently raise controversies without resolution and choose "balance" over perspective, one challenge is to establish credibility and trust with the audience. factors that aid in enhancing credibility include demonstrating scientific expertise, establishing relationships with members of the media, expressing empathy, and distilling scientific facts and figures down to simple lay concepts. however, statistics and facts compete poorly with dramatic pictures and stories of disabled children. emotional reactions to messages are often dominant, influencing subsequent cognitive processing. therefore, equally compelling firsthand accounts of people with vaccine-preventable diseases may be needed to communicate the risks associated with not vaccinating. clarifying the distinction between perceived and real risk for the concerned public is critical. if further research is needed, the degree of uncertainty (eg, whether such rare vaccine reactions exist at all) should be acknowledged, but what is certain also should be noted (eg, millions of people have received vaccine x and have not developed syndrome y; even if the vaccine causes y, it is likely to be of magnitude z, compared with the magnitude of known risks associated with vaccine-preventable diseases).

in the united states, written information about the risks and benefits of immunizations developed by the cdc has long been required to be provided to all people vaccinated in the public sector. the national childhood vaccine injury act requires every health care provider, public or private, who administers a vaccine that is covered by the act to provide a copy of the most current cdc vaccine information statement (vis) to the adult vaccinee or, in the case of a minor, to the parent or legal representative each time a dose of vaccine is administered. health care providers must note in each patient's permanent medical record the date printed on the vis and the date the vis was given to the vaccine recipient or his or her legal representative. viss are the cornerstone of provider-patient vaccine risk-benefit communication. each vis contains information on the disease(s) that the vaccine prevents, who should receive the vaccine and when, contraindications, vaccine risks, what to do if a side effect occurs, and where to go for more information. current viss can be obtained from the cdc's national center for immunization and respiratory diseases at www.cdc.gov/vaccines and are available in many languages from the immunization action coalition at www.immunize.org. an increasing number of resources that address vaccine safety misconceptions and allegations also have become available, including web sites, brochures, resource kits, and videos. some studies have been conducted to assess the use and effectiveness of such materials; however, more research in this area is needed.

immunization programs and health care providers should anticipate that some members of the public may have deep concerns about the need for and safety of vaccines. a few may refuse certain vaccines or even reject all vaccinations. an understanding of vaccine risk perceptions and effective vaccine risk communication are essential in responding to misinformation and concerns. toward this end, cdc's vaccine safety website (http://www.cdc.gov/vaccinesafety/index.html) provides basic information on the safety of routinely administered vaccines, as well as responses to frequently asked questions.
the website also provides more detailed information on how vaccines are tested and monitored for safety; cdc's specific projects for monitoring, evaluation, and research on vaccine safety (vaers, vsd, and cisa); detailed sections addressing common concerns (eg, autism, thimerosal); and a resource library with articles, fact sheets, and other related materials on immunization safety.

parental vaccine acceptance in a new era: the role of health care providers and public health professionals

one consequence of the success of vaccines is that an increasing number of parents and clinicians have little or no personal experience with or knowledge of many of the diseases that vaccines prevent. thus, vaccine-preventable diseases often are not perceived as a real threat by parents. moreover, parents increasingly want to be fully informed about their children's medical care; thus, merely recommending vaccination may not be sufficient. also in this new era, stories in the media highlighting adverse events (real or perceived) may cause some parents to question the safety of vaccines. apart from the media attention on vaccine safety issues, a confluence of factors has an influence on parents' vaccine attitudes in the present environment of a low incidence of vaccine-preventable diseases. these factors would be relatively unimportant in an environment where diseases such as polio and measles were common and people lived in fear of their children contracting disease; however, they have become predominant in the current climate for some parents. some of these factors are: (1) lack of appropriately tailored information about the benefits of vaccines and contrary information from alternative health practitioners, (2) mistrust of the source of the information, (3) perceived serious side effects, (4) not perceiving the risks of vaccines accurately, and (5) insufficient biomedical literacy. addressing these issues is a challenge for medical and public health professionals because the typical arrangement for providing medical care does not allow full reimbursement of health care providers for educating patients and parents. nevertheless, it is important to try to meet the challenge because an understanding of the aforementioned factors and a proactive approach to vaccine education may prevent future concerns from escalating into widespread refusal of vaccines, with a consequent increased incidence of vaccine-preventable diseases.

most people today want to be thoroughly informed about their health care. the desire for more information also applies to parents with regard to medical issues for their children. parents want to be part of the decision-making process when it comes to immunizations for their children. providing the appropriate information at the appropriate time is especially important now, with the increased questioning of vaccines and with a number of states allowing philosophical exemptions. there is an association between information and vaccine acceptance. a recent study found that while most parents agreed that they had access to enough information to make a good decision about immunizing their children, a substantial minority disagreed or were neutral. parents who disagreed that they had enough vaccine information had negative attitudes about immunizations, health care providers, immunization requirements and exemptions, and trust in the people responsible for immunization policy.
moreover, a larger percentage of parents who reported they did not have access to enough information about vaccines also had several specific vaccine concerns compared with parents who were neutral or agreed that they had access to enough information. it may be that when there is a void of accurate, trusted information, doubts about vaccines arise and misinformation is more readily accepted. other studies have demonstrated the effect of providing information on the well-being of patients. for example, information is one factor that has been shown to positively influence a sense of control in patients with rheumatoid arthritis, and a perceived lack of information among mothers was one reason contributing to nonimmunization of children in india.

by using the principle of audience segmentation (partitioning a population into segments with shared characteristics), a survey study identified five parent groups that varied in health and immunization attitudes and beliefs. the two audience segments identified as most concerned about immunizations ("worrieds" and "fence-sitters") were chosen as the focus of a follow-up study to obtain the input of mothers in these segments in the development of evidence-based, tailored educational materials. the purpose of these materials would be to assist health care providers in busy office settings to address questions from these two groups of parents. presentation of these tailored brochures by children's health care providers to parents in an empathetic and respectful manner could aid in improving the health care provider-parent relationship, increasing vaccine acceptance, and ultimately preventing vaccine-preventable diseases.

the viss are typically given to parents the day the child is scheduled for immunization. this often places the parent in a conflict situation of attending to the vis or attending to a frightened or upset child. not surprisingly, studies have shown that parents would rather receive the information in advance of the first vaccination visit. suggested earlier times for vaccine education include prenatal clinic visits and just after delivery in a hospital. a national survey indicated that most providers said that a preimmunization booklet for parents would be useful for communicating risks and benefits to parents.

the use of complementary and alternative medicine (cam) has been increasing in recent years in the united states. part of this increase is due to managed care organizations (mcos) providing coverage for some cam therapies. chiropractic care is among the most commonly used cam therapies. it is of note that some chiropractic colleges teach a negative view of immunizations. in one study, one third of chiropractors agreed that there is no scientific proof that immunizations prevent disease. the basis for the negative views of vaccine effectiveness may lie in the chiropractic doctrine that disease is the result of spinal nerve dysfunction caused by subluxation, coupled with a rejection of the germ theory of disease. it may be that some chiropractors who adhere to this belief influence parents against immunizing their children. in one study, parents who requested immunization exemptions for their children were more likely to report cam use in their families than parents who did not request these exemptions.
this emphasizes the importance of a trusting physician-patient relationship and of providing parents with tailored information in advance of their child's immunizations; in this manner their questions are answered and they are prepared with the facts when they encounter contrary information from other sources. reaching out to chiropractic organizations to foster a better understanding of the benefits of immunizations may be advantageous to medical and public health professionals.

parental concern about immunizations has been associated with a lack of trust. for example, one of the factors influencing parents who choose not to vaccinate their children for pertussis is doubt about the reliability of the vaccine information. in another study, compared with parents of vaccinated children, parents of children with an immunization exemption were more likely to express a low level of trust in the government, in addition to other factors such as low perceived susceptibility to and severity of vaccine-preventable diseases and low perceived efficacy and safety of vaccines. these parents were less likely to believe that medical and public health professionals are good or excellent sources of immunization information. the majority of parents, however, report receiving immunization information from a physician. thus, having a physician who engenders trust providing immunization information and who is available to listen and answer questions is the optimal situation from the public health perspective. if trust in a child's physician is low, parents may be drawn to other, less credible sources of information.

when a child experiences an adverse event following receipt of a vaccine, it often raises the question "was this vaccine necessary?" to the parent, it may seem that the risks of the vaccine are greater than the risks of not getting the vaccine. parents who sought medical attention for any of their children owing to an apparent adverse event following immunization not only expressed more concern about immunizations, but also were more likely to have a child who lacked one or more doses of three high-profile vaccines compared with parents who reported that none of their children had experienced an adverse event following immunization. two scenarios were seen as plausible. it may be that parents who were already concerned about vaccines before their child began the vaccination schedule were more reactive and thus sought medical attention for minor side effects (eg, fever) or nonrelated problems. it is also possible that an apparent adverse event following immunization that resulted in parents seeking medical attention for their child caused the parents' perception of vaccines to become more negative. both possibilities may result in parents declining future vaccines for their children. negative attitudes could be addressed by improving communication between clinician and parent. benefit-cost analysis research has shown that physician advice can produce benefits for health issues (eg, problem drinking). moreover, positive communication behaviors such as humor and soliciting questions are associated with a lowered risk of a malpractice suit against the physician.
it may be that in this era of low vaccine-preventable disease incidence and increased public questioning of immunizations, improved provider communication can produce a positive net benefit for parents (reduced anxiety), a cost benefit to the health care system (reduced calls and medical visits for nonserious adverse events following immunization), and an improved physician-patient relationship (more trust and fewer malpractice suits).

individual people can vary in their perception of the magnitude of vaccine risks. studies have shown that various factors such as sex, race, political worldviews, emotional affect, and trust are associated with risk perception. in addition, risk perception factors such as involuntariness, uncertainty, lack of control, and a high level of dread can lead to a heightened perception of risks. all of these can be seen as associated with childhood immunizations. moreover, these factors have been referred to as "outrage" factors in the risk communication literature. outrage can lead to a person responding emotionally and can further increase the level of perceived risk. it can be difficult to communicate the risk of many vaccine-preventable diseases given their low prevalence in the united states, and difficult to communicate the risks of serious vaccine adverse events because they affect such a small proportion of vaccine recipients. several factors have been studied that might help people to better understand risk; the first is the use of comparisons. comparisons that are similar (apples to apples) are reported to be better accepted, and, thus, comparisons for vaccines should focus on things that generally prevent harm in children but could pose a small risk (such as bicycle helmets or car seats). the second is the use of visual presentations that help people understand numerical risk, including risk ladders, stick figures, line graphs, dots, pie charts, and histograms. unfortunately, there has been little research done in either of these areas. trust in the source of the risk information is an important factor in its ability to influence people and, as discussed, is developed through listening and ongoing communications.

on an index of biomedical literacy designed to measure understanding of biomedical terms and constructs, american adults have achieved only modest average scores; people with low scores would likely find it difficult to understand medical stories about why antibiotics are not effective in combating the common cold or about the relationship between certain genes and health. the main factors associated with biomedical literacy are the following: (1) level of formal education, (2) number of college-level science courses, and (3) age. some characteristics of scientific literacy include the following abilities: (1) distinguishing experts from the uninformed; (2) recognizing gaps, risks, limits, and probabilities in making decisions involving a knowledge of science or technology; (3) recognizing when a cause-and-effect relationship cannot be drawn; and (4) distinguishing evidence from propaganda, fact from fiction, sense from nonsense, and knowledge from opinion. unfortunately, the parents least motivated to obtain timely immunizations for their children are often characterized by a low educational level. there is a wide gap in the level of biomedical understanding across the us population, and this gap emphasizes the need for tailored information. the need for tailored information applies to all areas of health, including childhood immunizations.
immunization educational materials aimed at a middle level, or a "one size fits all" approach, are not likely to satisfy all parents' needs.

the importance of educating parents concerned about vaccines

why should we care about a small number of parents who are worried about vaccines for their children? we should care because it is not only ethically the right thing to do, it is also the right thing to do from a practical viewpoint. vaccine acceptability refers to the factors that go into parents' decisions to have their children immunized. it is important not to assume that just because most parents are having their children immunized they will continue to do so. while the host of factors contributing to parents' decisions to have their children immunized (eg, need for information, experience with adverse events) might remain stable for some time, it is possible that one or more of the factors may change so that some parents perceive the risks of vaccines to be greater than the risk of disease. this would then push those parents beyond a theoretical "unacceptability threshold," at which point they would choose not to have their children immunized with one or more vaccines. this is especially possible as more vaccines are added to the immunization schedule. an increasing number of parents have a choice, through religious or philosophical immunization exemption laws or through schooling their children at home. averting the future possibility of outbreaks of vaccine-preventable diseases will take a concerted effort by health care and public health professionals to educate and better communicate with parents concerned about immunizations. in guidance for clinicians, the american academy of pediatrics suggests that pediatricians should listen carefully and respectfully to parents' immunization concerns, factually communicate the risks and benefits of vaccines, and work with parents who may be concerned about a specific vaccine or about having their child receive multiple vaccines in one visit. providers can make a huge impact on vaccine acceptance, resulting in a cascading effect in which providing information can increase trust and increasing trust can lead to greater acceptance of and confidence in vaccines. for health care providers to be able to optimally fill this important role, however, two related issues should be addressed. the first is the need for quality communication courses and training in medical schools, residencies, and training programs for medical and public health professionals. the second is for mcos and medical insurance companies to adequately reimburse physicians for health education. lack of reimbursement to physicians has been noted as a barrier to implementation of behavioral treatments for health issues such as heart disease and smoking. it is important to note that studies have shown education programs can produce cost savings for health care systems. we live in a world already benefiting from the vaccines that exist, and there is the promise of more vaccines to come. the challenge we have now is to make sure that the promise is not lost because we did not present the benefits and risks of vaccines in a meaningful way acceptable to the public.

an optimal immunization safety system requires rigorous attention to safety during prelicensure research and development; active monitoring for potential safety problems after licensure; and clinical research and risk management activities, including risk communication, focused on minimizing potential vaccine adverse reactions.
prelicensure activities form the foundation of vaccine safety. rapid advances in biotechnology are leading to the development of new vaccines, and novel delivery technologies, such as dna vaccines and new adjuvants, are being developed to permit more antigens to be combined, reducing the number of injections. new technologies can also be expected to be used to detect potential safety problems throughout the research and development process (eg, adventitious agents). a challenge will be determining the proper role and interpretation of new technologies. for example, a recent study used powerful new metagenomics and panmicrobial microarray technologies to screen for adventitious viral nucleic acid sequences in a number of vaccines. the study identified the presence of dna from porcine circovirus type 1 (pcv1) in rotarix. this finding led to a temporary suspension of the use of the vaccine while the fda evaluated the study and its implications. ultimately, it was determined that the presence of the pcv1 nucleic acid sequences did not represent a health concern, and use of the vaccine was allowed to resume.

in the prelicensure evaluation of new vaccines, the trend is likely to continue toward conducting larger phase 3 trials enrolling tens of thousands of participants. while such larger trials are helpful in identifying more rare adverse events, even these larger trials may not be large enough to detect increased risks of rare events. for example, the rotarix prelicensure trial identified no increased risk of intussusception in a study that enrolled more than 60,000 infants. the manufacturer nevertheless committed to conduct a large postlicensure safety monitoring study. a preliminary analysis of postlicensure monitoring data from mexico identified a statistically significant increased risk within the first week after vaccination, with an attributable risk of approximately 1 per 50,000. this attributable risk was much less than that found for rotashield (approximately 1 per 10,000), and no changes were made to the vaccine recommendations.

although technological advances and more thorough evaluation of safety before vaccines are licensed should lead to the development of safer vaccines, there will continue to be a need for comprehensive postlicensure safety monitoring systems. combined with the difficulties associated with identifying rare, delayed, or insidious vaccine safety problems in prelicensure studies, the well-organized consumer activist organizations, internet information of questionable accuracy, media eagerness for controversy, and relatively rare individual encounters with vaccine-preventable diseases virtually ensure that vaccine safety concerns are unlikely to go away. the existence of a robust vaccine safety monitoring system is essential for providing assurance of the safety of currently marketed vaccines and for rapidly identifying and responding to potential safety problems. currently, spontaneous reporting systems (srss), such as vaers, serve as the frontline systems for the early identification of vaccine safety problems. such systems could be improved if reporting were more complete. application of web-based and text messaging technologies could make reporting easier and more accurate and also enable more active follow-up of vaccinated persons. alerts built into electronic medical record systems could also improve reporting to vaers, as could linkages with immunization registries.
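the chapter does not state which screening algorithms are applied to srs data; one widely used disproportionality statistic for databases like vaers is the proportional reporting ratio (prr). a hedged python sketch with invented report counts:

    # a sketch of the proportional reporting ratio (prr), a common screening
    # statistic for spontaneous reporting systems. the chapter does not say
    # which algorithms vaers analysts use; the counts below are invented.

    def prr(a: int, b: int, c: int, d: int) -> float:
        """prr from a 2x2 table of report counts:
            a: reports of the event for the vaccine of interest
            b: reports of other events for that vaccine
            c: reports of the event for all other vaccines
            d: reports of other events for all other vaccines
        """
        rate_vaccine = a / (a + b)
        rate_others = c / (c + d)
        return rate_vaccine / rate_others

    # hypothetical counts: 30 reports of event x out of 1,000 for vaccine v,
    # versus 200 out of 50,000 for all other vaccines combined.
    signal = prr(a=30, b=970, c=200, d=49_800)
    print(f"prr = {signal:.1f}")  # values well above ~2 are often flagged for review

more complete reporting, as discussed above, directly improves the counts that such screening statistics depend on.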
some of these advances will be particularly important for monitoring vaccine safety in mass vaccination campaigns, during which vaccinations may be administered primarily outside of the traditional health care system. an optimal vaccine safety monitoring system must also include a mechanism or infrastructure to rapidly conduct formal epidemiologic evaluations of potential safety problems identified from srss or other sources. in the united states, this function is primarily served by the vsd project. the diffusion of electronic health records and the capability to link records across data systems (such as large health insurance claims databases and immunization registries) may allow the expansion of the population that could be included in postlicensure epidemiologic evaluations of vaccine safety. for example, the fda sentinel initiative has a goal of developing a national electronic system covering 100 million people for monitoring the postmarket safety of drugs and other medical products, including vaccines.

for adverse reactions that are established to be caused by vaccines, clinical and laboratory research is essential for determining the biological mechanisms of the adverse reaction, which in turn could lead to the development of safer vaccines. clinical research is also essential for the development of protocols for safer vaccination, including revaccination of persons who have previously experienced an adverse reaction. advances in genomics and immunology hold particular promise for elucidating the biological mechanisms of vaccine adverse reactions and for the development of possible screening strategies for persons who may be at high risk for an adverse reaction. a challenge for such research will be identifying sufficient numbers of people who may have rare vaccine adverse reactions and enrolling them into studies in which appropriate biological samples can be collected, stored, and analyzed under a standardized protocol.

scientific data are essential in the monitoring and evaluation of vaccine safety, but scientific evidence alone often is not sufficient for providing reassurance about the safety of a vaccine. although immunization levels of us children are high, a sizable fraction of parents do not have their children fully immunized, and concern about vaccine safety is the leading reason for underimmunization. these concerns persist despite the scientific evidence that vaccines do not cause autism or a host of other conditions that have been alleged to be caused by vaccines, such as asthma, diabetes, and autoimmune diseases. thus, it is critically important that public health agencies, medical organizations, and other influential authorities continue to focus on the safety of vaccines and assure public confidence by providing clear, consistent messages on vaccine safety concerns; supporting effective and transparent vaccine safety monitoring systems and research activities; providing review and recommendations by respected independent expert groups on vaccine safety controversies; and engaging advocacy groups in constructive and open dialogue about their vaccine safety concerns. although the efforts of government, medical, and other authorities are important, it is health care providers who have the greatest influence in determining the acceptance of vaccines by individual people. even among parents who believe that vaccines may not be safe, most will have their children vaccinated if they have a trusting relationship with an influential health care provider.
thus, the development of tools and strategies that can assist health care providers in effectively communicating with their patients about the risks and benefits of vaccines will continue to be important.

vaccine safety has also become an important concern in developing countries. the high-titer measles vaccine mortality experience highlighted the importance of improving quality control and of evaluating the safety of vaccines used in developing countries. plans to eliminate neonatal tetanus and measles via national immunization days, during which millions of people receive parenteral immunizations over a period of days, pose substantial challenges to ensuring injection safety, especially given concerns about inadequate sterilization of reusable syringes and needles, recycling of disposable syringes and needles, and cross-contamination resulting from the current generation of jet injectors. the who has promoted the use of safer auto-disposable syringes and disposal boxes. these and other new, safer administration technologies are urgently needed. in addition, there is a need to establish minimal vaccine safety monitoring capabilities, such as srss, along with the capability to rapidly investigate vaccine safety problems and effectively communicate the findings of those investigations.

vaccines are among the most successful and cost-effective public health tools for preventing disease and death. vaccines, however, are not completely without risk of side effects or other adverse outcomes. a timely, credible, and effective monitoring system, coupled with prompt action in response to identified safety problems, is essential to preventing adverse effects of vaccination and to maintaining public confidence in immunizations. since immunizations are typically administered to healthy people and are often recommended or mandated to provide societal and individual protection, vaccines must be held to a very high standard of safety. vaccine safety monitoring and research should optimally be able to detect potentially very small levels of increased risk, especially for adverse events that can result in death or permanent disability from vaccines that are universally recommended or mandated. the ultimate goal of such research, including the application of new developments in biotechnology, is to develop safer vaccines and vaccination practices.
we are grateful to robert davis, deborah gust, robert chen, and charles hackett, who contributed sections of this chapter in previous editions of this book, and for the excellent assistance on this chapter rendered by the following persons: dan salmon, john iskander, susan scheinman, christine korhonen, allison kennedy, michele russell, tamara murphy, penina haber, and gina mootrey.

key: cord- -ktrw u authors: gupta, abhishek; lanteigne, camylle; heath, victoria (montreal ai ethics institute; microsoft; algora lab) title: report prepared by the montreal ai ethics institute (maiei) on publication norms for responsible ai cord_uid: ktrw u

the history of science and technology shows that seemingly innocuous developments in scientific theories and research have enabled real-world applications with significant negative consequences for humanity. in order to ensure that the science and technology of ai is developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of ai's potential threats and use cases. unfortunately, it's difficult to create a set of publication norms for responsible ai because the field of ai is currently fragmented in terms of how this technology is researched, developed, funded, etc. to examine this challenge and find solutions, the montreal ai ethics institute (maiei) co-hosted two public consultations with the partnership on ai in may 2020. these meetups examined potential publication norms for responsible ai, with the goal of creating a clear set of recommendations and ways forward for publishers. in its submission, maiei provides six initial recommendations: (1) create tools to navigate publication decisions, (2) offer a page number extension, (3) develop a network of peers, (4) require broad impact statements, (5) require the publication of expected results, and (6) revamp the peer-review process. after considering potential concerns regarding these recommendations, including constraining innovation and creating a "black market" for ai research, maiei outlines three ways forward for publishers: (1) state clearly and consistently the need for established norms, (2) coordinate and build trust as a community, and (3) change the approach.

• create tools to navigate publication decisions: the use of extrinsic measurements like benchmarks or third-party expert panels could be a crucial step toward navigating publication decisions in a fair way. such methods could set a certain standard in terms of the acceptable level of risk associated with a publication. in line with this suggestion, it would also be pertinent to keep a record of the papers that were rejected due to their inherent risk, as well as some metrics on these.

• offer a page number extension: it may be beneficial to extend the page limit for published papers to allow researchers to include negative results (results that are insignificant or that disprove researchers' hypotheses), which aren't traditionally printed. in addition to expanding the number of pages, there should also be a significant change in the culture surrounding the publication of negative results.
• develop a network of peers: developing a network of peers to evaluate researchers' ai models in terms of potential risks and benefits may be an important tool towards better and safer publication norms. if such a mechanism were put in place, the evaluation could be fully or partly based on philosopher john rawls's "veil of ignorance." if this idea were applied to reviewing ai research, peer reviewers could be asked to consider the potential advantages and risks of new ai research from different social perspectives.

• require broad impact statements: the neurips conference requires that submitted papers include a statement of broader impact with respect to the research presented. this incentivizes researchers to think about potential risks and benefits by making reflection a requirement for one's work to be considered at neurips. similar measures at all conferences and publications would encourage researchers to critically assess their research in terms of its effects, positive and negative, on the world.

• require the publication of expected results: requiring that researchers write and publish the expected results of their research project (including but not limited to its broader social and ethical impacts) could help foster reflection around potential benefits and harms even before researchers undertake their project.

• revamp the peer-review process: the well-established practice of peer review is a great opportunity for exchanges on the risks and benefits each reviewer sees in the paper they are reviewing. if a question or requirement were added to this effect when papers are reviewed, it might have a rapid and widespread impact in prompting researchers to consider what may follow from their research. an effective review process should promote limiting risks while also being clear, fair, and efficient. one way of doing this, sketched below, is to intensify the requirements for publication proportionally to how risky the research is deemed by the peers reviewing the paper.
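as a hedged illustration of that last recommendation, the risk-proportional requirements could be encoded as a simple policy table; the tiers, thresholds, and artifact names in this python sketch are invented, not taken from the report:

    # a sketch of 'risk-proportional requirements' for publication. the tiers,
    # score scale, and required artifacts are invented for illustration; the
    # report does not define any of them.

    REQUIREMENTS_BY_TIER = {
        "low":    ["broader impact statement"],
        "medium": ["broader impact statement", "negative results appendix"],
        "high":   ["broader impact statement", "negative results appendix",
                   "third-party expert panel review", "staged release plan"],
    }

    def required_artifacts(mean_reviewer_risk_score: float) -> list:
        """map an aggregate reviewer risk score (0-10, hypothetical scale)
        to the publication requirements for that tier."""
        if mean_reviewer_risk_score < 3:
            tier = "low"
        elif mean_reviewer_risk_score < 7:
            tier = "medium"
        else:
            tier = "high"
        return REQUIREMENTS_BY_TIER[tier]

    print(required_artifacts(8.2))  # a high-risk paper faces the longest checklist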
and the broader societal interest. the academic paradigm of " publish or perish" and the undue pressure it creates, overshadows more fundamental questions that need to be asked. the most important of which is, "why is this research project being pursued in the first place?" this lack of critical reflection and external pressure has given rise to predatory journals with lax quality standards regarding what gets accepted for publication-this is especially an issue for researchers who are new to an academic field and are uncertain about publishing norms. for example, research in phrenology continues to be accepted in highly revered journals despite decades of precedent demonstrating that phrenology is pseudoscience. the egregious inclusion of this type of research resulted in severe backlash from scholars that led to a retraction and apology from the journal. this example highlights how fallacious research can slip through the cracks, especially without critical reflection. in order to ensure that the science and technology of ai are developed in a humane manner, we must develop research publication norms that are informed by our growing understanding of ai's potential threats and use cases. unfortunately, there haven't been many efforts to create a set of publication norms for responsible ai because the field of ai is currently fragmented in terms of how this technology is researched, developed, funded, etc. thus, the norms around ai publications and ethical standards are not only fragmented, but also contradictory in many cases. a standardized approach to publication norms in ai across a large number of jurisdictions is essential. many subfields in ai are experiencing a boom in interest, with growing demands to produce novel research. thus, there is an elevated risk of a lack of awareness on what adequate and rigorous publishing norms are. additionally, there is a high degree of susceptibility to predatory journals that lure budding researchers through a "pay-to-play" model. this amplifies the potential impacts of harmful research that must be critically reviewed before published for public consumption. most scientific and academic journals have particular guidelines for submissions, which are a form of publication norms. however, problematic research can still be published. for example, researchers exerted external pressure on springer to withdraw the publication of a paper in which the researchers claimed to "predict criminality" using neural networks for facial recognition. this demonstrates that the publication ecosystem requires norms with multiple filters. as well as editorial boards that have sufficient demographic diversity and a range of expertise to be able to flag problematic research and prevent its publication, especially in cases where there is potential to render harm on marginalized peoples. however, the prevalent practice in peer-reviewed journals is to evaluate a work purely on its scientific merit, which overlooks the inherent interaction between fundamental research and the social context in which it's conducted. publication norms have a strong role to play in ensuring interdisciplinarity in review processes so that too narrow a focus does not allow potentially harmful work to pass as seemingly innocuous. beyond the social implications of research, it's also important to consider the technical implications. for example, in cryptography, it's important to find vulnerabilities in a system to improve robustness. 
cryptographers do this by looking at a system from an adversarial perspective, as well as by sharing their systems openly so they can be evaluated for undue risks by as many people as possible. for example, decades ago this process of open scrutiny of the substitution boxes (s-boxes) in the data encryption standard (des) led to the subsequent discovery of differential cryptanalysis. a similar type of vulnerability was seen in the jboss middleware, which shipped quietly with dozens of consumer-grade software packages, often in an unpatched, or even unpatchable, version. similarly, software which incorporates ai that relies on an unpatchable library will be vulnerable. to expose such vulnerabilities, as well as better understand the social implications of ai research, journal editors must be better equipped to identify all actors who may engage with ai research and make well-informed decisions around whether research should be published, and how. we believe developing a standardized approach to publication norms in ai across a large number of jurisdictions is the first step towards ensuring that the science and technology of ai are developed in a humane manner. the use of extrinsic measurements like benchmarks, or third-party expert panels, could be a crucial step towards navigating publication decisions in a fair way. such methods could set a certain standard in terms of the acceptable level of risk associated with a publication. in line with this suggestion, it would also be pertinent to keep a record of the papers that were rejected due to their inherent risk, as well as relevant metrics. for example, what subfield of ai was the paper part of? what institution or corporation were the authors affiliated with (if any)? demographic and identity metrics may also be pertinent to help ensure the review process is not discriminatory. to this end, mechanisms like retraction watch that are invested in by the scientific community and held to the highest academic standards will be a cornerstone of strategies combating spurious research. awareness of such mechanisms and their integration into tools like google scholar, akin to covid-19 research warnings for non-peer-reviewed articles, is essential. it may be beneficial to extend the page limit for published papers to allow researchers to include negative results (results that are insignificant or disprove researchers' hypotheses), which aren't traditionally printed. there is a bias towards publishing papers with positive results, that is, results that confirm or partially confirm one's hypothesis. in addition to expanding the number of pages, there should also be a significant change in the culture surrounding the publication of negative results. while a higher page maximum may help, it should be accompanied by other measures to encourage researchers to share their negative results and push the broader scientific community to consider negative results in their analysis. publishers could require the inclusion of all negative results that led the researchers to the positive results published and/or submitted. these could also be indexed through a standardized mechanism for retrieval by engines like google scholar so that downstream researchers are aware of what areas have already been explored. improving the citation and valuation of such negative-results work will also incentivize the ecosystem to invest in making negative results more public and elevate the quality of scientific publishing in the space.
developing a network of peers to evaluate researchers' ai models in terms of potential risks and benefits may be an important tool towards better and safer publication norms. if such a mechanism were put in place, the evaluation could be fully or partly based on philosopher john rawls's "veil of ignorance," which asks individuals to create a new society and choose its governing principles without knowing their individual characteristics (e.g. gender, race, social class, etc.). the idea is that they'll choose principles that will benefit everyone in society since they're "ignorant" of their personal circumstances and social standing. if this idea were applied to reviewing ai research, peer reviewers could be asked to consider the potential advantages and risks of new ai research from different social perspectives. specific details about fictional but realistic situations and personas may be used to help reviewers think more accurately and relevantly about privileges and disadvantages they might not have themselves. of course, this shouldn't replace important efforts to make the field of ai more diverse and inclusive; but the exercise of imagining how another person may be affected, positively or negatively, by an ai model is relevant for everyone. this practice has precedent in design thinking approaches and has been shown to create products and services that are ultimately more empathetic to the needs of the communities they are meant to serve, and to encourage the construction of more inclusive work. an important method to incite researchers to reflect on the impacts of their work is to give them incentives to do so. for example, the neurips conference recently added a requirement for submitted papers: they must include a statement of broader impact with respect to the research presented. this incentivizes researchers to think about potential risks and benefits by making reflection a requirement for one's work to be considered at neurips, one of the most prestigious conferences in the field of ai. therefore, similar measures at all conferences and publications may be a good way of encouraging researchers to critically assess their research in terms of its effects, positive and negative, on the world. iclr has followed suit and has included a code of ethics for a recent edition, which is an important step towards these standards becoming mainstream, not just something that one gives a brief nod to. it's important to note that how the research community adopts the formulation of broader impact statements is of fundamental importance. if broader impact statements become yet another "ethics stamp" on poorly conceived research, then the entire endeavour becomes counterproductive. researchers need to critically evaluate the broader impacts of their research before starting a project and use these insights to guide a conscientious research methodology, not construct these perspectives after the fact. the point of writing a broader impact statement must be to reflect honestly on the ethics of one's work, not to avoid desk rejection. one way of doing so is to popularize the importance of these norms in the early stages of a researcher's journey into the publishing world. during their formative years in the field, early-career researchers might be naturally drawn to emulating the behaviour of the more experienced researchers in their labs and workplaces.
an educational push for holding research ethics, norms, and standards as the paramount element of doing research and development in a consequential field like ai needs to be strongly encouraged. principal investigators (pis) and other senior researchers in both academic and industry labs should shepherd those who are still growing accustomed to the modes of operation of research in the domain. the widespread presence of such a requirement could also incite greater awareness among technical researchers in the field of ai regarding fields and areas like ethics, psychology, anthropology, sociology, and critical race theory. furthermore, making impact statements and other educational pushes necessary could spark interdisciplinary collaborations between researchers. requiring that researchers write and publish the expected results of their research project (including but not limited to its broader social and ethical impacts) could help foster reflection around potential benefits and harms even before researchers undertake their project. for example, a list of journalists who would be interested in hearing about the benefits of the author's thesis for the rest of society could be created, incentivizing the consideration of a project's benefits (scientists will want their papers to be promoted in newspaper articles). this would not only encourage researchers to reflect on the impacts of their work, but perhaps guide them towards generally less risky and more beneficial research. this could be similar to how many psychologists now publish a preliminary report on how they will conduct their research, the variables they will be measuring, and their hypothesis, before they perform an experiment. this prevents data manipulation in attempts to find statistically significant results. in the language of software engineering, it could create something akin to test-driven development (tdd), whereby researchers preemptively state what they are aiming to achieve with their research and can view negative results in a positive light: approaches that did not lead to the stated goals of the research nonetheless build an understanding of some aspects of the field, improving the knowledge base for future researchers in the domain (a sketch of this idea in code follows this paragraph). of course, having much greater diversity of researchers in the field of ai (in terms of race, gender identity, sexual orientation, geography, language, lived experiences, and socioeconomic status) could also make a significant difference in how and how much researchers think about the impacts of their work. it seems reasonable that, in at least some cases (if not many), the potential benefits and pitfalls of researchers' work may be more pronounced for a certain demographic, and people who are part of that demographic are more likely to be attuned to how it may impact them. thus, having more diverse actors in the field of ai could help foster more comprehensive reflection on the potential impacts of a research project. additionally, a framing whereby researchers take on the onus to surface these impacts, rather than placing the burden on the people who might be impacted (who often might not have adequate knowledge, resources, or abilities) to defend themselves, creates a more pro-social way forward for conducting research.
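to make the tdd analogy above concrete, here is a minimal sketch, in python, of what a pre-registered expectation might look like in code. everything here (the class, field names, and the 0.02 effect threshold) is illustrative and not drawn from any existing pre-registration tool; the point is only that the expected result is frozen before the experiment runs, so a negative result is recorded rather than silently discarded.

```python
# a minimal sketch of "test-driven research": the expected result is written
# down and frozen before the experiment runs, so negative results are
# recorded rather than silently discarded. all names here are illustrative.
from dataclasses import dataclass, asdict
import json

@dataclass(frozen=True)
class PreRegistration:
    hypothesis: str          # what the researchers expect to find
    metric: str              # how the outcome will be measured
    minimum_effect: float    # smallest result that would count as success

def evaluate(prereg: PreRegistration, observed_effect: float) -> dict:
    """Compare the observed outcome against the frozen expectation."""
    confirmed = observed_effect >= prereg.minimum_effect
    return {
        "preregistration": asdict(prereg),
        "observed_effect": observed_effect,
        "confirmed": confirmed,  # False is still a publishable, indexable result
    }

if __name__ == "__main__":
    prereg = PreRegistration(
        hypothesis="data augmentation X improves accuracy on task Y",
        metric="test accuracy delta vs. baseline",
        minimum_effect=0.02,
    )
    # the experiment would run here; 0.004 stands in for its observed outcome
    report = evaluate(prereg, observed_effect=0.004)
    print(json.dumps(report, indent=2))  # a negative result, stated as a goal miss
```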
the well-established practice of peer review is a great opportunity for exchanges on the risks and benefits each reviewer sees in the paper they are reviewing. if a question or requirement were added to this effect when papers were reviewed, it may have a rapid and widespread impact in inciting researchers to consider what may follow from their research. this could be done in a way that is similar to how the world health organization (who) highlights dual use research of concern (durc): research in the realm of the life sciences "that is intended for benefit, but which might easily be misapplied to do harm." to have a similar category for ai research would be pertinent, and developing guidelines for this kind of research seems crucial, both to define it and to control it, especially given that ai squarely falls under the category of a dual-use and general-purpose technology. such guidelines might include needing special authorization to conduct this type of research, or it may be required that the research be reviewed more extensively before being published. further, an effective review process should promote limiting risks while also being clear, fair, and efficient. one way of doing this is to intensify the requirements for publication proportionally to how risky the research is deemed by the peers reviewing the paper. one way to go about this could be to identify a list of specific risks that any ai model proposed in a research paper could have. these risks may have numerical weights assigned to them depending on how dangerous their consequences may become. each peer reviewer could identify the risks they deem relevant to the ai model they are examining and assign each of those risks a likelihood score reflecting how likely it is that the risk will materialize. the risks' weights could then be multiplied by their corresponding likelihood scores, and these products compiled into a sum. the sums obtained by the individual reviewers of a paper can then be averaged. if this 'total risk average' is higher than a pre-established number, the paper could then be immediately rejected, require further reviewing and discussion by a third-party group of experts, or be published under much stricter requirements than other papers that are below the 'total risk average' threshold (a sketch of this computation in code follows this paragraph). such a threat-modelling approach is already used extensively in the field of cybersecurity to prioritize risks and vulnerabilities and guide the efforts of researchers and practitioners in working to address them. this process, or one with a similar structure, has the advantage of remaining somewhat efficient by not requiring that papers considered non-risky be subjected to more time-consuming reviewing or more stringent requirements unnecessarily. of course, a procedure like the one presented above risks wrongly identifying a model as low risk, whereas a process that held all papers to a higher standard regarding risk would avoid this. however, it's unlikely that applying more stringent standards to all papers is necessary, efficient, or realistic considering the sheer volume of papers published. a more targeted approach seems better suited to the reality of publishing in the domain of ai. a consideration that publishers can adopt is to track the rate of false positives over successive rounds of review and adjust as they go along to judge the efficacy of this mechanism.
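the arithmetic of the 'total risk average' described above is straightforward to sketch. the risk names, weights, likelihood scale (assumed here to run from 0 to 1, since the source does not fix the range), and rejection threshold below are all invented for illustration:

```python
# a sketch of the 'total risk average' computation described above.
# risk weights, the 0-to-1 likelihood scale, and the escalation threshold
# are all assumptions for illustration; the source does not fix them.
RISK_WEIGHTS = {
    "enables_surveillance": 8.0,
    "dual_use_misapplication": 6.0,
    "discriminatory_impact": 9.0,
}

def reviewer_risk_sum(likelihoods: dict) -> float:
    """One reviewer: weight of each flagged risk times its likelihood, summed."""
    return sum(RISK_WEIGHTS[risk] * p for risk, p in likelihoods.items())

def total_risk_average(all_reviews: list) -> float:
    """Average the per-reviewer sums across the review panel."""
    return sum(reviewer_risk_sum(r) for r in all_reviews) / len(all_reviews)

reviews = [
    {"enables_surveillance": 0.7, "discriminatory_impact": 0.4},
    {"enables_surveillance": 0.5, "dual_use_misapplication": 0.2},
    {"discriminatory_impact": 0.6},
]
score = total_risk_average(reviews)
THRESHOLD = 6.0  # pre-established cutoff, assumed here
if score > THRESHOLD:
    print(f"{score:.2f}: escalate to third-party expert review or reject")
else:
    print(f"{score:.2f}: proceed with standard review")
```

one design consequence worth noting: because each reviewer only scores the risks they themselves flag, a risk that no reviewer recognizes contributes nothing to the average, which is exactly why the diversity of the review panel discussed next matters so much.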
at the risk of becoming repetitive, the cornerstone of an effective review process, as previously mentioned, will be a diverse body of reviewers from the point of view of race, gender, disability, socioeconomic status, geography, and more. risk assessment is unlikely to be fair and comprehensive if all reviewers have similar backgrounds and experiences. this will result in direct harm to those not represented among reviewers, who are often individuals who are already marginalized and most at risk. such diversity needs to be created proactively and will not necessarily emerge organically, since reviewers are often sourced from a tightly knit network of people one is already familiar with; breaking free from that requires constant and conscious effort. one important area where there may be some pitfalls is innovation. if publishing norms are more stringent, then cutting-edge research may be ignored or underfunded. in some cases this may be because the potential risks outweigh the benefits. however, in other cases it may be the result of a lack of awareness on the researcher's part of what those publication norms are. this could have a particularly negative impact on emerging scholars in ai from regions and countries where ai research and development is in nascent stages, especially those which have a less than proportionate representation in the scientific publications at major conferences and journals (which are mostly concentrated in the western hemisphere). this could harm scientific diversity in the field. one can undoubtedly see this as a missed opportunity, but it's better framed as an opportunity for better work and innovation. what if we could use this as an opportunity to build ai that did more good than harm? by erring on the side of caution, we may encourage researchers to better understand the consequences of developing and deploying a certain system or application. understanding the tail risks of innovation is critical. the insight from fields such as cryptography is that the risks from poor research are enormous, and given the nature of ai systems, a similar level of risk is to be expected. in light of this understanding of asymmetric returns, we need to raise our standards of what constitutes "innovation," and of who gets to decide whether a new technological application constitutes positive innovation, bringing more good than harm. from this perspective, constraining innovation is not a pitfall of changing publishing norms. instead, changing publishing norms may foster higher-quality, more inclusive and positive innovation. it can be viewed as a mechanism to bend the field of research towards a prosocial direction, something that governments use frequently in the form of regulations to guide how innovation happens in the market. in a similar vein, like many other scientific disciplines, machine learning and artificial intelligence have been affected by their own reproducibility crisis, where a worrying number of algorithmic research results cannot be reproduced when other data scientists run the same experiments. this is of particular concern in fields such as digital biomedicine, where faulty models for disease diagnosis and monitoring can place human lives at great risk. one key component of the problem is the absence of information about the training and evaluation code, the number of training runs required, and the datasets used. another part of the issue is the environment in which data scientists operate, where there is pressure to publish quickly, a reluctance to report failed replications, and a lack of computational and human resources to test every condition and fine-tune each hyperparameter.
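one low-cost mitigation for the missing-information side of this problem is to publish a machine-readable record of exactly the items listed above: code version, number of runs, seeds, datasets, and hyperparameters. a minimal sketch follows, assuming a local train.csv exists and with all field values as placeholders:

```python
# a sketch of a reproducibility manifest capturing the items whose absence
# the text identifies: training/evaluation code version, number of runs,
# random seeds, datasets, and hyperparameters. field values are placeholders.
import hashlib
import json
import platform

def file_sha256(path: str) -> str:
    """Hash a dataset file so readers can verify they hold the same data."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

manifest = {
    "code_commit": "abc1234",                  # git revision of training/eval code
    "num_training_runs": 5,                    # how many runs the reported numbers average
    "random_seeds": [0, 1, 2, 3, 4],
    "hyperparameters": {"lr": 3e-4, "batch_size": 64, "epochs": 20},
    "datasets": {"train.csv": file_sha256("train.csv")},
    "environment": platform.platform(),
}

with open("reproducibility_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```

publishing such a file alongside a paper costs the authors minutes, yet it directly addresses the "number of training runs" and "which dataset, exactly" ambiguities that make replication attempts fail.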
better publication norms will not only encourage more rigour in scientific methodology, but also limit the number of cases where research is modelled on "false starts," helping to ensure that future innovation in the field is based on a set of verifiable and veritable discoveries. in an inherently stochastic domain like ai, experiments exhibit enough variability that one might, through sheer luck, find an experimental run that showcases the correlation one wants; this is all the more reason to advocate for higher rigour, and a focus on causation over correlation is going to be essential in addition to the points mentioned above. with more stringent norms on what gets published in the field of ai, there is a possibility that research deemed too risky will be driven underground. that is, researchers whose work is rejected may publicize their research in other ways, circumventing the measures put in place. due to the stricter publishing norms, the rejected research may become even more dangerous as it's excluded from mainstream critique and necessary scrutiny. put simply, stringent publishing norms may actually increase the likelihood of harmful ai by creating a "black market" for research that has subversive aims. of course, there are counterarguments against this scenario. first, the field of ai/ml is heavily dominated by individuals who are affiliated with universities and companies. in both cases, getting research published by a reputable journal or presented at a recognized conference is key to advancing their career. simply putting out research without it being published or affiliated with their institution (whether academic or corporate) because it is too risky according to publishing norms is of limited use. hence it seems unlikely that someone with the knowledge and qualifications necessary to build innovative yet risky ai would have an incentive to share it using an alternative method. there may be a greater incentive for scientists to conduct research that is more likely to be published in accordance with the new, more stringent standards. it is highly unlikely that the risks of underground research are big enough to warrant not moving towards more stringent publishing standards in terms of security and risk in the field of ai. it's also important to note that the field's current exclusivity to those with affiliations to large and wealthy institutions is not in itself positive. we believe the field of ai should be more accessible than it currently is in that regard. paradoxically, there are questions as to whether it's actually more dangerous if risky ai research goes "underground" than if it's publicized and sanctioned by highly regarded institutions and publications. if an ai research submission is rejected by a publication, then it's likely that it'll receive minimal attention. however, if risky ai research were to be published by these journals and publications, then it's likely it may do significant harm, because people (especially academics and fellow researchers) tend to refer to these sources for the latest relevant information about progress in the field, under the assumption that the information published has been peer-reviewed. it's this same research that often gets widespread media coverage as well. this follows from the principle of using "strategic silence" as a way of limiting the oxygen that is provided to mis-, dis-, and mal-information by not offering it a platform by way of discussion, time, and resources.
furthermore, general audiences are less likely to scrutinize research sanctioned by a journal or an institution. in other words, when risky papers are published, we may not be able to count on public scrutiny to highlight their dangerous possibilities. there have been numerous instances of popular media coverage of dubious research stemming from low-quality journals, preprint servers, and other places that had a significant impact on the public's perception of ai. as an example, research that claimed to create a "gaydar" identifying a person's sexual orientation was thoroughly debunked by leading researchers in the field of ai, but the harmful and offensive work still received significant media attention. we posit that this might be because of the highly technical nature of the field (not that this isn't the case in other domains, but there is disproportionate attention paid to advances in this field while the general public's ability to parse those advances for what they truly are might not be sufficient), which leads to an overestimation of the capabilities of the systems. the fact that such research was published, and thus endorsed by the publication or institution behind it, created a false sense of security and legitimacy for many people. papers that are published are usually revised and reviewed, and this process, along with the metaphorical seal of approval from the publication/institution, can obfuscate the overly risky nature of the results being published. it essentially shields the paper from scrutiny because, at first glance, the paper has all the characteristics of an acceptable or even outstanding paper. thus, getting widespread public scrutiny, especially from individuals outside the ai community, is likely to be quite difficult as nothing seems particularly questionable. with this in mind, there is a responsibility on the part of the journal's editorial board to have experts outside of the field review papers in order to identify errors or risks in the research. homogeneous editorial boards (i.e. those drawn from the dominant groups in society) don't have the perspective required to ask the essential, critical questions that a diverse board would. in this sense, a diverse board provides additional checkpoints at which controversial work can be scrutinized, before any public intervention is even required. hence, stricter publication norms that include such a board would be desirable. in addition, one of the mechanisms trialled with neurips and icml over the last few months is the concept of ml retrospectives (co-organized by abhishek gupta, one of the authors of this document), where researchers are asked to reflect on their prior work and identify shortcomings, improvements, changes, and any other comments they have on those papers, highlighting the need to revise understandings of discoveries over time as new evidence and knowledge comes to light. normalizing the "owning" of faults in prior publications can move accountability back to authors, no matter their prestige, leading to a healthier intellectual ecosystem. research progress may be slowed to allow for more thorough risk evaluation. this may affect the quick rate at which ai papers are currently published. nevertheless, pausing the influx of information on ai can provide more time to dissect, digest, and process research.
furthermore, a stricter process, and one that takes time, could discourage researchers from cutting corners, which could otherwise lead to having their paper rejected and make the process unnecessarily long for them. this can then help guard against unnecessary research being proposed. the trend of taking rejected submissions from one conference or journal, making quick, minor tweaks, and pushing them out to the next conference is a particularly problematic scenario; it encourages a "keep trying till you win" mindset that has the potential to lower the quality of overall scholarship in the field. making explicit where an idea was submitted before, the reasons for rejection, and how those shortcomings were addressed can serve two purposes: improving the allocation of scarce reviewing resources in the field and pushing higher standards of transparency in the field. cross-linking submissions on a platform like openreview (acknowledging, though, that there are many caveats to the platform) can help us move in that direction. some publishers may feel as if they have lost an element of control over what they publish if more stringent norms are put into place. they may also perceive these new norms as constricting what they traditionally viewed as "worthy" of scientific regard. whether true or not, many publications perceive themselves as being apolitical, taking an explicitly agnostic stance towards scientific developments. therefore, norms that have an arguably social and/or political bent could present a fundamental challenge and potentially give rise to an increase in the bifurcation of journals split along social, political, or other lines. there are a host of concerns that must be accounted for in the case that an external body (such as a journal, a norm-enforcement committee, an ethics board, etc.) oversees the release of research findings; one being the importance of equal opportunity and diverse representation in the technological sphere. this standard not only ensures that the domain of technological research remains democratic, but also prevents it from becoming an echo chamber or a site of informational homogeneity. in order to frame best publication practices for high-stakes ai-related research, we must ensure that this set of practices is predicated upon a firm anti-oppression mandate, whether this oppression is related to socioeconomic status, race, gender, ethnicity, mental/physical disability, or any other characteristic that incites unjust discrimination. responsible publication norms, by their very nature, must not only strive to mitigate harms related to the exposure of certain research findings, but also to inhibit the epistemic threat of information suppression under the guise of best practices. in order for a novel set of publication practices to be truly responsible in nature, it must be acknowledged that while a regulatory framework for ai research has the potential to reduce threats to public safety, it may also inadvertently serve as a site of gatekeeping, suppression, and other forms of exclusionary practice. such a framework therefore risks consolidating epistemic injustice within the academic, technological, and political spheres of influence, the prevention of which will require a defined set of safeguarding measures.
such injustice can be mitigated through radical transparency, where the publisher maintains an open list of papers that were rejected and the reasons for rejection in a public repository that is subject to public scrutiny and analysis, helping to prevent biases against any group. to ensure that reactive measures for harm mitigation (i.e. flagging papers post-publication) are not discriminatory in nature or grounded in any form of identity-based prejudice, a more robust set of flagging requirements must be established. in addition to allowing the user to select a general reason for flagging certain content, the process could be made less enticing and/or gameable to those with unfounded reasoning and discriminatory motives by requiring a more exhaustive and substantiated explanation as to why the content should be removed. to this end, making the process involved enough to deter trolling behaviour is one proposed approach. in the case of suspected discriminatory practice throughout the publication process, there ought to be an appeal protocol in place whereby subjects of discrimination can formalize a complaint against the relevant publication or platform and negotiate reconsideration of their research by a designated secondary screening committee. such a protocol may include an instrument of a similar nature to a checklist or dread-like (damage, reproducibility, exploitability, affected users, discoverability) scoring principles. rather than measuring security threats, this instrument could assess and systematize the long-term personal impacts of discriminatory publication practices, whether they be social, professional, or economic in nature (a sketch of such an instrument follows this paragraph). within a novel regulatory framework for responsible release norms, this impact checklist could be appealed to and taken up with a third-party entity in instances of publication prejudice, this entity having authority over the reconsideration of initially rejected research, along with the pursuit of certain disciplinary measures if necessary. such an organizational framework could serve to encourage publisher accountability, caution against future discriminatory practices, and ultimately preserve epistemic justice in the domain of ai research. more staged or closed publication practices pose a threat to democratized exposure under the guise of responsible release norms, which may result in the omission of crucial information or gatekeeping. for instance, if research unveils a potential risk of a given ai/ml model and there exists a conflict of interest between the publication platform and stakeholders invested in the model, the pretence of "alternative release strategies" could serve as something of a loophole for these stakeholders to suppress certain information and preserve their interests (whether financial, social, etc.) without being held liable for censorship. the malicious application of release norms not only entails self-interested nondisclosure, but also the suppression of projects carried out by actors who occupy marginalized identities, which could theoretically be falsely attributed to the overstated "harms" that their findings may yield, should they be disclosed publicly. such scenarios lend themselves to the risk of epistemic discrimination against marginalized groups. they can also have a chilling effect on the field of research and promote underground attempts at recreating the systems without proper controls in place, because of this obfuscation of the real impacts of the system, thus potentially leading to more harm.
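the dread-like instrument mentioned above is easy to prototype: classic dread averages five category ratings, and the sketch below renames the categories to the social, professional, and economic impacts the text describes. the category names, the 1-to-10 scale, and the triage cutoff are assumptions for illustration, not an established instrument:

```python
# a sketch of a dread-like scoring instrument. classic dread averages five
# 1-10 category ratings; here the categories are adapted, per the text, to
# the personal impacts of a discriminatory publication decision.
# category names and the triage cutoff are assumptions.
IMPACT_CATEGORIES = ("social", "professional", "economic",
                     "reproducibility_of_harm", "discoverability")

def impact_score(ratings: dict) -> float:
    """Mean of the 1-10 category ratings, dread-style."""
    assert set(ratings) == set(IMPACT_CATEGORIES), "rate every category"
    return sum(ratings.values()) / len(ratings)

# a complaint filed with the secondary screening committee, with ratings
# supplied by the complainant and reviewed by the committee
complaint = {"social": 7, "professional": 8, "economic": 5,
             "reproducibility_of_harm": 6, "discoverability": 4}
score = impact_score(complaint)

# an assumed triage rule for the third-party reconsideration entity
action = ("escalate to third-party reconsideration" if score >= 6
          else "log and monitor")
print(f"impact score {score:.1f}: {action}")
```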
it's clear that additional restrictions and/or guidelines in the field of ai are required because there are risks that need to be addressed without exception. unfortunately, however, there is no consensus on this in the field. therefore, more individual researchers, institutions, governments, etc. need to clearly and publicly communicate the potential societal impacts of ai research in order to convey the need for third-party regulation. in addition, more researchers and institutions supporting and replicating efforts like the neurips impact statement requirement would make such considerations more standardized across the field. forming alliances, making connections, and building relationships are the hallmarks of effective community collaboration. one way this can be enacted is through connecting with a publication standard-setting body, whereby concerned researchers and institutions would be able to lobby for greater action on the topic and witness first-hand how the publication norms are created. for example, a potential partnership with cope could prove fruitful. cope is an organization committed to educating and supporting editors, publishers, and those involved in publication ethics, with the aim of moving the culture of publishing towards one where ethical practices become a normal part of publishing culture. so far, it seems they haven't done any work related to the field of ai, but perhaps they would be interested in expanding their work to account for new ethical issues arising in the technology and computer science space. on a larger scale, a solid foundation for this collaboration within the community is the potential for an institution or process that is recognized worldwide and respected in terms of trust around technology. maiei's work on secure and green lighting ml offers potential options. these initiatives could provide a standardized comparative measure to evaluate trust in ai systems across the broad range of topics and themes contained within the community, providing a much-needed lingua franca. as previously mentioned with cope, the field of ai must work towards a new publication culture that prioritizes ethical practices and de-prioritizes progress for the sake of progress. to create safe and responsible ai, the core values and culture of the field must be informed by ethical practices, principles, and guidelines. additionally, it may be important to place a greater emphasis on the application of the technology after it has been deployed, rather than simply the publishing of the information itself. for example, canada has developed national ai standards, which regulate the application of the technology rather than the published content about said technology. this isn't to say that ai publication norms shouldn't be created and followed, but that they should be done in parallel with other efforts necessary to ensure that the science and technology of ai are developed in a more humane manner.
references:
• physiognomy's new clothes
• the "gaydar" study and the real dangers of big data
• the invention of ai 'gaydar' could be the start of something much worse (the verge)
• committee on publication ethics (cope)
• secure: a social and environmental certificate for ai systems
• green lighting ml: confidentiality, integrity, and availability of machine learning systems in deployment
• canada's cio strategy council publishes national ai standards
key: cord- -po bu v authors: chakraborty, sweta title: how risk perceptions, not evidence, have driven harmful policies on covid-19 date: - - journal: nan doi: . /err. . sha: doc_id: cord_uid: po bu v

covid-19 hits all of the cognitive triggers for how the lay public misjudges risk. robust findings from the field of risk perception have identified unique characteristics of a risk that allow for greater attribution of frequency and probability than is likely to be aligned with the base-rate statistics of the risk. covid-19 embodies these features. it is unfamiliar, invisible, dreaded, potentially endemic, involuntary, disproportionately impacts vulnerable populations such as the elderly, and has the potential for widespread catastrophe. when risks with such characteristics emerge, it is imperative for there to be trust between those in governance and communication and the lay public in order to quell public fears. this is not the environment in which covid-19 has emerged, potentially resulting in even greater perceptions of risk. novel disease outbreaks receive significant media attention, especially compared to other disease states that are known (eg cardiovascular disease, cancer or alzheimer's disease). this was true of h1n1, and has anecdotally so far proven true of covid-19. risks are amplified or attenuated through the media via social amplification stations, which can range from individuals to the news media. amplification happens in two stages: in the initial transfer of information about the risk, and in the response mechanisms in society. it is through these amplification stations that public perceptions of risks are shaped. these amplifications are exceptionally poignant in cases where first-hand knowledge is not tenable, such as with covid-19, and the public is therefore reliant on the media to help ascertain the risk. research shows that media coverage of a public health risk such as covid-19 can introduce particular risk characteristics that influence public perceptions and therefore become a factor in itself in how the risk is viewed. in addition to the extent of media coverage is the way a public health risk is framed in the media. as mentioned above, a new, unfamiliar disease will be ascribed far higher dread than a more familiar disease (eg lou gehrig's disease), even if the more familiar disease is actually deadlier. h1n1 was also referred to in south korean media as shin-jong, or "new," flu. before covid-19 was named, it was widely referred to in the media as the novel coronavirus. this media attention to a "new" or "novel" infectious disease frames diseases as unfamiliar and potentially catastrophic, which triggers cognitive over-attributions of frequency and probability. this, along with the social amplification of risk, amplifies risk perceptions and can result in the inaccurate overemphasis of primary public health impacts. given heightened public awareness of the primary public health impacts associated with the novel coronavirus, media coverage has acted as a feedback loop, reinforcing the generated public awareness of these impacts.
mass media have showcased epidemiological, medical, and public health perspectives on the impacts of covid-19, primarily the lives lost, at a serious detriment to understanding the big picture. observationally, there has been rare inclusion of risk or behavioural science expertise in the media. analysis of mass global media and social media coverage in the coming months and years will surely verify this observation. even being several months into the covid-19 outbreak, a comprehensive cost-benefit analysis of the various policies and combinations of policies put in place around the world has yet to be produced. policies have been based on historical data, models, and disproportionate emphasis on mitigating against primary public health impacts. practitioners in risk analysis know all too well the dangers of risk analysis and policy-making in silos, and yet there has been no mainstream, thorough cost-benefit analysis on covid-19 in the context of a complex, globally interconnected risk landscape. the global risk analysis community collectively holds a plethora of knowledge and data, as well as knowledge of where data are lacking, on the primary risks related to infectious disease (eg deaths caused by the disease), as well as secondary and tertiary impacts (eg mental health impacts, lost productivity). yet the risk and behavioural science community has hardly been included in real-time analysis of covid-19 and its impacts. policies designed after the emergence of an outbreak carry inherent risk stemming from analysis of data that are fluid and rapidly evolving. these risks can and should be minimised by ensuring that policies across various outcome scenarios are well thought out and ready for implementation long before a crisis hits. the need for proactive preparedness for an inevitable infectious disease outbreak has been consistently maintained by the infectious disease community. this lack of preparedness has resulted in disjointed policies reacting to public perceptions of risk. specifically, a proactive risk communication plan ahead of an outbreak would have allowed for clear, consistent communication that would have quelled public fears and presumably allowed evidence-based containment and mitigation policies to take hold. because of a variety of factors (eg resource restrictions, varying country priorities, general complacency when there is not an outbreak), not only are evidence-based policies not dictating nation-state responses within and beyond political borders, they are being replaced with fear-based measures. consistent, clear, and credible messaging helps to quell public fears. fischhoff et al found in a survey of the us public's understanding of ebola following the outbreak in west africa that the public is less likely to horribly misjudge risk when information is effectively and accurately communicated. people also have clear preferences about how they like to receive information and what sources are viewed as trustworthy. while risk tolerance varies across cultures around the globe, the public generally demands that governments ensure low exposure to risks, especially if they are new or unfamiliar. knowledge of this expectation is why proactive preparedness for anticipated risks is so critical. it has become painfully evident that this has not been the case for covid-19. the disjointed communication response following the outbreak has most definitely perpetuated distrust in the usa and around the world.
what the risk perception and communication community has urged since the development of the us centers for disease control and prevention (cdc) crisis communication lifecycle (honest, accurate information, ideally researched and tested, delivered by trusted spokespeople) has clearly been ignored at any meaningful level. the consequences of such poor preparedness and policies have real-world implications. governance decisions made in reaction to public fears err on the side of quelling short-term hysteria at the expense of worse overall outcomes. the secondary and tertiary impacts stemming from covid-19 will go well beyond the primary public health impacts. reactive policies such as prolonged quarantines and isolations may very well increase the odds of negative outcomes. for example, brooks et al found negative psychological effects of severe social distancing measures, including post-traumatic stress symptoms, confusion, and anger. they recommended that policymakers minimise such measures and communicate consistently throughout in order to reduce harm. the ripple effects of the policies put in place to mitigate against the primary public health impacts of covid-19 may very well produce a worse overall outcome picture. the role of the media in contributing to public perceptions of heightened risk, and the reaction of policy-makers in governing based on public fears and not the base-rate statistics of the disease (however fluid), will present several research opportunities in the future across multiple disciplines. this need not have been the case. it is evident that existing risk communication research has not been consistently consulted in managing the covid-19 outbreak, nor has a comprehensive risk-benefit analysis been conducted to prevent worse overall outcomes. these measures might have offset the power of the media in shaping risk perceptions, which might in turn have prevented potentially harmful policies and the misallocation of precious resources in battling this global disaster. hopefully, the takeaways from covid-19 will prove helpful for the next inevitable disease outbreak.
references:
• crisis and emergency risk communication as an integrative model
• media and social amplification of risk: bse and h1n1 cases in south korea, disaster prevention & management
• attention cycles and the h1n1 pandemic: a cross-national study of u.s. and korean newspaper coverage, asian journal of communication
• kasperson et al
• communication and health beliefs: mass and interpersonal influences on perceptions of risk to self and others
• the influence of mass media and interpersonal communication on societal and personal risk judgments
• communicating about emerging infectious disease: the importance of research
• public perceptions of everyday food hazards: a psychometric study, risk analysis
• oh et al, supra
• again, shin-jong flu?

key: cord- -ris bff authors: garrido, guillermo; dhillon, gundeep s. title: medical course and complications after lung transplantation date: - - journal: psychosocial care of end-stage organ disease and transplant patients doi: . / - - - - _ sha: doc_id: cord_uid: ris bff

lung transplant prolongs life and improves quality of life in patients with end-stage lung disease. however, survival of lung transplant recipients is shorter compared to patients with other solid organ transplants, due to many unique features of the lung allograft.
patients can develop a multitude of noninfectious (e.g., primary graft dysfunction, pulmonary embolism, acute and chronic rejection, renal insufficiency, malignancies) and infectious (i.e., bacterial, fungal, and viral) complications and require complex multidisciplinary care. this chapter discusses the medical course and complications that patients might experience after lung transplantation. the lungs normally have a dual blood supply, consisting of (1) large pulmonary arteries that provide desaturated blood under low pressure for alveolar gas exchange and (2) smaller bronchial arteries that provide oxygenated blood under systemic pressure for nutrition and oxygenation of the bronchi and lung tissue. as the only solid organ transplant that does not undergo primary systemic (i.e., bronchial) arterial revascularization at the time of surgery, lung transplants rely on the deoxygenated pulmonary arterial circulation and are especially vulnerable to the effects of injury and ischemia [ ] . it has been hypothesized that the absence of the bronchial system in the lung allograft increases susceptibility to microvascular injury and chronic airway ischemia, which may be implicated in the genesis of chronic rejection and other complications [ ] . similarly, the native lymphatics and the neural supply to lung allografts are disrupted at the time of transplantation. the impact of these disruptions on lung transplant outcomes remains unclear, though it is possible that these changes lead to higher susceptibility to the development of pulmonary edema and infections, worse airway clearance, and ineffective cough [ ] . lastly, the lung allografts have higher exposure to immunogenic compounds, as compared to other organs, by ventilation. the ongoing exposure to various inhaled injurious agents may also predispose lung allografts to develop chronic rejection. there is a vast array of complications from lung transplantation. broadly, these complications can be divided into noninfectious and infectious complications and have been summarized in table . these complications arise at different times in the postoperative period [ ] . an understanding of the timing of various complications post-lung transplant can lead to early recognition and management of these complications. primary graft dysfunction (pgd) reflects ischemia-reperfusion injury involving the vascular endothelium, airway and alveolar epithelium, and alveolar macrophages. the interaction between these cells leads to release of cytokines, reactive oxygen intermediates, and proteolytic enzymes leading to graft dysfunction [ ] . the severity of pgd falls along a spectrum, ranging from mild dysfunction to severe lung injury. pgd can affect - % of transplanted patients, and the -day mortality can be as high as %. furthermore, severe pgd after lung transplantation has been associated with development of subsequent chronic rejection and graft dysfunction [ ] . the management of pgd is largely supportive and includes lung-protective ventilation strategies (low tidal volume, high positive end-expiratory pressure), judicious fluid management, inhaled nitric oxide or other inhaled pulmonary vasodilators to improve oxygenation, and extracorporeal life support (ecls) for the most severe cases. re-transplantation is an option for highly selected cases, but it is generally not recommended due to suboptimal outcomes [ ] . lung transplant recipients are at increased risk of venous thromboembolism (vte). the risk factors include major surgery status, hypercoagulable state, high doses of corticosteroids, immobility, and indwelling vascular access.
the reported incidences of pulmonary embolism (pe) and deep venous thrombosis (dvt) post-lung transplantation are approximately - % and - %, respectively [ ] . pulmonary embolism in the setting of limited pulmonary reserve due to pgd, postoperative atelectasis, and single-lung transplantation can have catastrophic consequences, thus underscoring the need for early and appropriate vte prophylaxis after lung transplantation [ ] . the diagnosis can be made with computed tomography (ct) pulmonary angiography, ventilation-perfusion scan, or by documentation of dvt by doppler ultrasonography. the treatment is the same as for vte in general, although the risk of postoperative bleeding needs to be weighed against the risk of pe. the choice of anticoagulant is based on kidney function, periprocedural reversibility of anticoagulant effect, and drug interactions, with unfractionated heparin, low-molecular-weight heparin, and/or warfarin being by far the most common agents used. in case of ongoing bleeding or high risk of bleeding, inferior vena cava filters can be used as a temporizing measure. inadvertent injury to various intrathoracic nerves during lung transplantation is a well-recognized and common complication. the most commonly affected structures are the phrenic and vagus nerves. the reported rates of phrenic nerve injury have ranged from % to % in lung transplant cases. this rate can be as high as % in combined heart-lung transplantation [ , ] . diaphragmatic dysfunction as a consequence of phrenic nerve injury can present clinically with dyspnea, hypoventilation and hypercapnia, and hypoxemia, or as a difficult wean from the ventilator. diaphragmatic paralysis can lead to increased length of stay and ventilator dependence. the diagnosis can be confirmed by documenting paradoxical movement of the affected diaphragm during quiet and deep breathing, using fluoroscopy or ultrasound visualization. vagal nerve injury post-lung transplantation can lead to gastroparesis with an associated risk of gastroesophageal reflux disease (gerd) and aspiration events. these in turn can place the lung allograft at risk for recurrent infections, bronchiectasis, and possibly chronic allograft dysfunction [ ] [ ] [ ] . common symptoms of gastroparesis include early satiety, decreased appetite, abdominal pain, and bloating. a diagnosis is usually made by a nuclear medicine gastric emptying study. the potential management strategies include minimizing transit-delaying medications (e.g., opioids), the use of pro-motility agents, placement of post-pyloric feeding tubes, botulinum toxin injection to the pylorus, and surgical fundoplication in conjunction with pyloroplasty [ ] . pleural complications in the early post-lung transplantation period include pleural effusions, hemothorax, pneumothorax, empyema, chylothorax, and interpleural communication. these complications usually arise as a result of pleural disruption from the surgery itself, though rejection and immunosuppressive regimens may also play a role. the risk factors for the development of pleural complications include previous thoracic surgery, pleural adhesions, and donor-recipient size mismatch [ , ] . pleural effusions are extremely common in the early post-lung transplant period. the reported incidence has been % in some series [ , ] . all patients have chest tubes in place immediately post-operation to allow lung re-expansion and drainage of pleural air and fluid.
the increased amount of pleural fluid post-lung transplantation is related to capillary leak due to allograft ischemia-reperfusion, fluid overload, bleeding, and surgical interruption of allograft lymphatics at the time of explantation [ , ] . late pleural effusions can be a consequence of infection, acute rejection, trapped lung physiology from pleural fibrosis, or malignancy [ , ] . in general, all pleural effusions need to be evaluated to rule out complicated effusions such as hemothorax, empyema, and chylothorax. these entities have all been associated with negative patient outcomes and are treated with a range of medical and surgical procedures depending on the condition and severity. for example, a chylothorax might necessitate mechanical interruption of the thoracic duct, or a hemothorax may need thoracotomy for control of bleeding. pneumothoraxes are common after lung transplantation. they can result from donor-recipient size mismatch, bronchopleural fistulas that occur secondary to operative injury or bronchial anastomotic dehiscence, or as a consequence of transbronchial biopsies performed in the course of allograft evaluation. small and stable pneumothoraxes after lung transplantation can be managed by watchful waiting, though a larger or symptomatic pneumothorax may require chest tube drainage. an inadequately drained, hemodynamically significant pneumothorax can be a medical emergency necessitating urgent drainage [ , ] . in patients who have undergone sequential bilateral lung transplantation (bslt) or heart-lung transplantation (hlt), interpleural communication due to surgical severance of the pleural recesses that separate the left and right pleural spaces can develop. this means that pleural issues in these patients must be managed aggressively, as pneumothoraxes can be bilateral and life-threatening, and empyema can spread quickly. vascular anastomotic complications can arise either early or late in the post-transplant course and can have very severe adverse consequences. pulmonary artery stenosis can be secondary to mechanical kinking, disruption, or narrowing of the anastomosis, sometimes due to the particulars of donor anatomy or due to thrombosis [ ] . the clinical picture is usually consistent with pulmonary hypertension and right ventricular failure. diagnosis can be made through pulmonary angiography, and the stenosis can be managed with interventions such as balloon dilation and stent deployment. occasionally, patients may require surgery for definitive management of the stenosis. pulmonary vein occlusion post-lung transplantation is a rare but serious complication. the commonest cause of pulmonary vein occlusion is the development of thrombosis at the anastomotic junction of the pulmonary veins and the left atrium, though inadvertent narrowing or ligation of pulmonary veins has also been reported. the potential clinical consequences include hypoxic respiratory failure, pulmonary edema, and cardio-embolic events. this entity should be included in the differential diagnosis of a patient with acute pulmonary edema post-lung transplantation. diagnosis is usually made by transesophageal echocardiography or ct angiography [ , ] . airway complications after lung transplantation can be classified by time of occurrence. early anastomotic complications, usually within a month of transplantation, include infection, dehiscence, and necrosis at the anastomotic sites.
later complications include bronchopleural, bronchovascular, and bronchomediastinal fistulae, excessive granulation tissue, bronchomalacia, and airway stenosis. airway anastomotic complications do not seem to be associated with decreased survival; however, they do negatively impact quality of life and significantly increase healthcare resource utilization [ ] . the risk factors for airway anastomotic complications include colonization with burkholderia cepacia and aspergillus fumigatus, pgd, acute rejection, prolonged mechanical ventilation, and sirolimus use prior to anastomotic healing [ , ] . bronchial necrosis and dehiscence occur - weeks after transplant. they can present with dyspnea, difficulty weaning from the ventilator, persistent air leak on water seal, pneumomediastinum, subcutaneous emphysema, and infection, with symptoms ranging from mild to severe. depending on the severity, management can range from observation and antibiotics to minimally invasive or surgical repair. bronchial stenosis is the narrowing of the airway lumen, usually at the site of the anastomosis. patients can present with wheezing, cough, post-obstructive pneumonias, decline in pulmonary function tests (pfts), and stridor. bronchial narrowing can also present distal to the anastomosis, causing lobar collapse. this syndrome occurs - months post-transplant but can present as late as months. treatment options include close monitoring, bronchial dilatation with or without stent placement, and re-transplantation [ ] . allograft rejection is a major cause of morbidity and mortality post-lung transplantation. at least a third of patients are reported to have acute rejection in the first year after transplant. acute rejection in itself seldom leads to mortality, but it is a main risk factor for the development of chronic rejection. chronic rejection of the lung allograft is the major hurdle to long-term survival after transplantation. despite the use of potent and novel immunosuppressive regimens, the incidence of chronic rejection and long-term survival post-transplant have remained essentially unchanged over the last two decades [ , ] . acute cellular rejection (acr) is the most common kind of acute lung transplant rejection and is mediated by t lymphocytes. symptoms and signs of acr include dyspnea, cough, fever, and hypoxia. high-grade rejection may be associated with respiratory failure. mild acr can be asymptomatic and is frequently detected on surveillance pulmonary function testing and/or transbronchial biopsies. current imaging modalities are not diagnostic but may reveal useful findings such as infiltrates and ground-glass opacities [ , ] . flexible bronchoscopy with transbronchial biopsies is the gold standard for diagnosis. histologically, acr is characterized by the presence of perivascular (grade a) and/or peribronchiolar (grade b) lymphocytic infiltrates in the absence of infectious etiologies [ , , ] . risk factors for acr include the number of hla mismatches between donor and recipient, although it is unclear which specific hlas have more impact. other reported risk factors are age, with older patients having more rejection; the immunosuppressive regimen used (tacrolimus-based regimens reject less); other genetic factors such as il- production; and documented gerd. acr has also been documented following infections with certain viruses, such as rhinovirus, parainfluenza virus, influenza virus, human metapneumovirus, coronavirus, and respiratory syncytial virus.
the treatment of acr is not uniform, and high-quality randomized controlled trials are lacking. there is wide agreement that severe cases of acr must be treated, but there is variability among transplant centers on whether to treat milder cases. the mainstay of therapy is high-dose corticosteroids. in refractory or recurrent cases, the immunosuppressive regimen is usually intensified or altered, and medications such as anti-thymocyte globulin (atg), anti-interleukin-receptor (il- r) antagonists, muromonab-cd (okt ), and alemtuzumab (an anti-cd monoclonal antibody), among others, can be used [ , ] .

antibody-mediated rejection (amr) is believed to be mediated by donor-specific antibodies (dsa) against human leukocyte antigens (hla) and other donor antigens. these antibodies may have been present in the recipient prior to transplant, although most appear to develop after transplantation. amr is described as the combination of the following: donor-specific anti-hla antibodies, evidence of complement deposition in allograft biopsies, histologic tissue injury, and clinical allograft dysfunction [ ] . once these antibodies bind their targets in the graft, they are capable of binding complement, specifically c q. this can trigger complement-mediated cell destruction and inflammation. the development of de novo anti-hla antibodies is associated with a poor prognosis [ , ] . the mainstay of amr management involves depletion and/or neutralization of anti-hla antibodies by plasma exchange or intravenous immunoglobulin (ivig), followed by rituximab infusion. rituximab is an anti-cd- chimeric antibody that targets b-cell function and can decrease antibody production. in cases of refractory amr, newer agents such as bortezomib (a proteasome inhibitor) and the anti-complement antibody eculizumab have been tried with limited success. successful clearance of anti-hla antibodies has been associated with a decreased risk of developing chronic rejection following amr [ ] .

the term chronic lung allograft dysfunction (clad) encompasses the pathologies that lead to chronic dysfunction of the lung allograft. clad is predominantly a consequence of chronic rejection and is a major hurdle to long-term survival. the two major phenotypes of clad are (i) bronchiolitis obliterans syndrome (bos) and (ii) restrictive allograft syndrome (ras) [ , ] . bos is the predominant form of clad and is the number one cause of death after year of transplantation. it is reported to occur in up to % of lung transplant recipients at years post-transplant, and it is a major cause of morbidity, impaired quality of life, and increased costs. bos is defined by a sustained (> weeks) decline in the forced expiratory volume in the first second of expiration (fev ), provided alternative causes of pulmonary dysfunction have been excluded. at the tissue level, the hallmark of bos is obliterative bronchiolitis (ob), an inflammatory/fibrotic process affecting the small non-cartilaginous airways (membranous and respiratory bronchioles), characterized by subepithelial fibrosis causing partial or complete luminal occlusion [ , ] . risk factors include prior episodes of acute rejection, cytomegalovirus (cmv) infection, community-acquired respiratory virus (carv) infection, history of pgd, isolation of aspergillus fumigatus and pseudomonas aeruginosa, the presence of gerd, and other immune-mediated factors [ ] .
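the spirometric definition above lends itself to a simple screening calculation. the sketch below is a minimal illustration, not a clinical tool: it assumes the conventional ishlt-style convention that the baseline is the mean of the two best post-transplant fev values and that bos onset corresponds to a sustained fall to at most 80% of that baseline, and the function name, data layout and three-week gap are hypothetical choices.

```python
# minimal sketch of the spirometric bos criterion (illustrative only).
# assumptions not stated in the text: baseline fev1 = mean of the two best
# post-transplant values; bos is flagged when fev1 stays at or below 80% of
# baseline on two readings at least three weeks apart. a real assessment
# also requires excluding alternative causes of pulmonary dysfunction.

from datetime import date, timedelta

def flag_possible_bos(visits, threshold=0.80, min_gap=timedelta(weeks=3)):
    """visits: chronological list of (date, fev1_litres) tuples."""
    values = [v for _, v in visits]
    if len(values) < 4:
        return False  # too little data to establish a baseline and a trend
    baseline = sum(sorted(values, reverse=True)[:2]) / 2  # two best values
    low = [(d, v) for d, v in visits if v <= threshold * baseline]
    # require two low readings sufficiently far apart ("sustained" decline)
    return any(b - a >= min_gap for (a, _), (b, _) in zip(low, low[1:]))

visits = [(date(2023, 1, 10), 3.1), (date(2023, 4, 12), 3.0),
          (date(2023, 9, 20), 2.3), (date(2023, 11, 1), 2.2)]
print(flag_possible_bos(visits))  # True: sustained fall below 80% of baseline
```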
the diagnosis can be made conditionally without histopathology (bos) or definitively with histopathology (ob). transbronchial biopsy is an insensitive method for detecting ob, and the clinical definition (bos) is therefore the favored approach for diagnosis and monitoring. the treatment of bos is disappointing in terms of outcomes; success is often measured by slowing or stabilizing the decline. beyond augmentation of immunosuppression, azithromycin, extracorporeal photopheresis, montelukast, methotrexate, aerosolized cyclosporine, alemtuzumab, and total lymphoid irradiation have been used with limited success [ , ] .

ras has been described more recently and occurs in less than a third of patients with clad. these patients present with predominant restriction, and survival is worse than in patients with bos; the median survival post-diagnosis is months. ct scans show interstitial opacities, ground-glass opacities, upper lobe-dominant fibrosis, and honeycombing. the only identified risk factor for the development of ras is late-onset diffuse alveolar damage (dad), occurring later than months after lung transplant. there is no proven treatment for this condition, and re-transplantation remains technically challenging [ , ] .

lung transplantation and the associated immunosuppression are an established risk factor for the development of cancer [ ] . the commonest malignancy post-lung transplant is squamous cell carcinoma of the skin. single-lung transplant recipients are at higher risk of developing lung cancer in their native lungs; this is in part related to the elevated cancer risk conferred by the underlying disease (e.g., emphysema, idiopathic pulmonary fibrosis) [ , ] . similarly, transplant recipients with cystic fibrosis remain at an elevated risk of developing gastrointestinal malignancies [ ] . it is imperative that transplant recipients adhere to age-appropriate health screening after transplant. additionally, all lung transplant recipients should undergo skin cancer screening annually. the risk is especially high for virus-associated malignancies such as lymphoma, kaposi sarcoma, and anogenital cancers [ ] . post-transplant lymphoproliferative disorders (ptld) encompass an array of diseases involving clonal expansion of b lymphocytes, ranging from polyclonal benign disorders to aggressive malignant lymphomas. the reported incidence of non-hodgkin lymphoma post-lung transplant has been as high as cases/ , person-years [ ] . there is a significant association between ptld and epstein-barr virus (ebv) infection, especially in patients who acquire the infection de novo after being transplanted. ptld is managed by reducing the intensity of immunosuppression if possible, with specific chemotherapy for more severe and refractory cases.

hyperammonemia affects - % of the lung transplant population; it is a rare but potentially fatal complication. it can be secondary to systemic infection with mycoplasma hominis and ureaplasma species, which break down urea as an energy source, generating ammonia as a waste product. this likely represents a donor-derived infection and can respond to early appropriate antibiotic treatment [ ] . postoperative liver dysfunction and urea-cycle enzyme deficiencies can also cause hyperammonemia.

diabetes mellitus (dm) is common in lung transplant recipients, with - % of patients developing it in the first year post-transplant and up to % at years.
glucocorticoid use, calcineurin inhibitor use, obesity, and advanced age are significant risk factors for the development of dm. the development of dm in lung transplant recipients is associated with decreased survival, and close, judicious glycemic control is indicated in this patient population [ , ] .

patients who undergo lung transplantation have multiple risk factors for developing acute kidney injury (aki) post-transplant, including decreased renal perfusion before, during, and/or after surgery, drug toxicities, and systemic infections. aki affects as many as % of patients, with approximately % of patients requiring renal replacement therapy (rrt). postoperative renal failure necessitating rrt is associated with an increased risk of early mortality [ , ] . by years, % of surviving lung transplant recipients develop severe renal dysfunction (serum creatinine > . mg/dl), and that percentage rises to % at the -year mark [ ] . the risk factors for the development of chronic kidney disease (ckd) include older age, dm, hypertension, smoking history, and use of nephrotoxic drugs. ckd is also associated with higher mortality in lung transplant recipients [ ] .

lung transplant recipients are at risk of developing osteopenia and osteoporosis due to multiple factors such as malnutrition, immobility, chronic corticosteroid use, calcineurin inhibitor use (e.g., tacrolimus), and other comorbidities. strategies to prevent and reverse bone loss after transplant need to be implemented proactively. treatment includes adequate supplementation of calcium and vitamin d, use of bisphosphonates, enhanced physical activity, and minimizing contributing medications where possible [ , ] . dyslipidemia is also very common in lung transplant recipients, with a prevalence as high as %, and it may be related to the aforementioned metabolic risk factors. treatment usually entails lifestyle modifications and cholesterol-lowering medications.

there are multiple cardiac complications after lung transplantation, both short and long term. atrial dysrhythmias are very frequent in the early postoperative period, likely related to the stress of major surgery, catecholamine surge, medication side effects, and mechanical stresses related to the vascular anastomoses. the reported incidence has been as high as - % [ , ] . these arrhythmias are usually managed with medications aimed at rate and rhythm control; hemodynamically significant and/or refractory arrhythmias may require electric cardioversion. atrial dysrhythmias are associated with increased length of hospital stay and increased mortality [ , ] . over the long term, lung transplant recipients are at increased risk of developing coronary artery disease (cad). as they progress into long-term survival, these patients accumulate the impact of the risk factors previously discussed in this chapter, namely dm, dyslipidemia, ckd, hypertension, chronic corticosteroid use, and other immunosuppressive medication. these risk factors should be carefully managed to decrease the impact of cad and related complications, with a combination of lifestyle modifications and specific medical therapies [ ] .

lung transplant recipients experience a decrease in skeletal muscle strength and function, affecting both respiratory and limb muscles. this is likely related to reduced activity and deconditioning postoperatively, corticosteroid-induced myopathy, critical illness-related weakness (neuropathy/myopathy), and, in the case of the diaphragm, phrenic nerve injury.
this problem appears consistently in lung transplant recipients, independent of pre-transplant diagnosis and surgery type. muscle weakness, deconditioning, and sarcopenia are associated with adverse outcomes and decreased quality of life. aggressive rehabilitation is a standard and important part of post-transplant care [ , ] .

lung transplant recipients are at an increased risk of acquiring infections due to the immunosuppressed state, constant environmental pathogen exposure, decreased cough reflex, impaired mucociliary clearance, and lymphatic disruption. infectious complications are responsible for about a quarter of post-transplant deaths [ ] . pneumonias are the most significant bacterial infection in lung transplant recipients, and the risk is highest in the first days post-transplant. in the early period, they are more likely to be caused by hospital-acquired organisms, which tend to be more virulent and more resistant to antibiotics. patients with cystic fibrosis are frequently colonized by multidrug-resistant organisms and are at increased risk of pneumonia post-transplant. in later stages, community-acquired organisms become more prevalent. moreover, throughout the post-transplant period, patients are susceptible to numerous opportunistic infections [ ] . other commonly encountered bacterial infections in this patient population include pleural space infections, blood stream infections (bsis), and soft tissue infections; bsis and empyema carry a high risk of morbidity and mortality [ , ] . pseudomonas aeruginosa, burkholderia cepacia and other gram-negative organisms, as well as staphylococcus aureus (including methicillin-resistant strains), are common causes of serious infections in the post-lung transplant period. these organisms have high rates of antibiotic resistance and are associated with worse outcomes [ ] [ ] [ ] . streptococcus pneumoniae is the most common cause of community-acquired pneumonia, and immunosuppressed patients have an increased risk of disseminated infection [ ] . clostridium difficile-associated diarrhea is a major complication in hospitalized, immunosuppressed and debilitated patients and is associated with increased hospital length of stay and mortality [ ] .

molds are common fungal pathogens affecting lung allografts. aspergillus spp. are the most common and have a predilection for the respiratory tract [ ] . lung transplant recipients have the highest incidence of invasive aspergillosis among solid organ transplant recipients, and it is the most common invasive fungal infection in lung transplantation. aspergillus is ubiquitous in the environment and is acquired by inhalation. there are three main described presentations: invasive pulmonary disease, tracheobronchial aspergillosis, and disseminated disease, all of which are associated with varying degrees of increased mortality. other implicated molds include fusarium, scedosporium, and the agents of mucormycosis. these infections are difficult to treat and are associated with poor clinical outcomes [ ] . candida spp. are another common pathogen in the lung transplant setting. oral candidiasis is the most common manifestation of this infection; however, candida infections can also manifest as candidemia, empyema, surgical wound infection, and disseminated disease. serious candida infections have been associated with increased mortality, though rates have been declining over time [ ] .
other fungal infections in this patient population include opportunistic infections, such as pneumocystis jiroveci and cryptococcus, as well as endemic fungi, such as histoplasma capsulatum, coccidioides immitis, and blastomyces dermatitidis [ , ] .

viral infections contribute to morbidity and mortality from acute infection and have been associated with an increased risk of rejection, chronic allograft dysfunction, lymphoproliferative and other neoplastic diseases, and other extrapulmonary organ damage [ ] . cytomegalovirus (cmv) is the most significant viral infection occurring in solid organ transplant recipients and is the second most common infection overall, after bacterial pneumonia. cmv infection can range from latent infection to asymptomatic viremia to cmv disease manifested by clinical symptoms and end-organ involvement. severity may range from mild to life threatening. when there is organ damage, the affected organs can include the lungs, pancreas, intestines, retina, kidney, liver, and brain. cmv disease is associated with increased mortality [ , ] . other notable dna viruses from the herpesviridae family include epstein-barr virus (ebv), which is associated with an increased risk of ptld and other malignancies, herpes simplex virus (hsv) and , varicella-zoster virus (vzv), and human herpesvirus , , and [ ] . community-acquired respiratory viruses, including influenza, are a major source of respiratory symptoms and morbidity after lung transplantation, and these infections may also be associated with the development of chronic allograft dysfunction [ ] .

currently, the median survival for all adult lung transplant recipients is years [ ] . bilateral lung recipients appear to have a better median survival than single-lung recipients ( versus . years) [ ] . overall, lung transplantation confers clinically meaningful and statistically significant improvements in health-related quality of life (hrqol); greater than % of lung transplant recipients report no activity limitations [ ] . the care of lung transplant recipients is multidisciplinary, labor intensive, and comprehensive. it includes management of the immunosuppressive regimen, opportunistic infection prophylaxis, and prevention and management of various comorbidities and complications. a typical medication regimen consists of three classes of immunosuppressive drugs (i.e., a calcineurin inhibitor, a cell-cycle inhibitor, and corticosteroids), as well as opportunistic infection prophylaxis against pneumocystis jiroveci, other fungal infections, and cmv. in the early postoperative period and after hospital discharge, recipients are closely monitored in the outpatient setting. typical clinic visits include thorough medication reconciliation, clinical examination, pulmonary function testing, chest radiographs, and laboratory examinations. the role of surveillance bronchoscopies with transbronchial biopsies in the monitoring of the lung allograft remains unclear.

while lung transplantation improves survival and quality of life in patients with end-stage lung disease, it is associated with a multitude of noninfectious and infectious complications. lung transplant recipients have one of the shortest survival times among solid organ transplant recipients, owing to some unique characteristics of the lung allograft, including its unique blood supply and risk of ischemia, the disruption of the native lymphatics and neural supply during the transplant surgery, and exposure to immunogenic entities via ventilation.
among the noninfectious complications, pgd, vte, and rejection are the most important. clad affects most patients in the long term and remains a significant clinical concern and contributor to mortality in lung transplant recipients. lung transplant recipients are also at increased risk for a variety of malignancies, due to their underlying disease, comorbidities, and immunosuppressed status; they therefore require vigilant monitoring and screening for cancer. infectious complications (i.e., bacterial, fungal, viral) are also important contributors to morbidity and mortality, with bacterial pneumonias and cmv most commonly seen. patients require multidisciplinary and intensive follow-up and aftercare, ongoing vigilance, early recognition and treatment, and open and frequent communication between recipients, caregivers, and healthcare team providers.

the registry of the international society for heart and lung transplantation: thirtieth adult lung and heart-lung transplant report-- ; focus theme: age
long-term health status and quality of life outcomes of lung transplant recipients
the registry of the international society for heart and lung transplantation: thirty-fourth adult heart transplantation report- ; focus theme: allograft ischemic time
every allograft needs a silver lining
lung transplant airway hypoxia: a diathesis to fibrosis?
a critical role for airway microvessels in lung transplantation
pulmonary complications of lung transplantation
report of the ishlt working group on primary lung graft dysfunction, part i: definition and grading - a consensus group statement of the international society for heart and lung transplantation
report of the ishlt working group on primary lung graft dysfunction, part iii: mechanisms - a consensus group statement of the international society for heart and lung transplantation
report of the international society for heart and lung transplantation working group on primary lung graft dysfunction, part ii: epidemiology, risk factors, and outcomes - a consensus group statement of the international society for heart and lung transplantation
report of the ishlt working group on primary lung graft dysfunction, part iv: prevention and treatment - a consensus group statement of the international society for heart and lung transplantation
venous thromboembolic complications of lung transplantation: a contemporary single-institution review
pulmonary embolectomy after single-lung transplantation
diaphragmatic paralysis: a complication of lung transplantation
leuven lung transplant group. phrenic nerve dysfunction after heart-lung and lung transplantation
post-surgical and obstructive gastroparesis
gastroparesis is common after lung transplantation and may be ameliorated by botulinum toxin-a injection of the pylorus
upper gastrointestinal dysmotility in heart-lung transplant recipients
acute and chronic pleural complications in lung transplantation
pleural space complications associated with lung transplantation
pleural effusion from acute lung rejection
mesothelioma after lung transplantation
frequency and management of pneumothoraces in heart-lung transplant recipients
shifting pneumothorax after heart-lung transplantation
endovascular management of early lung transplant-related anastomotic pulmonary artery stenosis
four-year prospective study of pulmonary venous thrombosis after lung transplantation
pulmonary venous obstruction after lung transplantation: diagnostic advantages of transesophageal echocardiography
primary graft dysfunction and other selected complications of lung transplantation: a single-center experience of patients
airway complications and management after lung transplantation: ischemia, dehiscence, and stenosis
airway complications after lung transplantation: treatment and long-term outcome
segmental nonanastomotic bronchial stenosis after lung transplantation
acute cellular and antibody-mediated allograft rejection
are symptom reports useful for differentiating between acute rejection and pulmonary infection after lung transplantation? heart lung
the role of transbronchial lung biopsy in the treatment of lung transplant recipients: an analysis of consecutive procedures
revision of the working formulation for the standardization of nomenclature in the diagnosis of lung rejection
acute allograft rejection: cellular and humoral processes
transplant/immunology network of the american college of chest physicians. a survey of clinical practice of lung transplantation in north america
antibody-mediated rejection of the lung: a consensus report of the international society for heart and lung transplantation
acute antibody-mediated rejection after lung transplantation
acute antibody-mediated rejection after lung transplantation
chronic lung allograft dysfunction phenotypes and treatment
update on chronic lung allograft dysfunction
bronchiolitis obliterans syndrome: the achilles' heel of lung transplantation
an international ishlt/ats/ers clinical practice guideline: diagnosis and management of bronchiolitis obliterans syndrome
therapy options for chronic lung allograft dysfunction-bronchiolitis obliterans syndrome following first-line immunosuppressive strategies: a systematic review
neutrophilic reversible allograft dysfunction (nrad) and restrictive allograft syndrome (ras)
restrictive allograft syndrome (ras): a novel form of chronic lung allograft dysfunction
comparison of the incidence of malignancy in recipients of different types of organ: a uk registry audit
spectrum of cancer risk among us solid organ transplant recipients
bronchogenic carcinoma complicating lung transplantation
disseminated ureaplasma infection as a cause of fatal hyperammonemia in humans
risk factors for development of new-onset diabetes mellitus after transplant in adult lung transplant recipients
prevalence and predictors of diabetes after lung transplantation: a prospective, longitudinal study
short-term and long-term outcomes of acute kidney injury after lung transplantation
incidence and outcomes of acute kidney injury following orthotopic lung transplantation: a population-based cohort study
chronic kidney disease after lung transplantation: incidence, risk factors, and treatment
osteoporosis and fractures after solid organ transplantation: a nationwide population-based cohort study
bone loss and fracture after lung transplantation
contemporary analysis of incidence of post-operative atrial fibrillation, its predictors, and association with clinical outcomes in lung transplantation
atrial arrhythmias after lung transplant: underlying mechanisms, risk factors, and prognosis
new-onset cardiovascular risk factors in lung transplant recipients
skeletal muscle force and functional exercise tolerance before and after lung transplantation: a cohort study
maximal exercise capacity and peripheral skeletal muscle function following lung transplantation
pneumonia after lung transplantation in the resitra cohort: a multicenter prospective study
nocardia infections in solid organ transplantation
significance of blood stream infection after lung transplantation: analysis in consecutive patients
empyema complicating successful lung transplantation
multidrug-resistant gram-negative bacteria infections in solid organ transplantation
the impact of pan-resistant bacterial pathogens on survival after lung transplantation in cystic fibrosis: results from a single large referral centre
methicillin-resistant, vancomycin-intermediate and vancomycin-resistant staphylococcus aureus infections in solid organ transplantation
invasive pneumococcal infections in adult lung transplant recipients
clostridium difficile in solid organ transplant recipients
mold infections in lung transplant recipients
fungal infections in lung transplant recipients
endemic fungal infections in solid organ transplantation
cryptococcus neoformans infection in organ transplant recipients: variables influencing clinical characteristics and outcome
dna viral infections complicating lung transplantation
cytomegalovirus and lung transplantation
community-acquired respiratory viral infections in lung transplant recipients: a single season cohort study
quality of life in lung transplantation

key: cord- - rgz t authors: radandt, siegfried; rantanen, jorma; renn, ortwin title: governance of occupational safety and health and environmental risks date: journal: risks in modern society doi: . / - - - - _ sha: doc_id: cord_uid: rgz t

occupational safety and health (osh) activities were started in the industrialized countries already years ago. separate and specific actions were directed at accident prevention and at the diagnosis, treatment and prevention of occupational diseases. as industrialization has advanced, the complexity of safety and health problems and challenges has grown substantially, calling for more comprehensive approaches. such development has expanded the scope of osh, as well as blurred the borders between specific activities. in the modern world of work, occupational safety and health are part of a complex system that involves innumerable interdependencies and interactions. these include, for instance, safety, health, well-being, aspects of the occupational and general environment, corporate policies and social responsibility, community policies and services, the community social environment, workers' families, their civil life, lifestyles and social networks, cultural and religious environments, and political and media environments. a well-functioning and economically stable company generates resources for the workers and for the community, which consequently is able to maintain a positive cycle of development. a high standard of safety and health brings benefits for everyone: the company, the workers and the whole community. these few above-mentioned interactions elucidate the need for an integrated approach and for modelling the complex entity. if we picture osh as a house, this integrated approach could be the roof, but in order to build a stable house, it is also necessary to construct a solid basement as a foundation for the house. these basement "stones" are connected to each other, and are described in more detail in sections . - . . section . focuses on the existing hazards, while section . mainly considers the exposure of workers to health hazards. health, due to its complexity, however, is influenced and impaired not only by work-related hazards, but also by hazards arising from the environment. these two sub-chapters are thus linked to section . .
in addition, the safety levels of companies may affect the environment. the strategies and measures needed for effective risk management, as described in section . , therefore also contribute to reducing the risks to the environment. in the case of work that is done outdoors, the hazards arising from the environment understandably have to be given special attention. here, the methods applied to tackle the usual hazards at workplaces are less effective, and it is necessary to develop protective measures to avoid or minimize hazards present in the environment. agriculture, forestry and construction, in particular, involve these types of hazards and affect high numbers of workers on a global scale. finally, hazards in the environment or in leisure-time activities can lead to strain and injuries which, combined with hazards at work, may result in more severe health consequences. one example is hazardous substances in the air causing allergies or other illnesses; another is the strain on the musculoskeletal system from sports and leisure-time activities causing low back pain and other musculoskeletal disorders. depending on the type of hazard, the three topics, namely safety, health and the environment, may share the common trait that the proper handling of risks, i.e., how to reduce the probabilities and/or consequences of unwanted events, is not always possible within a risk management system. this is true when one moves into the realm of uncertainty, i.e., when there is uncertain, insufficient or no knowledge of the consequences and/or probabilities (see chapter ).

1. integrated multi-sectorial bodies for policy design and planning (national safety and health committee).
2. comprehensive approach in osh activities.
3. multi-disciplinary expert resources in inspection and services.
4. multi-professional participation of employers' and workers' representatives.
5. joint training in integrated activities.
6. information support facilitating multi-professional collaboration.

international labour office (ilo) ( ) international labour conference, st session, report vi, ilo standards-related activities in the area of occupational safety and health: an in-depth study for discussion with a view to the elaboration of a plan of action for such activities. sixth item on the agenda. international labour office, geneva.

what are the main challenges arising from the major societal changes for businesses/companies and workers/employees? how can these challenges be met in order to succeed in the growing international competition? what is the role of occupational safety and health (osh) in this context? the above-mentioned changes create new possibilities, new tasks and new risks for businesses in particular, and for workers as well. in order to optimize the relation between the possibilities and the risks (maximize possibilities, minimize risks), there is a growing need for risk management. risk management includes all measures required for the target-oriented structuring of the risk and safety situation of a company. it is the systematic application of management strategies for the detection, assessment, evaluation, mastering and monitoring of risks.
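as an illustration of the detection-assessment-evaluation-treatment-monitoring cycle just described, the following sketch models a minimal risk register in code. it is only a schematic reading of the text, not an established osh tool; the class name, the probability-times-consequence score and the tolerability threshold are all hypothetical.

```python
# minimal sketch of a recurring risk management cycle as described above.
# all names, scales and thresholds are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class Risk:
    name: str
    probability: float   # 0..1, probability of occurrence (not frequency)
    consequence: float   # severity of the unwelcome outcome, e.g. 0..10
    measures: list = field(default_factory=list)

    def score(self) -> float:
        # simple probability x consequence screening score
        return self.probability * self.consequence

TOLERABLE = 0.5  # hypothetical residual-risk threshold

def manage(register: list[Risk]) -> None:
    """one pass of the cycle: assess, evaluate, treat, then monitor again."""
    for risk in register:                  # detection happens upstream
        if risk.score() > TOLERABLE:       # evaluation against the criterion
            risk.measures.append("reduce")  # treatment: reduce p or c
            risk.probability *= 0.5        # assumed effect of the measure
    # monitoring: in practice the register is re-assessed periodically

register = [Risk("solvent exposure", 0.4, 3.0), Risk("slip on stairs", 0.1, 2.0)]
manage(register)
print([(r.name, round(r.score(), 2)) for r in register])
```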
risk management was first considered exclusively from the point of view of providing insurance coverage for entrepreneurial risks. gradually, legal demands grew, and the expectations of users and consumers increased with regard to the quality and safety of products. furthermore, the ever more complex problems of modern technology and ultimately the socioeconomic conditions have led to the development of risk management into an independent interdisciplinary field of work. risks can be regarded as potential failures which may jeopardize the realization of a company's goals. the aim of risk management is to identify these potential failures qualitatively and quantitatively, and to reduce them to the level of a non-hazardous and acceptable residual risk potential. the development and formulation of a company's risk policy is regarded as the basis of effective risk management. this includes, first and foremost, the management's target concept with respect to the organization of work, the distribution of labour, and the competence of the departments and persons in charge of risk management.

risk issues are important as far as the acceptance of technology is concerned. it is not enough to reduce the problem to the question of which risks are tolerable or acceptable. it appears more and more that, although the risks themselves are not accepted, the measures or technologies causing them are. value aspects play an important role in this consideration: a positive or negative view of measures and technologies is strongly influenced by value expectations that are new, contradictory and even disputed. comparing risks and benefits has become a current topic of discussion, and the relation between risks and benefits remains an unanswered question. the general public has a far broader understanding of the risks and benefits of a given technology than the usual understanding professed by the engineering sciences, which is limited to probability x harm. the damage or catastrophe potential, and qualitative attributes such as voluntariness and controllability, also play an important role in the risk assessment of a technology. a normatively set, universally accepted engineering definition of risk is therefore hardly capable of gaining consensus at the moment.

the balanced management of risks and possibilities (benefits) is capable of increasing the value of a company, and it may by far surpass the extent of legal obligations: for example, in germany, there is a law on control and transparency for companies (kontrag). the respective parameters may be defined more accurately as follows:

• strategic decisions aim to offer opportunities for acquiring benefit, taking risks into consideration.
• risks that can have negative consequences for the technological capacities, the profitability potential, the asset values and the reputation of a company, as well as for the confidence of shareholders, are identified and measured.
• the management focuses on important possibilities and risks, and addresses them adequately or reduces them to a tolerable level.

the aim is not to avoid risks altogether, but to create opportunities for the proactive treatment of all important risks. a small numerical illustration of the limits of the probability x harm view follows below.
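to make the point above concrete, the following toy calculation (all numbers invented) shows why the engineering measure probability x harm cannot by itself capture catastrophe potential: two technologies can have the same expected harm while differing enormously in worst-case damage.

```python
# toy comparison of two hypothetical technologies with equal expected harm
# (probability x harm) but very different catastrophe potential; all numbers
# are invented for illustration.

technologies = {
    # name: (probability of the harmful event per year, harm if it occurs)
    "many small failures": (0.1, 10.0),
    "rare catastrophe": (0.0001, 10_000.0),
}

for name, (p, h) in technologies.items():
    expected_harm = p * h   # the classical engineering risk measure
    worst_case = h          # the catastrophe potential
    print(f"{name}: expected harm = {expected_harm}, worst case = {worst_case}")

# both cases yield an expected harm of 1.0, yet public assessments of the two
# would typically differ, reflecting attributes (controllability, voluntariness,
# catastrophe potential) that the product p x h does not encode.
```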
the traditional occupational health hazards, such as physical, chemical and biological risks, as well as accidents, will not totally disappear as a consequence of change, nor will heavy physical work. about - % of workers are still exposed to such hazards. there is thus still a need to develop risk assessment, prevention and control methods and programmes for these often well-known hazards. in many industrialized countries, prevention and control programmes have had a positive impact, reducing the trends of occupational diseases and accidents, particularly in big industries. some developing countries, however, show an increase in traditional hazards. international comparisons are difficult to make because of poor coverage, underreporting, and poor harmonization of concepts, definitions and registration criteria. statistics on occupational accidents are difficult to compare, and data on their total numbers in europe should therefore be viewed with caution. the majority of countries, however, have shown declining trends in accident rates irrespective of the absolute numbers of accidents, although some exceptions to this general trend have been seen. the locus of accident risk also seems to be shifting somewhat: instead of risks related to machines and tools, the risks associated with internal transportation and traffic within the workplace are growing in relative importance. this trend may strengthen in the future, particularly as the pace of work and the speed and volume of material flows are increasing. a further threat is posed by lengthened working hours, which tend to affect the vigilance of workers and increase the risk of errors.

small-scale enterprises and micro-enterprises are known to have a lower capacity for occupational health and safety than larger ones. in fact, a higher accident risk has been noted in medium-sized companies, and a lower risk in very small and very large enterprises. we can conclude that this is due to the higher mechanization level and energy use in small and medium-sized enterprises (smes) compared with micro-enterprises, which usually produce services. the better capacity of very large enterprises in safety management, on the other hand, is demonstrated by their low accident rates.

the production of chemicals in the world is growing steadily. the average growth has been between - % a year during the past - decades. the total value of european chemical production in was about usd billion, i.e. % of the world's total chemical production, and it increased by % in the -year period of - . the european union (eu) is the largest chemical producer in the world, the usa the second, and japan the third. there are some , different chemical entities in industrial use, but only about , are so-called high-production volume (hpv) chemicals produced (and consumed) in amounts exceeding , tons a year. the number of chemicals produced in amounts of - , tons a year is about , . but volume is not necessarily the most important aspect of chemical safety at work: reactivity, toxicological properties, and how the chemicals are used are more important. the european chemical companies number some , , and in addition there are , plants producing plastics and rubber. surprisingly, as many as % of these are smes employing fewer than workers, and % are micro-enterprises employing fewer than workers. thus, the common belief that the chemical industry consists only of large firms is not true. small enterprises and self-employed people have much less competence in chemical risk assessment and management than the large companies, and guidance and support in chemical safety is therefore crucial for them. the number of workers dealing with chemicals in european work life is difficult to estimate. the chemical industry alone employs some . million workers in europe, i.e. about % of the workforce of the manufacturing industries. about %, i.e. over , , work in chemical smes. but a much higher number of workers are exposed in other sectors of the economy.
there is a distinct trend showing that the use of chemicals is spreading to all sectors, and thus exposures are found in all types of activities: agriculture, forestry, manufacturing, services, and even high-tech production. the national and european surveys on chemical exposures in the work environment give very similar results. while about % of eu workers were exposed to hazardous chemicals, the corresponding figure in central and eastern european countries may be much higher. workers are exposed simultaneously to traditional industrial chemicals, such as heavy metals, solvents and pyrolytic products, and to "new exposures", such as plastics monomers and oligomers, highly reactive additives, cross-linkers and hardeners, as well as, for example, fungal spores or volatile compounds in contaminated buildings. this implies that some million people in the eu are exposed at work, and usually the level of exposure is one to three orders of magnitude higher than in any other environment. about the same proportion ( % of the workforce, i.e. , ) of finnish workers reported exposure in the national survey. the chemicals to which the largest numbers of workers are exposed occur typically in smes; they include detergents and other cleaning chemicals, carbon monoxide, solvents, environmental tobacco smoke, and vegetable dusts.

european directives on occupational health and safety require a high level of protection in terms of chemical safety in all workplaces and for all workers. risk assessment and risk management are the key elements in achieving these requirements. the risk assessment of chemicals takes place at two levels:

a) systems-level risk assessment, providing a dose-response relationship for a particular chemical and serving as a basis for standard setting. risk assessment at the systems level is carried out in the pre-marketing stage through testing. this consequently leads to the actions stipulated in the regulations concerning standards and exposure limits, the labelling and marking of hazardous chemicals, and limitations on marketing, trade and use. in this respect, the level of safety aimed at remains to be decided: is it the reasonably achievable level or, for example, the level achieved by the best available technology? the impact is expected to be system-wide, covering all enterprises and all workers in the country. this type of risk assessment is an interactive practice between the scientific community and politically controlled decision making. a high level of competence in toxicology, occupational hygiene and epidemiology is needed in the scientific community, and the decision makers must have the ability to put the risk concerned into perspective. in most countries the social partners also take part in the political decision making regarding occupational risks.

b) workplace risk assessment, directed at identifying the hazards at an individual workplace and utilizing standards as a guide for making decisions on risk management. risk assessment at the workplace level leads to practical actions in the company and usually ensures compliance with regulations and standards. it is done by looking at the local exposure levels and comparing these with the standards produced in the type a) risk assessment. risk management is done through preventive and control actions: by selecting the safest chemicals, by controlling emissions at their source, by general and local ventilation, and by introducing safe working practices. if none of the above is effective, personal protective devices must be used. a sketch of the workplace-level comparison is given below.
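the workplace-level comparison in b) can be illustrated with a small calculation. the sketch below computes an 8-hour time-weighted average exposure and an additive hazard index for a mixture; the substances, measured values and limit values are invented, and the simple additivity rule is only one common occupational-hygiene convention, not a statement of any particular regulation.

```python
# workplace-level screening of measured chemical exposures against limit
# values, as in assessment type b) above. substances, concentrations and
# limit values are invented; the additive hazard-index rule for mixtures is
# one common occupational-hygiene convention, used here for illustration.

def twa_8h(samples):
    """8-hour time-weighted average from (concentration, hours) samples;
    unsampled time up to 8 h is treated as zero exposure."""
    return sum(c * t for c, t in samples) / 8.0

# measured (concentration mg/m3, duration h) samples and hypothetical limits
exposures = {
    "solvent a": ([(60.0, 2.0), (20.0, 4.0)], 50.0),  # samples, limit mg/m3
    "dust b": ([(3.0, 6.0)], 5.0),
}

hazard_index = 0.0
for name, (samples, limit) in exposures.items():
    twa = twa_8h(samples)
    ratio = twa / limit      # degree of utilization of the limit value
    hazard_index += ratio    # additivity assumed for similar-acting agents
    print(f"{name}: twa = {twa:.1f} mg/m3, ratio to limit = {ratio:.2f}")

# for agents acting on the same target, a combined index above 1 signals that
# control measures (substitution, source control, ventilation) are needed
print(f"combined hazard index = {hazard_index:.2f}")
```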
noise is a nearly universal problem. the products of technology, which have freed us from the day-to-day struggle for survival, have been accompanied by noise as the price of progress. however, noise can no longer be regarded as an inevitable by-product of today's society. not only is noise an undesirable contaminant of our living environment, but high levels of noise are frequently present in a variety of work situations. many problems arise from noise: annoyance, interference with conversation, leisure or sleep, effects on work efficiency, and potentially harmful effects, particularly on hearing. in short, noise may affect health, productivity, and well-being. the selection of appropriate noise criteria for industry depends on knowledge of the effects of noise on people, as well as on the activities in which they are engaged. many of the effects depend on the noise level, and the magnitude of the effects varies with this level. hearing damage is not the only criterion for assessing excessive noise. it is also important to consider people's ability and ease to communicate with each other; criteria have therefore been developed to relate the existing noise environment to the ability of a typical individual to communicate in areas that are likely to be noisy. the effects of noise on job performance are difficult to evaluate. in general, sudden, intermittent, high-intensity noise impedes efficient work more than low-intensity, steady-state noise. the complexity of the task with which the noise interferes plays a major role in determining how much the noise actually degrades performance. two common ways in which noise can interfere with sleep are delaying the onset of sleep and shifting sleep stages. one effect of noise that does not seem to depend strongly on its level is annoyance: under some circumstances, a dripping water faucet can be as annoying as a jackhammer. there are no generally accepted criteria for noise levels associated with annoyance, but if the noise consists of pure tones, or if it is impulsive in nature, serious complaints may arise.

new information and communication technologies (ict) are being rapidly implemented in modern work life. about % of workers have computers at work, and about % are e-mail and internet users. there are three main problem areas in the use of new ict at work: ) the visual sensory system, ) the cognitive processes, and ) the psychomotoric responses needed for employing hand-arm systems. all three have been found to present special occupational health and even safety problems, which are not yet fully solved. the design of new, more user-friendly technology is highly desirable, and the criteria for such technology need to be generated by experts in neurophysiology, cognitive psychology and ergonomics. it is important to note that the productivity and quality of information-intensive work requiring the use of ict depend crucially on the user-friendliness of the new technology interface, both the hardware and the software. communication and information technologies will change job contents, the organization of work, working methods and competence demands substantially in all sectors of the economy in all countries.
a number of new occupational health and safety hazards have already arisen or are foreseen, including problems with the ergonomics of video display units, musculoskeletal disorders in the shoulder-neck and arm-hand systems, information overload, psychological stress, and pressure to learn new skills. the challenge for occupational health and safety professionals is to provide health-based criteria for new technologies and new types of work organization. it is also important to contribute to the establishment of healthy and safe work environments for people. in the approved and draft standards of the international standardization organization, iso, there are altogether about different targets dealing with the standardization of eyesight-related aspects.

vision is the most important channel of information in information-intensive work. from the point of view of seeing and eye fatigue, the commonly used visual display units (vdu) are not optimal solutions. stability of the image, poor lighting conditions, reflections and glare, as well as invisible flicker, are frequent problems affecting vision. the displays have, however, developed enormously in the s, and there is evidence that the so-called flat displays have gradually gained ground. information-intensive work may increasingly load the vision and sense of hearing, particularly of older workers. even relatively minor limitations in vision or hearing associated with ageing have a negative effect on receiving and comprehending messages, which affects the working capacity in information-intensive work. the growing haste in information-intensive work causes concern among workers and occupational health professionals. particularly older workers experience stress, learning difficulties and the threat of exclusion. corrective measures are needed to adjust the technology to the worker.

the most important extension of the man-technology interface has taken place in the interaction of two information-processing elements: the central nervous system and the microprocessor. the contact is transmitted visually and by the hands, but also by the software, which has been developed during the s even more than the technology itself. many problems are still associated with the immaturity of the software, even though its user-friendliness has recently greatly improved. the logic and structure of the software and the user systems, visual ergonomics, information ergonomics, the speed needed, the forgiving characteristics of programs, and the possibility to correct commands at any stage of processing are the most important features of such improvements. the user's skills and knowledge of information technology and the software also have a direct effect on how the work is managed and how it causes workload. the user-friendliness and ergonomics of the technology, the disturbing factors in the environment, haste and time pressure, the work climate, and the age and professional skills of the individual user, even his or her physical fitness, all have an impact on a person's cognitive capacity. this capacity can to a certain extent be improved by training, exercise and regulation of the working conditions, as well as with expert support provided for users when difficulties occur. the use of new technologies has been found to be associated with high information overload and psychological stress.
the problem is not only typical for older workers or those with less training, but also for the super-experts in ict, who have shown an elevated risk of psychological exhaustion.

there are four main types of ergonomic work load: heavy dynamic work that may overload both the musculoskeletal and cardiovascular systems; repetitive tasks which may cause strain injuries; static work that may result in muscular pain; and lifting and moving heavy loads, which may result in overexertion, low back injury, or accidental injuries. visual ergonomics is gaining in importance in modern work life. overload of the visual sensory system and unsatisfactory working conditions may strain the eye musculature, but can also cause muscle tension in the upper part of the body. this effect is aggravated if the worker is subjected to psychological stress or time pressure.

in addition to being a biological threat, the risk of infections causes psychological stress to workers. the improved communication between health services and international organizations helps in the early detection and control of previously unknown hazards. nevertheless, the danger related to drug abusers continues to grow and presents a serious risk to workers in, for example, health services and the police force. some new viral or re-emerging bacterial infections also affect health care staff in their work; the increase in cases of drug-resistant tuberculosis is an example of such a hazard.

the goal of preventive approaches is to exert control over the cause of unwelcome events, the course of such negative events, or their outcome. in this context, one has to decide whether the harmful process is acute (an accident) or dependent on impact duration and stimulus (short-, medium-, and long-term). naturally, the prevention approaches depend on the phase of the harmful process, i.e. whether the harm is reversible, or whether it is possible only to maintain its status or to slow down the process. it is assumed here that a stressful factor generates an inter-individual or intra-individual strain. thus the effects and consequences of stress depend on the situation, individual characteristics, capabilities, skills, the regulation of actions, and other factors. the overall consideration is related to work systems characterized by work contents, working conditions, activities, and actions. system performance is expected of such a work system, and this system performance is characterized by a performance structure and its conditions and requirements (figure . ). the performance of the biopsychosocial unit, i.e. the human being, plays an important role within the human performance structure (see figures . and . ). the human being is characterized by external and internal features, which are closely related to stress compatibility, and thus to strain. in this respect, preventive measures serve to optimize and ensure performance, on the one hand, and to control stress and strain, on the other. preventive measures aim to prevent bionegative effects and to facilitate and promote biopositive responses.

• the internal factors affecting performance are described by performance capacity and performance readiness.
• performance capacity is determined by the individual's physiological and psychological capacity.
• performance readiness is characterized by physiological fitness and psychological willingness.
• the external factors affecting performance are described by organizational preconditions/requirements and technical preconditions/requirements.
• regarding the organizational requirements, the organizational structure and organizational dynamics are of significance.
• in the case of the technical requirements, the difficulty of the task is decisive; it is characterized by the machines, the entire plant and its constructions, the task content, the task design, and technical and situation-related factors such as work layout, anthropometrics, and the quality of the environment (table . ).

mental stress plays an increasing role in the routine activities of enterprises. through interactive models of mental stress and strain, it is possible to represent the development of mental strain and its impairing effects (e.g. tension, fatigue, monotony, lack of mental satisfaction). it is important to distinguish these impairing effects from each other, since they can arise from different origins and can be prevented or eliminated by different means. activities that strain optimally enhance health and promote the safe execution of work tasks. stress essentially results from the design parameters of the work system or workplace. these design parameters are characterized by, e.g.:

• technology, such as work processes, work equipment, work materials, work objects;
• anthropometric design;
• work techniques, working hours, sharing of work, cycle dependence, job rotation;
• physiological design that causes strain and fatigue;
• psychological design that either motivates or frustrates;
• information technology, e.g. information processing, cognitive ergonomic design;
• sociological conditions; and
• environmental conditions, e.g. noise, dust, heat, cold.

the stress structure is very complex, and we therefore need to look at the individual parameters carefully, taking into account the interactions and links between the parameters at the conceptual level. the design parameters impact people as stress factors. as a result, they also turn into conditions affecting performance. such conditions can basically be classified into two types: a person's internal conditions, characterized in particular by predisposition and personality traits, and a person's external conditions, determined mainly by the design parameters. when we look at performance as the result of regulated or reactive action, we find three essential approaches to prevention:

• the first approach identifies strain. it is related to anatomical, biochemical, histological and physiological characteristic values, typical curves of organ systems, the degree of utilization of skills through stress, and thus the degree of utilization of the dynamics of physiological variables in the course of stress.
• the second approach is related to the control of strain. the aim is to identify performance limits, and the limits of training and practice, and to put them to positive use. adaptation and fatigue are the central elements here.
• the third approach is related to reducing strain. the aim is to avoid harm, using known limits as guidelines (e.g. maximum workplace concentration limit values for harmful substances, maximum organ-specific concentration values, biological tolerance values for substances, and limit values for noise and physical loads). a noise-based illustration of this approach is sketched below.
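as a concrete instance of the third approach, the sketch below checks a day's measured noise levels against a limit value. it assumes the standard equal-energy (3-db exchange rate) formula for the daily noise exposure level normalized to an 8-hour shift; the measured levels and the 85 db(a) limit are illustrative assumptions, not values taken from the text or from any specific regulation.

```python
# third prevention approach: compare measured stress against a limit value.
# here: daily noise exposure normalized to an 8-hour shift, using the
# equal-energy principle (3-db exchange rate). levels and the 85 db(a)
# limit are illustrative assumptions.

import math

def daily_noise_exposure(intervals, shift_hours=8.0):
    """intervals: list of (level_dba, duration_hours) for one working day."""
    energy = sum(t * 10 ** (level / 10) for level, t in intervals)
    return 10 * math.log10(energy / shift_hours)

day = [(92.0, 1.0), (85.0, 3.0), (75.0, 4.0)]  # measured levels per task
l_ex_8h = daily_noise_exposure(day)
LIMIT_DBA = 85.0  # hypothetical limit value

print(f"daily exposure = {l_ex_8h:.1f} db(a)")
if l_ex_8h > LIMIT_DBA:
    print("limit exceeded: reduce exposure time or noise level at the source")
```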
however, the use of guideline values can only be an auxiliary approach, because the stress-strain concept is characterized by highly complex connections between the exogenous stress and the resulting strain. an objectively identical stress will not always cause the same level of strain in an individual. due to the regulation of actions and the individual characteristic values and curves of the organ systems (properties and capabilities), differences in strain may occur. seemingly identical stress can cause differing strain due to the superposition of partial stresses. combinations of partial stresses can lead to compensatory differences (e.g. physiological stress can compensate for psychological stress) or to accumulation effects. a partial stress is determined by the intensity and duration of the stress, and can therefore appear in differing dimensions and have varying effects. in assessing overall stress, the composition of the partial stresses according to type, intensity, course and timing is decisive; partial stresses can occur simultaneously or successively. in our considerations, the principle of homeostasis plays an important role. however, optimizing performance is only a means to an end in a prevention programme. the actual purpose is to avoid harm, and thus to control strain. harm is a bionegative effect of stress. the causative stress is part of complex conditions in a causal connection. causal relationships can act as dose-effect relationships or without any relation to the dose. in this respect, the causative stress condition can form a chain with a fixed or variable sequence; it can add up, multiply, intensify, or have an effect only in specific combinations, and generate different effects (e.g. diseases). we are thus dealing with a multicausal model, or a multi-factor genesis.

low back pain is an example of such a complex phenomenon. the incidence of musculoskeletal disorders, especially low back pain, is rapidly increasing. several occupational factors have been found to increase the risk of low back pain, and some studies indicate that psychosocial and work-related conditions are far more accurate in the prognosis of disability than physical conditions. chronic low back pain is perceived as a phenomenon which encompasses biological, social and psychological variables. according to the model of adaptation, the goal of risk reduction is to increase a person's physical abilities (i.e. flexibility, strength, endurance) and the use of body mechanics and back-protecting techniques (following the rules of biomechanics), and to improve positive coping skills and emotional control. the following unfavourable factors leading to back pain have been identified at workplaces:

• the lifting of too heavy loads;
• working in a twisted or bent-down position;
• work causing whole-body vibration;
• working predominantly in a sitting position; and
• carrying heavy loads on the shoulders.

the prevention of acute back pain and of work disability must entail several features. one important element is work safety, which can be maximized by screening a worker's physical and intellectual capacities, by ensuring the ergonomic performance of work procedures, and by increasing awareness of proper working techniques that do not strain the back. the use of adaptation programmes makes it possible to attain a higher performance level and to withstand more strain (figure . ). research-based methods of training optimize and improve performance. they are a means of controlling stress and strain with the aim of preventing bionegative effects and of facilitating and promoting biopositive responses.
the stress (load) and strain model and human performance can be described as follows:

• causative stress generates an inter-individual or intra-individual strain.
• the effects and consequences depend on a person's properties, capabilities, skills and regulation of actions, the individual characteristics of the organ systems, and similar factors.
• within the performance structure, the performance of the biopsychosocial unit, i.e. the human being, plays an important role. the human being is characterized by external and internal factors, which in turn are closely related to stress compatibility and thus to strain.

the connection between stress and harm plays a significant role in the research on occupational health hazards. how should this connection be explored? different hypotheses exist in reply to this question, but none of them has been definitively proven. the three most common hypotheses today are:

1. stress occurring in connection with a person's life events; the number and extent of such events is decisive.
2. problem-coping behaviour and/or social conditions as variables explaining the connection between stress and harm.
3. the additive stress hypothesis: the ability to cope with problems and the social conditions have an effect on harm which is independent of the stress resulting from life events.

when we refer to the complexity of risks in the context of occupational safety, our focus shall be on the enterprise. there are different kinds of risks to be found in enterprises. many of them are of general importance, i.e. they are in principle rather independent of an enterprise's size or its type of activity. how to deal with such risks shall be outlined to some extent here. treating such risks at work successfully requires resources, and their availability often depends on the enterprise's situation. the situation in an enterprise is usually a determining indicator for the resources available to control and develop safety and health, and thus the performance of the enterprise and its workers and employees, through appropriate preventive measures. this situation has been described to some extent in chapter .

big companies usually have well-developed safety and health resources, and they often transfer appropriate policies and practices to the less developed areas where they operate. even in big enterprises, however, there is fragmentation of local workplaces into ever smaller units. many of the formerly in-built activities of enterprises are outsourced. new types of work organization are introduced, such as flat and lean organizations, an increase of telework and call centres, many kinds of mobile jobs, and network organizations. former in-company occupational health services are frequently transferred to service providers. this leads to the establishment of high numbers of micro-enterprises, small-scale enterprises (sses), small and medium-sized enterprises (smes) and self-employed people. sses and smes are thus becoming the most important employers of the future. a number of studies provide evidence that awareness of osh risks is low in at least a part of sses and smes. both managers and workers often do not see the need to improve occupational safety and health or ergonomic issues, or the possibilities and benefits of reducing or eliminating risks at work.
as these types of enterprises, and even more the self-employed, do not have sufficient resources or expertise for implementing preventive measures, the need for external advisory support, services and incentives is evident and growing. interpersonal relations in sses and smes are generally very good, which provides a strong chance for supporting them effectively. other special features in the structure of small and medium-sized enterprises to be considered are:

• direct participation of the management in the daily activities;
• a management structure designed to meet the requirements of the manager;
• less formal and standardized work processes, organizational structures and decision processes;
• no clear-cut division of work: a wide range of tasks and less specialization of the employees;
• unsystematic ways of obtaining and processing information;
• the great importance of direct personal communication;
• less interest in external cooperation and advice;
• a small range of internal, especially long-term and strategic, planning; and
• a stronger inclination of individual staff members to represent their own interests.

the role of occupational health services (ohs) in smes is an interdisciplinary task, consisting of:

• risk assessment: investigation of occupational health problems according to the type of technology, organization, work environment, working conditions and social relationships;
• surveillance of employees' health: medical examinations to assess employees' state of health, and offering advice, information and training; and
• advice, information and training: measures to optimize safety and health protection, safe behaviour, safe working procedures and first aid preparedness.

different kinds of risks are found in enterprises (see table . ). these different types of risks need to be handled by an interlinked system to control the risks and to find compromises between the solutions. figure . illustrates these linkages. the promotion of safety and health is linked to several areas and activities. all of these areas influence the risk management process. the results of risk treatment not only solve occupational health and safety problems, but also give added value to the linked fields. specific risk management methods are needed to reach the set goal.

one needs to know what a risk is; the definition of risk is essential: a risk is a combination of a probability (not frequency) of occurrence and the associated unwelcome outcome or impact of a risk element (consequence). risk management is recognized as an integral part of good management practice. it is a recurring process consisting of steps which, when carried out in sequence, allow decision making to be improved continuously. risk management is a logical and systematic method of identifying, analyzing, evaluating, treating, monitoring and communicating risks arising during any activity, function or process, in a manner enabling the organization to minimize losses and maximize productive opportunities.

different methods are available for analyzing problems. each method is especially suited to respond to certain questions and less suited for others. a complex "thinking scheme" is necessary for arranging the different analyses correctly within the system review. such a scheme includes the following steps:

1. defining the unit under review: the actual tasks and boundaries of the system (a fictitious or a real system) must be specified: time, space and state.
2. problem analysis: all problems existing in the defined system, including problems which do not originate from the system itself, are detected and described.
3. causes of problems: all possible or probable causes of the problems are identified and listed.
4. identifying interactions: the dependencies of the effect mechanisms are described, and the links between the causes are determined.
5. establishing priorities and formulating targets: to carry out this step, it is necessary to evaluate the effects of the causes.
6. solutions to the problems: all measures needed for solving the individual problems are listed. the known lists usually include technical as well as non-technical measures. since several measures are often appropriate for solving one problem, a pre-selection of measures already has to be made at this stage. however, this can only be an approach to the solution; the actual selection of measures has to be completed in steps 7 and 8.
7. clarifying inconsistencies and setting priorities: as the measures required for solving individual problems may be inconsistent in part, or may even have to be excluded as a whole, any inconsistencies need to be clarified. a decision should then be made for or against a measure, or a compromise may be sought.
8. determining measures for the unit under review: the measures applicable to the defined overall system are now selected from the measures for the individual problems.
9. list of questions regarding the solutions selected for the overall system: checking whether the selected measures are implementable and applicable for solving the problems of the overall system.
10. controlling for possible new problems: checking whether new problems are created by the selected solution.

the close link between cause and effect demands that the processes and sub-processes be evaluated uniformly, and that risks be dealt with according to a coordinated procedure. the analysis is started by orientation to the problem. this is done in the following steps:

1. recognizing and analyzing the problem according to its causes and extent, by means of a diagnosis and prediction, and comparison with the goals aimed at.
2. describing and dividing the overall problem into individual problem areas, and specifying their dependencies.
3. defining the problem and structuring it according to the objectives, time relation, degree of difficulty, and relevance to the goal.
4. analyzing the causes in detail, and classifying them according to the possible solution.

the analysis of the problem should be integrated into the overall analytical process in accordance with the thinking scheme described earlier. the relevance and priorities related to the process determine the starting point for the remaining steps of the analysis. analyses are divided into quantitative and qualitative ones. quantitative analyses include risk analyses, that is, theoretical safety-related analysis methods, and safety analyses, e.g. classical accident analyses. qualitative analyses include failure mode and effect analyses, hazard analyses, failure hazard analyses, operating hazard analyses, human error mode and effect analyses, and information error and effect analyses. the theoretical safety-related analysis methods include inductive and deductive analyses based on boolean models. inductive analyses are, e.g., fault process analyses; deductive analyses are fault tree analyses, analytical processes and simulation methods.
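the boolean-model idea behind such analyses can be made concrete with a small calculation. the following is a minimal python sketch, not a method prescribed by the text: the gate functions, the tree structure, the component names and all failure probabilities are hypothetical, and the basic events are assumed to be independent.

```python
# minimal fault tree sketch: boolean and/or gates over basic events,
# assuming independent basic events with known failure probabilities.
# all component names and probabilities are hypothetical.

def p_and(*probs):
    """and gate: all inputs must fail (product of probabilities)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    """or gate: at least one input fails (complement of none failing)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# basic events (hypothetical annual failure probabilities)
pump_failure = 0.02
valve_stuck = 0.01
sensor_fault = 0.05
operator_error = 0.03

# intermediate event: loss of flow control (pump or valve)
loss_of_flow = p_or(pump_failure, valve_stuck)

# top event: unwelcome incident requires loss of flow and a failed alarm chain
alarm_chain_fails = p_and(sensor_fault, operator_error)
top_event = p_and(loss_of_flow, alarm_chain_fails)

print(f"p(top event) approx. {top_event:.6f}")
```

the same two gate functions can be composed into arbitrarily deep trees, which is what makes the deductive fault tree analysis described above tractable for complex systems.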
theoretical safety-related analysis methods which are not based on boolean models are stochastic processes, such as markov models, as well as risk analyses and accident analyses which, as a rule, are statistical or probability-related analyses. a possible scheme to begin with is shown in figure . .

since absolute safety, entailing freedom from all risks, does not exist in any sphere of life, the task of those dealing with safety issues is to avert hazards and to achieve a sustainable reduction of the residual risk, so that it does not exceed a tolerable limit. the extent of this rationally acceptable risk is also influenced by the level of risk which society intuitively considers acceptable. those who propose definitions of safety are neither authorized nor capable of evaluating the general benefit of technical products, processes and services. risk assessment is therefore focused on the potential harm caused by the use or non-use of the technology. in the european union, the guidelines given in "a new approach to technical harmonization and standards" by the council resolution of 7 may 1985 are valid. the legal system of a state describes the protective goals, such as the protection of life, health, etc., in its constitution, as well as in individual laws and regulations. as a rule, these do not provide an exact limit as to what is still a tolerable risk. this limit can only be established indirectly and imprecisely on the basis of the goals and conceptions set down by the authorities and laws of a state. in the european union, the limits are expressed primarily in the "basic safety and health requirements". these requirements are then put into more concrete terms in the safety-related definitions issued by the bodies responsible for preparing industrial standards. compliance with the standards is voluntary, but compliance creates the presumption that the basic requirements of the directives are met.

the term opposite to "safety" is "hazard". both safe and hazardous situations are founded on the intended use of the technical products, processes and services. unintended use is taken into account only to the extent that it can be reasonably foreseen. the risks present in certain events are, in a narrower sense, unwelcome and unwanted outcomes with negative effects (which exceed the range of acceptance). unwelcome events are:

• source conditions of processes and states;
• processes and states themselves; and
• effects of processes and states which can result in harm to persons or property.

an unwelcome event can be defined as a single event or an event within a sequence of events. possible unwelcome events are identified for a unit under review. the causes may be inherent in the unit itself, or outside of it. in order to determine the risks involved in unwelcome events, it is necessary to identify probabilities and consequences. the question arises: are the extent and probability of the risk known? information is needed to answer this question. defining risk requires information concerning the probability of occurrence and the extent of harm of the consequences. uncertainty is given if the effects are known but the probability is unknown. ignorance is given if both the effects and the probability are unknown. figure . shows the risk analysis procedure according to the type of information available. since risk analyses are not possible without practical, usable information, it is necessary to consider the nature of the information.
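the three information situations just distinguished (risk, uncertainty, ignorance) can be captured in a small decision helper. the sketch below is illustrative only; the function name, signature and return strings are assumptions, not taken from the source.

```python
# sketch of the information-based distinction drawn above:
# "risk" when both probability and effects are known, "uncertainty"
# when effects are known but the probability is not, and "ignorance"
# when neither is known. names and labels are illustrative.

from typing import Optional

def information_state(probability: Optional[float],
                      effects_known: bool) -> str:
    if not effects_known:
        # cases with unknown effects fall under ignorance in this taxonomy
        return "ignorance (neither effects nor probability known)"
    if probability is not None:
        return "risk (quantitative analysis possible)"
    return "uncertainty (effects known, probability unknown)"

print(information_state(0.01, True))   # -> risk
print(information_state(None, True))   # -> uncertainty
print(information_state(None, False))  # -> ignorance
```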
the information is characterized by its content, truth and objectivity, its degree of confirmation, the possibility of testing it, and its age. the factors determining the content of the information are generality, precision and conditionality. the higher the conditionality, the smaller the generality, and thus the smaller the information content of the statement. truth is understood as conformity of the statement with the real state of affairs. the closer the information is to reality, the higher is its information content, and the smaller its logical margin. the degree of controllability is directly dependent on the information content: the bigger the logical margin, the smaller the information content, and thus the higher the probability that the information content will prove its worth. in this respect, probability plays a role in the information content: the greatest significance is attributed to the logical hypothetical probability and the statistical probability of an event. objectivity and age are additional criteria for any information. the age and time relation of information play a particularly important role, because consideration of the time period is an important feature of analysis.

as a rule, the information, and thus the data input to the risk analysis, consists of figures and facts based on experience, materials, technical design, the organization and the environment. in this regard, most figures are based on statistics on incidents and their occurrences. factual information reveals something about the actual state of affairs; it consists of statements related to past conditions, incidents, etc. forecast-type predictions are related to real future conditions, foretelling that certain events will occur in the future. explanatory information replies to questions about the causes of phenomena, and provides explanations and reasons; it establishes links between different states based on presumed cause-effect relationships. subjunctive information expresses possibilities, implying that certain situations might occur at present or in the future, thus giving information about conceivable conditions, events and relationships. normative information expresses goals, standards, evaluations and similar matters; it formulates what is desirable or necessary.

the main problem with risk analyses is incomplete information, in particular regarding the area of "uncertainty". in the eu commission's view, recourse to the so-called precautionary principle presupposes that potentially dangerous effects deriving from a phenomenon, product or process have been identified via objective scientific evaluation, and that this scientific evaluation does not allow the risk to be determined with sufficient certainty. recourse to the precautionary principle thus takes place in the framework of general risk management that is concretely connected to the decision-making process. if application of the precautionary principle results in the decision that action is the appropriate response to a risk, and that further scientific information is not needed, it is still necessary to decide how to proceed. apart from adopting legal provisions which are subject to judicial control, a whole range of actions is available to the decision-makers (e.g. funding research, or deciding to inform the public about the possible adverse effects of a product or procedure). however, the measures may not be selected arbitrarily.
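one way to see what the precautionary principle changes is to contrast a decision computed from known probabilities with a worst-case comparison made when probabilities are unavailable. the following toy comparison is an illustration only: the scenarios, probabilities and harm values are entirely hypothetical, and the worst-case rule is merely one possible formalization of precaution, not one given in the text.

```python
# illustrative contrast between a decision under "risk" (probabilities
# available, expected harm computable) and a precautionary decision under
# "uncertainty" (probabilities unavailable, compare worst cases instead).
# all scenarios, probabilities and harm values are hypothetical.

scenarios = {
    "continue operation": [("no incident", 0.95, 0.0), ("release", 0.05, 100.0)],
    "install safeguards": [("no incident", 0.999, 5.0), ("release", 0.001, 20.0)],
}

def expected_harm(outcomes):
    return sum(p * harm for _, p, harm in outcomes)

def worst_case_harm(outcomes):
    return max(harm for _, _, harm in outcomes)

# under risk: choose the option with the lowest expected harm
by_expectation = min(scenarios, key=lambda k: expected_harm(scenarios[k]))

# under uncertainty (precautionary): ignore the probabilities and choose
# the option with the mildest worst case
by_worst_case = min(scenarios, key=lambda k: worst_case_harm(scenarios[k]))

print("expected-harm choice:", by_expectation)   # continue operation
print("worst-case choice:  ", by_worst_case)     # install safeguards
```

the two rules can disagree, as here: the expected-harm rule tolerates a small probability of a large release, while the worst-case rule pays a certain cost to cap the damage.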
in conclusion, the assessment of various risks and risk types which may be related to different types of hazards requires a variety of specific risk assessment methods. if one has dependable information about the probability and consequences of a serious risk or risky event, one should use the risk assessment procedure shown in figure . . examples of such serious events include:

• major industrial accidents;
• damage caused by dangerous substances;
• nuclear accidents;
• major accidents at sea;
• disasters due to forces of nature; and
• acts of terrorism.

their consequences include:

• dangerous substances discharged (fire, explosion);
• injury to people and damage to property;
• immediate damage to the environment;
• permanent or long-term damage to terrestrial habitats, to fresh water, to marine habitats, to aquifers or underground water supplies; and
• cross-border damage.

typical causes include:

• technical failure: devices, mountings, containers, flanges, mechanical damage, corrosion of pipes, etc.;
• human failure: operating error, organizational failure, errors during repair work;
• chemical reaction;
• physical reaction; and
• environmental causes.

system analysis is the basis of all hazard analyses, and thus needs to be done with special care. system analysis includes the examination of the system functions, particularly the performance goals and admissible deviations, the ambient conditions not influenced by the system, the auxiliary sources of the system (e.g. energy supply), the components of the system, and the organization and behaviour of the system. geographical arrangements, block diagrams, material flow charts, information flow charts, energy flow charts, etc. are used to depict technical systems. the objective is to ensure the safe behaviour of the technical systems by design methods, at least during the required service life and during intended use.

qualitative analyses are particularly important in practice. as a rule, they are form-sheet analyses and include failure mode and effect analyses, which examine and determine failure modes and their effects on systems. the preliminary hazard analysis looks for the hazard potentials of a system. the failure hazard analysis examines the causes of failures and their effects. the operating hazard analysis determines the hazards which may occur during operation, maintenance, repair, etc. the human error mode and effect analysis examines error modes and their effects which occur because of wrong human behaviour. the information error mode and effect analysis examines operating, maintenance and repair errors, fault elimination errors, and the effects caused by errors in instructions and faulty information.

theoretical analysis methods include the fault tree analysis, which is a deductive analysis. an unwelcome incident is specified for the system under review. then all logical links and/or failure combinations of components or partial-system failures which might lead to this unwelcome incident are assembled, forming the fault tree. the fault tree analysis is suited for simple as well as for complex systems. the objective is to identify failures which might lead to an unwelcome incident. the prerequisite is exact knowledge about the functioning of the system under review; the process and the functioning of the components and partial systems therefore need to be known. it is possible to focus on the flow of force, of energy, of materials and of signals. the fault process analysis has a structure similar to that of the fault tree analysis.
in this case, however, we are looking for all unwelcome incidents, as well as their combinations, which have the same fault trigger. an analysis of the functioning of the system under review is also necessary for this. analyses can also be used to identify possible, probable and actual risk components. the phases of the analysis are the phases of design, including the preparation of a concept, project and construction, and the phases of use, which are production, operation and maintenance. in order to identify the fault potential as completely as possible, different types of analyses are usually combined. documentation of the sufficient safety of a system can be achieved at a reasonable cost only for a small system. in the case of complex systems, it is therefore recommended to document individual unwelcome incidents. if solutions are sought and found for individual unwelcome incidents, care should be taken to ensure that no target conflicts arise with regard to other detail solutions.

with the help of the fault tree, it is possible to analyse the causes of an unwelcome incident and the probability of its occurrence. decisions on whether and which redundancies are necessary can in most cases be reached by simple estimates. four results can be expected from using a fault tree:

1. the failure combinations of inputs leading to the unwelcome event;
2. the probability of their occurrence;
3. the probability of occurrence of the unwelcome event; and
4. the critical path that this incident took from the failure combination through the fault tree.

a systematic evaluation of the fault tree model can be done by an analytical evaluation (calculation) or by simulation of the model (monte-carlo method). a graphic analysis of the failure process is especially suited to demonstrating the safety risk of previously defined failure combinations in the system.

the failure mode and effect analysis and the preliminary hazard analysis: as mentioned previously, no method can disclose all potential faults in a system with any degree of certainty. however, if one starts with the preliminary hazard analysis, then at least the essential components with hazard potential will be defined. the essential components are always similar, namely kinetic energy, potential energy, sources of thermal energy, radioactive material, biological material, and chemically reactive substances. with the fault tree method, any possible failure combinations (causes) leading to an unwelcome outcome can then be identified additionally. these analyses are especially suited for identifying failures in a system which pose a risk. reliability parameters can be determined in the process, e.g. the frequency of occurrence of failure combinations, the frequency of occurrence of unwelcome events, non-availability of the system upon request, etc.

the failure effect analysis is a supplementary method. it is able to depict the effects of mistakes made by the operating personnel, e.g. when a task is not performed, or is performed according to inappropriate instructions, or performed too early, too late, unintentionally or with errors. it can also pinpoint effects resulting from operating conditions and errors in the functional process or its elements. an important aspect of all hazard analyses is that they are only valid for the respective case under review. every change in a parameter basically requires new analyses. this applies to changes in the personnel structure and the qualification of persons, as well as to technical specifications.
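the monte-carlo simulation mentioned above can be sketched for the same hypothetical fault tree used earlier; the seed, the number of trials and all probabilities are illustrative.

```python
# monte-carlo evaluation of the fault tree sketched earlier, as an
# alternative to the analytical calculation. probabilities hypothetical.
import random

def trial(rng: random.Random) -> bool:
    pump = rng.random() < 0.02
    valve = rng.random() < 0.01
    sensor = rng.random() < 0.05
    operator = rng.random() < 0.03
    loss_of_flow = pump or valve        # or gate
    alarm_fails = sensor and operator   # and gate
    return loss_of_flow and alarm_fails # top event

rng = random.Random(42)  # fixed seed for reproducibility
n = 1_000_000
hits = sum(trial(rng) for _ in range(n))
print(f"simulated p(top event) approx. {hits / n:.6f}")
```

for rare top events, the simulated estimate converges slowly; the analytical calculation is exact under the independence assumption, while simulation becomes attractive once gates, dependencies or repair behaviour grow too complex to calculate in closed form.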
because each analysis is valid only for the case under review, it is necessary to document the parameters on which each analysis is based. the results of the hazard analyses form the basis for the selection of protective measures and measures to combat the hazards. if the system is modified, the hazards inherent in the system may change, and the measures to combat the hazards may have to be changed as well. this may also mean that the protective measures or equipment which existed at time x for the system or partial system in certain operating conditions (e.g. normal operation, set-up operation, and the maintenance phase) may no longer be compatible. different protective measures, equipment or strategies may then be needed. however, hazard analyses do not merely serve to detect and solve potential failures. they form the basis for the selection of protective measures and protective equipment, and they can also test the success of the safety strategies specified. a selection of methods used for hazard analysis is given in annex to section . .

risk assessment is a series of logical steps enabling the systematic examination of the hazards associated with machinery. risk assessment is followed, whenever necessary, by actions to reduce the existing risks and by implementing safety measures. when this process is repeated, it eliminates hazards as far as possible. risk assessment includes:

• risk analysis: determining the limits of the machinery, identifying hazards, and estimating risks; and
• risk evaluation.

risk analysis provides the information required for evaluating risks, and this in turn allows judgements to be made on the safety of, e.g., the machinery or plant under review. risk assessment relies on decisions based on judgement. these decisions are to be supported by qualitative methods, complemented, as far as possible, by quantitative methods. quantitative methods are particularly appropriate when the foreseeable harm is very severe or extensive. quantitative methods are useful for assessing alternative safety measures and for determining which measure gives the best protection. the application of quantitative methods is restricted by the amount of useful data available, and in many cases only qualitative risk assessment will be possible. risk assessment should be conducted so that it is possible to document the procedure used and the results that have been achieved. risk assessment shall take into account:

• the life cycle of the machinery or the life span of the plant;
• the limitations of the machinery or plant, including the intended use (correct use and operation of the machinery or plant, as well as the consequences of reasonably foreseeable misuse or malfunction);
• the full range of foreseeable uses of the machinery (e.g. industrial, non-industrial and domestic) by persons identified by sex, age, dominant hand usage, or limiting physical abilities (e.g. visual or hearing impairment, stature, strength);
• the anticipated level of training, experience or ability of the anticipated users, such as operators (including maintenance personnel or technicians), trainees and juniors, and the general public; and
• the exposure of other persons to the machine hazards, whenever it can be reasonably foreseen.

having identified the various hazards that can originate from the machine (permanent hazards and ones that can appear unexpectedly), the machine designer shall estimate the risk for each hazard, as far as possible, on the basis of quantifiable factors. the designer must finally decide, based on the risk evaluation, whether risk reduction is required.
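an estimation on the basis of quantifiable factors is often approximated in practice with a class-based risk matrix. the sketch below is a hypothetical illustration, not a standardized matrix: the probability ranks reuse classes named later in the glossary, while the severity scores and the tolerable threshold are assumptions.

```python
# class-based risk estimation sketch: severity and probability classes
# combined multiplicatively. classes, scores and the threshold are
# hypothetical, not taken from any standard.

SEVERITY = {"slight injury": 1, "severe injury": 2, "fatality": 3}
PROBABILITY = {"rare": 1, "unlikely": 2, "moderate": 3,
               "likely": 4, "almost certain": 5}

def risk_score(severity: str, probability: str) -> int:
    return SEVERITY[severity] * PROBABILITY[probability]

def risk_reduction_required(severity: str, probability: str,
                            tolerable_score: int = 4) -> bool:
    """compare the estimated risk against a (hypothetical) limit risk."""
    return risk_score(severity, probability) > tolerable_score

print(risk_score("severe injury", "moderate"))               # 6
print(risk_reduction_required("severe injury", "moderate"))  # True
print(risk_reduction_required("slight injury", "unlikely"))  # False
```

such matrices are deliberately coarse; as the surrounding text stresses, the greatest foreseeable severity must be considered even when its probability class is low.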
for the purpose of this decision, the designer has to take into account the different operating modes and intervention procedures, as well as human interaction during the entire life cycle of the machine. the following aspects in particular must be considered:

• construction and transport;
• assembly, installation and commissioning;
• adjusting settings, programming or process changeover;
• instructions for users;
• operating, cleaning, maintenance and servicing; and
• checking for faults, de-commissioning, dismantling and safe disposal.

malfunctioning of the machine may be due to, e.g.:

• variation in a characteristic or dimension of the processed material or workpiece;
• failure of a part or function;
• external disturbance (e.g. shock, vibration, electromagnetic interference);
• design error or deficiency (e.g. software errors);
• disturbance in the power supply; and
• flaws in the surrounding conditions (e.g. a damaged floor surface).

unintentional behaviour of the operator or foreseeable misuse of the machine includes, e.g.:

• loss of control of the machine by the operator (especially in the case of hand-held devices or moving parts);
• automatic (reflexive) behaviour of a person in case of a machine malfunction or failure during operation;
• the operator's carelessness or lack of concentration;
• the operator taking the "line of least resistance" in carrying out a task;
• behaviour resulting from pressure to keep the machine running in all circumstances; and
• unexpected behaviour of certain persons (e.g. children, disabled persons).

when carrying out a risk assessment, the risk of the most severe harm that is likely to occur from each identified hazard must be considered, but the greatest foreseeable severity must also be taken into account, even if the probability of such an occurrence is not high. this objective may be met by eliminating the hazards, or by reducing, separately or simultaneously, each of the two elements which determine the risk, i.e. the severity of the harm from the hazard in question, and the probability of occurrence of that harm. all protective measures intended to reach this goal shall be applied according to the following steps:

1. inherently safe design: this stage is the only one at which hazards can be eliminated, thus avoiding the need for additional protective measures such as safeguarding or complementary protective measures.
2. safeguarding and complementary protective measures.
3. information about the residual risk: information for use on the residual risk is not to be a substitute for inherently safe design, or for safeguarding or complementary protective measures.

risk estimation and evaluation must be carried out after each of the above three steps of risk reduction. adequate protective measures associated with each of the operating modes and intervention procedures prevent operators from being prone to use hazardous intervention techniques in case of technical difficulties. the aim is to achieve the lowest possible level of risk. the design process is an iterative cycle, and several successive applications may be necessary to reduce the risk, making the best use of available technology. four aspects should be considered, preferably in the following order:

1. the safety of the machine during all the phases of its life cycle;
2. the ability of the machine to perform its function;
3. the usability of the machine; and
4. the costs of manufacturing, operating and dismantling the machine.

the following principles apply to technical design: service life, safe machine life, fail-safe design and tamper-proof design.
a design which ensures safety for the service life has to be chosen when neither the technical system nor any of its safety-relevant partial functions can be allowed to fail during the envisaged service life. this means that the components of the partial functions need to be exchanged at previously defined time intervals (preventive maintenance). in the case of a fail-safe design, the technical system or its partial functions allow for faults, but none of these faults, alone or in combination, may lead to a hazardous state. it is necessary to specify just which faults in one or several partial systems can be allowed to occur simultaneously without the overall system being transferred into a hazardous state (the maximum admissible number of simultaneous faults). a failure or a reduction in the performance of the technical system is accepted in this case. tamper-proof means that it is impossible to intentionally induce a hazardous state of the system. this is often required of technical systems with a high hazard potential. strategies involving secrecy play a special role in this regard.

among the safety principles described here, redundant design should also be mentioned. the probability of occurrence and the consequences of damage are reduced by multiple arrangements, allowing subsystems or elements to be arranged either in series or in parallel. it is possible to reduce the fault potential of a technical system by the diversification of principles: several different principles are used in redundant arrangements. the spatial distribution of the function carriers reduces the possibility that a single influence causes faults in more than one function; in redundant arrangements, important functions, e.g. information transmission, are therefore implemented redundantly at different locations.

the measures to eliminate or avoid hazards have to meet the following basic requirements: their effect must be reliable and compulsory, and it must not be possible to circumvent them. a reliable effect means that the effect principle and construction design of the planned measure guarantee an unambiguous effect, that the components have been designed according to regulations, that production and assembly are performed in a controlled manner, and that the measure has been tested. a compulsory effect includes the demand for a protective effect which is active at the start of a hazardous state and during it, and which is deactivated only when the hazardous state is no longer present; the hazardous process must stop when the protective effect is not active.

technical systems are planned as determined systems: only predictable and intended system behaviour is taken into account when the system is designed. experience has shown, however, that technical systems also display stochastic behaviour. that is, external influences and/or internal modifications not taken into consideration in the design result in unintended changes in the system's behaviour and properties. the period of time until the unintended changes in behaviour and/or properties occur cannot be accurately determined; it is a random variable. we have to presume that there will be a fault in every technical system; we simply do not know in advance when it will take place. the same is true for repairs: we know that it is generally possible in systems requiring repair to complete a repair operation successfully, but we cannot determine the exact time in advance.
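the effect of arranging elements in series or in parallel, and the availability figure referred to just below, can both be sketched numerically. the component reliabilities, mtbf and mttr values are hypothetical, and independent failures are assumed.

```python
# reliability of redundant arrangements: elements in series (all must
# work) versus in parallel (one suffices), assuming independent failures.
# all component reliabilities are hypothetical.

def series(*r):
    """series arrangement: the system works only if every element works."""
    out = 1.0
    for x in r:
        out *= x
    return out

def parallel(*r):
    """parallel (redundant) arrangement: the system works if any element works."""
    out = 1.0
    for x in r:
        out *= (1.0 - x)
    return 1.0 - out

r_sensor, r_controller = 0.95, 0.99
print(series(r_sensor, r_controller))                       # 0.9405
print(parallel(r_sensor, r_sensor))                         # 0.9975, duplicated sensor
print(series(parallel(r_sensor, r_sensor), r_controller))   # redundancy within a series chain

# steady-state availability of a repairable element from the mean time
# between failures (mtbf) and the mean time to repair (mttr)
mtbf, mttr = 1000.0, 8.0   # hours, hypothetical
print(mtbf / (mtbf + mttr))  # approx. 0.992
```

duplicating the weakest element raises system reliability more than improving an already strong one, which is the quantitative rationale for the redundant arrangements described above.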
using statistical evaluations, we can establish a time-dependent probability at which a "fault event" or the "completion of a repair operation" occurs. the frequency of these events determines the availability of the system requiring repair. technical systems are intended to perform numerous functions and, at the same time, to be safe. the influence of human action on safety has to be taken into account in safety considerations as well (the human factor). a system is safe when there are no functions or action sequences resulting in hazardous effects for people and/or property. risks of unwelcome events (in the following called the "risk of an event") are determined on the basis of experience (e.g. a catalogue of measures) with technical systems. in addition to this, safety analyses are used (e.g. failure mode and effect analysis, hazard analysis, failure hazard analysis, operating hazard analysis, information error analysis), as well as mathematical models (e.g. worst-case analysis, the monte-carlo procedure, markov models). unwelcome events are examined for their effects. this is followed by considerations about which design modifications or additional protective measures might provide sufficient safety against these unwelcome events.

the explanations below present the basic procedure for developing safety-relevant arrangements and solutions, i.e. the thinking and decision-making processes, as well as the selection criteria that are significant for the identification of unwelcome events, the risk of an event, the acceptance limits and the adoption of measures. before preparing the final documentation, it is essential to verify that the limit risk has not been exceeded, and that no new unwelcome events have occurred. the sequence scheme describes the procedure for developing safety arrangements and for finding solutions aiming to avoid the occurrence of unwelcome events which exceed the acceptance limits, by selecting suitable measures. in this context, it is assumed that:

• an unwelcome event is initially identified as a single event within a comprehensive event sequence (e.g. the start-up of a plant), and the risk of the event and the limit risk are determined;
• the selection of technical and/or non-technical measures is subject to a review of the content and the system, and the decision regarding a solution is then made;
• the number of applicable measures is limited, and therefore it may not be possible to immediately find a measure with an acceptable risk for a preliminary determination of the unwelcome event;
• implementation of the selected solution can result in the occurrence of a new unwelcome event; and
• in the above cases, a more concrete, new determination of the unwelcome event and/or the unit under review, or the state of the unit under review, and another attempt at deciding upon measures may lead to the desired result, although this may have to be repeated several times before it is successful.

in the case of complex event sequences, several unwelcome events may become apparent which have to be tackled by their respective sets of measures. in accordance with the sequence scheme, the unit under review and its state have to be determined first.
this determination includes information on, e.g.:

• product type, dimensions, and product parts/elements distinguished according to functional or construction aspects, if applicable;
• intended use;
• work system or field of application;
• target group;
• supply energy transformed in the product, transmitted, distributed or output;
• other parameters essential to safety assessment according to the type and size of the product;
• known or assumed effects on the product or its parts (e.g. due to transport, assembly, conditions at the assembly site, operation, maintenance);
• weight, centre of gravity;
• materials, consumables;
• operating states (e.g. start-up, standstill, test run, normal operation);
• condition (new, condition after a period in storage/shutdown, after repair, in case of modified operating conditions and/or other significant changes); and
• known or suspected effects on humans.

the next step is the identification of unwelcome events. they are source conditions of processes and states, or processes and states themselves. they can be the effects of processes and states which can cause harm to people or property. an unwelcome event can be a single event or part of a sequence of events. one should look for unwelcome events in sequences of processes and functions, in work activities and organizational procedures, and in the work environment. care has to be taken that the respective interfaces are included in the considerations. deviations and time-dependent changes with regard to the planned sequences and conditions have to be taken into account as well.

the risk of an unwanted event results from a probability statement which takes into account both:

• the expected frequency of occurrence of the event; and
• the anticipated extent of harm of the event.

the expected frequency of occurrence of an event leading to harm is determined by, e.g.:

• the probability of the occurrence itself;
• the duration and frequency of exposure of people (or of objects) in the danger zone, e.g. extremely seldom (e.g. during repair), seldom (e.g. during installation, maintenance and inspection), frequently, or very frequently (e.g. constant intervention during every work cycle); and
• the influence of users or third parties on the risk of an event.

the extent of harm is determined by, e.g.:

• the type of harm (harm to people and/or property);
• the severity of the harm (slight/severe/fatal injury of persons, or corresponding damage to property); and
• the number of people or objects affected.

in principle, the safety requirements depend on the ratio of the risk of an event to the limit risk. criteria for determining the limit risk are, e.g.:

• personal and social acceptance of hazards;
• the people possibly affected (e.g. layman, trained person, specialized worker);
• the participation of those affected in the process; and
• the possibilities of averting hazards.

the safety of various technical equipment with comparable risk can, for instance, be achieved primarily by technical measures in some cases, and mainly by non-technical measures in other cases. this means that several acceptable solutions with varying proportions of technical and non-technical measures may be found for a specific risk. in this context, the responsibility of those involved should be taken into consideration. technical measures are developed on the basis of, e.g., the following principles:

• avoiding hazardous interfaces (e.g. risk of crushing, shearing);
• avoiding hazard sources (e.g. radiation sources, flying parts, hazardous states and actions, as well as inappropriate processes);
• limiting hazardous energy (e.g. by rupture disks, temperature controllers, safety valves, rated break points);
• using suitable construction and other materials (e.g. solid, sufficiently resistant to corrosion and ageing, glare-free, break-proof, non-toxic, non-inflammable, non-combustible, non-slip);
• designing equipment in accordance with its function, material, load and ergonomics principles;
• using fail-safe control devices employing technical means;
• employing technical means of informing (e.g. danger signals);
• protective equipment for separating, attaching, rejecting, catching, etc.;
• suction equipment and exhaust hoods, when needed;
• protection and emergency rooms; and
• couplings or locks.

technical measures refer to, e.g.:

• physical, chemical or biological processes;
• energy, material and information flow in connection with the applied processes;
• the properties of materials and changes in those properties; and
• the function and design of technical products, parts and connections.

the iterative (repeated) risk reduction process can be concluded after achieving adequate risk reduction and, if applicable, a favourable outcome of the risk comparison. adequate risk reduction can be considered to have been achieved when each of the following questions can be answered positively:

• have all operating conditions and all intervention procedures been taken into account?
• have hazards been eliminated or their risks been reduced to the lowest practicable level?
• is it certain that the measures undertaken do not generate new hazards?
• are the users sufficiently informed and warned about the residual risks?
• is it certain that the operator's working conditions are not jeopardized by the protective measures taken?
• are the protective measures compatible with each other?
• has sufficient consideration been given to the consequences that can arise from the use of a machine designed for professional/industrial use when it is used in a non-professional/non-industrial context?
• is it certain that the measures undertaken do not excessively reduce the ability of the machine to perform its intended function?

there are still many potential risks connected with hazardous substances about which more information is needed. because the knowledge about the relation between their dose and mode of action is not sufficient for controlling such risks, more research is needed. the following list highlights the themes of the numerous questions related to such risks:

• potentially harmful organisms;
• toxicants, carcinogens;
• pesticides, pollutants, poisonous substances;
• genetically engineered substances;
• the relation between chemical and structural properties and toxicity;
• chemical structure and chemical properties, and their relation to the reactivity and reaction possibilities of organic compounds in metabolic reactions and living systems;
• modes of action, genotoxicity, carcinogenicity, effects on humans/animals;
• potentially harmful organisms in feedstuffs and animal faeces;
• viruses and pathogens;
• bacteria in feedstuffs and faeces;
• parasites in feedstuffs and animal faeces;
• pests in stored feedstuffs;
• probiotics as feed additives; and
• preservatives in feedstuffs.

violent actions damaging society, property or people have increased, and they seem to spread both internationally and within countries.
these new risks are difficult to predict and manage, as the very strategy of the actors is to create unexpected chaotic events. certain possibilities to predict the potential types of hazards do exist, and comprehensive predictive analyses have been done (meyerson, reaser ). new methodologies are needed to predict the risk of terrorist actions, and the strategies for risk management also need to be developed. due to the numerous background factors, the preparedness of societies against these risks needs to be strengthened. table . lists important societal systems which are vulnerable to acts of terrorism.

the situation in the developing countries needs to be tackled with specific methods. one has to answer the following questions:

• what specific examples of prevention instruments can be offered?
• what are the prerequisites for success?
• how can industrialized countries assist the developing countries in carrying out preventive actions?
• how should priorities be set according to the available resources?

one possibility is to start a first-step programme, the goal of which is higher productivity and better workplaces. it can be carried out by improving:

• storage and handling of materials: provide storage racks for tools, materials, etc.; put stores, racks, etc. on wheels whenever possible; use carts, conveyors or other aids when moving heavy loads; use jigs, clamps or other fixtures to hold items in place.
• work sites: keep the working area clear of everything that is not in frequent use.
• machine safety: install proper guards on dangerous tools/machines; use safety devices; maintain machines properly.
• control of hazardous substances: substitute hazardous chemicals with less hazardous substances; make sure that all organic solvents, paints, glues, etc. are kept in covered containers; install or improve local exhaust ventilation; provide adequate protective goggles, face shields, earplugs, safety footwear, gloves, etc.; instruct and train workers; make sure that workers wash their hands before eating and drinking, and change their clothes before going home.
• lighting: make sure that lighting is adequate.
• social and sanitary facilities: provide a supply of cool, safe drinking water; have the sanitary facilities cleaned regularly; provide a hygienic place for meals; provide storage for clothing and other belongings; provide first aid equipment and train a qualified first-aider.
• premises: increase natural ventilation by having more roof and wall openings, windows or open doorways; move sources of heat, noise, fumes, arc welding, etc. out of the workshop, or install exhaust ventilation, noise barriers or other solutions; provide fire extinguishers and train the workers to use them; clear passageways, and provide signs and markings.
• work organization: keep the workers alert and reduce fatigue through frequent changes in tasks, opportunities to change work postures, short breaks, etc.; keep buffer stocks of materials to keep the work flow constant; use quality circles to improve productivity and quality.

the central terms of risk management are defined as follows.

risk: a combination of the probability of an event and its consequences. the term "risk" is generally used only when there is at least a possibility of negative consequences. in some situations, risk arises from the possibility of deviation from the expected outcome or event.

consequence: the outcome of an event or a situation, expressed qualitatively or quantitatively. it may result in a loss or an injury, or may be linked to one. the result can be a disadvantage or a gain.
in this case, the event or the situation is the source. in connection with every analysis, it has to be checked whether the cause is given empirically or follows a set pattern, and whether there is scientific agreement regarding these circumstances. note: there can be more than one consequence from one event. note: consequences can range from positive to negative; from the viewpoint of safety, the consequences are always negative.

probability: the extent to which an event is likely to occur. note: iso - gives the mathematical definition of probability as "a real number in the interval 0 to 1 attached to a random event. it can be related to a long-run relative frequency of occurrence or to a degree of belief that an event will occur. for a high degree of belief, the probability is near 1". note: frequency rather than probability may be used in describing risk. degrees of belief about probability can be chosen as classes or ranks, such as rare/unlikely/moderate/likely/almost certain, or incredible/improbable/remote/occasional/probable/frequent. remark: informal language often confuses frequency and probability; this can lead to wrong conclusions in safety technology. probability is the degree of coincidence of the time frequency of the coincidental realization of a fact from a certain possibility. coincidence is an event which basically can happen, may be cause-related, but does not occur necessarily or following a set pattern; it may also not occur (a yes-or-no alternative). data for the probability of occurrence, with specific kinds of occurrence and weights of consequences, can be: in a statistical sense, empirical, retrospective, real; in a prognostic sense, speculative, prospective, probabilistic.

event: the occurrence of a particular set of circumstances regarding place and time. an event can be the source of certain consequences (empirically to be expected with certain regularity). the event can be certain or uncertain. the event can be a single occurrence or a series of occurrences. the probability associated with the event can be estimated for a given period of time.

risk criteria: the terms of reference by which the significance of risk is assessed. note: risk criteria can include associated costs and benefits, legal and statutory requirements, socio-economic and environmental aspects, the concerns of stakeholders, priorities and other inputs to the assessment.

risk perception: the way in which a stakeholder views a risk, based on a set of values or concerns. note: risk perception depends on the stakeholder's needs, issues and knowledge. note: risk perception can differ from objective data.

risk communication: the exchange or sharing of information about risk between the decision-makers and other stakeholders.

risk assessment: the overall process of risk analysis and risk evaluation.

risk analysis: the systematic use of information to identify sources and to estimate the risk. note: risk analysis provides a basis for risk evaluation, risk treatment and risk acceptance. note: information can include historical data, theoretical analyses, informed opinions, and the concerns of stakeholders.

risk estimation: the process used to assign values to the probability and consequences of a risk. note: risk estimation can consider cost, benefits, the concerns of stakeholders, and other variables, as appropriate for risk evaluation.

risk evaluation: the process of comparing the estimated risk against given risk criteria to determine the significance of the risk.

risk treatment: the process of selection and implementation of measures to modify risk. note: risk treatment measures can include avoiding, optimizing, transferring or retaining risk.

risk control: actions implementing risk management decisions.
note: risk control may involve monitoring, re-evaluation, and compliance with decisions.

risk optimization: a process, related to a risk, to minimize the negative and to maximize the positive consequences (and their respective probabilities).

risk reduction: actions taken to lessen the probability, the negative consequences, or both, associated with a risk.

mitigation: the limitation of any negative consequence of a particular event.

risk avoidance: the decision not to become involved in, or the action to withdraw from, a risk situation.

risk transfer: sharing with another party the burden of loss, or the benefit of gain, for a risk. note: legal or statutory requirements can limit, prohibit or mandate the transfer of certain risks. note: risk transfer can be carried out through insurance or other agreements. note: risk transfer can create new risks or modify existing ones. note: relocation of the source is not risk transfer.

risk retention: the acceptance of the burden of loss, or the benefit of gain, from a particular risk. note: risk retention includes the acceptance of risks that have not been identified. note: risk retention does not include means involving insurance, or transfer in other ways.

risk management: this includes risk assessment, risk treatment, risk acceptance and risk communication. risk assessment comprises risk analysis (identification of sources and risk estimation) together with risk evaluation. risk treatment includes avoiding, optimizing, transferring and retaining risk; it is followed by risk acceptance and risk communication.

harm: physical injury or damage to the health of people, or damage to property or the environment [iso/iec guide ]. note: harm includes any disadvantage which is causally related to the infringement of the object of legal protection brought about by the harmful event. note: in the individual safety-relevant definitions, harm to people, property and the environment may be included separately, in combination, or it may be excluded; this has to be stated in the respective scope.

hazard: a potential source of harm [iso/iec guide ]. the term "hazard" can be supplemented to define its origin or the nature of the possible harm, e.g. hazard of electric shock, crushing, cutting, dangerous substances, fire, drowning. in everyday informal language, there is insufficient differentiation between source of harm, hazardous situation, hazardous event and risk.

hazardous situation: a circumstance in which people, property or the environment are exposed to one or more hazards [iso/iec guide ]. note: the circumstance can last for a shorter or longer period of time.

hazardous event: an event that can cause harm [din en ]. note: the hazardous event can be preceded by a latent hazardous situation or by a critical event.

risk: the combination of the probability of occurrence of harm and the severity of that harm [iso/iec guide ]. note: in many cases, only a uniform extent of harm (e.g. leading to death) is taken into account, or the occurrence of harm may be independent of the extent of harm, as in a lottery game; in these cases, it is easier to make a probability statement, and risk assessment by risk comparison [din en ] thus becomes much simpler. note: risks can be grouped in relation to different variables, e.g. to all people or only those affected by the incident, to different periods of time, or to performance. the probabilistic expectation value of the extent of harm is suitable for combining the two probability variables. note: risks which arise as a consequence of continuous emission, e.g. noise, vibration, pollutants, are affected by the duration and level of exposure of those affected.

tolerable risk: risk which is accepted in a given context based on the current values of society [iso/iec guide ].
the acceptable risk has to be taken into account in this context, too. note: safety-relevant definitions are oriented to the maximum tolerable risk, which is also referred to as the limit risk. note: tolerability is also based on the assumption that the intended use, as well as reasonably predictable misuse, of the products, processes and services is complied with.

safety: freedom from unacceptable risk [iso/iec guide ]. note: safety is indivisible; it cannot be split into classes or levels. note: safety is achieved by risk reduction, so that the residual risk in no case exceeds the maximum tolerable risk.

danger: the existence of an unacceptable risk. note: safety and danger exclude one another; a technical product, process or service cannot be safe and dangerous at the same time.

protective measure: a means used to reduce risk [iso/iec guide ]. note: protective measures at the product level have priority over protective measures at the workplace level.

preventive measure: a means assumed, but not proven, to reduce risk.

residual risk: the risk remaining after safety measures have been taken [din en ]. note: residual risk may be related to the use of technical products, processes and services.

risk analysis: the systematic use of available information to identify hazards and to estimate their risks [iso/iec guide ].

risk estimation: determination of the connected risk elements of all hazards as a basis for risk assessment.

risk evaluation: a decision, based on the analysis, on whether the tolerable risk has been exceeded [iso/iec guide ].

risk assessment: the overall process of risk analysis and risk evaluation [iso/iec guide ].

intended use: the use of a product, process or service in accordance with information provided by the supplier [iso/iec guide ]. note: information provided by the supplier also includes descriptions issued for advertising purposes.

reasonably foreseeable misuse: the use of a product, process or service in a way not intended by the supplier, but which may result from readily predictable human behaviour [iso/iec guide ].

a safety-related formulation of the contents of a normative document takes the form of a declaration, instructions, recommendations or requirements [compare en , safety related]. the information set down in technical rules is normally restricted to certain technical relations and situations; in this context, it is presumed that the general safety-relevant principles are followed. a procedure with the aim of reducing the risk of a (technical) product, process or service serves to reach the safety goals in the design stage, according to the following steps: safety-related layout; protective measures; and safety-related information for users. a function inevitable to maintain safety is one which, in case of failure, allows the tolerable risk to be immediately exceeded.

depending on the situation, it is possible to use one method or a combination of several methods for hazard analysis:

• intuitive hazard detection: spontaneous, uncritical listing of possible hazards as a result of brainstorming by experts; group work which is as creative as possible, with ideas first written down (on a flip chart) and then evaluated.
• technical documentation (instructions, requirements) is available for many industrial plants and work processes, describing the hazards and safety measures; this documentation has to be obtained before continuing with the risk analysis.
• the deviations between the set point and the actual situation of individual components are examined; information on the probability of failure of these elements may be found in the technical literature.
• examining the safety aspects in unusual situations (emergency, repair, starting and stopping) when plans are made to modify the plant or processes.
a systematic check of the processes and plant parts for effects in normal operation and in case of set point deviations, using selected question words (and - or - not - too much - too little?). this is used in particular for measurement and control units, and for the programming of computer controls and robots.

all possible causes and combinations of causes are identified for an unwanted operating state or an event, and represented in the graphic format of a tree. the probability of occurrence of an event can be estimated from the context. the fault tree analysis can also be used retrospectively to clarify the causes of events.

additional methods may be:

human reliability analysis: a frequency analysis technique which deals with the behaviour of human beings affecting the performance of the system, and estimates the influence of human error on reliability.

a hazard identification and frequency analysis technique which can be used at an early stage in the design phase to identify and critically evaluate hazards.

operating safety block program: a frequency analysis technique which utilizes a model of the system and its redundancies to evaluate the operating safety of the entire system.

classifying risks into categories, to establish the main risk groups.

all typical hazardous substances and/or possible accident sources which have to be taken into account are listed. the checklist may be used to evaluate conformity with codes and standards.

this method is used to estimate whether coincidental failures of an entire series of different parts or modules within a system are possible and what the probable effects would be.

estimating the influence of an event on humans, property or the environment. simplified analytical approaches, as well as complex computer models, can be used.

a large circle of experts is questioned in several steps; the result of the previous step, together with additional information, is communicated to all participants. during the third or fourth step the anonymous questioning concentrates on aspects on which no agreement has been reached so far. basically this technique is used for making predictions, but it is also used for the development of new ideas. this method is particularly efficient due to its limitation to experts.

a hazard identification and evaluation technique used to establish a ranking of the different system options and to identify the less hazardous options.

a frequency analysis technique in which a model of the system is used to evaluate variations of the input conditions and assumptions.

a means to estimate and list risk groups; reviews risk pairs and evaluates only one risk pair at a time.

overview of data from the past: a technique used to identify possible problem areas; can also be used for frequency analysis, based on accident and operation safety data, etc.

a method to identify latent risks which can cause unforeseeable incidents. cssr differs from the other methods in that it is not conducted by a team, and can be conducted by a single person.

the overview points out the essential safety and health requirements related to a machine and simultaneously all relevant (national, european, international) standards. this information ensures that the design of the machine complies with the issued "state of the art" for that particular type of machine.

the "what-if" method is an inductive procedure. the design and operation of the machine in question are examined for fairly simple applications.
at every step "what-if" questions are asked and answered to evaluate the effect of a failure of the machine elements or of process faults in view of the hazards caused by the machine. for more complex applications, the "what-if" method is most useful with the aid of a "checklist" and the corresponding division of work to allocate specific features of the process to the persons who have the greatest experience and practice in evaluating the respective feature. the operator's behaviour and professional knowledge are assessed. the suitability of the equipment and design of the machine, its control unit and protective devices are evaluated. the influence of the materials processed is examined, and the operating and maintenance records are checked. the checklist evaluation of the machine generally precedes the more detailed methods described below.

fmea is an inductive method for evaluating the frequency and consequences of component failure (a minimal scoring sketch follows below). when operating procedures or operator errors are investigated, other methods may be more suitable. fmea can be more time-consuming than the fault tree analysis, because every mode of failure is considered for every component. some failures have a very low probability of occurrence; if these failures are not analyzed in depth, this decision should be recorded in the documentation. the method is specified in iec "analysis techniques for system reliability - procedure for failure mode and effects analysis (fmea)".

in this inductive method, the test procedures are based on two criteria: technology and complexity of the control system. mainly, the following methods are applicable: • practical tests of the actual circuit and fault simulation on certain components, particularly in suspected areas of performance identified during the theoretical check and analysis. • simulation of control behaviour (e.g. by means of hardware and/or software models). whenever complex safety-related parts of control systems are tested, it may be necessary to divide the system into several functional sub-systems, and to exclusively submit the interfaces to fault simulation tests. this technique can also be applied to other parts of machinery.

mosar is a complete approach in steps. the system to be analyzed (machinery, process, installation, etc.) is examined as a number of sub-systems which interact. a table is used to identify hazards and hazardous situations and events. the adequacy of the safety measures is studied with a second table, and a third table is used to look at their interdependency. a study, using known tools (e.g. fmea), underlines the possible dangerous failures. this leads to the elaboration of accident scenarios. by consensus, the scenarios are sorted in a severity table. a further table, again by consensus, links the severity with the targets of the safety measures, and specifies the performance levels of the technical and organizational measures. the safety measures are then incorporated into the logic trees, and the residual risks are analyzed via an acceptability table defined by consensus.
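as referenced above, here is a minimal sketch of how an fmea scoring pass over failure modes might look. the risk priority number (severity × occurrence × detection) is a common convention assumed here rather than something the text prescribes, and all components and scores are hypothetical.

```python
# Hedged FMEA sketch: every failure mode of every component is scored and
# ranked. The risk priority number (RPN = severity * occurrence * detection)
# is an assumed, widely used convention; the text itself only refers to the
# IEC FMEA procedure. Components and scores are hypothetical.
failure_modes = [
    # (component, failure mode, severity 1-10, occurrence 1-10, detection 1-10)
    ("relay",  "contacts weld shut",  8, 3, 4),
    ("sensor", "drifts out of range", 5, 6, 2),
    ("seal",   "slow leak",           4, 7, 7),
]

for comp, mode, sev, occ, det in sorted(
        failure_modes, key=lambda fm: fm[2] * fm[3] * fm[4], reverse=True):
    rpn = sev * occ * det  # higher RPN -> higher priority for risk reduction
    print(f"{comp:6s} {mode:22s} RPN = {rpn}")
```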
the risks to health at work are numerous and originate from several sources (ilo ). their origins vary greatly and they cause vast numbers of diseases, injuries and other adverse conditions, such as symptoms of overexertion or overload. traditional occupational health risk factors and their approximate numbers are given in table . .

the exposure of workers to hazards or other adverse conditions of work may lead to health problems, manifested in the workers' physical health, physical workload, psychological disturbances or social aspects of life. workers may be exposed to various factors alone or in different types of combinations, which may or may not show interaction. the assessment of interacting risk factors is complex and may lead to substantial differences in the final risk estimates when compared with estimates of solitary factors. examples of interaction between different risk factors in the work environment are given in table . . the who estimate of the total number of occupational diseases among the billion workers of the world is million a year. this is likely to be an under-estimate due to the lack of diagnostic services, the limited legislative coverage of both workers and diseases, and variation in diagnostic criteria between different parts of the world. the mortality from occupational diseases is substantial, comparable with other major diseases of the world population such as malaria or tuberculosis. the recent ilo estimate discloses . million deaths a year from work-related causes in the world, including deaths from accidents, dangerous substances, and occupational diseases. eighty-five percent ( %) of these deaths take place in developing countries, where the diagnostic services, social security for families and compensation for workers are less developed. although the risk is decreasing in the industrialized world, the trend is increasing in the rapidly industrializing and transitional countries. a single hazard alone, such as asbestos exposure, is calculated to cause , cancers a year, with a fatal outcome in less than two years after diagnosis (takala ). the incidence rates of occupational diseases in well registered industrialized countries are at the level of - cases/ , active employees/year, i.e., the incidence levels are comparable with those of major public health problems, such as cardiovascular diseases, respiratory disorders, etc. in the industrialized countries, the rate of morbidity from traditional occupational diseases, such as chemical poisonings, is declining, while musculoskeletal and allergic diseases are on the increase. about biological factors that are hazardous to workers' health have been identified in various work environments. some of the newly recognized diseases are blood-borne infections, such as hepatitis c and hiv, and exotic bacterial or viral infections transmitted by increasing mobility, international travelling and the migration of working people. also some hospital infections and, e.g., drug-resistant tuberculosis are being contracted increasingly by health care personnel. in the developing countries the morbidity picture of occupational diseases is much less clear for several reasons: low recognition rates, rotation and turnover of workers, a shorter life expectancy which hides morbidity with a long latency period, and the work-relatedness of several common epidemic diseases, such as malaria and hiv/aids (rantanen ). the estimation of so-called work-related diseases is even more difficult than that of occupational diseases. they may be about -fold more prevalent than the definite occupational diseases. several studies suggest that work-related allergies, musculoskeletal disorders and stress disorders are showing a growing trend at the moment.
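to make the registry figures above concrete, a minimal incidence-rate calculation follows. the case count, workforce size and per-1,000 denominator are all assumptions chosen for illustration, not figures from the text.

```python
# Hedged sketch: incidence rate of occupational disease expressed per
# 1,000 active employees per year. All counts are hypothetical.
new_cases = 420            # newly diagnosed occupational diseases in a year
active_employees = 85_000  # workforce under surveillance in that year

rate_per_1000 = new_cases / active_employees * 1000
print(f"incidence: {rate_per_1000:.1f} cases per 1,000 employees per year")
```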
the prevention of work-related diseases is important in view of maintaining work ability and reducing economic loss from absenteeism and premature retirement. the proportion of work-relatedness out of the total morbidity figures has been estimated and found to be surprisingly high (nurminen and karjalainen , who ) (see table . ). the public health impact of work-related diseases is great, due to their high prevalence in the population. musculoskeletal disorders are among the three most common chronic diseases in every country, which implies that the attribution of work is very high. similarly, cardiovascular diseases in most industrialized countries contribute to % of the total mortality. even a small attributable fraction of work-relatedness implies high rates of morbidity and mortality related to work. the concept of disease is in general not a simple one. when discussing morbidity one has to recognize three different concepts:

1. illness = an individual's perception of a health problem resulting from either external or internal causes.
2. disease = an adverse health condition diagnosed by a doctor or other health professional.
3. sickness = a socially recognized disease which is related to, for example, social security actions or the prescription of sick leave, etc.

when dealing with occupational and work-related morbidity, one may need to consider any of the above three aspects of morbidity. a recognized occupational disease, however, belongs to group 3, i.e. it is a sickness defined by legal criteria. medical evidence is required to show that the condition meets the criteria of an occupational disease before recognition can be made. there are dozens of definitions for occupational disease. the content of the concept varies, depending on the context:

a) the medical concept of occupational disease is based on a biomedical or other health-related etiological relationship between work and health, and is used in occupational health practice and clinical occupational medicine.
b) the legal concept of occupational disease defines the diseases or conditions which are legally recognized as conditions caused by work, and which lead to liabilities for recognition, compensation and often also prevention.

the legal concept of occupational disease has a different background in different countries, often declared in the form of an official list of occupational diseases. there is a universal discrepancy between the legal and the medical concept, so that in nearly all countries the official list of recognized occupational diseases is shorter than the medically established list. this automatically implies that a substantial proportion of medically established occupational diseases remain unrecognized, unregistered, and consequently also uncompensated. the definition of occupational disease, as used in this chapter, summarizes various statements generated during the history of occupational medicine: an occupational disease is any disease contracted as a result of exposures at work or other conditions of work. the general criteria for the diagnosis and recognition of an occupational disease are derived from the core statements of various definitions:

1. evidence on exposure(s) or condition(s) in work or the work environment which, on the basis of scientific knowledge, is (are) able to generate disease or some other adverse health condition.
2. evidence of symptoms and clinical findings which, on the basis of scientific knowledge, can be associated with the exposure(s) or condition(s) in concern.
3. exclusion of non-occupational factors or conditions as a main cause of the disease or adverse health condition.

point 3 often creates problems, as several occupationally generated clinical conditions can also be caused by non-occupational factors. on the other hand, several factors from different sources and environments are involved in virtually every disease. therefore the wordings "main cause" or "principal cause" are used. the practical solution in many countries is that the attribution of work needs to be more than % (a worked example of quantifying such attribution is sketched below). usually the necessary generalizable scientific evidence is obtained from epidemiological studies, but other types of evidence, e.g. well documented clinical experience combined with information on working conditions, may also be acceptable. in some countries, like finland, any disease of the worker which meets the above criteria can be recognized as an occupational disease. in most other countries, however, there are official lists of occupational diseases which determine the conditions and criteria on which the disease is considered to be of occupational origin. in , who launched a new concept: work-related disease (who ). the concept is wider than that of an occupational disease. it includes:

a) diseases in which the work or working conditions constitute the principal causal factor.
b) diseases for which the occupational factor may be one of several causal agents, or the occupational factor may trigger, aggravate or worsen the disease.
c) diseases for which the risk may be increased by work or work-determined lifestyles.

the diseases in category (a) are typically recognized as legally determined occupational diseases. categories (b) and (c) are important regarding the morbidity of working populations, and they are often considered important targets for prevention. in general, categories (b) and (c) cover greater numbers of people, as the diseases in question are often common noncommunicable diseases of the population, such as cardiovascular diseases, musculoskeletal disorders, allergies and, to a growing extent, stress-related disorders (see table . ). the concept of work-related disease is very important from the viewpoint of occupational health risk assessment and the use of its results for preventive purposes and for promoting health and safety at work. this is because preventive actions in occupational health practice cannot be limited only to legally recognized morbidity. the lists of occupational diseases contain great numbers of agents that show evidence of occupational morbidity. according to the ilo recommendation r ( ): list of occupational diseases, occupational diseases are divided into four main categories:

1. diseases resulting from single causes following the categories listed in table . ; the most common categories are physical factors, chemical agents, biological factors and physical work, including repetitive tasks, poor ergonomic conditions, and static and dynamic work.
2. diseases of the various organs: respiratory system, nervous system, sensory organs, internal organs, particularly liver and kidneys, musculoskeletal system, and the skin.
3. occupational cancers.
4. diseases caused by other conditions of work.
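as referenced above, here is a hedged sketch of one standard way to quantify the attribution of work: the attributable fraction among the exposed, derived from a relative risk estimate. the formula is textbook epidemiology rather than something this chapter spells out, and the figures are hypothetical.

```python
# Hedged sketch: attributable fraction among exposed workers, a standard
# epidemiological quantity (not given in the text itself) often used when
# judging whether work is the "main cause" of a disease.
def attributable_fraction(relative_risk: float) -> float:
    """AF_exposed = (RR - 1) / RR, valid for RR >= 1."""
    if relative_risk < 1:
        raise ValueError("no excess risk to attribute")
    return (relative_risk - 1) / relative_risk

# hypothetical example: a doubled disease risk in the exposed group
rr = 2.0
print(f"RR = {rr}: {attributable_fraction(rr):.0%} of cases attributable to work")
# as pure arithmetic, RR = 2.0 corresponds to an attributable fraction of
# exactly one half; higher RRs push the fraction closer to 100%.
```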
research on risk perception shows differences in how different types of risks are viewed. instant, visible, dramatic risk events, particularly ones that cause numerous fatalities or severe visible injuries in a single event, generally arouse much attention and are given high priority. on the other hand, even great numbers of smaller events, such as fatal accidents of single workers, arouse less attention both in the media and among regulators, even though the total number of single fatal accidents in a year may exceed the number of fatalities in major events by several orders of magnitude. occupational diseases, with the exception of a few acute cases, are silent, develop slowly, and concern only one or a few individuals at a time. furthermore, the diseases take months or years to develop, in extreme cases even decades, after the exposure or as a consequence of the accumulation of exposure over several years. as occupational health problems are difficult to detect and seriously under-diagnosed and under-reported, they tend to be given less priority than accidents. the perception of occupational disease risk remains low in spite of the diseases' severity and relatively high incidence. particularly in industrialized countries, the extent of occupational health problems is substantially greater than that of occupational accidents. on a global scale, the estimated number of fatalities due to occupational accidents is , and the respective estimate for fatalities due to work-related diseases is . million a year, giving a fatal accident/fatal disease ratio of to . the corresponding ratio in the eu- is to (takala ). the risk distribution of occupational diseases is principally determined by the nature of the work in question and the characteristics of the work environment. there is great variation in the risk of occupational diseases between the lowest and highest risk occupations. in the finnish workforce, the risk between the highest risk and the lowest risk occupations varies by a factor of . the highest risk occupations carry a risk which is - times higher than the average for all occupations. the risk of an occupational disease can be estimated on the basis of epidemiological studies, if they exist for the condition in question. on the other hand, various types of economic activity, work and occupations carry different types of risks, and each activity may have its own risk profile. by examining the available epidemiological evidence, we can recognize high-risk occupations and characterize the typical risks connected with them (figure . , table . ). as an example, the risk of occupational asthma, dermatosis or musculoskeletal disorders is common in several occupations, but not in all. there may be huge differences in risks between different occupations. the occupations carrying the highest risk for occupational asthma, occupational skin diseases and work-related tenosynovitis in - are shown in table . . assessment of the risk of occupational diseases has an impact on research priorities. table . shows the priorities for research in four countries. the similarity of the priorities is striking, revealing that the problems related to the risks of occupational diseases are universal. the diagnosis of occupational diseases is important for the treatment of the disease, and for prevention, registration and compensation. the diagnosis is based on information obtained from: a) data on the work and the work environment, usually provided by the employer, occupational health services, the occupational safety committee, or expert bodies carrying out hygienic and other services for the workplace. b) information on the health examination of individual workers.
the authorities in many countries have stipulated legal obligations for high-risk sectors to follow up the workers' health and promote the early detection of changes in their health. occupational health services keep records on these examinations. c) workers with special symptoms (for example, asthmatic reactions) are taken into the diagnostic process as early as possible.

epidemiological evidence is a critical prerequisite for recognizing a causal relationship between work and disease. epidemiology is dependent on three basic sources of information on work and the work environment: (a) exposure assessment, which helps to define the "dose" of a risk factor at work, (b) the outcome assumed to occur as a biological (or psychological) response to the exposures involved, and (c) time, which has a complex role in various aspects of epidemiology. all these sources are affected by the current dynamics of work life, which has a major impact on epidemiological research and its results. exposure assessment is the critical initial step in risk assessment. as discussed in this chapter, accurate exposure assessment will become more difficult and cumbersome than before in spite of remarkable achievements in measurement, analysis and monitoring methods in occupational hygiene, toxicology and ergonomics. great variations in working hours and the individualization of exposures, together with growing fragmentation and mobility, increase the uncertainties, which are multiplied: structural uncertainty, measurement uncertainty, modelling uncertainty, input data uncertainty and natural uncertainty amplify each other. as a rule, variation in any direction in exposure assessment tends to lead to underestimation of risk, and this has severe consequences for health. personal monitoring of exposures, considering variations in individual doses, and monitoring internal doses using biological monitoring methods help in the control of such variation. a monofactorial exposure situation of the past was ideal for assessment because of its manageability; it also usually occurred as a constant determinant for long periods of time and could be regularly and continuously measured and monitored. this is very seldom the case today, and exposure assessment in modern work life is affected by discontinuities of the enterprise, of technologies and production methods, and by turnover of the workforce, as well as by the growing mobility and internationalization of both work and workers. company files that were earlier an important source of exposure and health data no longer necessarily fulfil that function. in addition, the standard 8-h time-weighted average for exposure assessment can no longer be taken as a standard, as working hours are becoming extremely heterogeneous. assessment of accurate exposure is thus more and more complex and cumbersome, and new strategies and methods for the quantification of exposure are needed.
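a minimal sketch of the classical time-weighted average just mentioned, assuming an 8-hour reference shift; the sampling periods and concentrations below are hypothetical.

```python
# Hedged sketch: time-weighted average (TWA) exposure over a shift, the
# classical quantity the text says can no longer be taken for granted as
# working hours become heterogeneous. Durations and levels are hypothetical.
samples = [
    # (duration in hours, measured concentration in mg/m^3)
    (2.0, 0.8),
    (4.0, 1.5),
    (2.0, 0.2),
]

total_hours = sum(h for h, _ in samples)
twa = sum(h * c for h, c in samples) / total_hours
print(f"TWA over {total_hours:.0f} h: {twa:.2f} mg/m^3")
```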
three challenges in particular can be recognized: a) the challenge arising from numerous discontinuities, fragmentation and changes in the company, employment and technology. although in the past company data were collected from all sources that were available, collective workroom measurements were the most valuable source of data. due to the high mobility of workers and variation in work tasks, personal exposure monitoring is needed that follows the worker wherever he or she works. special smart cards for recording all personal exposures over the years have been proposed, but so far no system-wide action has been possible. in radiation protection, however, such a personal monitoring system has long been a routine procedure. b) the complex nature of exposures, where dozens of different factors may be involved (such as those in indoor air problems) and act in combination. table . gives a list of exposing factors in modern work life, many of which are difficult to monitor. c) new, rapidly spreading and often unexpected exposures that are not well characterized. often their mechanisms of action are not known, or the fast spread of problems calls for urgent action, as in the case of bovine spongiform encephalopathy (bse) in the s, the sars outbreak in , and the new epidemics of psychological stress or musculoskeletal disorders in modern manufacturing.

the causes of occupational diseases are grouped into several categories by the type of factor (see table . ). a typical grouping is the one used in ilo recommendation no. . the lists of occupational diseases contain diseases caused by one single factor only, but also diseases which may have been caused by multifactorial exposures. exposure assessment is a crucial step in the overall risk assessment. the growing complexity of exposure situations has led to the development of new methods for assessing such complex exposure situations. these methods are based on the construction of model matrices for jobs which have been studied thoroughly for their typical exposures. the exposure profiles are illustrated in job exposure matrices (jem), which are available for dozens of occupations (heikkilä et al. , guo ). several factors can cause occupational diseases. the jem is a tool used to convert information on job titles into information on occupational risk factors. jem-based analysis is economical, systematic, and often the only reasonable choice in large retrospective studies in which exposure assessment at the individual level is not feasible. but the matrices can also be used in practical work for obtaining information on the typical exposure profiles of various jobs. the finnish national job-exposure matrix (finjem) is the first and so far the only general jem that is able to give a quantitative estimation of cumulative exposure. for example, finjem estimates were used for exposure profiling on chemical exposures and several other cancer-risk factors for occupational categories. the jem analysis has been further developed into task-specific exposure matrices charting the exposure panorama of various tasks (benke et al. , ).
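a hedged sketch of the jem idea described above: job titles mapped to typical exposure profiles, with a finjem-style cumulative estimate as the product of probability, level and duration. the jobs, agents and numbers are invented for illustration and are not values from finjem.

```python
# Hedged sketch of a job-exposure matrix (JEM): job titles are mapped to
# typical exposure profiles; a cumulative estimate multiplies probability,
# mean level and duration. All entries below are hypothetical.
jem = {
    # job title -> {agent: (probability of exposure, mean level in mg/m^3)}
    "welder":    {"welding fumes": (0.9, 1.2), "nickel": (0.4, 0.05)},
    "carpenter": {"wood dust":     (0.8, 0.9)},
}

def cumulative_exposure(job: str, agent: str, years: float) -> float:
    prob, level = jem[job][agent]
    # expected cumulative dose: probability * mean level * duration
    return prob * level * years

# e.g. a welder's expected cumulative welding-fume dose over 10 years
print(cumulative_exposure("welder", "welding fumes", years=10))  # mg/m^3-years
```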
as the previous mono-causal, mono-mechanism, mono-outcome setting has shifted in the direction of multicausality, multiple mechanisms and multiple outcomes, the assessment of risks has become more complex. some outcomes, as mentioned above, are difficult to define and measure with objective methods, and some of them may be difficult to recognize by the exposed groups themselves, or even by experts and researchers. for example, the objective measurement of stress reactions is still imprecise in spite of improvements in the analysis of some indicator hormones, such as adrenalin, noradrenalin, cortisol and prolactin, and in physical measurements, such as galvanic skin resistance and heart rate variability. questionnaires monitoring perceived stress symptoms are still the most common method for measuring stress outcomes.

thanks to well organized registries, particularly in germany and the nordic countries, data on many of the relevant outcomes of exposure, such as cancer, pneumoconiosis, reproductive health disturbances and cardiovascular diseases, can be accumulated, and long-term follow-up of outcomes at the group level is therefore possible. on the other hand, several common diseases, such as cardiovascular diseases, may have a work-related aetiology, but it may be difficult to show at the individual level. the long-term data show that due to changes in the structure of economies, types of employment, occupational structures and conditions of work, many of the traditional occupational diseases, such as pneumoconiosis and acute intoxications, have almost disappeared. several new outcomes have appeared, however, such as symptoms of physical or psychological overload, psychological stress, problems of adapting to a high pace of work, and uncertainty related to rapid organizational changes and the risk of unemployment. in addition, age-related and work-related diseases among the ageing workforce are on the increase (kivimäki et al. , ilmarinen ). these new outcomes may have a somatic, psychosomatic or psychosocial phenotype, and they often appear in the form of symptoms or groups of symptoms instead of well-defined diagnoses. practising physicians or clinics are not able to set an icd (international statistical classification of diseases and related health problems)-coded diagnosis for them. in spite of their diffuse nature, they are still problems for both the worker and the enterprise, and their consequences may be seen as sickness absenteeism, premature retirement, loss of job satisfaction, or lowered productivity. thus, they may have an even greater impact on the quality of work life and the economy than on clinical health. many such outcomes have been investigated by using questionnaire surveys, either among representative samples of the whole workforce or by focusing the survey on a specific sector or occupational group. the combination of data from surveys of "exposing factors", such as organizational changes, with questionnaire surveys of "outcomes", such as sickness absenteeism, provides epidemiological information on the association between the new exposures and the new outcomes. there are, however, major problems in the accurate measurement of both the exposures and the outcomes, and the information available on the mechanisms of action is very scarce. epidemiology has expanded the focus of our observations from cross-sectional descriptions to longitudinal perspectives, by focussing attention on the occurrence of diseases and finding associations between exposure and morbidity. such an extension of vision is both horizontal and vertical, looking at the causes of diseases. time is not only a temporal parameter in epidemiology, but has also been used for the quantification of exposure, the measurement of latencies, and the detection of acceleration or slowing of the course of biological processes. as the time dimension in epidemiology is very important, the changes in the temporal parameters of the new work life also affect the methods of epidemiological research. the time dimension is affected in several ways. first, the fragmentation and discontinuities of employment contracts, as described above, break the accumulation of exposure time into smaller fragments, and continuities are thus difficult to maintain. collecting data on cumulative exposures over time becomes more difficult.
the time needed for exposure factors to cause an effect also becomes more complex to assess, as the discontinuities typical of modern work life allow time for biological repair and elimination processes, thus diluting the risk which would become manifest from continuous exposure. the dosage patterns become more pulse-like, rather than being continuous, stable-level exposures. this may affect the multi-staged mechanisms of action in several biological processes. the breaking up of time also increases the likelihood of memory bias among respondents in questionnaire studies of exposed workers, and thus affects the estimation of total exposures. probably the most intensive effect, however, will be seen as a consequence of the variation in working hours. for example, instead of regular work of hours per day, hours per week and months per year, new time schedules and total time budgets are being introduced for the majority of workers in the industrial society. the present distribution of weekly working hours in finland is less than hours per week for one third of workers, regular - hours per week for one third, and - hours per week for the remaining third. thus the real exposure times may vary substantially even among workers in the same jobs and the same occupations, depending on the working hours and the employment contract (temporary, seasonal, part-time, full-time) (härmä , piirainen et al. ). such variation in time distribution in the "new work life" has numerous consequences for epidemiological studies, which in the past "industrial society" effectively utilized the constant time patterns at work for the assessment of exposures and outcomes and their interdependencies. the time dimension also has new structural aspects. as biological processes are highly deterministic in terms of time, the rapid changes in work life cannot wait for the maturation of results in longitudinal follow-up studies. the data are needed rapidly in order to be useful in the management of working conditions. this calls for the development of rapid epidemiological methods which enable the quick collection and analysis of data, in order to provide information on the effects of potential causal factors before the emergence of a new change. often these methods imply compromising accuracy and reliability for the benefit of timeliness and actuality. as occupational epidemiology is not only interested in acute and short-term events, but looks at the health of workers over a - -year perspective, the introduction of such new quick methods should not jeopardize the interest in, and efforts to carry out, long-term studies. epidemiology has traditionally been a key tool in making a reliable risk assessment of the likelihood of adverse outcomes from certain levels of exposure. the new developments in work life bring numerous new challenges to risk assessment. as discussed above, these developments have eliminated a number of possibilities for risk assessment which prevailed in the stable industrial society. on the other hand, several new methods and new information technologies provide new opportunities for the collection and analysis of data. traditionally, the relationship between exposure and outcome has been judged on the basis of the classical criteria set by hill ( ). höfler ( ) crystallizes the criteria with their explanations as follows:

1. strength of association: a strong association is more likely to have a causal component than is a modest association.
2. consistency: a relationship is observed repeatedly.
3. specificity: a factor influences specifically a particular outcome or population.
4. temporality: the factor must precede the outcome it is assumed to affect.
5. biological gradient: the outcome increases monotonically with increasing dose of exposure or according to a function predicted by a substantive theory.
6. plausibility: the observed association can be plausibly explained by substantive (e.g. biological) explanations.
7. coherence: a causal conclusion should not fundamentally contradict present substantive knowledge.
8. experiment: causation is more likely if the evidence is based on randomized experiments.
9. analogy: for analogous exposures and outcomes an effect has already been shown.

the hill criteria have been subjected to scrutiny, and sven hernberg has analyzed them in detail from the viewpoint of occupational health epidemiology. virtually all the hill criteria are affected by the changes in the new work life, and therefore methodological development is now needed. a few comments on causal inference are made here in view of the critiques by rothman ( ), hernberg ( ) and höfler ( ): the strength of association will be more difficult to demonstrate due to the growing fragmentation that tends to diminish sample sizes. the structural change that moves workers from high-level exposures to lower and shorter-term exposures may dilute the strength of the effect, which may still prevail, but at a lower level. consistency of evidence may also be affected by the higher variation in conditions of work, study groups, the multicultural and multiethnic composition of the workforce, etc. similarly, in the multifactorial, multi-mechanism, multi-outcome setting, the specificity criterion is not always relevant. the temporal dimension has already been discussed. in a rapidly changing work life the follow-up times before the next change and before turnover in the workforce may be too short. the outcomes may also be determined by exposures that took place long ago but have not been considered in the study design because historical data are not available. the biological gradient may be possible to demonstrate in a relatively simple exposure-outcome relationship. however, the more complex and multifactorial the setting becomes, the more difficult it may be to show the dose-response relationship. the dose-response relationship may also be difficult to demonstrate in the case of relatively ill-defined outcomes which are difficult to measure, but which can be detected as qualitative changes. biological plausibility is an important criterion which, in a multi-mechanism setting, may at least in part be difficult to demonstrate. on the other hand, the mechanisms of numerous psychological and psychosocial outcomes lack explanations, even though they undoubtedly are work-related. missing knowledge of the mechanism of action did not prevent the establishment of causality between asbestos and cancer of the pleura or the lung. as many of the new outcomes may be context-dependent, the coherence criterion may be irrelevant. similarly, many of the psychosocial outcomes are difficult to put into an experimental setting, and it can be difficult to make inferences based on analogy. all of the foregoing implies that the new dynamic trends in work life challenge epidemiology in a new way, particularly in the establishment of causality. knowledge of causality is required for the prevention and management of problems.
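as a minimal illustration of the first criterion, strength of association, here is how a relative risk would be computed from a cohort-style two-by-two table; the counts are hypothetical.

```python
# Hedged sketch: quantifying Hill's "strength of association" criterion as a
# relative risk from a cohort-style 2x2 table. All counts are hypothetical.
exposed_cases, exposed_total     = 30, 1_000
unexposed_cases, unexposed_total = 10, 1_000

risk_exposed   = exposed_cases / exposed_total
risk_unexposed = unexposed_cases / unexposed_total
rr = risk_exposed / risk_unexposed

print(f"risk exposed   = {risk_exposed:.3f}")
print(f"risk unexposed = {risk_unexposed:.3f}")
print(f"relative risk  = {rr:.1f}")  # RR = 3.0: a fairly strong association
```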
the hill criteria nevertheless need to be supplemented with new ones to meet the conditions of the new work life. similarly, more definitive and specific criteria and indicators need to be developed for the new exposures and outcomes. many of the challenges faced in the struggle to improve health and safety in modern work life can only be solved with the help of research. research on occupational health in the rapidly changing work life is needed more than ever. epidemiology is, and will remain, a key producer of the information needed for prevention policies and for ensuring healthy and safe working conditions. the role of epidemiology is, however, expanding from the analysis of the occurrence of well-defined clinical diseases to studies on the occurrence of several other types of exposure and outcome, and their increasingly complex associations. as the baseline in modern work life is shifting in a more dynamic direction, and many parameters of work and the workers' situation are becoming more fragmented, discontinuous and complex, new approaches are needed to tackle the uncertainties in exposure assessment. the rapid pace of change in work life calls for the development of assessment methods that provide up-to-date data quickly, so that they can be used to manage these changes and their consequences. many new outcomes which cannot be registered as clinical icd diagnoses constitute problems for today's work life. this is particularly true in the case of psychological, psychosocial and many musculoskeletal outcomes which need to be managed by occupational health physicians. methods for the identification and measurement of such outcomes need to be improved. the traditional hill criteria for causal inference are not always met even in cases where a true association does exist. new criteria suitable for the new situation should be established without jeopardizing the original objective of ascertaining true association. developing bayesian inference further, through the utilization of a priori knowledge and a holistic approach, may provide responses to the new challenges. new neural network software may help in the management of the growing complexity. the glory of science does not lie in the perfection of a scientific method but rather in the recognition of its limitations. we must keep in mind the old saying: "absence of evidence is not evidence of absence". absence of evidence is merely a consequence of our ignorance, which should be reduced through further efforts in systematic research, and particularly through epidemiology. and secondly, the ultimate value of occupational health research will be determined on the basis of its impact on practice in the improvement of the working conditions, safety and health of working people. changing conditions of work, new technologies, new substances, new work organizations and working practices are associated with new morbidity patterns and even with new occupational and work-related diseases. the new risk factors, such as rapidly transforming microbes and certain social and behavioural "exposures", may follow totally new dynamics when compared with the traditional industrial exposures (the self-replicating nature of microbes and the spreading of certain behaviours, such as terrorism) (smolinski et al. , loza ). several social conditions, such as massive rural-urban migration, the increased international mobility of working people, new work organizations and mobile work may cause totally new types of morbidity. examples of such developments are, among others, the following:

• mobile transboundary transportation work leading to the spread of hiv/aids.
• increased risk of metabolic syndrome, diabetes and cardiovascular diseases aggravated by unconventional working hours.
• increased risk of psychological burnout in jobs with a high level of long-term stress.
• a virtually global epidemic of musculoskeletal disorders among vdu workers with a high workload, psychological stress and poor ergonomics.

the incidence of occupational diseases may not decline in the future, but the type of morbidity may change. the trend in industrialized countries is towards the prominence of work-related morbidity and new diseases, while the traditional occupational diseases such as noise injury, pneumoconiosis, repetitive strain and chemical intoxications may continue to be prevalent in developing countries for long periods in the future. the new ergonomics problems are related to light physical work with a considerable proportion of static and repetitive workload. recent research points to an interesting interaction between unergonomic working conditions and psychological stress, leading to a combined risk of musculoskeletal disorders of the neck, shoulders and upper arms, including carpal tunnel syndrome in the wrist. the muscle tension in static work is amplified by the uncontrolled muscular tension caused by psychological stress. furthermore, there seems to be wide inter-individual variation in the tendency to respond with spasm, particularly in the trapezius muscle of the neck, under psychological stress. about % of the health complaints of working-aged people are related to musculoskeletal disorders, of which a substantial part is work-related. the epidemics have been resistant to preventive measures. new regulatory and management strategies may be needed for effective prevention and control measures (westgaard et al. , paoli and merllié ). the 21st century will be the era of the brain at work and consequently of psychological stress. between % and % of eu workers in certain occupations report psychological stress due to high time pressure at work (parent-thirion et al. ). the occurrence of work-related stress is most prevalent in occupations with tight deadlines, pressure from clients, or a high level of responsibility for productivity and quality given to the workers. undoubtedly, the threat of unemployment increases the perception of stress as well. as a consequence, for example, in finland some % of workers report symptoms of psychological overload and about % show clinical signs of burnout. these are not the problems of low-paid manual workers only; for example, highly educated and well-paid computer super-experts also have an elevated risk of burnout as a consequence of an often self-imposed workload (kalimo and toppinen ). unconventional and ever longer working hours are causing similar problems. for example, one third of finns work over hours a week, and of these % work over hours, and % often work - hours per week. it is important to have flexibility in work time schedules, but it is counterproductive if the biologically determined physiological time rhythms of the worker are seriously disrupted. over % have a sleep deficit of at least one hour each day, and % are tired and somnolent at work (härmä et al. ). the toughening global competition, growing productivity demands and continuous changes of work, together with job insecurity, are associated with increased stress. up to - % of workers in different countries and different sectors of the economy report high time pressure and tight deadlines.
this prevents them from doing their job as well as they would like to, and causes psychological stress. psychological stress is particularly likely to occur if the high demands are associated with a low degree of self-regulation by the workers (houtman ). stress, if continuous, has been found to be detrimental to physical health (cardiovascular diseases), mental health (psychological burnout), safety (accident risks), and musculoskeletal health (particularly hand-arm and shoulder-neck disorders). it also has a negative impact on productivity, sickness absenteeism, and the quality of products and services. the resulting economic losses due to sickness absenteeism, work disability and the lower quality of products and services are substantial. the prevention of stress consists not only of actions targeted at the individual worker. there is also a need for measures directed at the work organization, moderation of the total workload, competence building and collaboration within the workplace (theorell ). the support of foremen and supervisors is of crucial importance in stress management programmes. another type of psychological burden is the stress arising from the threat of physical violence or aggressive behaviour on the part of clients. in finland some % of workers have been subjected to insults or the threat of physical violence, % have experienced sexual harassment, and % mental violence or bullying at work. the risk is substantially higher for female workers than for men. stress has been found to be associated with somatic ill health, cardiovascular diseases, mental disorders and depression. one of the new and partly re-emerging challenges of occupational health services is associated with the new trends in microbial hazards. there are several reasons for these developments, for instance, the generation of new microbial strains, structural changes in human habitations with high population densities, growing international travel, and possibly changes in our microbiological environment as a consequence of global warming. of the to million species in the world, about million are microbes. the vast majority of them are not pathogenic to man, and we live in harmony and symbiosis with many of them. we also use bacteria in numerous ways to produce food, medicines, proteins, etc. pathogenic bacteria were brought well under control in the 20th century; this control had an enormous positive impact on human health, including occupational health. but now the microbial world is challenging us in many ways. new or re-emerging biological hazards are possible due to the transformation of viruses, the increased resistance of some microbial strains (e.g. tuberculosis and some other bacterial agents) and the rapid spread of contaminants through extensive overseas travelling (smolinski et al. ). the scenarios of health hazards from the use of genetically manipulated organisms have not been realized, but biotechnological products have brought along new risks of allergies. a major indoor air problem is caused by fungi, moulds and chemical emissions from contaminated construction materials. new allergies are encountered as a consequence of the increasingly allergic constitution of the population and of the introduction of new allergens into the work environment. health care personnel are increasingly exposed to new microbial hazards due to the growing mobility of people.
evidence of high rates of hepatitis b antigen positivity has been found among health care workers who are in contact with migrants from endemic areas. along with the growing international interactions and mobility, a number of viral and re-emerging bacterial infections also affect the health of people engaged in health care and the care of the elderly, as well as personnel in migrant and refugee services, in social services and in other public services.

this section applies the general framework for risk governance (chapter ) to the area of environmental risks. why should we include this topic in a book that predominantly deals with occupational health risks and safety issues? there are two major reasons for this decision:

1. most risks that impact the health and safety of human beings also affect the natural environment. it is therefore necessary for risk managers to reflect on the consequences of risk-taking activities with respect to workers, the public and the environment. these risk consequences are all interconnected. our approach of fostering an integral approach to risk and risk management requires the integration of all risk consequences.

2. environmental risks are characterized by many features and properties that highlight exemplary issues for many generic risk assessment and management questions and challenges. for example, the question of how to balance benefits and risks becomes more accentuated if not human life, but damage to environmental quality, is at stake. while most people agree that saving human lives takes priority over economic benefits, it remains an open question how much environmental change and potential damage one is willing to trade off against certain economic benefits.

this section is divided into two major parts. the first part will introduce the essentials of environmental ethics and the application of ethical principles to judging the acceptability of human interventions into the environment. the second part addresses the procedures for an analytic-deliberative process of decision making when using the risk governance framework developed in chapter . it should be noted that this section draws on material that the author compiled for the german scientific council for global environmental change, part of which has been published in german in a special report of the council (wbgu ). the last section, on decision making, has borrowed material from an unpublished background document on decision making and risk management that dr. warner north and the author had prepared for the us national academy of sciences. should people be allowed to do everything that they are capable of doing? this question is posed in connection with new technologies, such as nanotubes, or with human interventions in nature, such as the clearance of primaeval forests so that the land can be used for agriculture. intuitively everyone answers this question with a definitive "no": in no way should people be allowed to do everything that they are capable of doing. this also applies to everyday actions. many options in daily life, from lying and minor deception to breaking a promise or going behind a friend's back, are obviously actions that are seen by all well-intentioned observers as unacceptable. however, it is much more difficult to assess those actions where the valuation is not so obvious. is it justified to break a promise when keeping the promise could harm many other people?
actions where there are conflicts between positive and negative consequences, or where a judgement could be made one way or the other with equally good justification, are especially common in risk management. there is hardly anyone who wilfully and without reason pollutes the environment, releases toxic pollutants or damages the health of individuals. people who pursue their own selfish goals at the cost and risk of others are obviously acting wrongly, and every legislator will sanction this behaviour with the threat of punishment or a penalty. but there is a need for clarification where people bring about a benefit to society with the best intentions and for plausible reasons and, in the process, risk negative impacts on others. in ethics we talk about "conflicting values" here. most decisions involving risks to oneself or others are made for some reason: the actors who make such interventions want to secure goods or services for consumers, for example, to ensure long-term jobs and adequate incomes, to use natural resources for products and services, or to use nature for recycling waste materials from production and consumption that are no longer needed. none of this is done for reasons of brotherly love, but to maintain social interests. even improving one's own financial resources is not immoral merely for this reason. the list of human activities that pose risks to others, perpetrated for existential or economic reasons, could be continued endlessly. human existence is bound up with taking opportunities and risks. here are just a few figures: around , years ago about million people lived on the earth. under the production conditions of those days (hunter-gatherer culture) this population level was the limit for the human species within the framework of an economic form that interfered only slightly with man's natural environment. the neolithic revolution brought a dramatic change: the carrying capacity of the world for human beings increased by a factor of and more. this agrarian, pre-industrial cultural form was characterized by a tightly limited carrying capacity; in around the earth was capable of feeding approx. million people. today the world supports billion people, and this figure is rising. the carrying capacity in comparison to the neolithic age has thus increased thousand-fold and continues to grow in parallel with new changes in production conditions (fritsch ; kesselring ; mohr ). the five "promethean innovations" are behind this tremendous achievement of human culture: mastering fire, using the natural environment for agriculture, transforming fossil fuels into thermal and mechanical energy, industrial production, and substituting material with information (renn ). with today's settlement densities and the predominantly industrial way of life, the human race is therefore dependent on the technical remodelling of nature. without doubt, it needs this for survival, and especially for the well-being of innumerable people it needs goods and services that draw down the stock of natural resources. with regard to the question of the responsibility of human interventions in nature, the question cannot be about "whether" but rather about "how much", because it is an anthropological necessity to adapt and shape existing nature to human needs. for example, the philosopher klaus michael meyer-abich sees the situation as follows: ". . . we humans are not there to leave the world as though we had never been there.
as with all other life forms, it is also part of our nature and our lives to bring about changes in the world. of course, this does not legitimise the destructive ways of life that we have fallen into. but only when we basically approve of the changes in the world can we turn to the decisive question of which changes are appropriate for human existence and which are not" (meyer-abich ). therefore, to be able to make a sensible judgement of the balance between necessary interventions into the environment and the risks posed by these interventions to human health and environmental quality, the range of products and services created by the consumption of nature has to be considered in relation to the losses that are inflicted on the environment and nature. with this comparison, it can be seen that even serious interventions in nature and the environment did not occur without reflection, but in order to provide the growing number of people with goods and services; people need these to survive or as a prerequisite for a "good" life. however, at the same time it must be kept in mind that these interventions often inflict irreversible damage on the environment and destroy possible usage potentials for future generations. above and beyond this, for the human race, nature is a cradle of social, cultural, aesthetic and religious values, the infringement of which, in turn, has a major influence on people's well-being. on both sides of the equation, therefore, there are important goods that have to be appreciated when interventions in nature occur. but what form should such an appreciation take? if the pros and cons of an intervention in nature have to be weighed against each other, criteria are needed that can be used as yardsticks. who can and may draw up such criteria, according to which standards should the interventions be assessed, and how can the various evaluative options for action be compared with each other for each criterion? taking risks always involves two major components: first, an assessment of what we can expect from an intervention into the environment (be it the use of resources or the use of environments as a sink for our waste); this is the risk and benefit assessment side of the risk analysis. secondly, we need to decide whether the assessed consequences are desirable. whereas the estimation of consequences broadly falls within the domain of scientific research and expertise, with uncertainties and ambiguities in particular having to be taken into account (irgc , klinke and renn ), the question about the foundations for evaluating various options for action and about drawing up standards guiding action is a central function of ethics (taylor ). ethics can provide an answer to the question posed at the beginning ("should people be allowed to do everything that they are capable of doing?") in a consistent and transparent manner. in section . . , environmental ethics will be briefly introduced. this review is inspired by the need for a pragmatic and policy-oriented approach; it is not a replacement for a comprehensive and theoretically driven compendium of environmental ethics. environmental ethics will then be applied to the evaluation of environmental assets. in this process, a simple distinction is made between categorical principles, which must under no circumstances be exceeded or violated, and compensatory principles, where compensation with other competing principles is allowed.
this distinction consequently leads to a classification of environmental values, which, in turn, can be broken down into criteria for appreciating options for designing environmental policies. in section . . , these ideas of valuation will be taken up and used to translate the value categories into risk handling guidelines. at the heart of the considerations here is the issue of how the aims of ethically founded considerations can be used to support and implement risk-based balancing of costs and benefits. for this purpose, we will develop an integrative risk governance framework. the concept of risk governance comprises a broad picture of risk: not only does it include what has been termed "risk management" or "risk analysis", it also looks at how risk-related decision making unfolds when a range of actors is involved, requiring co-ordination and possibly reconciliation between a profusion of roles, perspectives, goals and activities. indeed, the problem-solving capacities of individual actors, be they government, the scientific community, business players, ngos or civil society as a whole, are limited and often unequal to the major challenges facing society today. the ideas of the operational implementation of normative and factual valuations are then continued, and a procedure is described that is capable of integrating ethical, risk-based and work-related criteria into a proposed procedural orientation. this procedure is heavily inspired by decision analysis. answering the question about the right action is the field of practical philosophy: ethics. following the usual view in philosophy, ethics describes the theory of the justification of normative statements, i.e. those that guide action (gethmann , mittelstraß , nida-rümelin a, revermann ). a system of normative statements is called "morals". ethical judgements therefore refer to the justifiability of moral instructions for action that may vary from individual to individual and from culture to culture (ott ). basically, humans are purpose-oriented and self-determined beings who act not only instinctively, but also with foresight, and who are subject to the moral standard of carrying out only those actions that they can classify as good and justifiable (honnefelder ). obviously, not all people act according to the standards that they themselves see as necessary, but they are capable of doing so. in this context, it is possible for people to act morally because, on the one hand, they are capable of distinguishing between moral and immoral action and, on the other, are largely free to choose between different options for action. whether pursuing a particular instruction for action should be considered moral or immoral depends on whether the action concerned can be felt and justified to be "reasonable" in the particular situation. standards that apply across situations and that demand universal applicability are referred to here as principles. conflicts may arise between competing standards (in a specific situation), as well as between competing principles, the solution of which, in turn, needs justification (szejnwald-brown et al. ). providing yardsticks for such justification, or examining moral systems with respect to their justifiability, is one of the key tasks of practical ethics (gethmann ). in ethics a distinction is made between descriptive approaches (experienced morality) and prescriptive approaches, i.e. justifiable principles of individual and collective behaviour (frankena , hansen ).
all descriptive approaches are, generally speaking, a "stock-taking" of actually experienced standards. initially, it is irrelevant whether these standards are justified or not. they gain their normative force solely from the fact that they exist and instigate human action (the normative force of actual action). most ethicists agree that no conclusions about general validity can be drawn from the actual existence of standards. this would be a naturalistic fallacy (akademie der wissenschaften , ott ). nevertheless, experienced morality can be an important indicator of different, equally justifiable moral systems, especially where guidance for cross-cultural behaviour is concerned. this means that the actual behaviour of many people with regard to their natural environment reveals which elements of this environment they value in particular and which they do not. however, in this case, too, the validity of the standards is not derived from their factuality; it is merely used as a heuristic in order to find an adequate (possibly culture-immanent) justification. but given the variety of cultures and beliefs, how can standards be justified inter-subjectively, i.e. in a way that is equally valid for all? is it not the case that science can only prove or disprove factual statements (and this only to a certain extent), but not normative statements? a brief discourse on the various approaches in ethics is needed to answer this question. first of all, ethics is concerned with two different target aspects: on the one hand, it is concerned with the question of the "success" of one's own "good life", i.e. with the standards and principles that enable a person to have a happy and fulfilled life. this is called eudemonistic ethics. on the other hand, it is concerned with the standards and principles of living together, i.e. with binding regulations that create the conditions for a happy life: the common good. this is called normative ethics (galert , ott ). within normative ethics a distinction is made between deontological and teleological approaches when justifying normative statements (höffe ). deontological approaches comprise principles and standards of behaviour that apply to the behaviour itself on the basis of an external valuation criterion. it is not the consequences of an action that are the yardstick of the valuation; rather, it is adherence to inherent yardsticks that can be applied to the action itself. such external yardsticks of valuation are derived from religion, nature, intuition or common sense, depending on the basic philosophical direction. thus, protection of the biosphere can be seen as a divine order to protect creation (rock , schmitz ), as an innate tendency for the emotional attachment of people to an environment with biodiversity (wilson ), as a directly understandable source of inspiration and joy (ehrenfeld ) or as an educational means of practising responsibility and maintaining social stability (gowdy ). by contrast, teleological approaches refer to the consequences of action. here, too, external standards of valuation are needed, since the ethical quality of the consequences of action also has to be evaluated against a yardstick of some kind. with most utilitarian approaches (a subset of the teleological approaches) this yardstick is defined as an increase in individual or social benefit. in other schools of ethics, intuition (can the consequence still be desirable?) or the aspect of reciprocity (the so-called "golden rule": "do as you would be done by") plays a key role.
in the approaches based on logical reasoning (especially in kant), the yardstick is derived from the logic of the ability to generalize or universalize. kant himself stands in the tradition of deontological approaches ("good will is not good as a result of what it does or achieves, but just as a result of the intention"). according to kant, every principle that, if followed generally, makes it impossible for a happy life to be conducted is ethically impermissible. in this connection, it is not the desirability of the consequences that captures kant's mind, but the logical inconsistency that results from the fact that the conditions of the actions of individuals would be undermined if everyone were to act according to the same maxims (höffe ). a number of contemporary ethicists have taken up kant's generalization formula, but do not judge maxims according to their internal contradictions; rather, they judge them according to the desirability of the consequences to be feared from the generalization (jonas or zimmerli should be mentioned here). these approaches can be seen as a middle course between deontological and teleological forms of justification. in addition to deontological and teleological approaches, there is also the simple solution of consensual ethics, which, however, comprises more than just actually experienced morality. consensual ethics presupposes the explicit agreement of the people involved in an action. everything is allowed provided that all affected (for whatever reason) voluntarily agree. in sexual ethics, for instance, a change from deontological ethics to a consensual moral code can currently be seen. the three forms of normative ethics are shown in figure . . the comparison of the basic justification paths for normative moral systems already clearly shows that professional ethicists cannot create any standards or designate any as clearly right, even if they play a role in people's actual lives. rather, it is the prime task of ethics to ensure on the basis of generally recognized principles (for example, human rights) that all associated standards and behaviour regulations do not contradict each other or a higher-order principle. above and beyond this, ethics can identify possible solutions that may occur with a conflict between standards and principles of equal standing. ethics may also reveal interconnections of justification that have proved themselves as examination criteria for moral action in the course of its disciplinary history. finally, many ethicists see their task as providing methods and procedures, primarily of an intellectual nature, by means of which the compatibility or incompatibility of standards within the framework of one or more moral systems can be examined. unlike the law, the wealth of standards of ethics is not bound to codified rules that can be used as a basis for such compatibility examinations. every normative discussion therefore starts with the general issues that are needed in order to allow individuals a "good life" and, at the same time, to give validity to the principles required to regulate the community life built on the common good. but how can generally binding and inter-subjectively valid criteria be found for the valuation of "the common good"? in modern pluralistic societies, it is increasingly difficult for individuals and groups of society to draw up or recognize collectively binding principles that are perceived by all equally as justifiable and as self-obliging (hartwich and wewer , zilleßen ).
the variety of lifestyle options and the subjectification of meaning (individualization) are accompanying features of modernization. with increasing technical and organizational means of shaping the future, the range of behaviour options available to people also expands. with the increasing plurality of lifestyles, group-specific rationalities emerge that create their own worldviews and moral standards, which demand a binding nature and validity only within a social group or subculture. the fewer cross-society guiding principles or behaviour orientations are available, the more difficult is the process of agreement on collectively binding orientations for action. however, these are vital for the maintenance of economic cooperation, for the protection of the natural foundations of life and for the maintenance of cohesion in a society. no society can exist without the binding specification of a minimum canon of principles and standards. but how can agreement be reached on such collectively binding principles and standards? what criteria can be used to judge standards? the answers to this question depend on whether it is the primary principles, in other words, the starting point of all moral systems, or secondary principles or standards, i.e. follow-on standards that can be derived from the primary principles, that are subjected to an ethical examination. primary principles can be categorical or compensatory (capable of being compensated). categorical principles are those that must not be infringed under any circumstances, even if other principles would be infringed as a result. the human right to the integrity of life could be named here as an example. compensatory principles are those where temporary or partial infringement is acceptable, provided that as a result the infringement of a principle of equal or higher ranking is avoided or can be avoided. in this way certain freedom rights can be restricted in times of emergency. in the literature on ethical rules, one can find more complex and sophisticated classifications of normative rules. for our purpose of providing a simple and pragmatic framework, the distinction into four categories (principles and standards; categorical and compensatory) may suffice. this distinction has been developed from a decision-analytical perspective. but how can primary principles be justified as equally valid for all people? although many philosophers have made proposals here, there is a broad consensus today that neither philosophy nor any other human faculty is capable of stating binding meta-criteria without any doubt and for all people, according to which such primary principles could be derived or examined (mittelstraß ). a final justification of normative judgements cannot be achieved by logical means either, since all attempts of this kind automatically end either in a logical circle, in an unending regression (infinite regress) or in a termination of the procedure, and none of these alternatives is a satisfactory solution for final justification (albert ). the problem of not being able to derive finally valid principles definitively, however, seems to be less serious than it would appear at first glance. because, regardless of whether the basic axioms of moral rules are taken from intuition, observations of nature, religion, tradition, reasoning or common sense, they have broadly similar contents. thus, there is broad consensus that each human individual has a right to life, that human freedom is a high-value good and that social justice should be aimed at.
but there are obviously many different opinions about what these principles mean in detail and how they should be implemented. in spite of this plurality, however, discerning and well-intentioned observers can usually quickly agree whether one of the basic principles has clearly been infringed. it is more difficult to decide whether the principles have clearly been fulfilled, or whether the behaviour to be judged should clearly be assigned to one or several principles. since there is no finally binding body in a secular society that can specify primary principles or standards ex cathedra, consensus among equally defendable standards or principles can be used in this case (or, pragmatically, under certain conditions also majority decisions). ethical considerations are still useful here as they allow the test of generalization and enhance awareness-raising capabilities. in particular, they help to reveal the implications of such primary principles and standards. provided that primary principles are not concerned (such as human rights), the ethical discussion largely consists of examining the compatibility of each of the available standards and options for action with the primary principles. in this connection, the main concerns are a lack of contradictions (consistency), logical consistency (deductive validity), coherence (agreement with other principles that have been recognized as correct) and other, broadly logical criteria (gethmann ). as the result of such an examination it is entirely possible to reach completely different conclusions that all correspond to the laws of logic and thus justify new plurality. in order to reach binding statements or valuations here, the evaluator can either conduct a discussion in his "mind" and let the arguments for various standards compete with each other (rather like a platonic dialogue) or conduct a real discussion with the people affected by the action. in both cases the main concern is to use the consensually agreed primary principles to derive secondary principles of general action and standards of specific action that should be preferred over alternatives that could be equally well justified. a plurality of solutions should be expected, especially because most of the concrete options for action comprise only a gradual fulfilment or infringement of primary principles and therefore also involve conflicting values. for value conflicts at the same level of abstraction there are, by definition, no clear rules for solution. there are therefore frequently conflicts between conserving life through economic development and destroying life through environmental damage. since the principle of conserving life can be invoked for both options, a conflict is unavoidable in this case. to solve such conflicts, ethical considerations, such as the avoidance of extremes, staggering priorities over time or the search for third solutions, can help without, however, being able to solve the conflict in principle to the same degree convincingly for all (szejnwald-brown et al. ). these considerations lead to some important conclusions for the application of ethical principles to the issue of human action with regard to the natural environment. first of all, it contradicts the self-understanding of ethics to develop a separate ethics for each different action context.
just as there can be no different rules for the logic of deduction and induction in nomological science depending on which object is concerned, it does not make any sense to postulate an independent set of ethics for the environment (galert ). justifications for principles and moral systems have to satisfy universal validity (nida-rümelin b). furthermore, it is not very helpful to call for a special moral system for the environment since this -like every other moral system -has to be traceable to primary principles. instead, it makes sense to specify the generally valid principles that are also relevant with regard to the issue of how to deal with the natural environment. at the same time standards should be specified that are appropriate to environmental goods and that reflect those principles that are valid beyond their application to the environment. as implied above, it does not make much sense to talk about an independent set of environmental ethics. rather, general ethics should be transferred to issues relating to the use of the environment (hargrove ). three areas are usually dealt with within the context of environmental ethics (galert ):
• environmental protection, i.e. the avoidance or alleviation of direct or indirect, current or future damage and pollution resulting from anthropogenic emissions, waste or changes to the landscape, including land use, as well as the long-term securing of the natural foundations of life for people and other living creatures (birnbacher a).
• animal protection, i.e. the search for reasonable and enforceable standards to avoid or reduce pain and suffering in sentient beings (krebs , vischer ).
• nature conservation, i.e. the protection of nature against the transforming intervention of human use, especially all measures to conserve, care for, promote and recreate components of nature deemed to be valuable, including species of flora and fauna, biotic communities, landscapes and the foundations of life required there (birnbacher a).
regardless of which of these three areas is addressed, we need to explore which primary principles should be applied to it. when dealing with the environment, the traditional basic and human rights, as well as the civil rights that have been derived from them, should be just as much a foundation of the consideration as in other areas of application in ethics. however, with regard to the primary principles there is a special transfer problem when addressing human interventions into nature and the environment: does the basic postulate of the conservation of life apply only to human beings, to all other creatures or to all elements of nature, too? this question does not lead to a new primary principle, as one may suspect at first glance. rather, it is concerned with the delineation of the universally recognized principle of the conservation of life that has already been specified in the basic rights canon. are only people included in this principle (this is the codified version valid in most legal constitutions today) or other living creatures, too? and if yes, which ones? should non-living elements be included as well? when answering this question, two at first sight contradictory positions can be derived: anthropocentrism and physiocentrism (taylor , ott , galert ). the anthropocentric view places humans and their needs at the fore. nature's own original demands are alien to this view. interventions in nature are allowed if they are useful to human society.
a duty to make provisions for the future and to conserve nature exists in the anthropocentric world view only to the extent that natural systems are classed as valuable to people today and to subsequent generations, and that nature can be classed as a means and guarantor of human life and survival (norton , birnbacher b). in the physiocentric concept, which forms an opposite pole to the anthropocentric view, the needs of human beings are not placed above those of nature. here, every living creature, whether human, animal or plant, has intrinsic rights with regard to the chance to develop its own life within the framework of a natural order. merit for protection is justified in the physiocentric view by an inner value that is unique to each living creature or to the environment in general. nature has a value of its own that does not depend on the functions that it fulfils today or may fulfil later from a human society's point of view (devall and sessions , callicott , rolston , meyer-abich ). each of these prevailing understandings of the human-nature relationship has implications that are decisive for the form and extent of nature use by humans (elliot , krebs ). strictly speaking, it could be concluded from the physiocentric idea that all human interventions in nature have to be stopped so that the rights of other creatures are not endangered. yet not even extreme representatives of a physiocentric view would go so far as to reject all human interventions in nature, because animals, too, change the environment by their ways of life (e.g. the elephant prevents the greening of the savannah). the central postulate of a physiocentric view is the gradual minimization of the depth of interventions in human use of nature. the only interventions that are permitted are those that contribute to directly securing human existence and do not change the fundamental composition of the surrounding natural environment. if these two criteria were taken to the extreme, neither population development beyond the boundaries of biological carrying capacity nor a transformation of natural land into pure agricultural land would be allowed. such a strict interpretation of physiocentrism would lead to a radical reversal of human history so far and is not compatible with the values and expectations of most people. the same is true for the unlimited transfer of anthropocentrism to dealings with nature. in this view, the use of natural services is subjected solely to the individual cost-benefit calculation. this can lead to unscrupulous exploitation of nature by humans with the aim of expanding human civilization. both extremes quickly lead to counter-intuitive implications. when the issue of environmental design and policy is concerned, anthropocentric and physiocentric approaches in their pure form are found only rarely; rather, they occur in different mixtures and slants. the transitions between the concepts are fluid. moderate approaches certainly take on elements from the opposite position. it can thus be in line with a fundamentally physiocentric perspective if the priority of human interests is not questioned in the use of natural resources. it is also true that the conclusions of a moderate form of anthropocentrism can approach the implications of the physiocentric view. table . provides an overview of various types of anthropocentric and physiocentric perspectives.
if we look at the behaviour patterns of people in different cultures, physiocentric or anthropocentric basic positions are rarely maintained consistently (bargatzky and kuschel ; on the convergence theory: birnbacher ). in the strongly anthropocentric countries of the west, people spend more money on the welfare and health of their own pets than on saving human lives in other countries. in the countries of the far east that are characterized by physiocentrism, nature is frequently exploited even more radically than in the industrialized countries of the west. this inconsistent action is not a justification for one view or the other; it is just a warning for caution when laying down further rules for use, so that no extreme -and thus untenable -demands are made. from an ethical point of view, too, radical anthropocentrism should be rejected just as much as radical physiocentrism. if, to take up just one argument, the right to human integrity is largely justified by the fact that the infliction of pain by others should be seen as something to avoid, this consideration without a doubt has to be applied to other creatures that are also capable of feeling pain (referred to as pathocentrism). here, therefore, pure anthropocentrism cannot convince. in turn, with a purely physiocentric approach the primary principles of freedom, equality and human dignity could not be maintained at all if every part of living nature were equally entitled to use the natural environment. under these circumstances people would have to do without agriculture, the conversion of natural land into agricultural land and the breeding of farm animals and pets in line with human needs. as soon as physiocentrism is related to species, and not to individuals as is done in some biocentric perspectives, human priority is automatically implied; because where human beings are concerned, nearly all schools of ethics share the fundamental moral principle of an individual right to life from birth. if this right is not granted to individual animals or plants, a superiority of the human race is implicitly assumed. moderate versions of physiocentrism acknowledge a gradual de-escalation with respect to the claim of individual life protection (table . , different perspectives on nature; adapted from renn and goble ( : )). the extreme forms of both physiocentrism and anthropocentrism are therefore not very convincing and are hardly capable of achieving a global consensus. this means that only moderate anthropocentrism or moderate biocentrism should be considered. the image of nature that is used as a basis for the considerations in this section emphasizes the uniqueness of human beings vis-à-vis physiocentric views, but does not imply carte blanche for wasteful and careless dealings with nature. this moderate concept derives society's duty to conserve nature -also for future generations -from the life-preserving and life-enhancing meaning of nature for society. this is not just concerned with the instrumental value of nature as a "store of resources"; it is also a matter of the function of nature as a provider of inspiration, spiritual experience, beauty and peace (birnbacher and schicha ).
in this context it is important that human beings -as the addressees of the moral standard -do not regard nature merely as material and as a means towards their own self-realization, but can also assume responsibility for the conservation of its cultural and social function, as well as its existential value above and beyond the objective and technically available benefits (honnefelder ). one of the first people to express this responsibility of human stewardship of nature in an almost poetic way was the american ecologist aldo leopold, who pointed out people's special responsibility for the existence of nature and land as early as the s with the essay "the conservation ethic". his most well-known work, "a sand county almanac", is sustained by the attempt to observe and assess human activities from the viewpoint of the land (a mountain or an animal). this perspective was clearly physiocentric and revealed fundamental insights about the relationship between humans and nature on the basis of empathy and shifting perspectives. his point of view had a strong influence on american environmental ethics and the stance of conservationists. although this physiocentric perspective raises many concerns, the idea of stewardship has been one of the guiding ideas for the arguments used in this section (pickett et al. ). we are morally required to exercise a sort of stewardship over living nature, because nature cannot claim any rights for itself, but nevertheless has exceptional value that is important to man above and beyond its economic utility value (hösle ). since contemporary society and the generations to come certainly use, or will use, more natural resources than would be compatible with a lifestyle in harmony with the given natural conditions, the conversion of natural land into anthropogenically determined agricultural land cannot be avoided (mohr ). many people have criticized human interventions into natural cycles as infringements of the applicable moral standards of nature conservation (for example, anchored in the postulate of sustainability). but we should avoid premature conclusions here, as can be seen with the example of species protection. where natural objects or phenomena are concerned that turn out to be a risk to human or non-human living creatures, the general call for nature conservation is already thrown into doubt (gale and cordray ). not many people would call the eradication of cholera bacteria, hiv viruses and other pathogens morally bad (mittelstraß ), provided remaining samples were kept under lock and key in laboratories. combating highly evolved creatures, such as cockroaches or rats, also meets with broad support if we ignore the call for the complete eradication of these species for the time being. an environmental initiative to save cockroaches would not be likely to gain supporters. if we look at the situation carefully, the valuation of human behaviour in these examples results from a conflict: because the conservation of the species competes with the objective of maintaining human health or the objective of a hygienic place to live, two principles, possibly of equal ranking, come face to face. in this case the options for action, which may all involve a gradual infringement of one or more principles, have to be weighed up against each other.
a general ban on eradicating a species can thus not be justified ethically in the sense of a categorical principle, unless the maintenance of human health were to be given lower priority than the conservation of a species. with regard to the issue of species conservation, therefore, different goods have to be weighed up against each other. nature itself cannot show society what it is essential to conserve and how much nature can be traded for valuable commodities. humans alone are responsible for such decisions and the resulting conflicts between competing objectives. appreciation and negotiation processes are therefore at the core of the considerations about the ethical justification of rules for interventions. but this does not mean that there is no room for categorical judgements along the lines of "this or that absolutely must be prohibited" in the matter of human interventions into the natural environment. it follows from the basic principle of conserving human life that all human interventions that threaten the ability of the human race as a whole, or of a significant number of individuals alive today or in the future, to exist should be categorically prohibited. this refers to interventions threatening the systemic functions of the biosphere. avoiding such threats is one of the guiding principles that must not be violated under any circumstances, even if the violation were to be associated with high benefits. in the language of ethics this is a categorical principle; in the language of economics, a good that is not capable of being traded. the "club" of categorical prohibitions should, however, be used very sparingly, because plausible trade-offs can be thought up for most principles, the partial exceeding of which appears intuitively acceptable. in the case of threats to existence, however, the categorical rejection of the behaviour that leads to them is obvious. but what does the adoption of categorical principles specifically mean for the political moulding of environmental protection? in the past, a number of authors have tried to specify the minimum requirements for an ethically responsible moral system with respect to biosphere use. these so-called "safe minimum standards" specify thresholds on the open-ended measurement scale of the consequences of human interventions that may not be exceeded even if there is a prospect of great benefits (randall , randall and farmer ). in order to be able to specify these thresholds in more detail, the breakdown into three levels proposed by the german scientific council for global environmental change is helpful (wbgu ). these levels are:
• the global bio-geochemical cycles in which the biosphere is involved as one of the causes, as modulator or as "beneficiary";
• the diversity of ecosystems and landscapes that have key functions as bearers of diversity in the biosphere; and
• the genetic diversity and the species diversity that are both "the modelling clay of evolution" and basic elements of ecosystem functions and dynamics.
where the first level is concerned, in which the functioning of the global ecosystem is at stake, categorical principles are obviously necessary and sensible, provided that no one wants to shake the primary principle of the permanent preservation of the human race. accordingly, all interventions in which important substance or energy cycles are significantly influenced at a global level and where globally effective negative impacts are to be expected are categorically prohibited.
usually no stringently causal evidence of the harmful nature of globally relevant interventions is needed; justified suspicion of such harmfulness should suffice. later in this chapter we will make a proposal for risk evaluation and management concerning how the problem of uncertainty in the event of possible catastrophic damage potential should be dealt with (risk type cassandra). on the second level, the protection of ecosystems and landscapes, it is much more difficult to draw up categorical rules. initially, it is obvious that all interventions in landscapes in which the global functions mentioned on the first level are endangered must be avoided. above and beyond this, it is wise from a precautionary point of view to maintain as much ecosystem diversity as possible in order to keep the degree of vulnerability to the unforeseen or even unforeseeable consequences of anthropogenic and non-anthropogenic interventions as low as possible. even though it is difficult to derive findings for human behaviour from observations of evolution, the empirically supported statement "he who stakes everything on one card always loses in the long run" seems to express a universally valid insight into the functioning of systemically organized interactions. for this reason, the conservation of the natural diversity of ecosystems and landscape forms is a categorical principle, whereas the depth of intervention allowed should be specified on the basis of principles and standards capable of compensation. the same can be said for the third level, genetic and species protection. here, too, the causal chain should initially be laid down: species conservation, landscape conservation, maintaining global functions. wherever this chain is unbroken, a categorical order of conservation should apply. these species could be termed primary key species. this includes such species that are not only essential for the specific landscape type in which they occur, but, thanks to their special position in the ecosystem, also for the global cycles above and beyond this specific landscape type. probably it will not be possible to classify all species according to this functional contribution to the surrounding ecosystem, but we could also think of groups of species, for example, humus-forming bacteria. in second place there are the species that characterize certain ecosystems or landscapes. here they are referred to as secondary key species. they, too, enjoy special protection, though not necessarily under categorical reservations. their functional value, however, is worthy of special attention. below these two types of species there are the remaining species that perform ecosystem functions to a greater or lesser extent. what this means for the worthiness for protection of these species, and the point at which the precise limit for permitted intervention should be drawn, is a question that can no longer be solved with categorical principles and standards, but only with the help of compensatory principles and standards. generally, here, too, as with the issue of ecosystem and landscape protection, the conservation of diversity as a strategy of "reinsurance" against ignorance, global risks and unforeseeable surprises is recommended. it remains to be said that, from a systemic point of view, a categorical ban has to apply to all human interventions where global cycles are demonstrably at risk.
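the interplay of categorical and compensatory principles described above can be illustrated with a small, purely hypothetical decision sketch in python. all option names, indicator scores, thresholds and weights below are invented for illustration and do not come from the text or from any real assessment; the point is only the structure: categorical thresholds ("safe minimum standards") act as filters before any weighing takes place, so that no benefit, however large, can compensate for their violation.

# hypothetical options scored on one categorical indicator and two
# compensatory criteria (all values invented, scaled 0..1)
OPTIONS = {
    "intensive_use":   {"global_cycle_impact": 0.9, "income": 0.9, "landscape_quality": 0.2},
    "balanced_use":    {"global_cycle_impact": 0.4, "income": 0.6, "landscape_quality": 0.6},
    "full_protection": {"global_cycle_impact": 0.1, "income": 0.2, "landscape_quality": 0.9},
}

# categorical principle: impacts on global cycles must stay below the
# threshold, regardless of how large the benefits are
SAFE_MINIMUM = {"global_cycle_impact": 0.5}

# compensatory principles: the weights express the "internal key" used for
# balancing; they are value judgements, not scientific facts
WEIGHTS = {"income": 0.5, "landscape_quality": 0.5}

def admissible(scores):
    # an option is admissible only if no categorical threshold is exceeded
    return all(scores[c] <= limit for c, limit in SAFE_MINIMUM.items())

def compensatory_value(scores):
    # weighted sum over the criteria that are open to trade-offs
    return sum(w * scores[c] for c, w in WEIGHTS.items())

candidates = {name: s for name, s in OPTIONS.items() if admissible(s)}
best = max(candidates, key=lambda name: compensatory_value(candidates[name]))
print(best)  # "balanced_use": "intensive_use" is screened out categorically

note that in this sketch "intensive_use" is excluded before any weighing, even though it scores highest on income; this is precisely the behaviour of a categorical principle as opposed to a compensatory one.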
above and beyond this, it makes sense to recognize the conservation of landscape variety (including ecosystem diversity within landscapes) and of genetic variety and species diversity as basic principles, without categorical judgements about individual landscape or species types being possible as a result. in order to evaluate partial infringements of the compensatory principles or standards that are referred to in the issue of environmental protection, we need rules for decision making that facilitate the balancing process necessary to resolve compensatory conflicts. in the current debate about rules for using the environment and nature, it is mainly teleological valuation methods that are proposed (hubig , ott ). these methods are aimed at:
• estimating the possible consequences of various options for action in all dimensions relevant to potentially affected people;
• recording the infringements or fulfilments of these expected consequences in the light of the guiding standards and principles; and
• then weighting them according to an internal key so that they can be weighed up in a balanced way.
on the positive side of the equation, there are the economic benefits of an intervention and the cultural values created by use, for example, in the form of income, subsistence (self-sufficiency) or an aesthetically attractive landscape (parks, ornamental gardens, etc.); on the negative side, there are the destruction of current or future usage potentials, the loss of unknown natural resources that may be needed in the future and the violation of aesthetic, cultural or religious attributes associated with the environment and nature. there are therefore related categories on both sides of the equation: current uses vs. possible uses in the future, development potentials of current uses vs. option values for future use, shaping the environment by use vs. impairments to the environment as a result of alternative use, etc. with the same or similar categories on the credit and debit side of the balance sheet, the decision is easy when there is one option that performs better or worse than all the other options for all categories. although such a dominant option (the best for all categories) or sub-dominant option (the worst for all categories) is rare in reality, there are examples of dominant or sub-dominant solutions. thus, for example, the over-felling of the forests of kalimantan on the island of borneo in indonesia can be classed as a sub-dominant option, since the short-term benefit, even with extremely high discount rates, is out of all proportion to the long-term losses of benefits associated with a barren area covered in imperata grass. the recultivation of a barren area of this kind requires sums many times the income from the sale of the wood, including interest. apparently there are no cultural, aesthetic or religious reasons for the conversion of primary or secondary woodland into grassland. this means that the option of deforestation should be classed as of less value than alternative options for all criteria, including economic and social criteria. at best, we can talk about a habit of leaving rainforests, as a "biotope not worthy of conservation", to short-term use. but habit is not a sound reason for the choice of a sub-optimal option. as mentioned at the start of this chapter, habit as experienced morality does not have any normative force, especially when it is based on the illusion of the marginality of one's own behaviour or on ignorance about sustainable usage forms.
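the notion of dominant and sub-dominant options used in the kalimantan example can be stated as a simple comparison rule. the following python fragment is a sketch under invented assumptions: the three options and their scores are hypothetical stand-ins (higher means better on every criterion), not empirical data about borneo.

# hypothetical scores for three land-use options (higher = better everywhere)
OPTIONS = {
    "deforestation":  {"short_term_income": 0.2, "long_term_use": 0.1, "cultural_value": 0.1},
    "selective_use":  {"short_term_income": 0.5, "long_term_use": 0.7, "cultural_value": 0.6},
    "strict_reserve": {"short_term_income": 0.3, "long_term_use": 0.9, "cultural_value": 0.8},
}

def dominates(a, b):
    # a dominates b if a is at least as good on all criteria
    # and strictly better on at least one
    return all(a[c] >= b[c] for c in a) and any(a[c] > b[c] for c in a)

for name, scores in OPTIONS.items():
    others = [s for n, s in OPTIONS.items() if n != name]
    if all(dominates(scores, o) for o in others):
        print(name, "is dominant (best on all criteria)")
    if all(dominates(o, scores) for o in others):
        print(name, "is sub-dominant (worst on all criteria)")
# under these invented numbers only "deforestation is sub-dominant" is
# printed: neither remaining option dominates the other, so a weighing
# rule is still needed to choose between them

as the text notes, such clear-cut cases are rare in practice.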
but if we disregard dominant or sub-dominant solutions, an appreciation between options that violate or fulfil compensatory standards and principles depends on two preconditions: the best possible knowledge of the consequences (what happens if i choose option a instead of option b?) and a transparent, consistent rationale for weighing up these consequences as part of a legitimate political decision process (are the foreseeable consequences of option a more desirable or bearable than the consequences of option b?) (akademie der wissenschaften ). adequate knowledge of the consequences is needed in order to reveal the systemic connections between resource use, ecosystem reactions to human interventions and socio-cultural conditioning factors (wolters ). this requires interdisciplinary research and cooperation. the task of applied ecological research, for example, is to show the consequences of human intervention in the natural environment and how ecosystems are burdened by different interventions and practices. the economic approach provides a benefit-oriented valuation of natural and artificial resources within the context of production and consumption, as well as a valuation of transformation processes according to the criterion of efficiency. the cultural and social sciences examine the feedback effects between use, social development and cultural self-perception. they illustrate the dynamic interactions between usage forms, socio-cultural lifestyles and control forms. interdisciplinary, problem-oriented and system-related research contributes to forming a basic stock of findings and insights about functional links in the relationship between human interventions and the environment, and also to developing constructive proposals as to how the basic question of an ethically justified use of the natural environment can be answered in agreement with the actors concerned (wbgu ). accordingly, in order to ensure sufficient environmental protection, scientific research, and especially transdisciplinary system research at the interface between the natural and social sciences, is essential. bringing together the results of interdisciplinary research, the policy-relevant choice of knowledge bases and balanced interpretation in an environment of uncertainty and ambivalence are difficult tasks that primarily have to be performed by the science system itself. how this can happen in a way that is methodologically sound, receptive to all reasonable aspects of interpretation and yet inter-subjectively valid will be the subject of section . . . but knowledge alone does not suffice. in order to be able to act effectively and efficiently while observing ethical principles, it is necessary to shape the appreciation process between the various options for action according to rational criteria (gethmann ). to do this it is, first of all, necessary to identify the dimensions that should be used for a valuation. the discussion about the value dimensions to be used as a basis for valuation is one of the most popular subjects within environmental ethics. to apply these criteria in risk evaluation, and to combine the knowledge about the expected consequences of different behavioural options with the ethical principles, is the task of what we have called risk governance. what contribution do ethics make towards clarifying the prospects and limits of human interventions into the natural environment? the use of environmental resources is an anthropological necessity.
human consciousness works reflexively, and humans have developed a capacity for causal recognition that enables them to grasp cause and effect anticipatively and to incorporate assessed consequences productively in their own action. this knowledge is the motivating force behind cultural evolution and the development of technologies, agriculture and urbanization. with power over an ever-increasing potential for design and intervention in nature and social affairs over the course of human history, the potential for abuse and exploitation has also grown. whereas this potential was reflected in philosophical considerations and legal standards at a very early stage with regard to moral standards between people, the issue of human responsibility towards nature and the environment has only become the subject of intensive consideration in recent times. ethical considerations are paramount in this respect. on the one hand, they offer concrete standards for human conduct on the basis of criteria that can be generalized and, on the other hand, they provide procedural advice for a rational decision- and policy-making process. a simple breakdown into categorical rules and prohibitions on the one hand, and rules capable of being compensated on the other, can assist decision makers in the justification of principles and standards of environmental protection. as soon as human activities exceed the guidelines of the categorical principles, there is an urgent need for action. how can we detect whether such an excess has happened, and how can it be prevented from the very outset that these inviolable standards and principles are exceeded? three strategies of environmental protection appear helpful for the implementation of categorical guidelines. the first strategy is that of complete protection with severe restrictions on all use by humans (protection priority). the second strategy provides for a balanced relationship between protection and use, where extensive resource use should go hand in hand with the conservation of the ecosystems concerned (equal weight). the third strategy is based on optimum use involving assurance of continuous reproduction. the guiding principle here would be an intensive and, at the same time, sustainable, i.e. long-term oriented, use of natural resources (use priority). the following section will present a framework for applying these principles to environmental decision making under risk. the main line of argument is that risk management requires an analytic-deliberative approach for dealing effectively and prudently with environmental risks. assessing the potential consequences of human interventions and evaluating their desirability on the basis of the available knowledge and transparent valuation criteria are two of the central tasks of a risk governance process. however, the plural values of a heterogeneous public and people's preferences have to be incorporated in this process. but how can this be done given the wealth of competing values and preferences? should we simply accept the results of opinion polls as the basis for making political decisions? can we rely on risk perception results to judge the seriousness of pending risks? or should we place all our faith in professional risk management? if we turn to professional help to deal with plural value input, economic theory might provide us with an answer to this problem.
if environmental goods are made individual and marketable by means of property rights, the price that forms on the market ensures an appropriate valuation of the environmental good. every user of this good can then weigh up whether he is willing to pay the price or would rather forgo using the good. with many environmental goods, however, this valuation has to be made by collective action, because the environmental good concerned is a collective or open-access good. in this case a process is needed that safeguards the valuation and justifies it to the collective. this valuation cannot, however, be determined with the help of survey results. although surveys are needed to estimate the breadth of preferences and people's willingness to pay, they are insufficient for deriving concrete decision-making criteria and yardsticks for evaluating the tolerability of risks to human health and the environment:
• firstly, the individual values are so widely scattered that there is little sense in finding an average value here.
• secondly, the preferences expressed in surveys change considerably within a short time, whereas ethical valuations have to be valid for a long time.
• thirdly, as outlined in the subsection on risk perception, preferences are frequently based on flawed knowledge or ad hoc assumptions, both of which should not be decisive according to rational considerations.
what is needed, therefore, is a gradual process of assigning trade-offs in which existing empirical values are put into a coherent and logically consistent form. in the political science and sociological literature, reference is mostly made to three strategies for incorporating social values and preferences in rational decision-making processes (renn ). firstly, a reference to social preferences is viewed solely as a question of legitimate procedure (luhmann , vollmer ). the decision is made on the basis of a formal decision-making process (such as majority voting). if all the rules have been kept, a decision is binding, regardless of whether the subject matter of the decision can be justified or whether the people affected by the decision can understand the justification. in this version, social consensus has to be found only about the structure of the procedures; the only people who are then involved in the decisions are those who are explicitly legitimated to do so within the framework of the procedure decided upon. the second strategy is to rely on the minimum consensuses that have developed in the political opinion-forming process (muddling through) (lindblom , ). in this process, only those decisions that cause the least resistance in society are considered to be legitimate. in this version of social pluralism, groups in society have an influence on the process of the formation of will and decision making to the extent that they provide proposals capable of being absorbed, i.e. adapted to the processing style of the political system, and that they mobilize public pressure. the proposal that then establishes itself in politics is the one that stands up best in the competition of proposals, i.e. the one that entails the fewest losses of support for political decision makers by interest groups. the third strategy is based on discussion between the groups involved (habermas , renn ). in the communicative exchange among the people involved in the discussion, a form of communicative rationality evolves that everyone can understand and that can serve as a justification for collectively binding decisions.
at the same time, discursive methods claim to reflect the holistic nature of human beings more appropriately and also to provide fair access to designing and selecting solutions to problems. in principle, the justification of standards relevant to decisions is linked to two conditions: the agreement of all involved and substantial justification of the statements made in the discussion (habermas ). all three strategies of political control are represented in modern societies to different extents. legitimation conflicts mostly arise when the three versions are realized in their pure form. merely formally adhering to decision-making procedures without a justification of content encounters a lack of understanding and rejection among the groups affected, especially when they have to endure negative side effects or risks. then acceptance is refused. if, however, we pursue the opposite path of least resistance and rely on the route of muddling through, we may be certain of the support of the influential groups, but, as in the first case, the disadvantaged groups will gradually withdraw their acceptance because of the insufficient justification of the decision. at the same time, antipathy to politics without a line or guidance grows, even among the affected population. the consequence is political apathy. the third strategy, discursive control, faces problems too. although in an ideal situation it is suitable for providing transparent justifications for the decision-making methods and the decision itself, in real cases the conditions of ideal discourse can rarely be adhered to (wellmer ). frequently, discussions among strategically operating players lead to a paralysis of practical politics by forcing endless marathon meetings with vast quantities of points of order and peripheral contributions to the discussion. the "dictatorship of patience" (weinrich ) ultimately determines which justifications are accepted by the participants. the public becomes uncertain and is disappointed by such discussions that begin with major claims and end with trivial findings. in brief: none of the three ways out of the control dilemma can convince on its own; as so often in politics, everything depends on the right mixture. what should a mixture of the three elements (due process, pluralistic muddling through and discourse) look like so that a maximum degree of rationality can come about on the basis of social value priorities? a report by the national academy of sciences on the subject of "understanding environmental risks" (national research council ) comes to the conclusion that a scientifically valid and ethically justified procedure for the collective valuation of options for risk handling can only be realized within the context of what the authors term an analytic-deliberative process. analytic means that the best scientific findings about the possible consequences and conditions of collective action are incorporated in the negotiations; deliberative means that rationally and ethically transparent criteria for making trade-offs are used and documented externally. moreover, the authors consider that fair participation by all groups concerned is necessary to ensure that the different moral systems that can legitimately exist alongside each other are also incorporated in the process. to illustrate the concept of analytic-deliberative decision making, consider a set of alternative options or choices, from which consequences follow (see the basic overview in dodgson et al. ).
the relationship between the choice made and the consequences that follow from this choice may be straightforward or complex. the science supporting environmental policy is often complicated, spanning many disciplines of science and engineering, and also involving human institutions and economic interactions. because of limitations in scientific understanding and predictive capabilities, the consequences following a choice are normally uncertain. finally, different individuals and groups within society may not agree on how to evaluate the consequences -which may involve a detailed characterization of what happens in ecological, economic, and human health terms. we shall describe consequences as ambiguous when there is this difficulty in getting agreement on how to interpret and evaluate them. this distinction has been further explained in chapter (see also klinke and renn ). environmental assessment and environmental decision making inherently involve these difficulties of complexity, uncertainty, and ambiguity (klinke and renn ). in some situations where there is a great deal of experience, these difficulties may be minimal. but in other situations these difficulties may constitute major impediments to the decision-making process. to understand how analysis and deliberation interact in an iterative process following the national research council (nrc) report, one must consider how these three areas of potential difficulty can be addressed. it is useful to separate questions of evidence with respect to the likelihood, magnitude of consequences and related characteristics (which can involve complexity and uncertainty) from the valuation of the consequences (i.e. ambiguity). for each of the three areas there are analytical tools that can be helpful in identifying, characterizing and quantifying cause-effect relationships. some of these tools have been described in chapter . the integration of these tools of risk governance into a consistent procedure will be discussed in the next subsections. the possibility of reaching closure on evaluating risks to human health or the environment rests on two conditions: first, all participants need to achieve closure on the underlying goal (often legally prescribed, such as the prevention of health detriments or the guarantee of an undisturbed environmental quality, for example, purity laws for drinking water); secondly, they need to agree on the implications derived from the present state of knowledge (whether and to what degree the identified hazard impacts the desired goal). dissent can result from conflicting values as well as from conflicting evidence. it is crucial in environmental risk management to investigate both sides of the coin: the values that govern the selection of the goal and the evidence that governs the selection of cause-effect claims. strong differences in both areas can be expected in most environmental decision-making contexts, but also in occupational health and safety and public health risks. for all risk areas it is therefore necessary to explore why people disagree about what to do -that is, about which decision alternative should be selected. as pointed out before, differences of opinion may be focused on the evidence of what is at stake or of which option has what kind of consequences.
for example: what is the evidence that an environmental management initiative will lead to an improvement, such as reducing losses of agricultural crops to insect pests - and what is the evidence that the management initiative could lead to ecological damage - loss of insects we value, such as bees or butterflies, damage to birds and other predators that feed on insects - and health impacts from the level of pesticides and important nutrients in the food crops we eat? other differences of opinion may be about values - the value of food crops that contain less pesticide residue compared to those that contain more, the value of having more bees or butterflies, the value of maintaining indigenous species of bees or butterflies compared to other varieties not native to the local ecosystem, the value ascribed to good health and nutrition, and maybe the value ascribed to having food in what is perceived to be a "natural" state as opposed to containing manufactured chemical pesticides or altered genetic material. separating the science issues of what will happen from the value issues of how to make appropriate trade-offs between ecological, economic, and human health goals can become very difficult. the separation of facts and values in decision making is difficult to accomplish in practical decision situations, since what is regarded as fact includes a preference-dependent process of cognitive framing (tversky and kahneman ) and what is regarded as value includes prior knowledge about the factual implications of different value preferences (fischhoff ). furthermore, there are serious objections against a clear-cut division from a sociological view of science and knowledge generation (jasanoff ). particularly when calculating risk estimates, value-based conventions may enter the assessment process. for example, conservative assumptions may be built into the assessment process, so that some adverse effects (such as human cancer from pesticide exposure) are much less likely to be underestimated than overestimated (national research council ). at the same time, ignoring major sources of uncertainty can evoke a sense of security and overconfidence that is not justified by the quality or extent of the database (einhorn and hogarth ). perceptions and world views may be very important, and difficult to sort out from matters of science, especially with large uncertainties about the causes of environmental damage. a combination of analytic and deliberative processes can help explore these differences of opinion relating to complexity, uncertainty, and ambiguity in order to examine the appropriate basis for a decision before the decision is made. most environmental agencies go through an environmental assessment process and provide opportunities for public review and comment. many controversial environmental decisions become the focus of large analytical efforts, in which mathematical models are used to predict the environmental, economic, and health consequences of environmental management alternatives. analysis should be seen as an indispensable complement to deliberative processes, regardless of whether this analysis is sophisticated or not. even simple questions need analytic input for making prudent decisions, especially in situations where there is controversy arising from complexity, uncertainty, and ambiguity. in many policy arenas in which problems of structuring human decisions are relevant, the tools of normative decision analysis (da) have been applied.
especially in economics, sociology, philosophical ethics, and also many branches of engineering and science, these methods have been extended and refined during the past several decades (edwards , howard , north , howard et al. , north and merkhofer , behn and vaupel , pinkau and renn , van asselt , jaeger et al. ). da is a process for decomposing a decision problem into pieces, starting with the simple structure of alternatives, information, and preferences. it provides a formal framework for the quantitative evaluation of alternative choices in terms of what is known about the consequences and how the consequences are valued (hammond et al. , skinner ). the procedures and analytical tools of da provide a number of possibilities to improve the precision and transparency of the decision procedure. however, they are subject to a number of limitations. the opportunities refer to:

• different action alternatives can be quantitatively evaluated to allow selection of a best choice. such evaluation relies both on a description of the uncertain consequences of each action alternative, with uncertainty in the consequences described using probabilities, and on a description of the values and preferences assigned to consequences. (explicit characterization of uncertainty and values of consequences)

• the opportunity to assure transparency, in that (1) models and data summarizing complexity (e.g., applicable and available scientific evidence), (2) probabilities characterizing judgement about uncertainty, and (3) values (utilities) on the consequences are made explicit and available. so the evaluation of risk handling alternatives can be viewed and checked for accuracy by outside observers. (outside audit of the basis for decision enabled)

• a complex decision situation can be decomposed into smaller pieces in a formal analytical framework. the level of such decomposition can range from a decision tree of action alternatives and ensuing consequences that fits on a single piece of paper, to extremely large and complex computer-implemented models used in calculating environmental consequences and ascribing probabilities and values to the consequences. a more complex analysis is more expensive and less transparent to observers. in principle, with sufficient effort, any formal analytical framework can be checked to assure that calculations are made in the way that is intended. (decomposition possible to include extensive detail)

on the other hand, there are important limitations:

• placing value judgements (utilities) on consequences may be difficult, especially in a political context where loss of life, impairment of health, ecological damage, or similar social consequences are involved. utility theory is essentially an extension of cost-benefit methods from economics to include attitude toward risk. the basic trade-off judgements needed for cost-benefit analysis remain difficult and controversial, and often inherently subjective. (difficulties in valuing consequences)

• assessing uncertainty in the form of a numerical probability also poses difficulties, especially in situations where there is not a statistical database or an agreed-on model as the basis for the assessment. (difficulty in quantifying uncertainty, assigning probabilities)

• the analytical framework may not be complete. holistic or overarching considerations or important details may have been omitted.
(analytical framework incomplete)

• da is built upon an axiomatic structure, both for dealing with uncertainty (i.e., the axiomatic foundation of probability theory) and for valuing consequences (i.e., the axiomatic basis for von neumann-morgenstern utility theory). especially when the decision is to be made by a group rather than an individual decision maker, rational preferences for the group consistent with the axioms may not exist (the "impossibility" theorem of arrow, ). so in cases of strong disagreement on objectives or unwillingness to use a rational process, decision analysis methods may not be helpful.

decision analytical methods should not be regarded as inherently "mechanical" or "algorithmic", in which analysts obtain a set of "inputs" about uncertainty and the valuation of consequences and then feed these into a mathematical procedure (possibly implemented in a computer) that produces an "output" of the "best" decision. da can only offer coherent conclusions from the information which the decision maker provides through his/her preferences among consequences and his/her state of information on the occurrence of these consequences. where there is disagreement about the preferences or about the information, da may be used to explore the implications of such disagreement. so in application, there is often a great deal of iteration (sensitivity analysis) to explore how differences in judgement should affect the selection of the best action alternative. da thus merely offers a formal framework that can be effective in helping participants in a decision process to better understand the implications of differing information and judgements about complex and uncertain consequences of the choice among the available action alternatives. insight about which factors are most important in selecting among the alternatives is often the most important output of the process, and it is obtained through extensive and iterative exchange between analysts and the decision makers and stakeholders. the main advantage of the framework is that it is based on logic that is both explicit and checkable - usually facilitated by the use of mathematical models and probability calculations. research on human judgement supports the superiority of such procedures for decomposing complex decision problems and using logic to integrate the pieces, rather than relying on holistic judgement about which of the alternatives is best (this is not only true for individual decisions, see heap et al. : ff., jungermann ; but also for collective decisions, see heap et al. : ff., pettit ). one should keep in mind, however, that "superior" is measured by an indicator of instrumental rationality, i.e. means-ends effectiveness. if this rationality is appropriate, the sequence suggested by da is intrinsically plausible and obvious. even at the level of qualitative discussion and debate, groups often explore the rationale for different action alternatives. decision analysis simply uses formal quantitative methods for this traditional and common-sense process of exploring the rationale - using models to describe complexity, probability to describe uncertainty, and, to deal with ambiguity, explicit valuation of consequences via utility theory and other balancing procedures such as cost-benefit or cost-effectiveness analyses. by decomposing the problem in logical steps, the analysis permits a better understanding of differences in the participants' perspectives on evidence and values.
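the iteration just described can be made concrete in a few lines of code. the following sketch is ours and purely illustrative (the option names, probabilities and utilities are invented); it shows the mechanics: score each alternative by expected utility, then sweep a disputed probability to see where the preferred alternative switches.

```python
# minimal, self-contained sketch (ours, not from the cited literature) of the
# decision-analytic loop described above: evaluate each action alternative by
# expected utility, then run a sensitivity analysis on a disputed probability.

def expected_utility(probs, utils):
    """expected utility of one option: sum of p(state) * u(consequence)."""
    return sum(p * u for p, u in zip(probs, utils))

# two hypothetical options, each facing the same two uncertain states
# ("measure works" / "measure fails"); utilities on a 0-100 scale.
OPTIONS = {
    "intervene":  {"utils": [90.0, 20.0]},   # good if it works, costly if not
    "do_nothing": {"utils": [40.0, 60.0]},   # mediocre either way
}

def rank_options(p_works):
    """score every option for a given probability that the measure works."""
    probs = [p_works, 1.0 - p_works]
    return {name: expected_utility(probs, spec["utils"])
            for name, spec in OPTIONS.items()}

if __name__ == "__main__":
    # sensitivity analysis: sweep the disputed probability and watch where
    # the preferred alternative switches - the "iteration" referred to above
    for i in range(11):
        p = i / 10.0
        scores = rank_options(p)
        best = max(scores, key=scores.get)
        print(f"p(works) = {p:.1f} -> best: {best:10s} {scores}")
```

where the preferred alternative flips, the disagreement about the probability matters for the choice; where it does not, the parties' differences are immaterial to the decision at hand.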
da offers methods to overcome these differences, such as resolving questions about the underlying science through data collection and research, and encouraging trade-offs, compromise, and the rethinking of values. based on this review of opportunities and shortcomings, we conclude that decision analysis provides a suitable structure for guiding discussion and problem formulation, and offers a set of quantitative analytical tools that can be useful for environmental decisions, especially in conjunction with deliberative processes. da can assist decision makers and others involved in, and potentially affected by, the decision (i.e., participants, stakeholders) to deal with complexity and many components of uncertainty, and to address issues of remaining uncertainties and ambiguities. using these methods promises consistency from one decision situation to another, assurance of an appropriate use of evidence from scientific studies related to the environment, and explicit accountability and transparency with respect to those institutionally responsible for the value judgements that drive the evaluation of the alternative choices. collectively the analytical tools provide a framework for a systematic process of exploring and evaluating the decision alternatives - assembling and validating the applicable scientific evidence relevant to what will happen as the result of each possible choice, and valuing how bad or how good these consequences are based on an agreement on common objectives. yet, da does not replace the need for additional methods and processes for including other objectives, such as finding common goals, defining preferences, revisiting assumptions, sharing visions and exploring common grounds for values and normative positions. the value judgements motivating decisions are made explicit and can then be criticized by those who were not involved in the process. to the extent that uncertainty becomes important, it will be helpful to deal with uncertainty in an orderly and consistent way (morgan and henrion ). those aspects of uncertainty that can be modelled by using probability theory (inter-target variation, systematic and random errors in applying inferential statistics, model and data uncertainties) will be spelled out, and those that remain in the form of indeterminacies, system boundaries or plain ignorance will become visible and can then be fed into the deliberation process (van asselt , klinke and renn ).

the term deliberation refers to the style and procedure of decision making without specifying which participants are invited to deliberate (national research council (nrc) , rossi ). for a discussion to be called deliberative it is essential that it relies on the mutual exchange of arguments and reflections rather than on decision making based on the status of the participants, sublime strategies of persuasion, or social-political pressure. deliberative processes should include a debate about the relative weight of each argument and a transparent procedure for balancing pros and cons (tuler and webler ). in addition, deliberative processes should be governed by the established rules of a rational discourse. in the theory of communicative action developed by the german philosopher jürgen habermas, the term discourse denotes a special form of dialogue, in which all affected parties have equal rights and duties to present claims and test their validity in a context free of social or political domination (habermas , b).
a discourse is called rational if it meets the following specific requirements (see mccarthy , habermas a , kemp , webler ). all participants are obliged:

• to seek a consensus on the procedure that they want to employ in order to derive the final decision or compromise, such as voting, sorting of positions, consensual decision making or the involvement of a mediator or arbitrator;

• to articulate and criticize factual claims on the basis of the "state of the art" of scientific knowledge and other forms of problem-adequate knowledge (in the case of dissent all relevant camps have the right to be represented);

• to interpret factual evidence in accordance with the laws of formal logic and analytical reasoning;

• to disclose their relevant values and preferences, thus avoiding hidden agendas and strategic game playing; and

• to process data, arguments and evaluations in a structured format (for example a decision-analytic procedure) so that norms of procedural rationality are met and transparency can be created.

the rules of deliberation do not necessarily include the demand for stakeholder or public involvement. deliberation can be organized in closed circles (such as conferences of catholic bishops, where the term has indeed been used since the council of nicaea), as well as in public forums. it may be wise to use the term "deliberative democracy" when one refers to the combination of deliberation and public or stakeholder involvement (see also cohen , rossi ). what needs to be deliberated? firstly, deliberative processes are needed to define the role and relevance of systematic and anecdotal knowledge for making far-reaching choices. secondly, deliberation is needed to find the most appropriate way to deal with uncertainty in environmental decision making and to set efficient and fair trade-offs between potential over- and under-protection. thirdly, deliberation needs to address the wider concerns of the affected groups and the public at large. why can one expect deliberative processes to be better suited to dealing with environmental challenges than expert judgement, political majority votes or public survey data?

• deliberation can produce a common understanding of the issues or the problems based on the joint learning experience of the participants with respect to systematic and anecdotal knowledge (webler and renn , pidgeon ).

• deliberation can produce a common understanding of each party's position and argumentation and thus assist in a mental reconstruction of each actor's argumentation (warren , tuler ). the main driver for gaining mutual understanding is empathy. the theory of communicative action provides further insights into how to mobilize empathy and how to use the mechanisms of empathy and normative reasoning to explore and generate common moral grounds (webler ).

• deliberation can produce new options and novel solutions to a problem. this creative process can be mobilized either by finding win-win solutions or by discovering identical moral grounds on which new options can grow (renn ).

• deliberation has the potential to show and document the full scope of ambiguity associated with environmental problems. deliberation helps to make a society aware of the options, interpretations, and potential actions that are connected with the issue under investigation (wynne , de marchi and ravetz ).
each position within a deliberative discourse can only survive the crossfire of arguments and counter-arguments if it demonstrates internal consistency, compatibility with the legitimate range of knowledge claims, and correspondence with the widely accepted norms and values of society. deliberation clarifies the problem, makes people aware of framing effects, and determines the limits of what could be called reasonable within the plurality of interpretations (skillington ).

• deliberations can also produce agreements. the minimal agreement may be a consensus about dissent (raiffa ). if all arguments are exchanged, participants know why they disagree. they may not be convinced that the arguments of the other side are true or morally strong enough to change their own position; but they understand the reasons why the opponents came to their conclusion. in the end, the deliberative process produces several consistent and - in their own domain - optimized positions that can be offered as package options to legal decision makers or the public. once these options have been subjected to public discourse and debate, political bodies, such as agencies or parliaments, can make the final selection in accordance with the legitimate rules and institutional arrangements, such as majority vote or executive order. final selections could also be made by popular vote or referendum.

• deliberation may result in consensus. deliberative processes are often treated as synonymous with consensus-seeking activities (coglianese ). this is a major misunderstanding. consensus is a possible outcome of deliberation, but not a mandatory requirement. if all participants find a new option that they all value more than the option that they preferred when entering the deliberation, a "true" consensus is reached (renn ). it is clear that finding such a consensus is the exception rather than the rule. consensus is either based on a win-win solution or on a solution that serves the "common good" and each participant's interests and values better than any other solution. less stringent is the requirement of a tolerated consensus. such a consensus rests on the recognition that the selected decision option might serve the "common good" best, but at the expense of some interest violations or additional costs. in a tolerated consensus some participants voluntarily accept personal or group-specific losses in exchange for providing benefits to all of society. case studies have provided sufficient evidence that deliberation has produced tolerated consensus solutions, particularly in siting conflicts (one example in schneider et al. ).

consensus and tolerated consensus should be distinguished from compromise. a compromise is a product of bargaining in which each side gradually reduces its claim to the opposing party until an agreement is reached (raiffa ). all parties involved would rather choose the option that they preferred before starting deliberations, but since they cannot find a win-win situation or a morally superior alternative, they look for a solution that they can "live with", knowing that it is the second or third best solution for them. compromising on an issue relies on full representation of all vested interests. in summary, many desirable products and accomplishments are associated with deliberation (chess et al. ).
depending on the structure of the discourse and the underlying rationale, deliberative processes can:

• enhance understanding;
• generate new options;
• decrease hostility and aggressive attitudes among the participants;
• explore new problem framings;
• enlighten legal policy-makers;
• produce competent, fair and optimized solution packages; and
• facilitate consensus, tolerated consensus and compromise.

in a deliberative setting, participants exchange arguments, provide evidence for their claims and develop common criteria for balancing pros and cons. this task can be facilitated and often guided by using decision-analytic tools (overview in merkhofer ). decision theory provides a logical framework distinguishing action alternatives or options, consequences, the likelihood of consequences, and the value of consequences, where the valuation can be over multiple attributes that are weighted based on trade-offs in multi-attribute utility analysis (edwards ). a sequence of decisions and consequences may be considered, and the use of mathematical models for predicting the environmental consequences of options may or may not be part of the process (humphreys , bardach , arvai et al. ):

a) the structuring potential of decision analysis has been used in many participatory processes. it helps the facilitator of such processes to focus on one element during the deliberation, to sort out the central from the peripheral elements, to provide a consistent reference structure for ordering arguments and observations, and to synthesize multiple impressions, observations and arguments into a coherent framework. the structuring power of decision analysis has often been used without expanding the analysis into quantitative modelling.

b) the second potential, agenda setting and sequencing, is also frequently applied in participatory settings. it often makes sense to start with the problem definition, then develop the criteria for evaluation, generate options, assess the consequences of options, and so on.

c) the third potential, quantifying consequences, probabilities and relative weights and calculating expected utilities, is more controversial than the other two. whether the deliberative process should include a numerical analysis of utilities or engage the participants in a quantitative elicitation process is contested among participation practitioners. one side claims that quantifying helps participants to be more precise about their judgements and to be aware of the often painful trade-offs they are forced to make. in addition, quantification can make judgements more transparent to outside observers. the other side claims that quantification restricts the participants to the logic of numbers and reduces the complexity of argumentation to a mere trade-off game. many philosophers argue that quantification supports the illusion that all values can be traded off against other values and that complex problems can be reduced to simple linear combinations of utilities. one possible compromise between the two camps may be to have participants go through the quantification exercise as a means to help them clarify their thoughts and preferences, but make the final decisions on the basis of holistic judgements (renn ). in this application of decision-analytic procedures, the numerical results (i.e.
for each option, the sum over the utilities of each dimension multiplied by the weight of each dimension) of the decision process are not used as an expression of the final judgement of the participant, but as a structuring aid to improve the participant's holistic, intuitive judgement. by pointing out potential discrepancies between the numerical model and the holistic judgements, the participants are forced to reflect upon their opinions and to search for potential hidden motives or values that might explain the discrepancy. in a situation of major value conflicts, the deliberation process may involve soliciting a diverse set of viewpoints, and judgements need to be made on which sources of information are viewed as responsible and reliable. publication in scientific journals and peer review by scientists outside the government agency are the two most popular methods by which managers or organizers of deliberative processes try to limit what will be considered acceptable evidence. other methods are to reach a consensus among the participants up front about which expertise should be included in the deliberation, or to appoint representatives of opposing science camps to explain their differences in public. in many cases, participants have strong reasons for questioning scientific orthodoxy and would like to have different science camps represented. many stakeholders in environmental decisions have access to expert scientists, and often such scientists will take leading roles in criticizing agency science. such discussions need to be managed so that disagreements among the scientific experts can be evaluated in terms of the validity of the evidence presented and its importance to the decision. it is essential in these situations to have a process in place that distinguishes between those evidence claims that all parties agree on, those where the factual base is shared but not its meaning for some quality criterion (such as a "healthy" environment), and those where even the factual base is contested (foster ).

in the course of practical risk management, different conflicts arise in deliberative settings that have to be dealt with in different ways. the main conflicts occur at the process level (how should the negotiations be conducted?), the cognitive level (what is factually correct?), the interest level (what benefits me?), the value level (what is needed for a "good" life?) and the normative level (what can i expect of all involved?). these different conflict levels are addressed in this subsection. first of all, negotiations begin by specifying the method that structures the dialogue and the rights and duties of all participants. it is the task of the chairman or organizer to present and justify the implicit rules of the talks and negotiations. above and beyond this, the participants have to specify joint rules for decisions, the agenda, the role of the chairman, the order of hearings, etc. this should always be done according to the consensus principle. all partners in the negotiations have to be able to agree to the method. if no agreement is reached here, the negotiations have to be interrupted or reorganized. once the negotiation method has been determined and, in a first stage, the values, standards and objectives needed for judgement have been agreed jointly, then follows the exchange of arguments and counter-arguments.
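the parenthetical weighted sum at the start of this passage, and the multi-attribute methods of the fourth stage described below, share the same additive form; written out in our own illustrative notation (a sketch, not a formula from the cited sources):

```latex
% additive multi-attribute value form: k criteria, weights w_i summing to
% one, single-criterion utilities u_i, and x_i(a) the indicator score of
% option a on criterion i
\begin{equation*}
V(a) \;=\; \sum_{i=1}^{k} w_i \, u_i\!\big(x_i(a)\big),
\qquad \sum_{i=1}^{k} w_i = 1
\end{equation*}
```

the elicitation of the weights w_i is precisely where the value conflicts discussed in the following stages enter the procedure.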
in accordance with decision theory, four stages of validation occur:

• in the first stage, the values and standards accepted by the participants are translated into criteria and then into indicators (measurement instructions). this translation needs the consensual agreement of all participants. experts are asked to assess the available options with regard to each indicator according to the best of their knowledge (factual correctness). in this context it makes more sense to specify a joint methodological procedure or a consensus about the experts to be questioned than to give each group the freedom to have the indicators answered by its own experts. often many potential consequences remain disputed as a result of this process, especially if they are uncertain. however, the bandwidth of possible opinions is more or less restricted, depending on the level of certainty and clarity associated with the issue in question. consensus on dissent is also of help here in separating contentious factual claims from undisputed ones, and thus promotes further discussion.

• in the second stage, all participating parties are required to interpret the bandwidths of impacts to be expected for each criterion. interpretation means linking factual statements with values and interests to form a balanced overall judgement (conflicts of interests and values). this judgement can and should be made separately for each indicator. in this way, each of the chains of causes for judgements can be better understood and criticized in the course of the negotiations. for example, the question of the trustworthiness of the respective risk management agencies may play an important role in the interpretation of an expected risk value. then it is the duty of the participating parties to scrutinize the previous performance of the authority concerned and to propose institutional changes where appropriate.

• third stage: even if there were a joint assessment and interpretation for every indicator, this would by no means signify that agreement is at hand. rather, the participants' different judgements about decision-making options may be a result of different value weightings for the indicators that are used as a basis for the values and standards. for example, a committed environmentalist may give much more weight to the indicator for conservation than to the indicator of efficiency. in the literature on game theory, this conflict is considered to be insoluble unless one of the participants can persuade the other to change his preference by means of compensation payments (for example, in the form of special benefits), transfer services (for example, in the form of a special service) or swap transactions (do ut des). in reality, however, it can be seen that participants in negotiations are definitely open to the arguments of the other participants (i.e. they may renounce their first preference) if the loss of benefit is still tolerable for them and, at the same time, the proposed solution is considered to be "conducive to the common good", i.e. is seen as socially desirable in public perception. if no consensus is reached, a compromise solution can and should be sought, in which a 'fair' distribution of burdens and profits is accomplished.

• fourth stage: when weighing up options for action, formal methods of balancing assessment can be used. of these methods, cost-benefit analysis and multi-attribute or multi-criteria decision analysis have proved their worth. the first method is largely based on the approach of revealed "preferences", i.e.
on people's preferences shown in the past and expressed in relative prices, the second on the approach of "expressed preferences", i.e. the explicit indication of relative weightings between the various cost and benefit dimensions (fischhoff et al. ). but both methods are only aids in weighing up and cannot replace an ethical reflection on the advantages and disadvantages. normative conflicts pose special problems because different evaluative criteria can always be classified as equally justifiable or unjustifiable, as explained earlier. for this reason, most ethicists assume that different types and schools of ethical justification can claim parallel validity; it therefore remains up to the groups involved to choose the type of ethically legitimate justification that they want to use (ropohl , renn ). nevertheless, the limits of particular justifications are trespassed wherever primary principles accepted by all are infringed (such as human rights). otherwise, standards should be classed as legitimate if they can be defended within the framework of ethical reasoning and if they do not contradict universal standards that are seen as binding for all. in this process conflicts can and will arise, e.g. legitimate derivations of standards from the perspective of group a may contradict the equally legitimate derivations of group b (shrader-frechette ). in order to reach a jointly supported selection of standards, either a portfolio of standards that can claim parallel validity should be drawn up, or compensation solutions will have to be created in which one party compensates the other for giving up its legitimate options for action in favour of a common option. when choosing possible options for action or standards, options that infringe categorical principles - for example, by endangering the systematic ability of the natural environment to function for human use in the future and thus exceeding the limits of tolerability - are not tolerable even if they imply major benefits to society. at the same time, all sub-dominant options have to be excluded. frequently, sub-dominant solutions, i.e. those that perform worse than all other options with regard to all criteria at least in the long term, are attractive because they promise benefits in the short term although they entail losses in the long term, even if high interest rates are assumed. often people or groups have no choice other than the sub-dominant solution because all other options are closed to them due to a lack of resources. if large numbers of groups or many individuals act in this way, global risks become unmanageable (beck ). to avoid these risks, intermediate financing or compensation by third parties should be considered.

the objective of this last section of the chapter was to address and discuss the use of decision-analytic tools and structuring aids for participatory processes in environmental management. organizing and structuring discourses goes beyond the good intention of having all relevant stakeholders involved in decision making. the mere desire to initiate a two-way communication process and the willingness to listen to stakeholder concerns are not sufficient. discursive processes need a structure that assures the integration of technical expertise, regulatory requirements, and public values. these different inputs should be combined in such a fashion that they contribute to the deliberation process the type of expertise and knowledge that can claim legitimacy within a rational decision-making procedure (von schomberg ).
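the exclusion of sub-dominant options mentioned above can be stated precisely. the following formalization is ours and purely illustrative, reusing the multi-attribute notation introduced earlier:

```latex
% option a' is dominated by option a when a is at least as good on every
% criterion and strictly better on at least one; a sub-dominant option in
% the sense of the text is dominated by every other option on the table
\begin{equation*}
\forall i:\; u_i\!\big(x_i(a)\big) \,\ge\, u_i\!\big(x_i(a')\big)
\quad \text{and} \quad
\exists j:\; u_j\!\big(x_j(a)\big) \,>\, u_j\!\big(x_j(a')\big)
\end{equation*}
```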
it does not make sense to replace technical expertise with vague public perceptions, nor is it justified to have the experts insert their own value judgements into what ought to be a democratic process. decision-analytic tools can be of great value for structuring participatory processes. they can provide assistance in problem structuring, in dealing with complex scientific issues and uncertainty, and in helping a diverse group to understand disagreements and ambiguity with respect to values and preferences. decision analysis tools should be used with care. they do not provide an algorithm for reaching an answer as to what is the best decision. rather, decision analysis is a formal framework that can be used in environmental assessment and risk handling to explore difficult issues, to focus debate and further analysis on the factors most important to the decision, and to provide for increased transparency and more effective exchange of information and opinions among the process participants. the basic concepts are relatively simple and can be implemented with a minimum of mathematics (hammond et al. ). many participation organizers have restricted the use of decision-analytic tools to assisting participants in structuring problems and ordering concerns and evaluations, and have refrained from going further into quantitative trade-off analysis. others have advocated quantitative modelling as a clarification tool for making value conflicts more transparent to the participants. the full power of decision analysis for complex environmental problems may require mathematical models and probability assessment. experienced analysts may be needed to guide the implementation of these analytical tools for aiding decisions. skilled communicators and facilitators may be needed to achieve effective interaction between analysts and those participants in the deliberative process whose exposure to advanced analytical decision aids is much more limited, so that understanding of both process and substance, and therefore transparency and trust, can be achieved. many risk management agencies are already making use of decision analysis tools. we urge them to use these tools in the context of an iterative, deliberative process with broad participation by the interested and affected parties to the decision, within the risk governance framework. the analytical methods, the data and judgements, and the assumptions, as well as the analytical results, should be readily available to and understood by the participants. we believe that both the risk management agencies and the interested groups within the public that government agencies interact with on environmental decisions should gain experience with these methods. improper or premature use of sophisticated analytical methods may be more destructive to trust and understanding than helpful in resolving the difficulties of complexity, uncertainty, and ambiguity.
references

testing a structured decision approach: value focused thinking for deliberative risk communication
the eight-step path of policy analysis
the invention of nature
Ökologische fragen im bezugsrahmen fabrizierter unsicherheiten
quick analysis for busy decision makers
grundzüge der ökologischen ethik
"natur" als maßstab menschlichen handelns. zeitschrift für philosophische forschung
landschaftsschutz und artenschutz. wie weit tragen utilitaristische begründungen? in: nutzinger hg
vorsorge statt nachhaltigkeit - ethische grundlagen der zukunftsverantwortung
on the intrinsic value of nonhuman species
who should deliberate when?
assessing consensus: the promise and performance of negotiated rule making
procedure and substance in deliberative democracy
risk management and governance: a post-normal science approach
deep ecology: living as if nature mattered
multi-criteria analysis: a manual. department of the environment, transport and the regions
the theory of decision making
how to use multi-attribute utility measurement for social decision making
beginning again: people and nature in the new millennium
confidence in judgment: persistence of the illusion of validity
environmental ethics
hindsight versus foresight: the effect of outcome knowledge on judgment under uncertainty
the experienced utility of expected utility approaches
the role of experts in risk-based decision making. hse risk assessment policy unit. web manuscript. www.trustnetgovernance.com/library/pdf/doc
Ökologie und konsensfindung: neue chancen und risiken
making sense of sustainability: nine answers to "what should be sustained?"
literaturreview und bibliographie. graue reihe nr. ,
europäische akademie zur erforschung von folgen wissenschaftlich-technischer entwicklungen
ethische aspekte des handelns unter risiko
rationale technikfolgenbeurteilung
the value of biodiversity
decision aiding, not dispute resolution: a new perspective for environmental negotiation
towards a theory of communicative competence
vorbereitende bemerkungen zu einer theorie der kommunikativen kompetenz
theorie des kommunikativen handelns. suhrkamp, frankfurt/m
the philosophical discourse of modernity
smart choices: a practical guide to making better decisions
marketing und soziale verantwortung
foundations of environmental ethics
systemsteuerung und "staatskunst": theoretische konzepte und empirische befunde. leske und budrich
the theory of choice: a practical guide
politische gerechtigkeit. grundlegung einer kritischen philosophie von recht und staat. suhrkamp, frankfurt/m
welche natur sollen wir schützen? gaia
philosophie der ökologischen krise
decision analysis: applied decision theory
the foundations of decision analysis
the decision to seed hurricanes
application of multi-attribute utility theory
risk governance: towards an integrative approach. irgc
beyond epistemology: relativism and engagement in the politics of science
risk, uncertainty and rational action
das prinzip verantwortung. versuch einer ethik für die technologische zivilisation
judgment and decision making: an interdisciplinary reader
planning, political hearings, and the politics of discourse
Ökologie global: die auswirkungen von wirtschaftswachstum, bevölkerungswachstum und zunehmendem nord-süd-gefälle auf die umwelt
a new approach to risk evaluation and management: risk-based, precaution-based and discourse-based management
naturethik im Überblick
the science of muddling through
the intelligence of democracy. decision making through mutual adjustment
legitimation durch verfahren. suhrkamp, frankfurt/m
translator's introduction
comparative analysis of formal decision-making approaches
sustainable development? wie nicht nur die menschen eine "dauerhafte" entwicklung überdauern können
ist biologisches produzieren natürlich? leitbilder einer naturgemäßen technik
methodische philosophie umwelt. bemerkungen eines philosophen zum umweltverträglichen wirtschaften
committee on institutional means for assessing risk to public health ( ) risk assessment in the federal government: managing the process
ethik des risikos
angewandte ethik. die bereichsethiken und ihre theoretische fundierung
a tutorial introduction to decision theory
a methodology for analyzing emission control strategies
why preserve natural variety?
Ökologie und ethik. ein versuch praktischer philosophie. ethik in den wissenschaften
zur ethischen bewertung von biodiversität. externes gutachten für den wbgu. unveröffentlichtes manuskript
decision theory and folk psychology
towards a comprehensive conservation theory
the limits to safety? culture, politics, learning and manmade disasters
environmental standards:
scientific foundations and rational procedures of regulation with emphasis on radiological risk management
the art and science of negotiation
what mainstream economists have to say about the value of biodiversity
benefits, costs, and the safe minimum standard of conservation
decision analytic tools for resolving uncertainty in the energy debate
Ökologisch denken - sozial handeln: die realisierbarkeit einer nachhaltigen entwicklung und die rolle der sozial- und kulturwissenschaften
ein diskursives verfahren zur bildung und begründung kollektiv verbindlicher bewertungskriterien
a model for an analytic-deliberative process in risk management
the challenge of integrating deliberation and expertise: participation and discourse in risk management
a regional concept of qualitative growth and sustainability - support for a case study in the german state of baden-württemberg
was heißt hier bioethik? tab-brief
theologie der natur und ihre anthropologisch-ethischen konsequenzen
values in nature and the nature of value
ob man die ambivalenzen des technischen fortschritts mit einer neuen ethik meistern kann?
participation run amok: the costs of mass participation for deliberative agency decisionmaking
ist die schöpfung noch zu retten? umweltkrise und christliche verantwortung
experiences from germany: application of a structured model of public participation in waste management planning
environmental ethics
politics and the struggle to define: a discourse analysis of the framing strategies of competing actors in a "new
introduction to decision analysis
corporate environmentalism in a global economy
respect for nature. a theory of environmental ethics
meanings, understandings, and interpersonal relationships in environmental policy discourse. doctoral dissertation
designing an analytic deliberative process for environmental health policy making in the u.s. nuclear weapons complex
the framing of decisions and the psychology of choice
perspectives on uncertainty and risk
akzeptanzbeschaffung: verfahren und verhandlungen
the erosion of the valuespheres. the ways in which society copes with scientific, moral and ethical uncertainty
can participatory democracy produce better selves? psychological dimensions of habermas' discursive model of democracy
welt im wandel: erhaltung und nachhaltige nutzung der biosphäre. jahresgutachten
discourse in citizen participation. an evaluative yardstick
the craft and theory of public participation: a dialectical process
a brief primer on participation: philosophy and practice
system, diskurs, didaktik und die diktatur des sitzfleisches
konsens als telos der sprachlichen kommunikation? in: giegel
biophilia: the human bond with other species
"rio" oder die moralische verpflichtung zum erhalt der natürlichen vielfalt
risk and social learning: reification to engagement
die modernisierung der demokratie im zeichen der umweltproblematik
wandelt sich die verantwortung mit technischem wandel?

key: cord- -gcduh u title: individual and community resilience in natural disaster risks and pandemics (covid- ): risk and crisis communication date: - - journal: mind soc doi: . /s - - - sha: doc_id: cord_uid: gcduh u

civil protection and disaster risk specific agencies, legally responsible for enhancing individual and community resilience, still utilize the "deficit model" in their risk and crisis communication efforts, even though its basic assumption and approach have been criticized.
recent studies indicate that information-seeking behavior is not necessarily a measure of enhanced individual preparedness. a qualitative change from "blindly" following directions to practicing emergency planning and becoming your own disaster risk manager is required. for pandemics, the challenge is even more complicated due to their unique characteristics. community-based exercises (cbex), a framework concept encompassing a variety of interactive activities, have recently started being utilized to develop resilience amongst citizens. existing models of resilience can pinpoint the required knowledge, skills and attitude. research into the factors influencing behavioral change could offer new understanding of the interplay between cognitive and demographic drivers/factors of resilience. such knowledge could be utilized for setting targeted objectives and developing appropriate activities and the corresponding training for the cbex facilitators. despite the importance of preparation, the current covid- crisis indicates that high levels of adaptive resilience can be displayed even in the absence of any prior risk communication effort, by utilizing a pre-existing collective understanding of the system situation.

one of the fundamental principles of emergency planning, that every student has been taught, is that it takes place in an environment of apathy, even resistance, and with limited resources (fema ). this is even more valid for risk communication, also termed "public preparedness education" in the emergency management field. governments and citizens (both faced with limited resources and attention) tend to consider it prudent to focus on "frequently" occurring disaster risks at the national and/or local level, as these risks are naturally presumed to be of the highest priority (wilkinson et al. ). it is an issue of risk assessment and management. the countries participating in the union civil protection mechanism regularly submit their national risk assessments to the european commission (ec). in the latest report available (european commission ), one finds the biological threat/pandemic risk listed as a priority risk alongside earthquakes, floods, and forest fires. only nine countries (cyprus, france, greece, italy, montenegro, north macedonia, portugal, spain, and turkey) have not included the latter. the risk and crisis communication mandate rests predominantly on civil protection authorities and disaster-risk-specific agencies (e.g. those responsible for earthquakes, floods, public health, etc.) at the central, regional, and local levels. to fulfill their mandate they provide information about various disaster risks in order to enhance individual and community disaster preparedness. the staff of these organizations has, in general, limited knowledge of the state of the art in the scientific fields relevant to risk communication and lacks the necessary links with academia that could inform their effort (haddow et al. ). as a result, the approach has been and still is based on the so-called "deficit model", even though its basic assumption (the public is an empty vessel to be filled with information) has been criticized (nisbet and mooney ; boersma et al. ). a number of alternative two-way interaction models (also called engagement models) have been proposed (boersma et al. ). the ec has funded relatively few programs to experiment with these so-called engagement models (boersma et al. ; musacchio and solarino ; yovkov et al. ).
in europe and the usa, the assessment is that, despite greater use of various two-way communication approaches, the efforts still fall short of inducing the necessary changes in behavior (boersma et al. ; national academies of sciences, engineering, and medicine ). the conclusion is that more research is needed to determine how to motivate behavior change, as well as to identify what other factors contribute to successful public disaster education campaigns. the information provided to the public usually covers facts for a) a basic understanding of specific disaster risks (e.g. earthquakes, floods, etc.), and b) directions to strengthen prevention and to inform and educate about the actions and behavior that decrease personal exposure to each specific risk [e.g. for earthquakes: "stop, drop under or beside, and cover" against the natural tendency of flight - evacuating the building during shaking; for severe weather phenomena: having a stock of emergency supplies (water, canned food, etc.)], and how to make an individual and/or household emergency plan. it is commonplace to specifically address certain vulnerable populations, like school students. the websites of the responsible authorities and of other organizations (ngos, professional associations, civil society) are full of information of that sort. nevertheless, the effectiveness of the means used (traditional, like pamphlets, and contemporary, like social media) has not really been the subject of serious study. recent studies (maduz et al. ; kohn et al. ) have indicated that information-seeking behavior is not necessarily a measure of enhanced preparedness. the actual challenge is to make the transition from so-called "weak preparation" (like storing emergency supplies) to "hard preparation" (e.g. establishing an individual and/or household emergency plan). in a sense, what is required is a qualitative change from "blindly" following some directions to cooperating with others to practice emergency planning and, even further, becoming your own emergency or even disaster risk manager.

the concept of resilience, with its abundance of definitions (alexander ) and the various numbers of dimensions (called domains, properties, or stages) that have been proposed to operationalize it, can be useful in informing that goal. especially the definition of resilience as "the ability to prepare and plan for, absorb, recover from, or more successfully adapt to actual or potential adverse events" (national research council ), the proposition that it encompasses the properties of robustness, redundancy, resourcefulness, and rapidity (bruneau et al. ; tierney and bruneau ), and the view (cutter et al. ) that it consists of inherent resilience and adaptive resilience (through improvisation and learning), when applied to individuals and groups, can guide us. they help pinpoint the knowledge, skills and attitude we need to focus on and communicate to the public in order to enhance its resilience level. in conjunction with individual disaster preparedness surveys that could help identify barriers and possible triggers for its enhancement, they can provide paths to research that could offer interesting and useful insights for effective risk and crisis communication. depending on the application unit, we can talk about the resilience of individuals, infrastructures, institutions, ecosystems and communities (kahan ). it is important to realize that there is an inherent, not well understood, interaction between them, e.g.
when citizens know what to expect from the authorities, then the latter's emergency plans have increased effectiveness and efficiency, and vice versa. pandemic risk, like covid- , displays a number of characteristics differentiating it from natural disasters, namely:

• relatively infrequent occurrence that "deprives" the public of firsthand experience (recent epidemics, sars and mers, were contained fairly easily).

• pandemic risk and its management are surrounded by many unknowns and much uncertainty, including variable positions by experts, sometimes even seemingly changing over time [adams ( ) considers that it belongs to the so-called "virtual risks"].

• early recognition of the forthcoming crisis is not (always) obvious, resulting in challenges in sense-making.

• the guidelines (social distancing, wearing gloves, washing hands frequently, avoiding vulnerable friends and family, staying at home, not going to work, etc.) the public is "advised" - "ordered" - to follow are "unnatural". they oppose people's tendency to want to spend even more time together when confronting a disaster situation. it is this very closeness and camaraderie that have been observed to build people's adaptive resilience in natural disasters.

• emergency response lasts much longer (many months) compared to natural disasters (days to weeks). in addition, as by definition emergency response ends when the situation stabilizes, i.e. the threat to life and property has fallen to the pre-emergency level (fema ), even the end is not straightforward but a matter of debate in pandemics.

• in pandemics the public has an apparently passive role, contrary to natural disasters where the public actively participates in response and recovery activities.

it should also be pointed out that a pandemic is probably the only disaster risk whose protective measures bear the potential to cause decreases in individual and community resilience in the future, as they negatively influence community competence factors (psychopathologies, health and wellness (brooks et al. )) of the population, and especially of vulnerable groups (the elderly, people with chronic sickness, etc.), as well as economic factors (cutter et al. ). therefore, the main question is "how do you protect now, without negatively affecting long-term resilience at the individual and community level?" more specifically: what should be the characteristics of the risk and crisis communication campaigns as regards message content and frequency of repetition, how is the effect of the unknowns on the message content managed, what are the possible shortcomings of "the war against an invisible enemy" model (adopted in many countries), what is an effective and viable mixture of coercion and persuasion, etc. all of these together form a quite different and more complicated environment for risk and crisis communication than the one we are used to in natural disasters. they superimpose on the general risk communication challenges referred to previously.
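as a quantitative anchor for the resilience concept invoked earlier, the bruneau et al. framework cited above also measures the loss of resilience as the area between full functionality and the actual quality-of-service curve over the disruption-recovery period:

```latex
% resilience loss in the bruneau et al. framework: Q(t) is system quality
% (in percent), t_0 the onset of the disruption, t_1 the time of recovery
\begin{equation*}
R_L \;=\; \int_{t_0}^{t_1} \big[\, 100 - Q(t) \,\big]\, dt
\end{equation*}
```

protective measures that depress community functioning for many months enlarge exactly this area, which is one way of reading the long-term resilience concern raised above.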
the traditional one-way communication approach is not an effective way to achieve these objectives. a framework concept encompassing a variety of two-way, interactive communication activities on the above topics, all together in one event, is the so-called community-based exercise (cbex). cbexs have recently started being utilized with the main objective of developing resilience among citizens (yovkov et al. ). a cbex uses as trainers/facilitators the experts and personnel of organizations that are part of the civil protection system, and draws on the participation of other stakeholders that are part of a community. the choice of the specific objectives and topics, and the design of activities (facilitated work sessions, workshops, competitions, demonstrations, short exercises, games, etc.) for the various target groups of a community, are of great importance for the effectiveness of the effort. research into the factors that influence behavioral change could offer new and practical understanding of the interplay between cognitive and demographic drivers/factors of preparedness/resilience. subsequently, such knowledge could be utilized for setting clearly targeted objectives, developing appropriate activities and the corresponding training for the trainers/facilitators. in particular, regarding pandemic risk, it is necessary to understand how the specific characteristics listed before affect and shape the risk communication effort regarding "hard preparations", as it is of a quite different nature compared with natural disaster risks. it is also interesting to study the interplay between risk and crisis communication in pandemics, as it too is defined by the pandemic's very characteristics. as a final point, we turn to the actual reason we care about all this, which is to see how individual, community, and societal resilience is realized in a time of crisis and to identify ways to enhance it. for an effective crisis management effort, stern ( ) has emphasized the importance of early recognition of a forthcoming crisis, expedient sense-making and the use of an appropriate narrative to guide crisis communication by the crisis management team. the distinguishing features of risk communication and crisis communication have been stated by reynolds and seeger ( ). an underlying assumption is that the latter builds on the former. we would like to claim that, in the presence of early recognition and expedient sense-making, it is possible to display high levels of adaptive resilience even in the absence of any prior risk communication campaign, by utilizing a pre-existing collective understanding of the system situation. specifically, a weakness well known to all (crisis managers and the public), e.g. the quite limited capacity of the health system, can serve, almost explicitly, as the critical element in persuading the public to adhere to strict measures (movement restriction, etc.). in that way, a weakness is turned around, becoming instead an enabler of crisis communication success. unexpected allies can arise in dire situations. one should not rely on this possibility, but it exists and can sometimes save even those who have not prepared but showed a particular type of vigilance in crisis.
references:
eur en, publications office of the european union
the psychological impact of quarantine and how to reduce it: rapid review of the evidence
a framework to quantitatively assess and enhance the seismic resilience of communities
a place-based model for understanding community resilience to natural disasters
commission staff working document, overview of natural and manmade disaster risks the european union may face, swd( ) final
fema ( ) fundamentals of emergency management online textbook. fema higher education project
introduction to emergency management
resilience redux: buzzword or basis for homeland security. homeland security affairs
personal disaster preparedness: an integrative review of the literature
individual disaster preparedness: explaining disaster-related information-seeking and preparedness behaviour in switzerland, risk and resilience report
emergency alert and warning systems: current knowledge and future research directions. the national academies press, washington
disaster resilience: a national imperative
framing science
crisis and emergency risk communication as an integrative model
from warning to sense-making: understanding, identifying and responding to strategic crises
conceptualizing and measuring resilience. a key to disaster loss reduction
science for disaster risk management : knowing better and losing less
community-based emergency preparedness exercise (cbe) guide, funded by the eu civil protection financial instrument
acknowledgements: the author would like to acknowledge colleagues and students over the years and especially th. sfetsos (ncsr demokritos) and p. dimitropoulou (univ. of crete) for fruitful discussions. conflict of interest: the author declares that he has no conflict of interest.
key: cord- - f m kfc authors: che huei, lin; ya-wen, lin; chiu ming, yang; li chen, hung; jong yi, wang; ming hung, lin title: occupational health and safety hazards faced by healthcare professionals in taiwan: a systematic review of risk factors and control strategies date: - - journal: sage open med doi: . / sha: doc_id: cord_uid: f m kfc
background: healthcare professionals in taiwan are exposed to a myriad of occupational health and safety hazards, including physical, biological, chemical, ergonomic, and psychosocial hazards. healthcare professionals working in hospitals and healthcare facilities are more likely to be subjected to these hazards than their counterparts working in other areas. objectives: this review aims to assess the current research literature regarding this situation, with a view to informing policy makers and practitioners about the risks of exposure and offering evidence-based recommendations on how to eliminate or reduce such risks. methods: using the preferred reporting items for systematic reviews and meta-analyses review strategy, we conducted a systematic review of studies related to occupational health and safety conducted between january and january using medline (ovid), pubmed, pmc, toxline, cinahl, plos one, and access pharmacy databases. results: the review detected studies addressing the issue of occupational health and safety hazards; of these, articles were included in this systematic review. these articles reported a variety of exposures faced by healthcare professionals. this review also revealed a number of strategies that can be adopted to control, eliminate, or reduce hazards to healthcare professionals in taiwan.
conclusion: hospitals and healthcare facilities have many unique occupational health and safety hazards that can potentially affect the health and performance of healthcare professionals. the impact of such hazards on healthcare professionals poses a serious public health issue in taiwan; therefore, controlling, eliminating, or reducing exposure can contribute to a stronger healthcare workforce with great potential to improve patient care and the healthcare system in taiwan. eliminating or reducing hazards can best be achieved through engineering measures, administrative policy, and the use of personal protective equipment. implications: this review has research, policy, and practice implications and provides future students and researchers with information on systematic review methodologies based on the preferred reporting items for systematic reviews and meta-analyses strategy. it also identifies occupational health and safety risks and provides insights and strategies to address them. according to the world health organization (who), an estimated million people work in healthcare facilities globally, accounting for roughly % of the working population. the who also reports that all healthcare workers, including healthcare professionals, are exposed to occupational hazards. the international labour organization (ilo) reported that millions of healthcare workers suffer from work-related diseases and accidents, and many succumb to occupational hazards. scholars and practitioners in the field of healthcare and occupational health and safety (ohs) are striving to raise awareness of the risk factors and the importance of workplace health and safety among this population. , , schulte et al. defined an occupational hazard as the short-term and long-term dangers or risks associated with unhealthy workplace environments. tullar et al. and joseph and joseph stated that the healthcare workers at greatest risk are doctors, healthcare professionals, nurses, laboratory technicians, and medical waste handlers. occupational hazards pose health and safety risks and have a negative impact on the economy, accounting for roughly a % loss in global annual gross domestic product (i.e. $ . trillion annually). the who, ilo, and nelson et al. noted a lack of universally applicable data on the impact of occupational hazards. ohs hazards, and their negative impacts on health and well-being among healthcare professionals, are an issue of growing concern in the asia and pacific region, particularly in taiwan; however, research in this area has been somewhat limited. according to the taiwanese ministry of health and welfare (mohw), , health and medical personnel are working at health care organizations in taiwan, including , healthcare professionals and , pharmacist assistants. these healthcare professionals serve a taiwanese population of , , in , medical care institutions ( hospitals and , clinics). of the hospitals, are public and are privately owned; of the , clinics, are public and , are privately owned. taiwanese healthcare professionals face a variety of ohs hazards, which increase the incidence of work-related disease, the country's burden of disease, the total number of accidents, the incidence of job-related health problems, and the number of cases involving incapacitation or disablement. this study reviewed previous works on ohs hazards, as well as their risk factors and control strategies, with a focus on healthcare professionals in taiwan.
cochrane identified eight steps of a systematic review, which are adopted in this study. this study employed the preferred reporting items for systematic reviews and meta-analyses (prisma) protocol to organize the flow of information through the various steps of the review. we used the following key words in our literature search: occupational health and safety, risk factors, healthcare professionals, control strategies, and taiwan. to ensure specificity and exclude irrelevant studies, we employed boolean logic (and, or, not) in combining terms as search strings. the operator and was used to reduce the search yield for two key terms (e.g. "healthcare professionals (p) and occupational health and safety"). the operator or was used to increase the search yield (e.g. "healthcare professionals and occupational health and safety or risk factors"). note that in this example, the two search terms are synonyms. the operator "not" was used to exclude specific terms or term combinations. this research obtained a large number of initial articles (n initial = ); however, the application of inclusion and exclusion criteria considerably reduced the number of articles for inclusion in the review (n = articles). the articles focused on ohs, occupational hazards, and healthcare professionals in taiwan. figure presents a flow diagram depicting the application of eligibility criteria, the process of identification and screening, and the reasons for inclusion and exclusion. in documenting and assessing individual publications, we collected key information from the relevant studies to populate an evidence table (see appendix c) and conducted a critical appraisal of the included studies. the study population included adult pharmacy workers (male and female). data were extracted only from studies whose samples were deemed significant, given the justification provided by the authors of those studies. a critical appraisal of all studies was performed to assess their quality in terms of validity and reliability, based on performance bias, information bias, selection bias, and detection bias. cochrane and khan et al. reported that biases tend to exaggerate or underestimate the "true" outcome of exposure to an occupational hazard. our ultimate objective was to compare (without any form of bias) groups that were exposed to occupational hazards and those that were not exposed, in terms of risk factors and outcomes. for the sake of validity and reliability, all of the studies selected for inclusion were prospective in nature and included data pertaining to exposure and outcomes, while controlling for confounding factors. we also looked for studies with high internal reliability (consistency across items within a test) and high external reliability (consistency in agreement between users/raters). in our final analysis, we considered whether the research had been conducted in an appropriate manner (internal validity). we also considered the generalizability of the results, that is, whether the results were pertinent to other situations (external validity). data synthesis: the final step involved the synthesis of evidence from the included studies; that is, the evidence was organized into homogeneous categories, under which the results were summarized. the evidence was also graded (i.e. assessed in terms of quality) and integrated (i.e. weighted across categories to address the multidisciplinary nature of ohs research).
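returning to the boolean search strategy described above, the sketch below shows how such search strings might be assembled programmatically for batch queries against the listed databases. it is only an illustration of the logic in the text: the helper function, its name, and the composed query are assumptions for demonstration, not part of the original review protocol.

```python
# illustrative assembly of boolean search strings from the review's key words;
# multi-word phrases are quoted and joined with AND/OR/NOT operators.
KEYWORDS = ["occupational health and safety", "risk factors",
            "healthcare professionals", "control strategies", "taiwan"]

def boolean_query(must, any_of=(), exclude=()):
    """compose a database-style boolean search string (hypothetical helper)."""
    quote = lambda t: f'"{t}"'
    parts = [" AND ".join(quote(t) for t in must)]
    if any_of:
        parts.append("(" + " OR ".join(quote(t) for t in any_of) + ")")
    query = " AND ".join(parts)
    for term in exclude:
        query += f" NOT {quote(term)}"  # prune irrelevant hits
    return query

# e.g. narrow with AND, widen with OR, prune with NOT
print(boolean_query(["healthcare professionals", "taiwan"],
                    any_of=["occupational health and safety", "risk factors"],
                    exclude=["veterinary"]))
```

running this prints a query of the form `"healthcare professionals" AND "taiwan" AND ("occupational health and safety" OR "risk factors") NOT "veterinary"`, mirroring the narrowing and widening roles the text assigns to each operator.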
in this review, the synthesis, grading, integration, interpretation, and summary of the evidence were presented in narrative form, due to difficulties in textual and statistical pooling. after completing our systematic review, we employed the prisma reporting scheme, which is endorsed for ohs studies by hempel et al. briefly, the prisma structure is laid out in the following format: topic, summary/abstract, introduction, methods, results, conclusion, and recommendations. a meta-analysis was not conducted. the ilo categorizes the ohs hazards that affect healthcare professionals as biological, chemical, physical, ergonomic, and psychosocial. from the studies in this review, this study identified the ohs hazards, injuries, and diseases affecting healthcare professionals working in hospitals and healthcare facilities. this section presents the biological hazards identified in the review as the most commonly encountered in hospitals and healthcare facilities in taiwan. according to the who, the managers and administrators of hospital and healthcare facilities, in our case those in taiwan, should carefully assess the potential for exposure to biohazards and put effective biohazard control plans in place. table summarizes the identified biological hazards, their risk factors, and control strategies; its recoverable content is as follows. hazards: infection from human immunodeficiency virus (hiv), hepatitis b virus (hbv), and hepatitis c virus (hcv) through needle-stick injuries (nsi) and accidents with other sharp objects; occupational exposure resulting in hiv, hbv surface antigen-positive, or hcv transmission is largely due to inoculation of pathogens into cutaneous abrasions, lesions, scratches, or burns, as well as mucocutaneous exposure involving inoculation or accidental splashes onto non-intact mucosal surfaces of the nose, mouth, or eyes. [ ] [ ] [ ] engineering controls: immunization and vaccines; biological safety cabinets, needleless systems or safety-engineered needles, suitable ventilation, and an appropriate medical waste management system. administrative controls: written and documented infection control plans; decontamination procedures; enforcement of these systems; training of hospital staff in the implementation of occupational health and safety measures; immunization programs; detection and follow-up of infections; periodic screening; codes of practice; staff orientation; and designing all work systems with the aim of minimizing the risk of exposure. personal protective equipment (ppe): devices for the protection of the eyes (e.g. face shields, goggles), respiratory system (e.g. surgical masks), and skin (e.g. latex gloves, protective aprons, gowns), based on risk assessments and careful training. the review also established some of the most commonly faced chemical hazards present in hospitals and healthcare facilities, as well as the documented control strategies, which are summarized in table . physical hazards, which are defined as environmental risk factors that can harm the body without contact, were found to account for a substantial proportion of risks among healthcare professionals in taiwan. , [ ] [ ] [ ] the physical hazards, risk factors, and control strategies are summarized in table . the review established that healthcare professionals are exposed to musculoskeletal disorders and injuries, such as low back pain, due to the nature of their work, such as lifting patients. table summarizes the risk factors and control strategies for this hazard.
psychosocial hazards have attracted considerable attention in the research community, as well as among policy makers and practitioners in healthcare. [ ] [ ] [ ] this study found that in taiwan, psychosocial hazards have prompted a larger number of studies than physical, chemical, and biological hazards combined. the who reported that psychosocial hazards are closely linked to work-related stress, workplace violence (e.g. violent patients), and other workplace stressors. table provides a summary of the risk factors and control strategies for psychosocial hazards. this review provides detailed information regarding the ohs hazards that affect healthcare professionals working in hospitals and healthcare facilities in taiwan. the review summarizes the risk factors for these hazards, as well as the strategies to control, eliminate, or reduce them. from the reviewed studies, it was clear that ohs hazards can potentially result in a range of injuries, sickness, and harm. a wide range of ohs hazards were identified, including biological hazards, chemical hazards, ergonomic hazards, psychosocial hazards, and physical hazards. , the review has shown that healthcare professionals are at a significantly high risk of occupation-related hazards. injuries and sickness prevent healthcare workers from discharging their duties effectively, which can have a negative impact on the overall healthcare system in taiwan. physical hazards, such as falls, noise, and mechanical hazards, could have long-term physiological effects, such as hearing impairments; therefore, there is a need to introduce various control strategies, such as engineering noise control measures. good ppe should be provided for healthcare professionals to protect themselves from physical harm in the workplace. according to our findings, it is evident that healthcare professionals are exposed to chemical hazards, some of which can be carcinogenic. there is also the risk of exposure to occupational dermatitis. it is therefore important that healthcare professionals are screened for cancer on a regular basis. the workers can also be trained in skin care and be provided with safety equipment and other useful interventions, such as sunscreen cream. such efforts can help in early detection, prevention, and intervention. biological hazards can affect healthcare professionals as part of their routine occupation, due to contact with patients and visitors. the review of healthcare professionals on duty demonstrates how important it is to manage bloodborne and airborne biological pathogens in the healthcare workforce. there should be administrative guidance and training on how healthcare professionals can deal with biological hazards, and these professionals should be encouraged to report work-related incidents as soon as they occur, or are suspected to have occurred, to aid early intervention. ergonomic hazards in healthcare professionals tend to arise from lifting patients and hospital equipment. this requires careful prevention, assessment, and intervention, as the impact of ergonomic hazards on the musculoskeletal system of the affected healthcare professionals cannot be ignored. hospital administrators need to alleviate frequent job pressures by providing the necessary safe and ergonomic equipment and hiring an adequate number of personnel.
the professionals can work in properly planned shifts and teams to reduce fatigue; they should be trained in the correct techniques for lifting patients and equipment, and policies should be enforced to ensure compliance. the findings on psychosocial hazards show that healthcare professionals can be affected by mental and psychological hazards, such as stress, and it is evident that healthcare professionals who suffer from stress are likely to suffer from fatigue and exhaustion. healthcare professionals are trained to show less emotion, and thus find it difficult to seek medical intervention. there is a need for counseling and stress management for healthcare professionals, and the workers should be trained to manage stress. the workplace should be designed in such a manner as to prevent invasion, harassment, and violence against healthcare professionals. overall, hospital administrations and healthcare professionals should focus on evidence-based strategies (engineering, administrative, and ppe) to manage ohs hazards. the increasing prevalence of occupational hazards and work-related diseases among healthcare professionals in taiwan is a concern. risk factors include exposure to hazards and a failure to follow hierarchical control strategies. health care workers and administrators must work together to eliminate or minimize these hazards through the introduction of, and strict adherence to, engineering, administrative, and personal protective equipment (ppe) controls. the content recoverable from table (chemical hazards) is as follows. routes of exposure: the main routes of exposure to chemical hazards include ingestion, injection, skin contact or absorption, and inhalation. , contamination and exposure are both affected by the duration and frequency of exposure, the quantity of drugs undergoing preparation, and the use of ppe. health effects: the adverse health effects can be attributed to compounds deemed carcinogenic (cancer causing), mutagenic (promoting mutations), teratogenic (causing birth defects), or toxic to various organs. alcohol hand sanitizers commonly used by healthcare professionals are flammable and harmful to the skin. there have also been reports on the dangers of detergents used to clean surfaces, which can lead to irritation and promote allergies of the skin, eyes, and respiratory tract. , there is also evidence that some detergents can react with other products commonly stocked in healthcare facilities to produce toxic vapors. , , it has been found that low-concentration disinfectants, such as quaternary ammonium salts, alcohols, hydrogen peroxide, iodophors, and phenolic and chlorine compounds, can have toxic effects and irritate the skin, eyes, and respiratory system. the inhalation of powdered medications and vapors exposes healthcare professionals to the risk of poisoning and allergic reactions. , engineering control strategies: isolating and segregating hospital or healthcare facility areas and equipment; providing exhaust hoods for local ventilation when compounding and mixing drugs; providing biological safety cabinets to safeguard chemicals; and providing containers to prevent needle-stick injuries. flammable chemicals should be stored away from sources of ignition, and dangerous chemicals should be substituted with less harmful ones. the content recoverable from table (physical hazards) begins here. health effects: cuts, burns, hearing loss, motion sickness, and muscle cramps. engineering controls: minimize the use of sharp tools, use machine guarding, use quality sockets, and close water faucets when not in use. administrative controls: promote and practice safe work procedures, such as when using electrical equipment (e.g. cords).
educating workers about the cleaning equipment and about cleaning up broken glass is also recommended. ppe: use of appropriate footwear, gloves, eye and nose protection, and protective clothing. hazards: tripping, slipping, cuts, and falling. risk factors: poor housekeeping, poor layout, and slippery tiled floors; open power cables, live wires, broken glassware, lancets, knives, scissors, and scalpels. health effects: bruised skin, cuts, broken bones, and muscular injuries. engineering controls: proper lighting, the construction of safe stairwells, and regular building maintenance (e.g. floors and workspaces). ppe: use of appropriate footwear, gloves, eye and nose protection, and protective clothing. hazards: exposure to microwave radiation, and to ionizing and non-ionizing radiation. risk factors: radiation from x-ray machines and other diagnostic imaging systems, and the radionuclides used in nuclear medicine and radiation therapy; workers also face risks from non-ionizing radiation, lasers, ultraviolet rays, and magnetic resonance imaging, and the risk increases when using heat sealers and poorly maintained or insulated radio-diagnostic equipment. health effects: tissue damage, risk of cancer, and abnormal cell mutation (e.g. abnormal leukocytes). , engineering controls: reducing the time of exposure, increasing the distance to x-ray machines, and increasing the amount of shielding. ppe: use of appropriate footwear, gloves, eye and nose protection, and protective clothing. the perceptions of workers can greatly affect their implementation of risk-mitigation strategies. selection bias is a concern here, despite the fact that we selected published and peer-reviewed articles, as well as unpublished but authoritative gray-literature articles; the fact is that other unverifiable but potentially valuable reports were no doubt excluded. our reliance on observational studies (to the exclusion of intervention studies) and the heterogeneity of the included articles (in terms of methodology) posed a risk of bias and limited standardization. this study discovered relatively little research focusing on hospital workers in taiwan, and thus further empirical studies focusing on this group of healthcare givers are required and recommended. researchers should focus on the health status, work performance, and workplace retention of healthcare professionals, including the prevalence of morbidity and mortality. the insights in this review provide a valuable reference for policy makers in establishing goals to deal with workplace hazards. hazard control strategies must be based on objective assessments of existing risks and the most appropriate measures to deal with them. this systematic review confirmed a positive correlation between ohs hazards (biological, physical, chemical, and psychosocial) and work-related injuries, occupational health problems, and work-related diseases. the burden of disease and the attributable fraction of work-related diseases and occupational injuries have been shown to cause considerable social and economic losses for employees, families, companies, countries, and societies at large. generally, the burden of disease is assessed using disease/disability-adjusted life years. the burden of disease is measured as the impact of morbidity and premature mortality within a given area. , scholars and professionals agree that reducing, substituting, or eliminating ohs hazards in healthcare facilities is important for healthcare workers, helps to ensure patient safety, and enhances the overall quality of healthcare.
many researchers have used the "hierarchy of controls," which is based on the assumption that interventions are most effective when implemented at the source and least effective when applied at the worker level. gorman et al. listed control interventions from most to least effective as follows: elimination, substitution, engineering, administrative, and ppe. researchers have also emphasized the importance of eliminating hazards or substituting hazardous materials with less hazardous ones. , taimela et al. argued that administrative controls, such as training and ensuring adequate staffing, are crucial to eliminating or minimizing occupational hazards. engineering controls, such as redesigning work spaces, ensuring adequate ventilation, and introducing automated systems for repetitive tasks, were emphasized by liberati et al. ppe, such as the use of gloves, clothing, and eye wear, is considered the least effective control and has the most profound consequences in the event of failure, by exposing the individual directly to the hazard. nonetheless, many researchers and professionals agree that all such controls should be applied collectively, in order to minimize the effects of hazards. , - the content recoverable from table (ergonomic hazards) is as follows. risk factors: musculoskeletal disorders (msds) due to repetitive actions, less than optimal computer equipment, and a poorly engineered workspace in which healthcare professionals are forced to overreach and/or sit while maintaining an awkward posture; healthcare professionals are tasked with lifting and transferring equipment, tools, and instruments; one's physical fitness level and demographic background were shown to affect the risk of developing msds; workplace and job-related demands, poor administrative and team support, and a negative attitude toward job tasks were all strongly correlated with msds. health effects: ergonomic hazards can lead to chronic pain in the arms, back, or neck; frequently, they lead to msds, such as carpal tunnel syndrome, which tends to reduce work performance and productivity and can have a serious detrimental effect on one's health-related quality of life; strained movement due to localized pain, stiffness, sleep disturbances, twitching muscles, burning sensations, and feelings of overworked muscles. engineering control strategies: redesign workstations with appropriate chairs and computer equipment; workstations should be configurable to a wide range of medical personnel with different body shapes and sizes; it is also recommended that lifting and handling equipment, such as trolleys, be installed in areas requiring heavy lifting; automation should be adopted when resources and practicability allow. healthcare professionals also face violence during robberies and the theft of addictive prescription pain killers, such as oxycontin and vicodin. we also identified organizational culture and structure, interpersonal relationships at work, job content and satisfaction, home-work balance, and the changing nature of work as important psychosocial hazard risk factors among healthcare professionals. , , work-related stressors have a detrimental impact on workers' health and safety, in terms of mental disorders, musculoskeletal disorders, chronic degenerative disorders, metabolic syndrome, diabetes, and cardiovascular diseases. psychological hazards at work were associated with heart disease, depression, physical health problems, and psychological strain. low back pain was the most common work-related ailment among healthcare workers in taiwan.
employees who experienced job insecurity and/or workplace injustice were more likely to suffer from burnout. job demands and the level of control experienced by the worker were significantly associated with fatigue; exposure to workplace violence affects psychological stress, sleep quality, and subjective health status among healthcare professionals. the content recoverable from table (psychosocial hazards) is as follows. engineering control strategies: creation of isolation areas for agitated patients, and designing an office layout that prevents healthcare professionals from coming into direct contact with customers/patients or being trapped; spaces should be well lit and separated to ensure that client-care provider contact is controlled and access is allowed only when absolutely necessary; properly working communication devices and video surveillance, as well as panic buttons and alarm systems. administrative controls: management policies should make unequivocal declarations of non-violence/anti-abuse; management can encourage workers to participate in the design of forward-rotating (day-evening-night) shifts and work schedules that impose gradual shift changes and ease the adaptation to non-regular work shifts, to ensure that all concerned get adequate sleep; educate healthcare professionals about the risks associated with shift work; well-trained security personnel should be hired to deal with unruly customers; training in conflict management and problem-solving could also help workers to prevent or de-escalate violence; , nametags should be used by employees, and reporting and response procedures should be enhanced. declarations: the manuscript has not previously been published and is not under consideration by another journal. the author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. ethical approval was not sought for this study because it is a systematic review and all the literature reviewed has been published. the author(s) received no financial support for the research, authorship, and/or publication of this article. lin ming hung https://orcid.org/ - - - x supplemental material for this article is available online.
references:
occupational health: health workers
occupational health: data and statistics
international labour standards on occupational safety and health
workplace safety and health: healthcare workers
interaction of occupational and personal risk factors in workforce health and safety
occupational safety and health interventions to reduce musculoskeletal symptoms in the health care sector
the health of the healthcare workers
the global burden of selected occupational diseases and injury risks: methodology and summary
national development council (ndc)
what is a systematic review
systematic reviews for occupational safety and health questions: resources for evidence synthesis
a search strategy for occupational health intervention studies
risk and management of blood-borne infections in health care workers
the occupational safety of health professionals working at community and family health centers
five steps to conducting a systematic review
sleep disorder in taiwanese nurses: a random sample survey
safety culture in a pharmacy setting using a pharmacy survey on patient safety culture: a cross-sectional study in china
science of safety topic coverage in experiential education in us and taiwan colleges and schools of pharmacy
controlling health hazards to hospital workers
perception and prevalence of work-related health hazards among health care workers in public health facilities in southern india
the prevalence of occupational health-related problems in dentistry: a review of the literature
workplace safety and health improvements through a labor/management training and collaboration
tuberculosis in healthcare workers: a matched cohort study in taiwan
health care visits as a risk factor for tuberculosis in taiwan: a population-based case-control study
estimation of the risk of bloodborne pathogens to health care workers after a needlestick injury in taiwan
epidemiological profile of tuberculosis cases reported among health care workers at the university hospital in vitoria, brazil
risk of tuberculosis among healthcare workers in an intermediate-burden country: a nationwide population study
risk of tuberculosis infection and disease associated with work in health care settings
sars in healthcare facilities
reproductive health risks associated with occupational exposures to antineoplastic drugs in health care settings: a review of the evidence
overview of emerging contaminants and associated human health effects
guidelines for safe handling of hazardous drugs: a systematic review
critical care medicine in taiwan from to under national health insurance
niosh health and safety practices survey of healthcare workers: training and awareness of employer safety procedures
potential risks of pharmacy compounding
development of taiwan's strategies for regulating nanotechnology-based pharmaceuticals harmonized with international considerations
an overview of the healthcare system in taiwan
chemical and biological work-related risks across occupations in europe: a review
n-hexane intoxication in a chinese medicine pharmaceutical plant: a case report
occupational neurotoxic diseases in taiwan
the impact of physical and ergonomic hazards on poultry abattoir processing workers: a review
musculoskeletal disorders and ergonomic hazards among iranian physicians
occupational safety and related impacts on health and the environment
prevalence of workplace violent episodes experienced by nurses in acute psychiatric settings
occupational hazards in the thai healthcare sector
prevalence of work-related musculoskeletal disorders (wmsds) and ergonomic risk assessment among readymade garment workers of bangladesh: a cross-sectional study
the study of the effects of ionizing and non-ionizing radiations on birth weight of newborns to exposed mothers
healthcare worker safety: a vital component of surgical capacity development in low-resource settings
comparisons of musculoskeletal disorders among ten different medical professions in taiwan: a nationwide, population-based study
occupational exposure to ionizing and non-ionizing radiation and risk of glioma
effect of systematic ergonomic hazard identification and control implementation on musculoskeletal disorder and injury risk
the impact of occupational psychological hazards and metabolic syndrome on the -year risk of cardiovascular diseases: a longitudinal study
employment insecurity, workplace justice and employees' burnout in taiwanese employees: a validation study
risks of treated anxiety, depression, and insomnia among nurses: a nationwide longitudinal cohort study
occupational health: occupational and work-related diseases
tackling psychosocial hazards at work
violence against health workers in family medicine centers
impact of workplace violence and compassionate behaviour in hospitals on stress, sleep quality and subjective health status among chinese nurses: a cross-sectional survey
the association between job-related psychosocial factors and prolonged fatigue among industrial employees in taiwan
psychosocial factors and workers' health and safety
psychosocial hazard analysis in a heterogeneous workforce: determinants of work stress in blue- and white-collar workers of the european steel industry
an evaluation of the policy context on psychosocial risks and mental health in the workplace in the european union: achievements, challenges, and the future
a national study on nurses' exposure to occupational violence in lebanon: prevalence, consequences and associated factors
review of the literature on determinants of chemical hazard information recall among workers and consumers
prevalence and determinants of workplace violence of health care workers in a psychiatric hospital in taiwan
a brief overview of systematic reviews and meta-analyses
maximizing the impact of systematic reviews in health care decision making: a systematic scoping review of knowledge-translation resources
the global burden of occupational disease
hazard identification, risk assessment, and control measures as an effective tool of occupational health assessment of hazardous process in an iron ore pelletizing industry
an occupational health intervention programme for workers at high risk for sickness absence: cost effectiveness analysis based on a randomised controlled trial
learning from high risk industries may not be straightforward: a qualitative study of the hierarchy of risk controls approach in healthcare
key: cord- -eagbwk j authors: williamson, brian title: beyond covid-19 lockdown: a coasean approach with optionality date: - - journal: nan doi: . /ecaf. sha: doc_id: cord_uid: eagbwk j
maintaining across-the-board restrictions is socially and economically costly, has adverse distributional impacts, and is poorly targeted in terms of protecting health and health care provision. instead, a win-win 'coasean' social contract could be forged to protect older people and other at-risk groups, coupled with freedom from lockdown for everyone else.
the social contract could involve a period of support and extra payments to older age groups to commit to home quarantine, but with the possibility of opting out. younger cohorts would be given the option of taking greater risks in return for liberty, fraternity, and greater economic participation. by doing so they would benefit themselves, but also support society economically and through acquired immunity. a few countries, such as new zealand and australia, which acted early and had a limited number of covid-19 cases, may be able to eliminate covid-19 and reopen their societies with ongoing strict border controls. however, given that covid-19 transmission is well established in most countries, near-term options in other countries are limited to mitigation. in the longer term, immunity acquired via infection, or ideally a vaccine, offers the prospect of a solution. however, the timing and effectiveness of vaccines are uncertain, and this is not a near-term option. in the near term, mitigation options include hygiene measures, physical distancing, testing and contact tracing to limit transmission, in addition to improved clinical care. however, physical distancing (especially via lockdown) is very costly in terms of liberty as well as economics. covid-19 also differs from pandemic influenza in ways that matter for mitigation and elimination strategies. a number of studies do not consider such differences, for example barro ( ). first, unlike an influenza pandemic, which may involve waves over a period of months but typically evolves into less serious seasonal strains, covid-19 is expected to persist. influenza also has a lower reproduction number (r0) than covid-19 (biggerstaff, cauchemez, reed, gambhir, & finelli, ), so population immunity could be acquired at a lower level of infection. a short-term quarantine can therefore be effective in avoiding or limiting the impact of an influenza pandemic but not covid-19, though it buys valuable time. second, unlike influenza, covid-19 is very unusual in its markedly disproportionate risk of killing older people, in addition to those at risk due to chronic health conditions. in contrast, mortality during the 1918-19 influenza pandemic was u-shaped or w-shaped with age; that is, many younger people died in addition to older people (taubenberger, ). this difference is relevant to the economic impact in terms of reduction in workforce participation due to deaths or fear of deaths. a further consideration is immunity. covid-19 infection produces immunity, but the longevity of such immunity and the extent of cross-immunity with other coronaviruses are unknown. the nature of immunity, which may differ from that for influenza, will impact the dynamics of the covid-19 infection (kissler, tedijanto, goldstein, grad, & lipsitch, ). figure shows the marked variation in the infection-fatality ratio by age group. the risk of death if infected in the -plus age group is around times that for - year-olds; more broadly, the risk of death if infected in the under- age group is . per cent, versus . per cent in the -plus age group, a -fold difference. while younger people are at greatly reduced risk from covid-19, they are on the other hand likely to suffer some of the more severe impacts in terms of forgone education, employment, and social and longer-term opportunities from measures to increase physical distancing. economic harm, particularly unemployment, can in turn be expected to have an adverse impact on mental health in particular.
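the point above, that a lower reproduction number implies population immunity at a lower level of infection, follows from the standard sir herd-immunity threshold. a minimal illustration, in which the specific r0 values are illustrative assumptions rather than figures from the paper:

```latex
% herd-immunity threshold: the share p* of the population that must be
% immune before each infection causes, on average, fewer than one more
p^{*} = 1 - \frac{1}{R_0}
% e.g. an influenza-like R_0 = 1.5 gives p^{*} \approx 0.33,
% while a covid-19-like R_0 = 2.5 gives p^{*} = 0.60
```

this is why, as the text notes, a pathogen with a lower r0 requires a smaller infected (or vaccinated) share before spread subsides.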
the combination of low health risk for younger people from covid-19 with disproportionately high economic and social costs from the current policy response suggests that a more targeted policy response is desirable. given that the risk of dying from covid-19 is a sharply increasing function of age, two broad suggestions have been put forward in the uk: • extend the stay-at-home recommendation for those aged over to those aged over (osama, pankhania, & majeed, ). • release the under- s from lockdown (oswald & powdthavee, ). however, while these proposals better reflect population risk, neither is sustainable nor sufficient to restart the economy and protect the most vulnerable. a larger group than those under needs to be released, and some, but not all, of those aged - are at significant risk and account for a significant fraction of years of life lost. sustainable responses that can bridge the gap to longer-term solutions are required which do not involve the high costs of lockdown in terms of liberty, employment, education, income, and broader physical and mental health outcomes. this is a pressing challenge, in particular for those at the start of their adult lives. in this article, building on a blog post where the idea was first suggested (williamson & wilson, ), what is proposed is a coasean social contract that recognises the reciprocal nature of the problem of mitigating the risk of harm to health, welfare, and the economy from the covid-19 pandemic. the bargain is 'coasean' in recognising that social costs (externalities) can be reciprocal, an idea developed by ronald coase, the nobel prize-winning economist. coase analysed the case of sparks from trains setting fire to crops, where the train company could mitigate sparks while the farmer could avoid cultivation of crops close to railways, and an efficient bargain between the two could in principle be struck (coase, ). externalities arise because individual actions have consequences for others in terms of infection (from contact), in terms of reduced population immunity (from avoiding contact), and in terms of the risk that health care resources are redirected to treat those with covid-19. a functioning economy is also required to support society, and economic harm can be expected to have health implications (case & deaton, ). reciprocity arises because either those at higher risk can isolate themselves or those who might infect them but are at lower risk can be locked down to reduce the spread of covid-19. however, while some risk factors can be identified, individuals will have information about their own risk, risk preferences, and opportunity cost of lockdown that is not available to a central authority but which should inform decisions about who is isolated. what is proposed is a combination of central design and individual decisions, a coasean social contract, that recognises the reciprocal nature of the problem and allows individuals to opt in or out of defined categories in return for receiving or forgoing social and financial support. reflecting different individual preferences, there would be optionality for those at low risk to isolate without payment (they are not contributing to the societal good of a build-up of population immunity), while those at higher risk could opt out of isolation but would forgo support and payment. research on individual preferences could be conducted to inform the choice of default thresholds and incentives for a coasean social contract.
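a toy sketch of the opt-in mechanism just described: an individual accepts the isolation contract when the offered payment, plus the value of the infection risk avoided, outweighs the private opportunity cost of staying home, and the planner tunes the payment to reach a target level of opt-in. the decision rule, class fields, and all numbers below are illustrative assumptions, not parameters from the paper.

```python
# toy coasean opt-in model: individuals hold private information about their
# own risk and opportunity costs; a flat payment induces voluntary isolation.
from dataclasses import dataclass

@dataclass
class Person:
    risk_cost: float       # expected private cost of infection if not isolating
    isolation_cost: float  # private opportunity cost of home quarantine

def opts_in(p: Person, payment: float) -> bool:
    # hypothetical decision rule: isolate if compensation plus avoided
    # personal risk outweighs the cost of staying home
    return payment + p.risk_cost >= p.isolation_cost

def payment_for_uptake(cohort, target, step=10.0):
    """raise the flat payment until the target share of the cohort opts in."""
    payment = 0.0
    while sum(opts_in(p, payment) for p in cohort) / len(cohort) < target:
        payment += step
    return payment

# illustrative cohort: the second person values outside activity more highly
cohort = [Person(risk_cost=200.0, isolation_cost=500.0),
          Person(risk_cost=50.0, isolation_cost=800.0)]
print(payment_for_uptake(cohort, target=1.0))  # 750.0
```

the point of the sketch is that the planner never needs to observe each person's private costs; varying the payment (or, as suggested below, running an auction) elicits them indirectly.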
a further option would be to require those at risk who opt out of self-isolation to pay a risk-adjusted health insurance surcharge reflecting the broader risk they pose in terms of health care costs and the risk of overloading the health care system (in contrast to the moral hazard involved with health insurance opt-in, those choosing to pay an insurance surcharge to opt out of isolation might be at lower risk than average for their cohort). however, a surcharge for opting out may be considered inequitable. this approach may also have the benefit of permitting an additional feedback loop as the load on the health system evolves, namely by changing the eligibility cohort and/or by changing the payment in return for isolation, or potentially by holding an online auction to achieve a given level of additional opt-in. the ability to influence r via modest distancing measures and to keep the growth in covid-19 cases manageable would also be enhanced by growth in the proportion of people with a degree of immunity (well short of herd immunity) as younger cohorts re-enter education and work and socialise. again, the covid-19 health risk for this group, while non-zero, is low. this approach may also have lower costs to the economy than turning distancing measures off or on for everyone as epidemic spread subsides or picks up again, since the ongoing uncertainty associated with such epidemic dynamics limits individuals' ability to plan and invest, and may make some businesses non-viable, for example in hospitality and tourism. while financial incentives could undermine incentives for voluntary sacrifice and compliance for behavioural reasons, they are also more tuneable. it can be difficult, for example, to communicate clearly to the public the changes in the detailed rules in relation to home quarantine and physical distancing, or potentially to maintain a high level of compliance while extending the period of compliance (briscese, lacetera, macis, & tonin, ). centralised and decentralised responses to covid-19 can, and would, both play a part in mitigating overall harm under a coasean social contract. the proposed approach could substantially reduce the economic and social cost of the covid-19 policy response while limiting mortality and the risk of overloading the health-care system. the degree of heterogeneity in terms of risk and preferences across individuals in the age group most at risk may be very large. some of those at increased risk may be highly productive or simply value outside economic and social opportunities highly, others less so. some may have a diminished quality of life, and/or may have died in the near term irrespective of covid-19. others who are healthy may consider the increased risk of premature death to be a small price to pay in return for freedom. this is relevant to the trade-offs individuals might make and to a societal assessment of alternative policy options. for deaths involving covid-19 that occurred in march 2020 in the uk, there was at least one pre-existing condition in per cent of cases (ons, ). neil ferguson ( ), director of the mrc centre for global infectious disease analysis at imperial college london, considered that it might be that as many as half to two-thirds of those who had died from covid-19 in the uk early in 2020 would have died by the end of the year from other causes. however, a study of hospital cases (excluding care homes) found that stratifying by age and multimorbidity counts showed that average years of life remaining were rarely below three (hanlon et al., ).
infection-fatality ratios can be combined with expected years of life remaining from life tables (ons, ) to obtain expected years of life lost, conditional on catching covid-19. these estimates can also be adjusted based on estimated health-adjusted life years remaining, on the assumption that years of life lost are per cent lower for those aged - and two-thirds lower for those aged -plus. those most at risk face an expected loss of around six months of life on the assumption of typical health, and around two months of life if the assumed impact of comorbidity is allowed for. compared with the reduced quality of life associated with lockdown, some might regard this risk as modest, though individuals tend to be risk-averse. individuals are likely to have very different attitudes to this uncertain prospect. there are, therefore, grounds for not only moving to an age-specific policy response to covid-19, but also moving from mandates to incentives, given large variations in individual trade-offs and private information about such trade-offs. what is proposed is a shift to an age-specific set of policy defaults, but with optionality and incentives to allow individuals to make individual choices. developing an approach which recognises individual heterogeneity and the importance of private information and preferences to individually and socially efficient trade-offs is more likely to prove sustainable, since it more closely aligns with individual preferences and incorporates support and compensation for those bearing the greatest burden in terms of isolation. the approach is intended as a bridge to a time when population immunity develops, ideally via an effective vaccine. the novel social contract set out here could be explored further by governments who have pursued mitigation via physical distancing but find that population fatigue is limiting its effectiveness, or that the economic and social cost for younger cohorts in particular is simply too high. the approach seeks to recognise both individual preferences and two particular social benefits. first, society as a whole would benefit from getting younger cohorts back into education, training and work, and from the immunity this group would build up. they should therefore be not only allowed to return to 'normal' life but encouraged to do so, via a reduction in financial support to pre-existing safety net levels. second, society as a whole would also benefit by encouraging those groups considered to be at high risk to stay at home for the medium term. compulsion may not be sustainable, may be regarded as discriminatory, and is not ideal, as some individuals may have low risk or high productivity or simply prefer liberty alongside the risk from covid-19. a combination of support and financial incentives coupled with the option to opt out is preferable to compulsion. the goal of this possible 'third way' is not to minimise deaths per se, but to go beyond a health optimisation approach to a broader well-being-maximising one, taking account of individual preferences and trade-offs. the proposed approach places greater weight on individual choice coupled with incentives rather than mandates, in part because such an approach may be more likely to have legitimacy over an extended time frame than the prevailing lockdown approach. it also recognises that individuals have more information about their risks and preferences, and these will differ across individuals.
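a minimal sketch of the calculation described at the start of this passage: expected years of life lost conditional on infection is the age-specific infection-fatality ratio multiplied by remaining life expectancy, optionally discounted for comorbidity among those who die. the ifr values, life-table figures, and discount factors below are illustrative placeholders chosen only to reproduce the rough "six months versus two months" order of magnitude, not the paper's numbers.

```python
# expected years of life lost (yll), conditional on catching the virus:
# e[yll | infection, age] = ifr(age) * remaining_life_expectancy(age),
# optionally scaled down to reflect comorbidity among those who die.

# illustrative placeholder inputs (not figures from the paper)
ifr = {"20-39": 0.0003, "40-59": 0.005, "60-79": 0.03, "80+": 0.08}
life_expectancy = {"20-39": 52.0, "40-59": 33.0, "60-79": 15.0, "80+": 7.0}
comorbidity_discount = {"20-39": 1.0, "40-59": 1.0, "60-79": 0.5, "80+": 1 / 3}

def expected_yll(age_band: str, adjust_for_comorbidity: bool = False) -> float:
    years = ifr[age_band] * life_expectancy[age_band]
    if adjust_for_comorbidity:
        years *= comorbidity_discount[age_band]
    return years

for band in ifr:
    print(band, round(expected_yll(band, adjust_for_comorbidity=True), 3))
# with these placeholders, the oldest band loses ~0.56 years (~6.7 months)
# unadjusted and ~0.19 years (~2.2 months) after the comorbidity discount
```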
it also has the benefit of representing a social contract which, in contrast to across-the-board restrictions, recognises the contribution everyone is making while improving intergenerational equity. the views expressed in this article are the author's own and not those of communications chambers, which has no collective view.
references:
non-pharmaceutical interventions and mortality in u.s. cities during the great influenza pandemic, - . nber working paper
estimates of the reproduction number for seasonal, pandemic, and zoonotic influenza: a systematic review of the literature. bmc infectious diseases
compliance with covid-19 social-distancing measures in italy: the role of expectations and duration
rising morbidity and mortality in midlife among white non-hispanic americans in the 21st century
the problem of social cost
science and technology committee. parliamentlive.tv
covid-19: exploring the implications of long-term condition type and extent of multimorbidity on years of life lost: a modelling study
projecting the transmission dynamics of sars-cov-2 through the postpandemic period
national life tables: uk. september
deaths involving covid-19, england and wales: deaths occurring in
protecting older people from covid-19: should the united kingdom start at age ?
the case for releasing the young from lockdown: a briefing paper for policymakers
the origin and virulence of the 'spanish' influenza virus
estimates of the severity of coronavirus disease 2019: a model-based analysis. the lancet
might a 'coasean' social contract mitigate overall societal harm from covid-19? public health expert
how to cite this article: williamson b. beyond covid-19 lockdown: a coasean approach with optionality
key: cord- - l le authors: yang, honglin; pang, xiaoping; zheng, bo; wang, linxian; wang, yadong; du, shuai; lu, xinyi title: a strategy study on risk communication of pandemic influenza: a mental model study of college students in beijing date: - - journal: risk manag healthc policy doi: . /rmhp.s sha: doc_id: cord_uid: l le
purpose: to understand the characteristics of the risk perception of pandemic influenza among college students, the concepts they raise with prominent frequency, and the differences between these risk perceptions and those of professionals, and then to offer a proposal for the government to improve the efficiency of risk communication and health education. methods: following mental model theory, the researchers first drew a framework of key risk factors; they then asked students about their understanding of the framework with a questionnaire, and performed concept statistics and content analysis on the respondents' answers. results: the researchers found misunderstandings of the pandemic among some students, including excessive optimism about the consequences of a pandemic, a lack of detailed understanding of mitigation measures, and negative attitudes towards health education and vaccination. most students showed incomplete and incorrect views about concepts related to development and exposure factors, impact, and mitigation measures. once threatened, this may lead to failures of decision-making. the majority of students we interviewed had positive attitudes towards personal emergency preparedness for a pandemic influenza and specialized health education in the future. conclusion: the researchers suggest that the government should make a specific pandemic guidance plan by referring to the risk-cognition characteristics of college students shown in the research results, and update the methods of health education for college students.
influenza is a highly variable infectious disease that can quickly evolve into a pandemic and pose a significant threat to people's health. the corresponding emergency response measures require the active cooperation of the public to work effectively. because of its wide range of impact and potential mortality, effective risk communication will help the public understand information related to influenza. compared with risk communication in other fields, when public health events occur, the government often turns to experts to ask them what the public should know. it is therefore a challenge to effectively transform scientific knowledge into structures that are useful to those with non-professional backgrounds. our researchers use influence diagrams from mental model interviews to analyze the critical risk factors of flu, which can improve students' decision-making ability to maintain their physical health. [ ] [ ] [ ] [ ] [ ]

morgan et al's monograph on mental model theory argues that everyone relies on their mental models to understand information. a mental model grows into a unique and intrinsic pattern as an individual grows, similar to a workflow chart. splitting the outside world into multiple components to help us understand it may not be perfect; however, it affects our way of thinking and our behavioral choices. , , a person's mental model is influenced by various factors, including personal experience, acquired learning, and living environment; these factors are changeable and are also important in affecting our health-related behaviors. , [ ] [ ] [ ] [ ] [ ] therefore, targeted education can help individuals correct misunderstandings in their mental models and so improve their risk management. in china, there is no application of mental model theory in the field of health education and no special pandemic preparedness guideline for the general public. however, in western countries, particularly the united states, many scholars have conducted substantial research in this area. lazrus et al have studied the public mountain flood communication framework in boulder county, colorado. casman et al used an influence map to establish a dynamic risk model for waterborne cryptosporidiosis, which defines "key awareness variables" in risk communication and assigns scores for evaluation. our researchers hope to use mental model theory to analyze the most critical risk factors of an influenza pandemic from a broader perspective and find out college students' risk perception of these factors. the understanding and cognitive characteristics identified help improve the communication work of the government, which is the aim of this article.

this study refers to the impact map formed by morss et al in the flood risk communication of boulder county and draws on it for the risk factor framework of the influenza pandemic. the entire frame is an analysis of disaster events from a macro perspective, including "causes," "development," "response," "event impact" and "risk information dissemination." then, through literature research and expert consultation, the researchers summarized the concepts of the communication framework and initially formed content suitable for an influenza epidemic. the content of the whole frame consists of the causes of influenza epidemics, the impact of pandemics, emergency preparedness and strategies of different groups, risk information, and emergency response decisions, as shown in figure .
the researchers subsequently searched for the corresponding supporting documents according to the content of the framework and conducted expert seminars. combining the literature materials and expert opinions, the authors initially wrote candidate concept items under each part of the frame. finally, we used the delphi method to invite experts from the related fields to judge the structure, importance and scientific nature of these items.

the purpose of mental model interviews is to determine which concepts or beliefs are "out there" with sufficient frequency that they can reasonably be detected in smaller samples. there is no standard method for determining sample size in the relevant theory and research practice. according to professor morgan's monograph and related research examples, the sample size for a mental model interview should be ~ , at which point new information has reached saturation. based on these research facts, combined with the research designs of lazrus and morss, , we recruited the first respondents from a randomly selected non-medical college by telephone and posters. to avoid confounding bias, these students were also from non-medical majors (including russian, finance, urban planning and marketing) and had not studied medical professional courses. after all the investigations had been completed, we discussed the results and deleted two poor interview results, and then drew a line chart of information saturation according to the number of concepts mentioned by the respondents ( figure ). we found that after the nd interviewee, information saturation began to show a downward trend, and subsequent respondents did not propose new concepts. we believe that the information provided by these respondents meets the sample size required for this study's analysis, because the purpose of a mental model study is not to use statistical methods to analyze the distribution of risk cognition in the population, but to find out which concepts or beliefs are "out there" with some reasonable frequency, so as to help government departments identify what should be focused on when developing guidance programs and health education materials for this population.

the interview began with an open question, such as "please tell us about the pandemic." our investigators guided the respondents to elaborate on their main concepts, then on details of the outbreak, as well as the mitigation measures that should be employed. if the interviewee had experienced emergencies, they were encouraged to talk about their decisions or ideas at that time. the interview results were subsequently transcribed, encoded and classified using the coding software atlas.ti. we also conducted a quantitative analysis of the coded results, created a statistical chart, observed the degree of attention of the respondents, and compared these results with the risk perception of experts to determine the interviewees' understanding of the related concepts and other features. the questions used in this interview refer to a questionnaire in the study of skarlatidou et al. the interview covers the content in figure . two researchers simultaneously coded the results of the interviews. the classification consistency index (holsti reliability) of the coders was subsequently calculated; it fluctuated between . and . , and the average reliability statistic was . . according to the study of boyatzis and burrus, the coding reliability of different trained coders ranges from . to . ; therefore, the reliability of the coders was within the normal range and displayed adequate consistency.
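the holsti reliability index mentioned above is conventionally computed as 2m divided by the sum of the two coders' coding counts, where m is the number of coding decisions on which both coders agree. below is a minimal sketch of this calculation; the concept codes are hypothetical, not the study's actual coding scheme.

```python
# Holsti inter-coder reliability: 2M / (N1 + N2), where M is the number of
# coding decisions the two coders agree on. Example codes are hypothetical.

def holsti_reliability(coder1, coder2):
    """Compute Holsti's coefficient from two coders' sets of coded units."""
    agreements = len(set(coder1) & set(coder2))  # M: decisions both coders made
    return 2 * agreements / (len(coder1) + len(coder2))

# Hypothetical concept codes assigned to one interview transcript
coder_a = {"virus_mutation", "masks", "vaccination", "school_closure", "rumors"}
coder_b = {"virus_mutation", "masks", "vaccination", "hospital_burden", "rumors"}

print(f"Holsti reliability: {holsti_reliability(coder_a, coder_b):.2f}")  # 0.80
```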
figure : information saturation trend provided by respondents. for each respondent's answer, the researchers first mapped scatter plots of the number of concepts noted; then, to better show the increase and decrease in the information provided by the respondents, polylines were used to connect the points. the content of the concepts is derived from the framework of figure and is described by the responses of all respondents.

here are the results of the two rounds of delphi expert consultation. the value of the authority coefficient is . (> . ), which indicates that the study has a good degree of expert authority. , , as shown in table , in the first round of expert consultation the coordination coefficient of the items was . (p< . ), and in the second round the coordination coefficient was . (p< . ), which was better than the first round and indicates that the opinions of the experts were consistent from the perspective of a significance test. finally, we created a communication framework for an influenza pandemic, as shown in figure . it serves as the basis for the question content we investigated with college students and can also be regarded as a kind of "standardized communication content": a respondent may have a higher probability of taking the correct protective measures if he or she has a good understanding of the entire framework.

figure : communication framework of pandemic influenza. the frame is composed of six main conceptual dimensions; the central concepts are the bold labels, and the nd-level concepts in the boxes are their parts. more complicated concepts in the framework are omitted; refer to the coding manual in the appendix. the whole frame contains the concepts, and the arrowheads represent the influence relationships between the parts. the analogy part is listed separately to describe the events associated with the respondents.

note: table shows the statistical coefficient calculation results of the two delphi rounds; the p values of both coefficients meet the requirements.

the researchers counted the percentage of respondents that mentioned each concept item. this study also used a stacked bar chart to show the number of concepts mentioned by the respondents ( figure ). as shown in the graph, we distinguish concepts of different attributes in terms of dimensions (risk factors). the richness of the color visually distinguishes the depth of each interviewee's mental model (the number of concepts mentioned by that interviewee), and we can determine in which dimensions of the experts' risk perception the public is highly aware and in which areas the public lacks awareness. furthermore, the length of the bar reflects the number of concepts mentioned in the dimension: a taller bar reflects more relevant concept items indicated by the respondents and a deeper degree of understanding of the related content. for example, respondents , and knew more about the emergency response decisions during the pandemic, whereas interviewee # was less aware in this regard. figure shows the differences in thinking about the risk of and coping with the influenza pandemic among different groups. even with a higher education level, each college student interviewed displayed significant differences in the depth and detail of their mental model. some of the respondents' mental models appeared particularly "scarce" (such as respondents # and ).
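the "coordination coefficient" reported for the delphi rounds above is kendall's coefficient of concordance (w), as the study's own reference to the "coordination coefficient w test" suggests. the sketch below shows how w and its approximate chi-square significance test are computed; the expert ratings are invented for illustration and are not the study's actual scores.

```python
# Kendall's coefficient of concordance (W) for Delphi expert ratings,
# with an approximate chi-square significance test. Ratings are invented.
import numpy as np
from scipy import stats

def kendalls_w(ratings):
    """ratings: (m experts) x (n items) array of scores; tie correction omitted."""
    m, n = ratings.shape
    # Convert each expert's scores to ranks across the n items
    ranks = np.apply_along_axis(stats.rankdata, 1, ratings)
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # deviation of rank sums
    w = 12 * s / (m ** 2 * (n ** 3 - n))
    chi2_stat = m * (n - 1) * w          # approximate chi-square statistic
    p = stats.chi2.sf(chi2_stat, df=n - 1)
    return w, p

# Hypothetical importance scores from 5 experts for 6 framework items
scores = np.array([
    [5, 4, 3, 5, 2, 4],
    [5, 3, 3, 4, 2, 4],
    [4, 4, 2, 5, 1, 5],
    [5, 4, 3, 4, 2, 3],
    [4, 5, 3, 5, 2, 4],
])
w, p = kendalls_w(scores)
print(f"Kendall's W = {w:.2f}, p = {p:.4f}")
```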
nearly all respondents discussed less information than is contained in the experts' risk perception. only one interviewee (interviewee # ) cited concepts reflecting almost all parts of the communication framework in figure . the other students did not suggest many new concepts in the interview. their conceptual descriptions reflect their concern for specific content as well as common cognitive deficiencies and misunderstandings. the following sections discuss the main features of the interview answers.

the interactions between multiple factors may affect the formation and development of pandemic influenza. several factors mentioned by our respondents are shown in table ; % of the respondents believed that influenza virus variation was an essential cause of the pandemic. they used statements such as "new virus," "virus mutation," and "an unknown virus." additionally, % of the respondents referred to disease surveillance, which included "poor supervision of the source of infection" and "unchecked work", and they were more inclined to use technical terms to express their views (for example, "gene mutation", "isolation treatment", "infrared surveillance", and "take the body temperature"). forty-six percent of the respondents cited characteristics related to the international spread of the pandemic; interviewee # indicated "foreign virus carriers from foreign places into beijing." however, some respondents believed that climate factors could lead to flu cases because they confused pandemic influenza with seasonal flu, such as interviewee # , who answered "when the seasons change, people may catch a cold easily. if they do not pay attention, a pandemic will happen if they don't do that." many respondents ( %) also cited the impact of population density, including densely populated places and cities with larger floating populations being higher-risk areas for influenza. other factors were cited less frequently, by less than % of interviewees, including virus resistance, virulence, avian influenza immunity, and humans' lack of immunity to new viruses.

compared with the experts, the mental models of many of the students interviewed contained only part of the communication framework. although some key factors were cited by most of the respondents, other essential factors were rarely cited or were misunderstood. for example, interviewee # believed that the flu was a "foodborne disease" and "caused by drugs." regarding individuals infected with influenza, no respondents discussed the impact of vulnerable groups on the development of the pandemic, and there was no further detailed description of virus variation. a full understanding of this information can help people to evaluate the risk level in their environment, including which situations may carry a higher risk of infectious disease. another neglected concept is the lethality of the virus: no respondents mentioned this concept or discussed related content. in fact, the case-fatality rate is an important indicator of a new infectious virus. from the perspective of scientific disease control, the case-fatality rate affects whether the virus shows limited regional transmission (for example, the ebola virus, with a case-fatality rate of - %, is only intermittently epidemic in individual countries and regions, with certain limitations in time and space). from the perspective of promoting public participation in disease response, high-risk events can prompt individuals to make protective decisions.
knowing the virulence of the virus can prevent the negative attitude towards personal disease prevention caused by a fluke mentality. as shown in table , approximately % of the respondents discussed the fatality of the flu, while only % of the respondents described the severe symptoms that could occur after infection, such as interviewee # : " . . . if there goes a pandemic, it would be more than a common cold. runny nose and sneezing or, maybe, pneumonia?" none of the respondents cited complications related to influenza infection. even if a real pandemic presented only common symptoms of fever and fatigue in most people, complications such as pneumonia, myocarditis, and bronchitis are the real causes of death in some vulnerable patients. , therefore, although most of the respondents understood that the flu could pose serious health threats, they did not understand how people die as a result of the flu. these misunderstandings may be related to some respondents' personal and one-sided understanding of the pandemic and the lack of targeted health education. for example, interviewee # stated "that is, people usually do not pay attention to clothes, then they catch a cold. it is quite a normal situation every year."

most respondents also discussed the social and economic impacts of the pandemic, and % of the respondents referred to negative effects on schools, shops, public transport, and other infrastructure during the pandemic, such as interviewee # : "schools may shut down . . . the shops outside may be closed because of this disease, and the economy may be seriously affected because everyone will hide at home." most of the types of infrastructure, of which transportation was the most frequently cited, were generally illustrated with examples from the sars or bird flu period, such as interviewee # , who stated, "everyone is not going out at the time of the outbreak . . . wearing a mask if you have to go outside." thirty-two percent of the respondents were worried about hospitals being overburdened with patients during the pandemic. some of the respondents ( %) also imagined disastrous consequences, including the impact of the pandemic on the community. according to interviewee # , "for a long time . . . our life may be threatened, many people steal food and drugs and will be locked inside their house . . . not just the direct impact, it will bring other serious problems."

although the respondents mentioned the relevant concepts in the communication framework, they failed to understand the severe damage that pandemic influenza could cause to individual health; moreover, they were not fully aware of the panic behaviors that can occur during an outbreak. the most common panic behavior is escaping from the epidemic area. avoiding disaster is people's instinctive behavior, especially in an outbreak of infectious disease. in fact, during the outbreak of the novel coronavirus (covid- ) in china, people in some areas fled the outbreak area, and this coincided with the chinese new year holiday: many college students returned home to celebrate the festival, which greatly increased the risk of virus transmission. although these situations did not cause irreparable serious consequences, they greatly interfered with case investigation and disease monitoring in all provinces of the country. surprisingly, % of the respondents believed that the negative impact of a flu pandemic would be minimal or even positive, and nearly all of them stated that it "feels like the pandemic is far away from me."
according to interviewee # , it "is a kind of epidemic disease, but speaking of cold and flu, what is generally not a major disease, easier to treat the feeling, plus the pandemic, it is only a larger scope of infection, right?" this content reflects that some students do not pay substantial attention to public health or their own health. more people choose to passively wait for and accept the strategies and measures employed by the school or the state government; they lack the initiative to understand the relevant information and take preventive actions.

the coping strategies in table are essential to pandemic emergency work and a necessary part of the communication framework in figure . twenty-nine percent of the respondents cited the importance of personal hygiene habits, such as wearing masks and isolating patients; however, few provided detail regarding these aspects. a few respondents described these strategies at the government, organization, and individual levels. most of them referred to "masks" and "be far away from the cough" in the relevant descriptions, without noting details such as whether to use a special mask or how to separate the patient from the family. for example, interviewee # stated: "if it is a more serious situation, we will wear a mask, and then the hospital will be more nervous about the flu . . . " another % of the respondents believed that there was no need to isolate suspected patients, such as interviewee # : "you cannot go to the hospital first because most of the cases are not true flu, to the hospital may be isolated, so look first." regarding the government's decision-making, % of the respondents cited health education and counseling. most were willing to accept the necessary emergency responses; over / of the respondents referred to influenza surveillance, public disinfection, and hospital treatment. these answers demonstrate that these students still make mistakes and lack understanding of the most effective protective decisions, despite their better educational backgrounds and high degree of potential cooperation. moreover, although vaccination is the most effective way to prevent the flu, only two of the respondents said they were willing to receive the flu vaccine; the other respondents said they would not vaccinate themselves if it were not compulsory: "there is no need for voluntary vaccination" (respondents and ), "some vaccines may have side effects . . . it will hurt me" (interviewee ). notably, interviewee # , who originated from hong kong, was able to describe all the individual and government contingency strategies and, in addition to elaborating on the entire process of emergency work, discussed his own experience of avian influenza in hong kong. this reflects the maturity of the hong kong government's risk communication for emergencies and the higher risk awareness it has fostered; the related communication and publicity strategies are worth referencing.

the risk of pandemic influenza can be reduced by timely warnings, access to correct information, and positive attitudes towards communication and interest in the face of threats. as shown in table , % of the respondents had some ability to identify information; % of the respondents chose to obtain their information on pandemic risk from official channels. all respondents were willing to use several methods to search for risk information, including the internet.
however, although first-hand influenza warning and decision-support information originates from the cdc, very few of the respondents ( %) were able to clarify which types of communicators can provide help or to give detailed descriptions on this topic, including the specific types of early warning information available, where the information is, and how it is transmitted. individuals had mastered only the general concept, such as interviewee # : " . . . go to the official website or wechat (to) find how to prevent." regarding health education and publicity, most of the respondents indicated that they would not take the initiative to participate in similar activities. the reasons given were that "traditional lectures are boring" and "the publicity manual was not attractive". moreover, as interviewee # indicated, "i think all of them are theoretical knowledge which can be seen on the internet. if they can tell us something that you need to deal with when an event comes, it would be better." regarding suggestions for future risk communication, most of the respondents were satisfied with the current government's work and had a positive attitude towards an emergency plan in the form of an official guide; they were more focused on "the details of the emergency work" (cited by % of the respondents) and "hope to get an official plan" (cited by % of the respondents). for example, interviewee indicated that " . . . the way must be change, not as before, because the flu is not like a common cold, people will not pay much attention to it. communication, whether it is a family or school, it is best to have some specific suggestions, such as how to wash hands and disinfection, everyone can refer to themselves to do it."

in general, there was a clear difference in the breadth and depth of the overall understanding of pandemic-related information and the communication framework among the students interviewed. as expected, in the context of the communication framework, most of the students' mental models were not as rich as those of the experts. in interpreting risk information, they were more concerned with the critical information necessary to make individual decisions; for example, interviewee # said: "now, i want to know what type of impact will it cause, and what type of protection measures can protect me?" most respondents referred only to the critical concepts in the communication framework, without detailed description or in an inaccurate or unclear manner; these gaps may therefore reduce people's ability to manage their behavior and their compliance with expert opinions. compared with the communication framework in figure , the respondents used personal experience and analogies to produce related concepts and establish the information base they needed to make decisions.

table : the discussion of related items
  complete negation of self-media: %
  dialectical view of self-media: %
  willing to participate in publicity: %
  refusing to participate in publicity: %
  access to information from the authority: %
  access to information from other mass media: %
  obtaining information from other trusted sources: %
  differences between pandemic and seasonal influenza: %
  the influence of rumors: %
note: the table shows the concepts of risk information mentioned by respondents and their suggestions on current government risk communication.
influenza infection often brings many complications, in the heart and lung systems, to those who have low immunity, such as infants and young children, and these are also significant causes of the virus' potential lethality. , the interview results show that some students do not pay sufficient attention to the impact of pandemic influenza and remain optimistic, particularly regarding the lethality of the virus, serious complications, and the identification of vulnerable populations. our respondents trust in the country's sound epidemic prevention system. however, because there is still much unknown information to explore about any virus, the outbreak of a new virus often challenges the health system of a region: virus identification, targeted program formulation, and information release all need time. given these time lags between case generation and interventions, successful disease control requires the public to actively carry out personal protection rather than passively wait for the intervention of government departments. moreover, the fading memory of past epidemics and the lack of targeted health education may also contribute to over-optimism about a pandemic. consequently, those who have inaccurate risk perceptions will regard themselves as "the strongest young people" or "a person having enough understanding about the flu." once a new virus breaks out, these people may also bring misleading information to other individuals in their social circle, which will affect others' emergency decisions. in particular, for those who have experienced an influenza pandemic without being negatively affected, luck may lead them to respond with unwarranted optimism to future pandemics. ,

furthermore, although the h n , h n , and other influenza outbreaks have been derived from new viruses following mutation, the recurrence of old viruses during the flu season, if prevention is neglected, also risks becoming a pandemic. being able to distinguish the key differences between a pandemic and common flu can effectively improve the level of personal risk cognition. among the respondents, we found that some students remained confused: they believed that a pandemic is the mass spread of seasonal influenza, or that a pandemic is an almost impossible "super calamity". moreover, a pandemic is often unpredictable and generally involves international outbreaks. therefore, it is important for the public to understand that the pandemic is not far away from us. we need to pay attention to our own prevention during the flu season and, at the same time, be alert to unusual cold symptoms, especially when going abroad; otherwise, patients may mistakenly think that they are suffering from common influenza and choose to wait or self-medicate, thus delaying diagnosis and treatment, infecting others, and causing serious consequences.

finally, concerning vaccination, our respondents held negative views on this issue. only of the respondents cited the importance of the vaccine and had a history of active vaccination, and the reasons given mainly focused on the conventional "i feel good and don't need vaccination" and "doubts about the safety of vaccines." therefore, our current risk communication seems inadequate in promoting the necessity of vaccination.
the public is not aware of the importance of the vaccine for influenza prevention, a misperception caused by a one-sided understanding of the pandemic, as discussed in "the countermeasures of the pandemic". in an investigation of the willingness of the elderly to be vaccinated, shaoliang geng found that the primary sources of influenza-related knowledge among elderly adults were family, relatives, friends, and television, and the most trusted source of knowledge was doctors. there are gaps between clinical and public health knowledge, and patients lack knowledge about the importance of vaccination. correcting this misunderstanding is vital for college students because it can promote the dissemination of inoculation knowledge by young students within the family, thus improving vaccination in the recommended groups (elderly people and young children).

as discussed in "the acquisition of risk information and public suggestion", in the absence of relevant knowledge and information, the respondents applied personal experiences and analogies to compose the foundation of their mental models and help themselves understand the risk of the pandemic. differences in the understanding of causality between risk factors can also lead to substantial differences in risk perception and coping between individuals. many students know only a few general concepts and have not formed a complete emergency-preparedness mode of thinking within a communication framework: they know something can be done during a pandemic, but not much about what exactly to do or what is truly meaningful. for example, although nearly all respondents cited wearing masks and bringing patients in for timely medical treatment, these most basic measures can be of limited value in a real pandemic, being only the result of personal experience analogies (to a cold or related disease). what's more, for those in an outbreak area, especially those with suspected symptoms, staying at home and seeking the help of local medical institutions is a more correct and effective decision for protecting personal health than concealing facts and escaping from the outbreak area in panic; but none of our respondents knew that. also, most respondents had only basic concepts (the government and the health department) regarding the types of communicators who provide the relevant risk information. these overly broad understandings may limit their ability to rapidly identify critical information, or influence their awareness of specific reports under the threat of severe flu, particularly when their typical sources of information or communication channels are not available or the necessary information is not provided. if the government is unable to offer exact messages or to protect against the harmful spread of information, public trust in official authority may be reduced.

students generally prefer health education with new styles and systematic content. traditional lectures and guideline books full of academic words are far less attractive, and the respondents hoped the government would "reduce the over-generality of the description" and "release relevant data to increase persuasion" in future communication work. foltz's research confirms that it is necessary to use various mechanisms in the risk communication of emergencies. individuals with nonprofessional backgrounds tend to think in more specific terms, their vocabulary is less expansive, and subtle expressions cannot be well understood; bright colors and charts easily attract them.
complex textual information transmission will make people feel tired and irritable. two student respondents also suggested organizing practical exercises if possible, which they think would be more helpful for deepening impressions and understanding the self-protection measures used to cope with a pandemic. information consistency is a decisive factor in understanding and perceiving personal risk. in terms of communication effectiveness, multiple sources of consistent messages are typically more effective than messages from a single source or with differing contents. the earlier people receive a warning and the higher the threat conveyed by the information, the more likely they are to take active preventive measures. therefore, government departments should incorporate outbreak situational information and the proposed measures into influenza warnings, while maintaining the consistency of multiple communication messages.

first, the results of this research reflect some misunderstandings among the respondents that appeared with prominent frequency:
) influenza virus mutation and seasonal influenza have the potential to evolve into a pandemic, and the prevention of common influenza cannot be ignored.
) the impact of an influenza pandemic is often unprecedented, and influenza virus infection can be lethal; in addition to severe cold symptoms, it can also result in severe complications in patients.
) influenza vaccination plays an active role in pandemic prevention and should be actively taken up, particularly by children with low immunity and elderly adults, a vulnerable group.
) for suspected patients in the family, the first choice is social isolation, and it is very dangerous for family members to remain in close contact while relying only on their own protective measures.
it is imperative for individuals to have common knowledge regarding influenza, the correct personal responses, and the degree of risk in their own area in order to make the right decisions. therefore, we suggest that the government treat the above content as the focus when communicating the risks related to a pandemic or formulating the corresponding health education materials, so as to improve the compliance of the audience.

on the other hand, the content of government risk communication should not be limited to medical advice. the public health department should develop response plans for individuals and organizations. in terms of organizations, a pandemic does not directly damage related facilities, in contrast to many other catastrophic events; however, the regular work of employees within an organization will be affected, and the absence of ill employees in central positions will have a severe impact on the regular operation of the organization. therefore, we need to develop a "continuous work plan" for these particular circumstances. the government should release relevant risk information on an influenza pandemic in the form of a preparation plan, or use the internet for distance health education, or guide emergency response work through local radio or television stations.

finally, we should update the channels and methods of risk communication and health education. the government should strengthen the application of new media to adapt to young people's information acquisition preferences. the form of communication can gradually change from traditional lectures to novel approaches, such as public welfare videos, songs, and scene-construction experiences.
moreover, scenario effects can play an essential role in enhancing personal experience, because the analogies encountered can be drawn on in the event of a risk event to facilitate correct risk assessment and response behavior.

references

risk communication for public health emergencies
the perception of risk
risk communication: a mental models approach
rational choice and the framing of decisions
a warning shot: influenza and the flu vaccine shortage
the determinants of trust and credibility in environmental risk communication: an empirical study
news influence on our pictures of the world
health information on the internet: accessibility, quality, and readability in english and spanish
best practices in public health risk and crisis communication
risk perception and communication unplugged: twenty years of progress
risk society. ho po wen translation
acceptable risk
the nature of explanation
rating the risk
"know what to do if you encounter a flash flood": mental models analysis for improving flash flood risk communication and public decision making
an integrated risk model of a drinking-water-borne cryptosporidiosis outbreak
flash flood risks and warning decisions: a mental models study of forecasters, public officials, and media broadcasters in
application of delphi method in screening self-rated health evaluation index system
what do lay people want to know about the disposal of nuclear waste? a mental model approach to the design and development of an online risk communication
validity in the qualitative research interview
the competent manager: a model for effective performance
delphi method and its application in medical research and decision making
research on the structure of public risk communication ability of influenza pandemic in health sector
coordination coefficient w test and its spss implementation
influenza century: review and enlightenment of influenza pandemic in the th century
discrete logistic dynamic model and its parameter identification for the ebola epidemic
modern epidemiology methods and applications. beijing: beijing medical university peking union medical college joint publishing house
research on monitoring and evaluation index system of national essential medicine system in primary health care institutions. hubei: hua zhong university of science and technology
analysis of the clinical characteristics of influenza a (h n )
how does the general public evaluate risk information? the impact of associations with other risks
prevalence and characteristics of children at increased risk for complications from influenza
analysis of the information demand characteristics of public health emergencies of infectious diseases
investigation on knowledge and willingness of influenza vaccination among the elderly over years old in xuchang city
social and hydrological responses to extreme precipitations: an interdisciplinary strategy for post-flood investigation

acknowledgments: the authors would like to acknowledge linxian wang for helping compile the interview questionnaires, making suggestions on interview skills, and finding supporting documents. we also express sincere gratitude to the students involved in the interviews of this research. this research did not involve any experiments or investigations needing ethical approval, and did not receive any specific funding. the authors report no conflicts of interest for this work.
key: cord- -l trtvil authors: kanno, takeshi; moayyedi, paul title: who needs gastroprotection in ? date: - - journal: curr treat options gastroenterol doi: . /s - - - sha: doc_id: cord_uid: l trtvil

purpose of review: peptic ulcer disease (pud) is a recognized complication of non-steroidal anti-inflammatory drugs (nsaids). stress ulcers are a concern for intensive care unit (icu) patients; pud is also an issue for patients taking anticoagulation. helicobacter pylori test and treat is an option for patients starting nsaid therapy, and proton pump inhibitors (ppis) may reduce pud in nsaid patients and other high-risk groups. recent findings: a large number of trials demonstrate that helicobacter pylori eradication reduces pud in nsaid patients. ppi is also effective at reducing pud in this group and is also effective in icu patients and those on anticoagulants. the effect is too modest for ppi to be recommended in everyone, and more research is needed as to which groups would benefit the most. increasing age, past history of pud, and comorbidity are the most important risk factors. summary: h. pylori test and treat should be offered to older patients starting nsaids, while ppis should be prescribed to patients that are at high risk of developing pud and at risk of dying from pud complications.

upper gastrointestinal (gi) bleeding is a major health problem, and mortality from this problem has remained relatively unchanged for the last years [ ] [ ] [ ] . the apparent stability of a - % in-patient -day mortality rate hides significant changes in the epidemiology and management of the condition. major advances have been made in the management of upper gastrointestinal bleeding, including the routine use of proton pump inhibitor therapy after a peptic ulcer bleed, which improves outcomes and probably reduces mortality [ ] . endoscopic therapy also improves the outcomes of peptic ulcer and variceal bleeding [ ] . the age-adjusted rates of peptic ulcer (pu) bleeding have fallen globally over the last years, largely due to the falling prevalence of helicobacter pylori (h. pylori) [ , ] , but a modest contribution may relate to the increasing use of acid suppression in the community [ ] . these positive factors have been balanced by the fluctuating use of non-steroidal anti-inflammatory drugs (nsaids) [ ] and by the increased use of antiplatelet [ ] and anticoagulant therapy [ ] over time. furthermore, the absolute numbers of patients with peptic ulcer bleeding are not falling as dramatically as might be expected due to populations living longer with more comorbidities, which are a major risk factor for both pu bleeding incidence [ ] and death [ ] .
given that pu bleeding remains an important problem, it is helpful to develop strategies that will prevent this complication, particularly as the use of antiplatelet and anticoagulation therapy continues to rise [ ] . there have been recent guidelines [ , ] on nonvariceal upper gastrointestinal bleeding, but these have predominantly focused on management of the problem when it occurs rather than on preventing the complication from happening in the first place. the main approaches to prevent peptic ulcer bleeding are h. pylori screening with treatment of those that are positive, long-term proton pump inhibitor (ppi) therapy, or h receptor antagonist (h ra) therapy. h ra therapy is less effective than ppi [ ] and will not be considered further in this review. in those taking nsaids, there are the additional approaches of replacing them with cyclooxygenase- (cox- ) inhibitors or adding prostaglandin analogues. none of these strategies will be cost-effective if used in the general population, and most guidelines recommend that these interventions should only be used in high-risk groups [ ] . this article will therefore evaluate risk factors for pu complications such as age, nsaid use, concomitant antiplatelet therapy, anticoagulant therapy, admission to intensive care, and severe comorbidities [ ] . we will then summarize the evidence for the efficacy of h. pylori eradication, ppi therapy, cox- inhibition, and prostaglandin analogues in preventing peptic ulcer bleeding and focus on the high-risk groups in which these approaches could be recommended.

the most important determinant of the population attributable risk of pu complications is increasing age. the risk of pu complications is -fold higher in those over the age of years compared to younger age groups [ ] . the vast majority of deaths from pu complications also occur in older age groups, with a -fold increase in mortality in those over compared to those less than years old [ ] . while mortality from pu complications in those under the age of years is very rare, this cut-off is somewhat arbitrary. the risk of pu complications is still modest in a -year-old but steadily increases with advancing age, with a roughly two-fold increase in incidence with every decade [ ] . many risk factors increase with age, and it is difficult to evaluate age separately from other risk factors such as the increasing prevalence of h. pylori, polypharmacy, and comorbidity. nevertheless, it is likely that age is an important independent risk factor for pu complications. the message for the clinician is that gastroprotection is unlikely to be cost-effective in younger age groups and should mainly be considered in those over the age of years. in those over the age of , the threshold to offer gastroprotection should decrease as age increases, with particular consideration given to those over the age of years [ ] .

the potential for nsaids to cause peptic ulcer disease is well known. the analgesic effect of nsaids is mediated through reducing prostaglandin synthesis by inhibiting cyclooxygenase (cox) enzymes. there are two cox isoenzymes; cox- is present in most cells, whereas cox- is present in only a few tissues and is induced by inflammation [ ] . the gastrointestinal toxicity of nsaids is mediated by cox- , and the reduction in gi prostaglandin caused by inhibiting this isoform leads to loss of cytoprotection and an increased risk of peptic ulceration.
all traditional nsaids have a mixture of cox- and cox- inhibitor activity, but the proportions differ, and this is the main reason their gastrointestinal toxicity also varies [ ] . the least toxic are ibuprofen and diclofenac, with relative risks (rr) of around two, followed by naproxen with a rr of four, and the most toxic are piroxicam and ketoprofen with a rr of for the development of peptic ulcer disease [ , ] . low-dose acetylsalicylic acid (asa) also carries an increased risk of peptic ulcer complications, with a rr of approximately . [ ] . a modeling study [ ] from rct and cohort study data suggested that : to : chronic nsaid users will die from peptic ulcer complications attributable to the drug.

adenosine diphosphate-receptor inhibitors such as clopidogrel are typically used after acute coronary syndromes and following percutaneous coronary stenting, as they reduce the risk of future coronary events at least over the next year [ ] . the seminal study [ ] that reported the benefit of dual antiplatelet therapy with clopidogrel and asa in acute coronary syndromes also found that . % developed gi bleeding over the next months. the pathways by which asa causes gastrointestinal mucosal damage are well described, as with all nsaids, but the mechanism by which antiplatelet therapy leads to peptic ulcer bleeding is less clear. inhibition of platelet activity in a peptic ulcer that is already hemorrhaging will aggravate the problem and may lead to more peptic ulcers presenting with bleeding that would otherwise have remained "silent." platelet-derived growth factors promote angiogenesis, and this is important in ulcer healing [ ] ; disruption of these growth factors by clopidogrel may impair peptic ulcer healing and lead to more complications. a population-based cohort [ ] estimated the number needed to harm ranged between and for a gastrointestinal hemorrhage within the first months of clopidogrel compared to those not taking this drug. this excess could be related to bias and confounding factors inherent in database studies, but a systematic review [ ] of rcts supported this finding, and patients on dual antiplatelet therapy had almost twice the rate of gastrointestinal bleeding compared to those taking asa alone.

anticoagulants are commonly used to prevent thromboembolic events in patients with venous thromboembolism, atrial fibrillation, or mechanical heart valves. clinicians and patients are well aware of the risk of bleeding from vitamin k antagonist anticoagulants such as warfarin. the risk of peptic ulcer bleeding is remarkably difficult to quantify, as there are no rcts evaluating the risks of warfarin compared to placebo and there are remarkably few robust epidemiological studies. most older studies follow cohorts of patients taking warfarin with no comparator group [ ] and suggest a large risk. it is generally believed that traditional views of the gi bleeding risks of warfarin are overestimates [ ] , and more contemporary assessments support a more modest increased risk [ ] . a systematic review [ ] of randomized trials comparing anticoagulation with asa in atrial fibrillation found that major bleeding adverse events were more common in the anticoagulation group (or = . ; % ci = . to . ). this covered all bleeding events and was not limited to peptic ulcer bleeding, but if we assume that this also reflects upper gi bleeding and factor in that asa alone also causes an increased risk of bleeding [ ] , the overall risk from vitamin k antagonists is increased approximately three-fold.
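the relationship between a relative risk, a baseline event rate, and the number-needed-to-harm figures quoted in this section can be made concrete with a short sketch. the baseline annual bleeding risk below is an illustrative assumption, not a figure from the cited studies; the rrs of two and four for ibuprofen and naproxen are taken from the text, while the third rr is an invented placeholder.

```python
# Number needed to harm (NNH) from a relative risk and a baseline event rate:
# NNH = 1 / (baseline_risk * (RR - 1)). Baseline risk is an assumption.

def nnh(baseline_risk, relative_risk):
    """Annual NNH given an absolute baseline risk and a relative risk."""
    excess_risk = baseline_risk * (relative_risk - 1)
    return 1 / excess_risk

# Assumed annual baseline risk of PU bleeding in an average-risk adult
baseline = 0.001  # 0.1% per year (illustrative only)

for label, rr in [("ibuprofen (rr ~2, per text)", 2.0),
                  ("naproxen (rr 4, per text)", 4.0),
                  ("higher-risk nsaid (assumed rr)", 10.0)]:
    print(f"{label}: NNH ~ {nnh(baseline, rr):.0f} per year")
```

the sketch makes visible why the nnh estimates in the literature span such wide ranges: doubling the assumed baseline risk halves every nnh, so the same drug can look far more or less harmful depending on the population studied.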
more recently, the non-vitamin k antagonist oral anticoagulants (noacs) have been developed and shown to be more efficacious than warfarin in many settings, particularly related to atrial fibrillation [ ] . as a result, noacs have overtaken the prescription of vitamin k antagonists for atrial fibrillation and deep vein thrombosis in the usa and several other countries [ ] . noacs also cause less intracranial bleeding than vitamin k antagonists but are associated with a greater risk of gi bleeding [ ] . this meta-analysis of rcts [ ] considered all noacs together, and there are significant differences in the risk of gi bleeding between drugs in this class. one database study [ ] suggested that apixaban was associated with less gi bleeding than dabigatran or rivaroxaban, although another found dabigatran was associated with less upper gi bleeding [ ] . interestingly, these database studies found similar rates of gi bleeding with noacs compared to vitamin k antagonists. this is in contrast to rct data and may relate to confounding factors, or to patients outside of rcts having their coagulation less rigorously monitored. the development of noacs has lowered the threshold at which anticoagulation is considered, and they are being used for ever wider indications [ ] . this emphasizes the need to offer gastroprotection to patients taking anticoagulation if they are at high risk of pu bleeding. defining high-risk groups is a challenge, but there is rct evidence [ ] that adding a noac to asa doubles the risk compared to asa alone. there is also cohort evidence [ ] that adding warfarin to clopidogrel triples the bleeding risk.

corticosteroids have a wide range of actions, including profound immunomodulatory effects. they are used in a wide range of inflammatory and autoimmune conditions [ ] , and their adverse event profiles, such as osteoporosis, obesity, mood disorder, diabetes mellitus, and risk of infection, are well known. corticosteroids also delay wound healing, so it is logical that they may also inhibit peptic ulcer healing and be associated with an increased risk of ulcer complications. clinicians are well aware of this putative risk and often provide patients with ulcer prophylaxis [ ] . the rct evidence that they cause peptic ulcer complications is, however, less clear. a systematic review of rcts [ ] did find an approximately % increase in the risk of peptic ulcer bleeding or perforation in those taking corticosteroids. however, the statistically significant effect was only seen in hospitalized patients, with events occurring in only . % of ambulatory patients. these data suggest that the main risk is in patients with other risk factors for peptic ulcer complications, particularly those admitted to hospital, and there is no need to routinely provide gastroprotection to those in the community. the main focus should be on limiting the duration of therapy, given the other adverse events related to corticosteroids, rather than on gastroprotection.

selective serotonin reuptake inhibitors (ssris) are the most commonly prescribed antidepressants [ ] and have been advocated for a variety of psychiatric and medical conditions [ ] . they have a favorable adverse event profile compared to more traditional antidepressants [ ] , but concerns have been raised regarding the risk of gi bleeding [ ] . ssris decrease platelet serotonin, and this can result in reduced platelet aggregation [ ] .
ssris also increase gastric acid production, which could lead to a greater propensity to develop peptic ulceration [ ] . an initial uk database study [ ] did suggest a three-fold increase in gi bleeding in those taking an ssri compared to controls, and this was supported by another cohort study [ ] . there have been no rcts evaluating gi bleeding as an outcome, but further observational data have accrued. a systematic review [ ] identified case-control studies involving almost participants and found an increased risk of upper gi bleeding with ssri therapy compared to controls, with an odds ratio (or) of . ( % confidence interval (ci) = . to . ). the systematic review [ ] also identified four cohort studies, and the increased risk was similar (or = . ; % ci = . to . ). the number needed to harm over year varied between in a low-risk dutch population and in a higher-risk us population [ ] . the systematic review [ ] also evaluated the impact of nsaids on the risk of upper gi bleeding and found at least an additive effect: the or for upper gi bleeding in patients taking ssris alone was . , in those taking nsaids alone it was . , and in those taking both drugs it was . . the number needed to harm for those taking both nsaids and ssris was for a low-risk population and for a higher-risk us population [ ] . these results could be due to bias or residual confounding, as they relate to observational data, but the findings are supported by a hong kong study that attempted to reduce this concern [ ] . this study evaluated ssri users and , non-users and only included patients who had received h. pylori eradication therapy. this approach makes the population more homogeneous, and the investigators further reduced the possibility of confounding by conducting a propensity-matched analysis, which found patients taking ssris had a hazard ratio of . ( % ci = . to . ) for developing upper gi bleeding compared to non-users [ ] .

h. pylori is the leading cause of peptic ulcer disease worldwide [ , ] , and a proportion of both gastric and duodenal ulcers caused by this infection will go on to develop complications. a systematic review of observational studies suggested h. pylori is associated with a two-fold increase in peptic ulcer bleeding [ ] . there also appears to be an interaction between nsaids and h. pylori, as the same systematic review [ ] found an approximately four-fold increased risk of developing peptic ulcer bleeding in those taking nsaids and a -fold increase in patients where both factors were present. a further systematic review [ ] also found a two-fold increase in upper gastrointestinal bleeding in asa users infected with h. pylori compared to those who were not infected. the number needed to treat varied between and depending on the underlying risk of peptic ulcer disease in the population [ ] .

serious comorbidity is associated with peptic ulcer bleeding, although definitions of comorbidity vary between studies [ ] . patients admitted to the intensive care unit (icu) exemplify the risk facing patients with severe stress and comorbidity, with around % developing significant gi bleeding [ ] , which is associated with length of stay and severity of underlying illness [ ] . various scoring systems [ , ] that evaluate risk of mortality from upper gi bleeding include comorbidity as part of the calculations. a systematic review of death from peptic ulcer bleeding [ ] found that mortality was significantly higher in those with comorbidity than in those without.
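a minimal sketch of how the odds ratios and % confidence intervals quoted from these case-control studies are derived from a 2x2 table (the standard woolf/logit method) is shown below; the counts are invented for illustration and are not the cited studies' data.

```python
# Odds ratio with a 95% CI from a 2x2 case-control table (Woolf method).
# Counts are invented for illustration.
import math

def odds_ratio_ci(exposed_cases, unexposed_cases,
                  exposed_controls, unexposed_controls, z=1.96):
    or_ = (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)
    # Standard error of log(OR): sqrt of summed reciprocals of the cell counts
    se = math.sqrt(1 / exposed_cases + 1 / unexposed_cases
                   + 1 / exposed_controls + 1 / unexposed_controls)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: SSRI exposure among upper GI bleed cases vs controls
or_, lo, hi = odds_ratio_ci(exposed_cases=120, unexposed_cases=880,
                            exposed_controls=70, unexposed_controls=930)
print(f"OR = {or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```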
in particular, those with malignancy had a -fold, those with renal disease a -fold, and those with hepatic disease a -fold increased risk of mortality [ ] . respiratory and cardiac disease were each associated with a two-fold risk of dying from a peptic ulcer bleed, and diabetes mellitus with a relative risk of . [ ] . it is important to note that only three of the studies identified in this review were at low risk of bias, so the quality of the evidence is low; nevertheless, the impact of comorbidity seems to be important, and there is less research on this than on many other risk factors for peptic ulcer bleeding.

a past history of peptic ulcer disease is a strong risk factor for future peptic ulcer, although the impact is less strong after successful h. pylori eradication [ ] . there is a paucity of data on the risk of developing complicated peptic ulcer in comparison with population-based controls. systematic review data [ ] suggest that in patients taking nsaids, a previous history of peptic ulcer increases the risk of future peptic ulcer two- to three-fold, and this increases to - -fold for a past history of bleeding peptic ulcer. this is also supported by subgroup analyses of randomized controlled trials [ ] . patients with a previous history of peptic ulcer prescribed oral anticoagulants have a doubling of their risk of having a gi bleed over a -year follow-up [ ] .

there are a number of risk factors for developing peptic ulcer disease complications, but the main focus of research has been on preventing nsaid-related peptic ulcer complications. this is understandable, as this causes one of the highest increases in risk. the strategies that reduce nsaid-related bleeding are adding ppi therapy, substituting a cox- inhibitor, or adding a prostaglandin analogue. the other approach is screening and treating for h. pylori, and this is the only approach that could be considered for patients other than those taking nsaids.

seven days of eradication therapy can heal most patients with h. pylori-positive pud [ ] , and treating the infection also dramatically reduces future ulcer recurrence [ ] . this also applies to bleeding peptic ulcer, as a systematic review of rcts involving infected bleeding patients reported that h. pylori eradication was more effective than anti-secretory therapy in preventing future bleeding recurrence [ ] . the recurrence rate was % in the anti-secretory group and % in the h. pylori eradication group, with a number needed to treat of seven. most guidelines [ , ] therefore recommend testing for h. pylori in those with bleeding pud and treating those infected. randomized controlled trials have shown that population h. pylori screening and treatment reduces the incidence of peptic ulceration in the community [ , ] . the impact on peptic ulcer complications in the general population is less certain, however, as these events are too rare for randomized trials to be powered to detect an impact on this outcome. the rare events observed in these trials highlight that testing for h. pylori is unlikely to be cost-effective in all groups, and any population strategy to screen and treat cannot be instituted on the basis of reduced peptic ulcer complications alone. population h. pylori screening and treatment is advocated in countries that have a high incidence of gastric adenocarcinoma [ , ] , as systematic reviews of randomized controlled trials have shown that this reduces the risk of gastric adenocarcinoma [ , ] .
population h. pylori screening and treatment could increase both the length and quality of life, and it has been estimated that almost million disability-adjusted life years could be gained globally [ , ]. this estimate focuses only on the reduction in gastric cancer; if prevention of peptic ulcer complications were also considered, the disability-adjusted life years gained could be even higher. furthermore, a randomized controlled trial has suggested that population h. pylori test-and-treat could be cost neutral owing to the reduction in dyspepsia in the population [ ] [ ] [ ]. guidelines do not support population h. pylori screening and treatment in north america [ ], but the other benefits that could accrue from this approach suggest the clinician should have a low threshold for instituting this strategy when considering patients who may be at risk of developing pu complications. for example, patients taking only low-dose asa who are h. pylori positive may benefit from eradication therapy, as one study reported that infected patients who had an asa-related pu bleed and were given eradication therapy had a risk of future bleeding similar to that of asa-naïve patients who had never bled [ ]. similarly, a systematic review of rcts [ ] reported that patients allocated to h. pylori eradication had an almost % reduction in the incidence of pud compared to infected nsaid users in the control group. as this involves one course of antibiotics for weeks rather than long-term treatment with acid-suppressive agents, it could be a very cost-effective approach [ ] to preventing nsaid-related ulceration, and guidelines now recommend this strategy [ , ]. however, h. pylori eradication is not as effective as ppi therapy in patients on long-term nsaid therapy [ ], so this strategy is not sufficient for some patient groups. nsaids reduce gastric prostaglandin production, and the resulting loss of mucosal defenses leads to an increased risk of pud [ ]. the main reason that mucosal protection is necessary is the highly acidic environment of the stomach, so blocking acid production should reduce the risk of pud even when there is nsaid-mediated loss of mucosal protection. clinical data support this hypothesis: a systematic review [ ] of rcts involving over , participants demonstrated that ppis reduced pud bleeds by approximately % compared to controls, although the effect was less marked in patients already taking long-term nsaid therapy. overall, the number needed to treat (nnt) was around in these trials, although this was heavily dependent on the underlying risk of the population. ppis also prevented symptomatic and endoscopic ulcers in patients taking nsaids, with nnts of and , respectively [ , ]. a systematic review [ , ] of rcts involving over participants also reported that ppis are effective in reducing pu bleeding related to clopidogrel-based antithrombotic therapy: there was a % reduction in pu bleeds in patients allocated to ppi compared to placebo or famotidine, with an nnt of [ ]. research on the gastroprotective role of ppi therapy has focused on patients taking nsaids and/or asa, but there is a growing number of patients on anticoagulation therapy [ ], and these patients are at increased risk of pu bleeding [ ]. this was evaluated as part of the cardiovascular outcomes for people using anticoagulant strategies (compass) trial [ ].
participants were randomized to rivaroxaban mg twice daily with aspirin mg once daily, rivaroxaban mg twice daily alone, or aspirin mg once daily alone to evaluate cardiovascular death, stroke, or myocardial infarction in these groups [ ]. this was a -by- partial factorial rct, as those not already on a ppi were also randomized to pantoprazole mg or placebo [ ••]. a total of , patients were randomized to ppi or placebo, and there was no statistically significant difference in the primary outcome of the trial, which was clinically significant upper gi events [ ••]. there was a % reduction in gastroduodenal bleeding in the ppi arm, but events were few and the nnt was after years of ppi use. the definitions of pud and pu bleeding were very stringent, and this may explain why the nnt was so high. a post hoc analysis with relaxed definitions was therefore conducted, and this did show a % reduction in bleeding pud, a similar reduction in uncomplicated pud, and a % reduction in gastric erosions in the ppi group. even when these outcomes were combined, the nnt was still around [ ••]. furthermore, the main benefit of ppi therapy was seen in the asa-alone group, emphasizing that ppis have little impact in patients taking anticoagulants alone. the evidence therefore suggests that any benefit of ppis is confined to patients taking nsaids or asa. the final group to consider are patients admitted to the icu, as these patients are at increased risk of bleeding from upper gastrointestinal stress ulceration [ , ]. systematic reviews [ , ] of rcts involving over icu patients found that ppi therapy reduced overt gi bleeding by % with no impact on length of stay, pneumonia, or mortality. ppis were superior to h ras in these reviews, although this is disputed by a network meta-analysis [ ••] of rcts involving over , patients that evaluated clinically important upper gi bleeding as the outcome. this review concluded that both ppis and h ras reduced gi bleeding and that ppis were possibly superior, but the % ci was wide (or = . ; % ci . to . ). the review also highlighted that neither ppis nor h ras are likely to benefit low-risk patients and that prophylaxis should be reserved for those at high risk. the benefits of ppi therapy in preventing pu bleeding should be weighed against the harms of this approach. patients need to take ppi therapy for the duration of risk, which may be life-long in the case of asa users. previously this would have been a significant expense, but as most ppis are now available generically in most countries, the costs have fallen substantially. ppi therapy is also very safe in the short term [ ], but concerns have been raised about the long-term adverse effects associated with these drugs [ ]. ppis have been associated with pneumonia [ ], bone fracture [ ], enteric infections [ ], cardiovascular events [ ], chronic kidney disease [ ], dementia [ ], gastric cancer [ ], and even all-cause mortality [ ]. the list of concerns grows with each passing year, and the latest harms highlighted are an increased risk of renal calculi [ , ] and of covid- [ ]. the problem with all of these associations is that they are based on observational data, usually from administrative databases. these studies have consistently shown that sicker patients tend to be prescribed ppi therapy, and comorbidities are a strong risk factor for developing other diseases [ ]. it is therefore possible that being prescribed ppi therapy is simply a marker of comorbidity and that all of these harms reflect residual confounding [ ].
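the residual-confounding concern can be made concrete with a toy simulation: if sicker patients are both more likely to receive a ppi and more likely to suffer an adverse outcome, a naive comparison will show apparent harm even when the drug has no effect at all. all rates below are invented for illustration.

```python
# Toy simulation of confounding by indication: the PPI has no causal effect on
# the outcome, yet the naive comparison suggests harm because an unmeasured
# comorbidity drives both prescribing and the outcome. Rates are invented.
import numpy as np

rng = np.random.default_rng(3)
n = 500_000
comorbidity = rng.binomial(1, 0.3, n)                        # unmeasured severity
ppi = rng.binomial(1, np.where(comorbidity == 1, 0.6, 0.2))  # channeling to PPI
outcome = rng.binomial(1, np.where(comorbidity == 1, 0.10, 0.02))  # no PPI term

naive_rr = outcome[ppi == 1].mean() / outcome[ppi == 0].mean()
print(f"naive risk ratio: {naive_rr:.2f}")  # well above 1.0 despite no effect

# Stratifying on the confounder (as propensity matching approximates when the
# confounder is measured) removes the spurious association:
for c in (0, 1):
    idx = comorbidity == c
    rr = outcome[(ppi == 1) & idx].mean() / outcome[(ppi == 0) & idx].mean()
    print(f"within comorbidity={c}: RR = {rr:.2f}")  # both close to 1.0
```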
to evaluate this possibility, the rct of ppi therapy in patients taking anticoagulation and/or asa [ ••] described above also prospectively collected information on adverse events [ •]. in over , patient-years of follow-up, there was no difference in the risk of pneumonia, fracture, chronic kidney disease, dementia, myocardial infarction, gastrointestinal cancers, or all-cause mortality between the ppi and placebo groups [ •]. the ppi group had slightly more enteric infections than the placebo group, but the number needed to harm was over for each year of ppi therapy. this trial followed patients for years, and it is possible that adverse events take longer to accrue, but there was no divergence of the event curves over time in the rct [ •]. furthermore, an rct comparing ppi therapy with surgery in reflux patients found no excess adverse events in the ppi arm over years [ ], although this trial was underpowered. finally, there was actually a reduction in mortality in the high-dose ppi arm of a barrett's esophagus trial comparing esomeprazole mg versus mg bid given over a mean of years in over patients [ ]. there are also concerns that ppis may interact with clopidogrel and reduce its efficacy [ •]; this could not be addressed in the compass trial, as participants had to discontinue that drug. however, a systematic review of rcts [ , ] did not find any difference in cardiovascular events between the ppi arm and the placebo/famotidine arms in patients taking clopidogrel, suggesting that the observational findings probably reflect residual confounding. these data suggest that the benefits of ppi therapy outweigh any putative risk, provided the appropriate patients are selected for gastroprotection. the gastrointestinal adverse effects of nsaids largely relate to the cox- activity of these drugs, while their analgesic effects relate to cox- inhibition. cox- selective inhibitors were therefore developed on the principle that they could provide analgesia similar to traditional nsaids without the gastrointestinal events [ ]. systematic reviews of rcts confirmed this hypothesis, with cox- inhibitors having a similar efficacy profile [ ] but a % reduction in endoscopic ulcers [ ] and a % reduction in pu bleeds and pu complications [ ]. cox- inhibitors were initially used widely to protect against nsaid-related gi injury, but enthusiasm for this approach waned once it became apparent from rcts [ , ] that these drugs increase the risk of cardiovascular events. a systematic review [ ] of rcts comparing nsaids/cox- inhibitors with placebo, and of rcts comparing nsaids with another nsaid/cox- inhibitor, confirmed that cox- inhibitors increased the risk of cardiovascular events by about % , and this outweighed any benefit in terms of reduced pu complications. an increase in cardiovascular event risk was also seen with nsaids such as diclofenac and ibuprofen, and the effect appeared as large as with cox- inhibitors [ ]. in contrast, naproxen was not associated with an increased risk of cardiovascular events, suggesting it is a safer nsaid to use [ ]. these data raise the question of whether any nsaid other than naproxen is safe to use in the long term, as a % increase in cardiovascular disease will outweigh any improvement in quality of life for most patients. furthermore, another systematic review of rcts [ ] suggested cox- inhibitors were associated with an increased risk of dementia, highlighting that there may be other risks to taking these drugs long term.
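the systematic reviews cited throughout this section pool trial-level effect estimates, most commonly by inverse-variance weighting on the log odds ratio scale. the sketch below shows the mechanics on hypothetical trial results; it is a generic illustration, not a re-analysis of any cited review.

```python
# Minimal fixed-effect (inverse-variance) pooling of odds ratios, of the kind
# used in the meta-analyses cited above. Inputs are hypothetical trial ORs
# with 95% CIs, not data from the reviews themselves.
import math

trials = [  # (OR, lower 95% CI, upper 95% CI) -- illustrative values only
    (0.55, 0.30, 0.95),
    (0.70, 0.45, 1.10),
    (0.60, 0.35, 1.05),
]

weighted_sum, weight_total = 0.0, 0.0
for or_, lo, hi in trials:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the CI
    w = 1 / se**2                                    # inverse-variance weight
    weighted_sum += w * log_or
    weight_total += w

pooled_log_or = weighted_sum / weight_total
pooled_se = math.sqrt(1 / weight_total)
print(f"pooled OR = {math.exp(pooled_log_or):.2f} "
      f"(95% CI {math.exp(pooled_log_or - 1.96 * pooled_se):.2f}"
      f" to {math.exp(pooled_log_or + 1.96 * pooled_se):.2f})")
```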
another approach to gastroprotection is to replace the upper gastrointestinal prostaglandin deficiency caused by nsaids with a prostaglandin analogue. misoprostol, a synthetic prostaglandin e analogue, dramatically reduced endoscopic ulcers in a systematic review of rcts involving almost patients taking nsaids, with an nnt of [ ]. there was early promise [ ] that this would translate into a reduction in pu complications, but this was not supported by a systematic review of three rcts involving almost patients, in which there was no statistically significant reduction in pu bleeding [ ]. the use of misoprostol is also limited by adverse events such as diarrhea, with up to % of patients withdrawing because of adverse events [ ]. this can be mitigated by lowering the dose [ ], but it remains a significant problem when the drug is used for long-term prophylaxis. prostaglandin analogues therefore cannot be recommended routinely for gastroprotection in patients taking nsaids, although there may be selected patients for whom this is the appropriate drug. for example, rct data suggest that misoprostol may reduce nsaid-related small bowel ulcers detected by video capsule endoscopy [ ]. whether this translates into improved clinical outcomes remains to be determined, but it may be an option for patients with predominantly small bowel ulcer problems who cannot discontinue nsaids. the above evidence provides a framework for selecting which patients should receive gastroprotection. in general, it should be reserved for patients over the age of years taking nsaids or those being admitted to the icu. even in these groups, the risk is not sufficiently high to warrant gastroprotection for everyone [ ••]. ideally, what is required is a validated risk calculator that gives the absolute risk of developing peptic ulcer disease over a given period of time, similar to the calculators used to determine cardiovascular risk [ ]. patients starting long-term nsaid or low-dose asa therapy should all routinely be screened for h. pylori, and those infected should receive eradication therapy with regimens that follow the latest guidelines [ , ]. this will reduce pu complications and will have the added benefits of reducing future dyspepsia and gastric cancer risk; as a one-off treatment, it is likely to be cost-effective. for those over years of age taking long-term nsaids, naproxen should be the drug of choice because of its favorable cardiovascular risk profile. additional risk factors should be ascertained according to the scoring system outlined in table (a hedged sketch of this kind of points-based triage is given below). those who score points or more for naproxen, or points for low-dose asa (as the underlying risk of developing a pu complication is lower than for naproxen), should be offered long-term ppi therapy after careful discussion of the risks and benefits. for patients admitted to the icu, additional risk factors should also be ascertained, as determined by a systematic review [ ] that identified observational studies involving over , icu patients. patients with chronic liver disease and/or coagulopathy should be given prophylactic ppi therapy during their hospital admission [ ], and those who need mechanical ventilation and are also in shock may benefit from ppi therapy [ ]. patients who are discharged should have their ppi discontinued if there is no indication for continued therapy [ ].
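as a rough illustration of the points-based triage just described, the sketch below encodes the decision rule as a function. the risk factors, point values, and thresholds are placeholders, because the numbers in the paper's table are elided in this copy; only the structure (sum the points, compare against a drug-specific cut-off) reflects the text.

```python
# Hedged sketch of a points-based gastroprotection triage. The factor list,
# weights, and cut-offs below are hypothetical placeholders, NOT the published
# scoring system, whose numeric values are elided in this copy.

HYPOTHETICAL_POINTS = {
    "age_over_threshold": 2,
    "prior_peptic_ulcer": 3,
    "prior_ulcer_bleed": 4,
    "concurrent_anticoagulant": 2,
    "concurrent_corticosteroid": 1,
}

def offer_ppi(risk_factors: set, drug: str) -> bool:
    """Return True when the summed points cross the drug-specific threshold.

    The paper applies different thresholds for naproxen and low-dose ASA
    because the underlying risk of an ulcer complication differs between the
    two drugs; the cut-offs here are placeholders.
    """
    score = sum(HYPOTHETICAL_POINTS.get(f, 0) for f in risk_factors)
    threshold = {"naproxen": 3, "low_dose_asa": 4}[drug]  # placeholder cut-offs
    return score >= threshold

# Example: an older patient with a prior ulcer on long-term naproxen.
print(offer_ppi({"age_over_threshold", "prior_peptic_ulcer"}, "naproxen"))  # True
```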
there is a wealth of rct evidence on the benefits of h. pylori eradication and ppi therapy to prevent pu complications in patients taking nsaids. there is also rct evidence on the benefits of ppi therapy in patients taking anticoagulation and for icu patients. it is clear from these trials that these interventions are effective, but high-risk groups need to be identified, and this should be the focus of future research.

conflict of interest. takeshi kanno declares that he has no conflict of interest. paul moayyedi declares that he has no conflict of interest.

papers of particular interest, published recently, have been highlighted as being of major importance.

references:
- trends for incidence of hospitalization and death due to gi complications in the united states from
- acute upper gi bleeding: did anything change? time trend analysis of incidence and outcome of acute upper gi bleeding between
- trends and outcomes of hospitalizations for peptic ulcer disease in the united states
- systematic review and meta-analysis of proton pump inhibitor therapy in peptic ulcer bleeding
- epinephrine injection versus epinephrine injection and a second endoscopic method in high-risk bleeding ulcers
- systematic review: the global incidence and prevalence of peptic ulcer disease
- systematic review of the epidemiology of complicated peptic ulcer disease: incidence, recurrence, risk factors and mortality
- burden and cost of gastrointestinal, liver, and pancreatic diseases in the united states: update
- trends in opioid and nonsteroidal anti-inflammatory use and adverse events
- trends in ambulatory prescribing of antiplatelet therapy among us ischemic stroke patients
- trends in anticoagulant prescribing: a review of local policies in english primary care
- peptic ulcer disease
- mortality from peptic ulcer bleeding: the impact of comorbidity and the use of drugs that promote bleeding
- gastrointestinal bleeding with oral anticoagulation: understanding the scope of the problem
- management of nonvariceal upper gastrointestinal bleeding: guideline recommendations from the international consensus group
- asia-pacific working group consensus on non-variceal upper gastrointestinal bleeding: an update
- ppi versus histamine h receptor antagonists for prevention of upper gastrointestinal injury associated with low-dose aspirin: systematic review and meta-analysis
- canadian association of gastroenterology consensus group. canadian consensus guidelines on long-term nonsteroidal anti-inflammatory drug therapy and the need for gastroprotection: benefits versus risks
- the relative efficacies of gastroprotective strategies in chronic users of nonsteroidal anti-inflammatory drugs
- epidemiology of perforated peptic ulcer: age- and gender-adjusted analysis of incidence and mortality
- comparison of mortality from peptic ulcer bleed between patients with or without peptic ulcer antecedents
- peptic ulcer disease and non-steroidal anti-inflammatory drugs
- new insights into the use of currently available non-steroidal anti-inflammatory drugs
- risk of upper gastrointestinal ulcer bleeding associated with selective cyclo-oxygenase- inhibitors, traditional non-aspirin non-steroidal anti-inflammatory drugs, aspirin and combinations
- individual nsaids and upper gastrointestinal complications
- low doses of acetylsalicylic acid increase risk of gastrointestinal bleeding in a meta-analysis
- quantitative estimation of rare adverse events which follow a biological progression: a new model applied to chronic nsaid use
- effects of clopidogrel in addition to aspirin in patients with acute coronary syndromes without st-segment elevation
- platelets modulate gastric ulcer healing: role of endostatin and vascular endothelial growth factor release
- gastrointestinal events with clopidogrel: a nationwide population-based cohort study
- age-related risks of long term oral anticoagulant therapy
- bleeding risks of antithrombotic therapy
- population-based cohort study of warfarin-treated patients with atrial fibrillation: incidence of cardiovascular and bleeding outcomes
- systematic review of long term anticoagulation or antiplatelet treatment in patients with non-rheumatic atrial fibrillation
- comparison of the efficacy and safety of new oral anticoagulants with warfarin in patients with atrial fibrillation: a meta-analysis of randomised trials
- trends and variation in oral anticoagulant choice in patients with atrial fibrillation
- gastrointestinal safety of direct oral anticoagulants: a large population-based study
- risks and benefits of direct oral anticoagulants versus warfarin in a real world setting: cohort study in primary care
- rivaroxaban with or without aspirin in stable cardiovascular disease
- risk of bleeding in patients with acute myocardial infarction treated with different combinations of aspirin, clopidogrel, and vitamin k antagonists in denmark: a retrospective analysis of nationwide registry data
- corticosteroids: mechanisms of action in health and disease
- "a surviving myth": corticosteroids are still considered ulcerogenic by a majority of physicians
- corticosteroids and risk of gastrointestinal bleeding: a systematic review and meta-analysis
- increased use of antidepressants in canada
- effect of antidepressants and psychological therapies in irritable bowel syndrome: an updated systematic review and meta-analysis
- efficacy and tolerability of selective serotonin reuptake inhibitors compared with tricyclic antidepressants in depression treated in primary care: systematic review and meta-analysis
- selective serotonin reuptake inhibitors and increased bleeding risk: are we missing something?
- serotonin reuptake inhibitor antidepressants and abnormal bleeding: a review for clinicians and a reconsideration of mechanisms
- fluoxetine and sertraline stimulate gastric acid secretion via a vagal pathway in anaesthetised rats
- association between selective serotonin reuptake inhibitors and upper gastrointestinal bleeding: population based case-control study
- use of selective serotonin reuptake inhibitors and risk of upper gastrointestinal tract bleeding: a population-based cohort study
- risk of upper gastrointestinal bleeding with selective serotonin reuptake inhibitors with or without concurrent nonsteroidal anti-inflammatory use: a systematic review and meta-analysis
- risks of hospitalization for upper gastrointestinal bleeding in users of selective serotonin reuptake inhibitors after helicobacter pylori eradication therapy: a propensity score matching analysis
- global prevalence of helicobacter pylori infection: systematic review and meta-analysis
- role of helicobacter pylori infection and non-steroidal anti-inflammatory drugs in peptic-ulcer disease: a meta-analysis
- helicobacter pylori infection and the risk of upper gastrointestinal bleeding in low dose aspirin users: systematic review and meta-analysis
- prevalence and outcome of gastrointestinal bleeding and use of acid suppressants in acutely ill adult intensive care patients
- the attributable mortality and length of intensive care unit stay of clinically important gastrointestinal bleeding in critically ill patients
- risk assessment after acute upper gastrointestinal haemorrhage
- a risk score to predict need for treatment for upper-gastrointestinal haemorrhage
- effect of comorbidity on mortality in patients with peptic ulcer bleeding: systematic review and meta-analysis
- eradication therapy for peptic ulcer disease in helicobacter pylori-positive people
- risk for serious gastrointestinal complications related to use of nonsteroidal anti-inflammatory drugs: a meta-analysis
- misoprostol reduces serious gastrointestinal complications in patients with rheumatoid arthritis receiving nonsteroidal anti-inflammatory drugs
- bleeding risk and major adverse events in patients with previous ulcer on oral anticoagulation therapy
- systematic review and meta-analysis: is -week proton pump inhibitor-based triple therapy sufficient to heal peptic ulcer?
- helicobacter pylori eradication therapy vs. antisecretory non-eradication therapy (with or without long-term maintenance antisecretory therapy) for the prevention of recurrent bleeding from peptic ulcer
- effect of population screening and treatment for helicobacter pylori on dyspepsia and quality of life in the community: a randomised controlled trial
- impact of helicobacter pylori eradication on dyspepsia, health resource use, and quality of life in the bristol helicobacter project: randomised controlled trial
- helicobacter pylori eradication as a strategy for preventing gastric cancer
- second asian-pacific consensus guidelines for helicobacter pylori eradication
- helicobacter pylori eradication for the prevention of gastric neoplasia
- helicobacter pylori eradication therapy to prevent gastric cancer: systematic review and meta-analysis
- the global, regional, and national burden of stomach cancer in countries, - : a systematic analysis for the global burden of disease study
- a community screening program for helicobacter pylori saves money: -year follow-up of a randomised controlled trial
- the cost-effectiveness of population helicobacter pylori screening and treatment: a markov model using economic data from a randomised controlled trial
- clinical trial: prolonged beneficial effect of helicobacter pylori eradication on dyspepsia consultations - the bristol helicobacter project
- acg clinical guideline: treatment of helicobacter pylori infection
- effects of helicobacter pylori infection on long-term risk of peptic ulcer bleeding in low-dose aspirin users
- meta-analysis: role of helicobacter pylori eradication in the prevention of peptic ulcer in nsaid users
- systematic reviews of the clinical effectiveness and cost-effectiveness of proton pump inhibitors in acute upper gastrointestinal bleeding
- guidelines for prevention of nsaid-related ulcer complications
- gastric mucosal defense and cytoprotection: bench to bedside
- effects of gastroprotectant drugs for the prevention and treatment of peptic ulcer disease and its complications: meta-analysis of randomised trials
- prevention of nsaid-induced gastroduodenal ulcers
- proton-pump inhibitors for the prevention of upper gastrointestinal bleeding in adults receiving antithrombotic therapy
- clopidogrel-based antithrombotic therapy for cardiovascular prevention: a systematic review and meta-analysis of randomized controlled trials
- safety of proton pump inhibitors based on a large, multi-year, randomized trial of patients receiving rivaroxaban or aspirin. (of major importance: this trial suggested that ppi therapy reduced the risk of peptic ulcer and peptic ulcer bleeding, but the event rate was too low for this to be cost-effective in all patients taking anticoagulation.)
- proton pump inhibitors versus histamine receptor antagonists for stress ulcer prophylaxis in critically ill patients: a systematic review and meta-analysis
- efficacy and safety of proton pump inhibitors for stress ulcer prophylaxis in critically ill patients: a systematic review and meta-analysis of randomized trials
- efficacy and safety of gastrointestinal bleeding prophylaxis in critically ill patients: systematic review and network meta-analysis
- comprehensive network meta-analysis of all treatment options to prevent stress ulcer bleeding in intensive care patients. (of major importance: acid-suppressive therapy was effective but probably not beneficial in low-risk patients.)
- medical treatments in the short term management of reflux oesophagitis
- complications of proton pump inhibitor therapy
- risk of community-acquired pneumonia and use of gastric acid suppressive drugs
- long-term proton pump inhibitor therapy and risk of hip fracture
- omeprazole as a risk factor for campylobacter gastroenteritis: case-control study
- proton pump inhibitor use and risk of adverse cardiovascular events in aspirin treated patients with first time myocardial infarction: a nationwide propensity score matched analysis
- proton pump inhibitor use and risk of chronic kidney disease
- association of proton pump inhibitors with risk of dementia: a pharmacoepidemiological claims data analysis
- long-term proton pump inhibitors and risk of gastric cancer development after treatment for helicobacter pylori: a population-based study
- risk of death among users of proton pump inhibitors: a longitudinal observational cohort study of united states veterans
- use of proton pump inhibitors increases risk of incident kidney stones
- leaving no stone unturned in the search for adverse events associated with the use of proton pump inhibitors
- increased risk of covid- among users of proton pump inhibitors
- the risks of ppi therapy
- safety of proton pump inhibitors based on a large, multi-year, randomized trial of patients receiving rivaroxaban or aspirin. (of major importance: this trial evaluated the safety of ppis with over , patient-years of follow-up. there was no evidence of harm of ppis apart from a risk of enteric infections, and the data suggested that the various harms of ppis described in observational studies are likely to be overestimated.)
- long-term safety of proton pump inhibitor therapy assessed under controlled, randomised clinical trial conditions: data from the sopran and lotus studies
- esomeprazole and aspirin in barrett's oesophagus (aspect): a randomised factorial trial. (of major importance: the first randomized trial evaluating ppi and aspirin to prevent progression of barrett's esophagus to neoplasia. if this trial started today there would be objection to such high doses of ppi being used for an average of years, but mortality was reduced in the twice daily ppi group. the gi bleeding rate was very low in this trial.)
- nsaid induced gastrointestinal damage and designing gi-sparing nsaids
- efficacy, tolerability, and upper gastrointestinal safety of celecoxib for treatment of osteoarthritis and rheumatoid arthritis: systematic review of randomised controlled trials
- gastrointestinal safety of cyclooxygenase- inhibitors: a cochrane collaboration systematic review
- cardiovascular events associated with rofecoxib in a colorectal adenoma chemoprevention trial
- cardiovascular risk associated with celecoxib in a clinical trial for colorectal adenoma prevention
- (cnt) collaboration. vascular and upper gastrointestinal effects of non-steroidal anti-inflammatory drugs: meta-analyses of individual participant data from randomised trials
- aspirin and other nonsteroidal anti-inflammatory drugs for the prevention of dementia
- misoprostol dosage in the prevention of nonsteroidal anti-inflammatory drug-induced gastric and duodenal ulcers: a comparison of three regimens
- misoprostol for small bowel ulcers in patients with obscure bleeding taking aspirin and non-steroidal anti-inflammatory drugs (masters): a randomised, double-blind, placebo-controlled, phase trial
- framingham-based tools to calculate the global risk of coronary heart disease
- the toronto helicobacter pylori consensus in context: reply
- predictors of gastrointestinal bleeding in adult icu patients: a systematic review and meta-analysis
- prevalence and predictors of inappropriate prescribing according to the screening tool of older people's prescriptions and screening tool to alert to right treatment version criteria in older patients discharged from geriatric and internal medicine wards: a prospective observational multicenter study

key: cord- -ejk wjr authors: crilly, colin j.; haneuse, sebastien; litt, jonathan s. title: predicting the outcomes of preterm neonates beyond the neonatal intensive care unit: what are we missing? date: - - journal: pediatr res doi: . /s - - - sha: doc_id: cord_uid: ejk wjr abstract: preterm infants are a population at high risk for mortality and adverse health outcomes. with recent improvements in survival to childhood, increasing attention is being paid to the risk of long-term morbidity, specifically during childhood and young-adulthood. although numerous tools for predicting the functional outcomes of preterm neonates have been developed in the past three decades, no studies have provided a comprehensive overview of these tools, along with their strengths and weaknesses. the purpose of this article is to provide an in-depth, narrative review of the current risk models available for predicting the functional outcomes of preterm neonates. a total of studies describing separate models were considered. we found that most studies used similar physiologic variables and standard regression techniques to develop models that primarily predict the risk of poor neurodevelopmental outcomes. with a recently expanded knowledge regarding the many factors that affect neurodevelopment and other important outcomes, as well as a better understanding of the limitations of traditional analytic methods, we argue that there is great room for improvement in creating risk prediction tools for preterm neonates. we also consider the ethical implications of utilizing these tools for clinical decision-making. impact: based on a literature review of risk prediction models for preterm neonates predicting functional outcomes, future models should aim for more consistent outcome definitions, standardized assessment schedules and measurement tools, and consideration of risk beyond physiologic antecedents. our review provides a comprehensive analysis and critique of risk prediction models developed for preterm neonates, specifically predicting functional outcomes instead of mortality, to reveal areas of improvement for future studies aiming to develop risk prediction tools for this population.
to our knowledge, this is the first literature review and narrative analysis of risk prediction models for preterm neonates regarding their functional outcomes. preterm infants have long been recognized as a population at high risk for mortality and adverse functional outcomes, including cerebral palsy and intellectual impairment. as mortality rates for preterm neonates decline and more survive to childhood, , attention has increasingly turned towards measuring longer-term morbidities and related functional impairments during childhood and young-adulthood, as well as identifying risk factors related to these complications. , while child-specific characteristics, such as gestational age, birth weight, and sex, are well established as predictors of adverse neurodevelopmental outcomes, - recent work has identified additional factors, including bronchopulmonary dysplasia and family socioeconomic status, that are correlated with relevant outcomes, such as poor neuromotor performance and low intelligence quotient at school age. in clinical settings, the assessment of prognosis can vary widely across neonatologists, making a valid and reliable predictive model for long-term outcomes a highly sought-after clinical tool. moreover, predicting outcomes is vital when deciding which therapeutic interventions to apply, when providing critical data to parents for informed decision-making, and when matching infants with outpatient services to best meet their needs. in addition, prediction models are useful in evaluating neonatal intensive care unit (nicu) performance and allowing for between-center comparisons with proper adjustment for the severity of cases being treated. numerous prediction tools have been developed to quantify the risk of death for preterm neonates in the nicu setting, including the score for neonatal acute physiology (snap) and the clinical risk index for babies (crib). the national institute of child health and human development (nichd) risk calculator, which predicts survival with and without neurosensory impairment, is widely used to counsel families in the setting of threatened delivery at the edges of viability. furthermore, there are numerous other models that use clinical data from the nicu stay to predict the risk of poor functional outcomes in infancy and at school age. , while several studies have categorized and evaluated the risk prediction models developed and validated in recent decades for mortality, , no studies have compared and contrasted risk prediction models for non-mortality outcomes. recently, linsell et al. published a systematic review of risk factor models for neurodevelopmental outcomes in children born very preterm or with very low birth weight (vlbw). however, that review focused primarily on overall trends in model development and validation rather than a detailed consideration of individual models. in this article, we conduct an in-depth, narrative review of the current risk models available for predicting the functional outcomes of preterm neonates, evaluating their relative strengths and weaknesses in variable and outcome selection and considering how risk model development and validation can be improved in the future. towards this, we first provide an overview of the different risk models developed since .
we then frame our review of these models in terms of the outcomes predicted, the range of predictors considered, and the statistical methods used both to select the variables included in the final model and to assess the predictive performance of the model. finally, the ethical implications of integrating risk stratification into standard clinical care for preterm neonates are considered. we conducted a manual search for relevant literature via pubmed, entering combinations of key terms synonymous with "prediction tool," "preterm," and "functional outcome" and reading the abstracts of the resulting studies (table ). studies with abstracts that appeared relevant to our review were then read in full to identify prediction models eligible for inclusion. reference lists of included studies were also reviewed, as were articles that later cited these original studies. prediction tools were defined as multivariable risk factor analyses (> variables) aiming to predict the probability of developing functional outcomes beyond months corrected age. models that solely investigated associations between individual risk factors and outcomes were excluded, as were models that were not evaluated for predictive ability by means of either a validation study or an assessment of overall performance, discrimination, or calibration. tests used to evaluate a model's overall performance were r , adjusted r , and the brier score; the use of a receiver operating characteristic (roc) curve or a c-index evaluated a model's discrimination; and the hosmer-lemeshow test was considered to evaluate a model's calibration (a generic sketch of these three classes of assessment is given below). preterm neonates were defined as those born at < weeks of completed gestational age. models of vlbw neonates < g were also included, since in the past birth weight served as a substitute for measuring prematurity when gestational age could not be accurately determined. models were excluded if they used a cohort entirely composed of infants born prior to january ; those born after were likely to have had surfactant therapy available in the event of respiratory distress syndrome, which significantly reduced morbidity and mortality rates among preterm neonates nationwide. , models were also excluded if they limited their prediction to the outcome of survival, if they incorporated variables measured after initial nicu discharge, or if they included subjects who were not necessarily transferred to a nicu for further care following delivery. finally, we excluded tools that only predicted outcomes to an age of < months corrected age, as well as case reports, narrative reviews, and tools reported in languages other than english.
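for concreteness, the sketch below computes one representative metric from each of the three classes of assessment named above (brier score for overall performance, roc area/c-index for discrimination, and a hosmer-lemeshow-style decile comparison for calibration) using scikit-learn on simulated data; it is a generic illustration, not a re-analysis of any reviewed model.

```python
# Generic model-assessment sketch on simulated data: overall performance,
# discrimination, and calibration. Predictors are stand-ins, not real
# perinatal variables.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))           # stand-ins for perinatal predictors
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]    # arbitrary true risk surface
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]

print("Brier score :", brier_score_loss(y, p))  # overall performance
print("AUC (c-index):", roc_auc_score(y, p))    # discrimination

# Hosmer-Lemeshow-style calibration check: observed vs. predicted event
# rates within deciles of predicted risk.
deciles = np.quantile(p, np.linspace(0, 1, 11))
for lo, hi in zip(deciles[:-1], deciles[1:]):
    idx = (p >= lo) & (p <= hi)
    print(f"predicted {p[idx].mean():.2f}  observed {y[idx].mean():.2f}")
```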
overview of risk prediction models. table lists all studies with risk prediction models that meet the inclusion and exclusion criteria. [ ] [ ] [ ] from these, a total of distinct models were reported. from mortality to neurodevelopmental impairment. since , several mortality prediction tools have been evaluated for their ability to predict the likelihood of neurodevelopmental impairment (ndi) among neonates surviving to nicu discharge. one such model is the crib, which incorporates six physiologic variables collected within the first h of the preterm infant's life: birth weight, gestational age, presence of congenital malformations, maximum base excess, and minimum and maximum fio requirement. fowlie et al. evaluated how crib models obtained at differing time periods over the first days of life predicted severe disability among a group of infants born > weeks gestational age or vlbw. in another study, fowlie et al. incorporated cranial ultrasound findings on day of life along with crib scores between and h of life into their prediction model. subsequent studies analyzed the crib in its original -h form and, with only one exception, determined that it was not a useful tool for predicting long-term ndi or other morbidities. [ ] [ ] [ ] [ ] a second example is the snap score. snap uses physiologic parameters collected over the first h of life to predict survival to nicu discharge, and it was modified to predict ndi at year and at - years of age. a subsequent assessment of both the snap and the snap with perinatal extension showed poor predictive value for morbidity at years of age in children born vlbw and/or at a gestational age of ≤ weeks. finally, the neonatal therapeutic intervention scoring system, a comprehensive exam-based prediction tool for mortality, was found to have poor predictive value for adverse outcomes at years of age in children born very preterm or vlbw. shortened forms of the early physiology-based scoring systems were also developed and assessed for their ability to predict outcomes in childhood. application of the crib-ii in a small cohort (n = ) of infants born < g predicted significant ndi at years of age. however, a subsequent evaluation in a much larger cohort (n = ) of preterm infants < weeks gestational age concluded that the crib-ii did no better than gestational age or birth weight alone in predicting moderate to severe functional disability at - years of age. studies have supported an association between snap-ii and snappe-ii scores and neurodevelopmental outcomes and small head circumference at months corrected age, and high snap-ii scores were shown to correlate with adverse neurological, cognitive, and behavioral outcomes up to years of age within a large cohort (n = ) of children born very preterm. antenatal risk factors. several groups have used data from the nichd's neonatal research network (nrn) to design and test various risk prediction models for extremely low birth weight (elbw) newborns. one of the most widely used risk prediction tools developed from this cohort was that of tyson et al. postnatal morbidity. a large cohort study (n = ) from schmidt et al. , used data from elbw neonates - g enrolled in the international trial of indomethacin prophylaxis in preterms (tipp). they found that the presence of three morbidities at weeks post-menstrual age (bronchopulmonary dysplasia, serious brain injury, and severe retinopathy of prematurity) had a significant and additive effect on the risk of death or poor neurologic outcome at months corrected age. they developed a model from this relationship that has been corroborated in two studies with smaller samples, and by schmidt et al. in a separate, large cohort in which the definition of poor outcome was expanded from solely ndi to "poor general health." , (a simulation sketch of this count-of-morbidities idea is given below.)
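the additive structure of the schmidt et al. morbidity-count model can be illustrated as follows: each additional morbidity multiplies the odds of a poor outcome by a roughly constant factor. the baseline risk and the per-morbidity odds ratio below are assumed values, since the published estimates are elided in this copy.

```python
# Sketch of a count-of-morbidities model: risk rises additively on the
# log-odds scale with each week-36 morbidity present. The baseline risk and
# common per-morbidity odds ratio are hypothetical placeholders.
import math

BASELINE_RISK = 0.18      # assumed risk with none of the three morbidities
OR_PER_MORBIDITY = 2.2    # assumed common odds ratio per morbidity

def predicted_risk(n_morbidities: int) -> float:
    """Risk of death or poor outcome given the number of morbidities (0-3)."""
    odds = (BASELINE_RISK / (1 - BASELINE_RISK)) * OR_PER_MORBIDITY**n_morbidities
    return odds / (1 + odds)

# 0-3 of: bronchopulmonary dysplasia, serious brain injury, severe retinopathy
for n in range(4):
    print(n, f"{predicted_risk(n):.2f}")
```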
letting the machines decide. some innovative work has been performed by ambalavanan et al. , in creating several risk prediction models. along with studies developing risk prediction tools with data from the nrn and the tipp to predict the outcomes of death and ndi, or solely ndi, the group made the only risk prediction tool for the outcome of rehospitalization, both general and specifically for respiratory complications, using a combination of physiologic and socioeconomic variables incorporated into a decision tree approach. they have also been the only group to create neural network-trained models, using the same small cohort to predict major handicap, low mental development index (mdi), or low psychomotor development index (pdi). the advantage of using neural networks (algorithms that can "learn" mathematical relationships between a series of independent variables and a set of outcomes) is the ability to model complex or nonlinear relationships, which the model can elucidate without these relationships having to be specified a priori (as is typically required when using multiple regression models). despite the use of innovative approaches, however, none of these models outperformed those of other studies in predictive strength, nor did any achieve high predictive efficacy.
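for readers unfamiliar with these methods, the sketch below fits both a shallow decision tree and a small feed-forward neural network to simulated nicu-style data with scikit-learn. the features and the outcome rule are invented; this is a generic illustration of the two approaches, not a reconstruction of the ambalavanan et al. models.

```python
# Generic decision-tree and neural-network sketch on simulated NICU-style
# data. Feature names and the outcome-generating rule are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.normal(27, 2, n),     # gestational age (weeks)
    rng.normal(900, 200, n),  # birth weight (g)
    rng.integers(0, 2, n),    # serious brain injury (0/1)
])
# Arbitrary, nonlinear outcome rule for illustration:
y = ((X[:, 0] < 26) & (X[:, 2] == 1) | (X[:, 1] < 700)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)
print("tree accuracy:", tree.score(X, y))
print("net accuracy :", net.score(X, y))
# Neither approach requires the interaction (low GA AND brain injury) to be
# specified in advance, unlike a main-effects regression model.
```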
health-related quality of life (hrql), which distinguishes itself as a personal rather than third-party valuation of a patient's physical and emotional well-being, is being increasingly appreciated as an important metric necessary to fully understand the impact of prematurity. in a french national survey, the majority of neonatologists, obstetricians, and pediatric neurologists stated that predicting hrql in the long term for preterm infants would be beneficial for consulting parents about what additional responsibilities they can anticipate in caring for their child. the trajectory of hrql from childhood to young-adulthood appears to improve in both vlbw and extremely low gestational age populations. prediction modeling might aid in determining which factors could positively or negatively impact hrql in this vulnerable population. finally, we must consider the age at which outcomes are being predicted. it is evident that lower gestational age is inversely proportional to rates of ndi and academic achievement in adolescence. , however, the vast majority of risk prediction models assessed outcomes at the age of years or less, with only three studies doing so at years of age or above. although early childhood outcomes may give clues about later development, many problems do not manifest until later in childhood, such as learning disabilities and certain psychiatric disorders. developmental disability severity can fluctuate throughout childhood, with catch-up occurring in early preterm children and worsening delay in some moderate and late preterm children. , although cohorts of preterm infants are not usually followed for more than several years, likely due to lack of resources and expense, recent studies have used data from national registries to link neonatal clinical data to sampled adults, providing evidence of increased rates of adverse neurodevelopmental, behavioral, and educational outcomes among adults born preterm. , opportunities are therefore available to use long-term data to extend risk prediction models beyond the first few years of life. variable selection. most of the risk models reviewed relied primarily on physiologic and clinical measures obtained during the nicu stay. while an emphasis on biologic risk factors is clearly reasonable given the known associations between perinatal morbidities and long-term outcomes, there is strong evidence in the literature suggesting associations between sociodemographic factors like parental race, education, and age, and outcomes such as cognitive impairment, cerebral palsy, and mental health disorders in children born preterm. more specific socioeconomic variables such as lower parental education, maternal income, insurance status, foreign country of birth by a parent, and socioeconomic status as defined by the elly-irving socioeconomic index have been repeatedly correlated with reduced mental development index, psychomotor development index, intelligence quotient, and social competence throughout childhood. , , [ ] [ ] [ ] [ ] [ ] [ ] the geographic area in which preterm neonates are raised could also have a profound influence on their development. neighborhood poverty rate, high school dropout rate, and place of residence (metropolitan vs. non-metropolitan) have all been correlated with academic skills and rate of mental health disorders among low birth weight children. , only of the models reviewed included socioeconomic variables. 
this may be due, at least in part, to the difficulty of obtaining social, economic, and demographic data; these variables are often not collected upon hospital admission. additionally, socioeconomic information is often poorly, inaccurately, and variably recorded, or is largely missing. some risk prediction models collected socioeconomic variables at the follow-up visit when outcomes were assessed; this is an imperfect method, given that factors such as household setting and family income may change substantially in the years following nicu discharge and affect children's health. , in some models, socioeconomic variables were not included because they did not significantly improve the model's predictive ability. testing the effects of social factors on infant and child outcomes requires samples that are socially and economically diverse, yet even large, diverse study populations may become more homogeneous over time, as subjects of lower socioeconomic status and non-white race are more likely to drop out of studies dependent on long-term follow-up. moreover, treating socioeconomic variables as statistically independent rather than interrelated factors may minimize the impact of contextual information on neurodevelopmental outcomes. model development. of the papers included in the review, reported on de novo risk prediction tools. the other studies either evaluated a previous model or adjusted a prior model by changing the times at which data were collected or by adding additional variables. the approach to prediction tool development was almost uniform among the studies, with nine of the models relying solely on regression techniques to select variables. ambalavanan et al. deviated from this method in three separate studies: two using classification tree analysis, , and one using a four-layer back-propagation neural network. each new model (with the exception of the neural network-based model by ambalavanan et al. , ) depended on an approach in which individual variables were selected and treated as independent of one another as they were analyzed for their ability to predict the outcome of interest. yet variables may, in fact, not act independently. while parsing the roles of potential interrelationships may be computationally onerous, and treating variables independently may lead to a more parsimonious model, this may come at the expense of accuracy. alternative computational approaches are needed to account for the differential likelihoods of certain outcomes along the causal pathway from preterm birth to later childhood outcomes. nonlinear statistical tools should be further utilized in risk prediction model development to examine the relationships between variables and outcomes of interest. machine learning, for instance, is a method of inputting a group of variables and generating a predictive model without assuming independence between the factors or that specific factors will contribute the most to the model. different forms of machine learning have already been employed in nicus to extract the most important variables for predicting outcomes such as days to discharge (a generic sketch is given below).
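a common form of this approach is an ensemble model whose fitted structure yields a ranking of variable importance. the sketch below trains a random forest on simulated nicu stays to rank predictors of length of stay; the features and the outcome-generating rule are invented for illustration and do not reproduce any published model.

```python
# Variable-importance sketch: a random forest trained on simulated NICU stays
# ranks predictors of days to discharge without pre-specifying relationships.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n = 2000
ga = rng.normal(30, 3, n)                              # gestational age (weeks)
bw = 1000 + 120 * (ga - 28) + rng.normal(0, 150, n)    # birth weight (g)
vent_days = rng.poisson(np.maximum(34 - ga, 0.1))      # days ventilated
sex = rng.integers(0, 2, n)
# Arbitrary rule generating length of stay (days):
los = 150 - 3.5 * ga + 0.9 * vent_days + rng.normal(0, 7, n)

X = np.column_stack([ga, bw, vent_days, sex])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, los)
for name, imp in zip(["gest_age", "birth_wt", "vent_days", "sex"],
                     rf.feature_importances_):
    print(f"{name:10s} {imp:.2f}")  # correlated inputs share importance
```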
the non-independence of risk factors is further complicated by the role of time in models of human health and development. the lifecourse framework describes how an accumulation, or "chains," of risk experienced over time and at certain critical periods affects later health outcomes. in the context of preterm birth, the risk of being born early is not uniform across populations and depends on a given set of maternal risks. in turn, the degree of prematurity imparts differential risk for developing complications such as bronchopulmonary dysplasia, necrotizing enterocolitis, or retinopathy of prematurity, and these morbidities then, in turn, increase the risk of further medical and developmental impairment. these time-varying probabilities can be modeled and incorporated into prediction tools to more accurately capture the longitudinal and varying relationships between exposures and outcomes and thereby improve estimates of risk. [ ] [ ] [ ] a final methodological concern regarding model development is whether and how the competing risk of death is considered when the outcome being predicted is non-terminal. consider, for example, the task of developing a model for the risk of ndi at years of age. how one handles death can have a dramatic effect on the model, especially since mortality is relatively high among preterm infants. if death is treated simply as a censoring mechanism, as is often done in time-to-event analyses such as those based on the cox model, the estimated risk of ndi will be distorted: children who die before being diagnosed with ndi are implicitly treated as if they remained at risk, even though they cannot possibly be subsequently diagnosed with ndi. while an alternative would be to use a composite outcome of time to the first of ndi or death, doing so may result in a model that predicts neither event well. instead, one promising avenue is to frame the development of a prediction model for ndi within the semi-competing risks paradigm. , briefly, semi-competing risks refer to settings where one event is a competing risk for the other, but not vice versa; this is distinct from standard competing risks, where each event competes with the other (e.g., death due to one cause or another). to the best of our knowledge, however, semi-competing risks have not been applied to the study of long-term outcomes among preterm infants. (the simulation sketch below illustrates how the handling of death changes the quantity being estimated.)
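the following simulation makes the point concrete by computing three different "risks of ndi" over an illustrative two-year horizon from the same latent event times: the real-world cumulative incidence (death as a competing event), the hypothetical-world quantity targeted when deaths are censored, and the composite ndi-or-death risk. the event rates are arbitrary, and independence of the latent times is assumed purely for simplicity.

```python
# How the handling of death changes the estimand for "NDI by age 2".
# Latent event times are drawn independently for clarity; rates are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
t_ndi = rng.exponential(10.0, n)    # latent time to NDI diagnosis (years)
t_death = rng.exponential(8.0, n)   # latent time to death (years)
h = 2.0                             # prediction horizon (age 2)

real_world = ((t_ndi <= h) & (t_ndi < t_death)).mean()  # cumulative incidence
censor_world = (t_ndi <= h).mean()   # what censoring deaths implicitly targets
composite = (np.minimum(t_ndi, t_death) <= h).mean()    # NDI-or-death

print(f"cumulative incidence of NDI : {real_world:.3f}")
print(f"deaths censored (hypothetical world): {censor_world:.3f}")
print(f"composite NDI or death      : {composite:.3f}")
# The three quantities differ substantially; a semi-competing-risks model
# makes the target explicit instead of leaving it to a censoring convention.
```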
reported odds ratio associations for their -morbidity model, which are not a reliable method of determining the strength of risk prediction tools. future risk model assessments for preterm neonates should at minimum include an roc curve analysis, although assessments of overall performance and calibration would also be helpful. validation with a different sample from the development set is also advised, ideally with a population outside the original cohort. conclusion risk assessment and outcomes prediction are valuable tools in medical decision-making. fortunately, infants born prematurely enjoy ever-increasing likelihood of survival. research over the past several decades has highlighted the many influences, physiologic and psychosocial, affecting neurodevelopment, hrql, and health services utilization. yet, the wealth of knowledge gained from longitudinal studies of growth and development is not reflected in current risk prediction models. moreover, some of the most wellknown and widely used tools today, such as tyson et al.'s fivefactor model, were developed nearly two decades ago. as advances in neonatal intensive care progressively reduce the risk of certain outcomes, it is clear that these older models require updating if they are to be of continued clinical use. it should be recognized that there are potential ethical ramifications to incorporating more psychosocial factors and outcomes into risk prediction models, such as crossing the line from risk stratification to "profiling" patients and offering different treatment decisions based on race or class. however, physician predictions without the aid of prediction tools are highly inconsistent during counseling at the margins of viability, and further research is needed regarding the level of influence that physicians actually have on caregiver decision-making during counseling, as well as the extent to which risk prediction tools would change their approach to counseling. in addition, despite recent innovation in statistical approaches to risk modeling, such as machine learning, most prediction tools rely on standard regression techniques. insofar that risk prediction models will continue to be developed for preterm neonatal care, making use of the clinical data available in most modern electronic health records and taking into consideration the analytic challenges related to unequal prior probabilities of exposures, non-independence of variables, and semi-competing risk can only strengthen our approach to predicting outcomes. we therefore recommend taking a broader view of risk, incorporating these concepts in creating stronger risk prediction tools that can ultimately serve to benefit the long-term care of preterm neonates. c.j.c. and j.s.l. designed and carried out this literature review. c.j.c., j.s.l., and s.h. worked jointly in the analysis and interpretation of the literature review results, as well as the drafting and revision of this article. all three authors gave final approval of the version to be published. on the influence of abnormal parturition, difficult labours, premature birth, and asphyxia neonatorum, on the mental and physical condition of the child, especially in relation to deformities trends in care practices, morbidity, and mortality of extremely preterm neonates survival of infants born at periviable gestational ages outcomes of preterm infants: morbidity replaces mortality institute of medicine committee on understanding premature birth and assuring healthy outcomes. 
institute of medicine committee on understanding premature birth and assuring healthy outcomes. preterm birth: causes, consequences, and prevention
influence of birth weight, sex, and plurality on neonatal loss in united states
preterm neonatal morbidity and mortality by gestational age: a contemporary cohort
gestational age and birthweight for risk assessment of neurodevelopmental impairment or death in extremely preterm infants
neurodevelopmental outcome at years of age of a national cohort of extremely low birth weight infants who were born in -
comparing neonatal morbidity and mortality estimates across specialty in periviable counseling
prognosis and prognostic research: what, why, and how?
neonatal disease severity scoring systems
intensive care for extreme prematurity-moving beyond gestational age
outcome trajectories in extremely preterm infants
prediction of late death or disability at age years using a count of neonatal morbidities in very low birth weight infants
prediction of mortality in very premature infants: a systematic review of prediction models
risk factor models for neurodevelopmental outcomes in children born very preterm or with very low birth weight: a systematic review of methodology and reporting
a primer on predictive models
pulmonary surfactant therapy
the future of exogenous surfactant therapy
nursery neurobiologic risk score and outcome at months
evaluation of the ability of neurobiological, neurodevelopmental and socio-economic variables to predict cognitive outcome in premature infants. child care health dev
increased survival and deteriorating developmental outcome in to week old gestation infants, - compared with -
measurement properties of the clinical risk index for babies-reliability, validity beyond the first hours, and responsiveness over days
predicting the outcomes of preterm neonates beyond the neonatal intensive care unit
predicting outcome in very low birthweight infants using an objective measure of illness severity and cranial ultrasound scanning
is the crib score (clinical risk index for babies) a valid tool in predicting neurodevelopmental outcome in extremely low birth weight infants?
the crib (clinical risk index for babies) score and neurodevelopmental impairment at one year corrected age in very low birth weight infants
can severity-of-illness indices for neonatal intensive care predict outcome at years of age?
neurodevelopment of children born very preterm and free of severe disabilities: the nord-pas de calais epipage cohort study
chronic physiologic instability is associated with neurodevelopmental morbidity at one and two years in extremely premature infants
prediction of neurologic morbidity in extremely low birth weight infants
impact of bronchopulmonary dysplasia, brain injury, and severe retinopathy on the outcome of extremely low-birth-weight infants at months: results from the trial of indomethacin prophylaxis in preterms
impact at age years of major neonatal morbidities in children born extremely preterm
effect of severe neonatal morbidities on long term outcome in extremely low birthweight infants
early prediction of poor outcome in extremely low birth weight infants by classification tree analysis
consequences and risks of < -g birth weight for neuropsychological skills, achievement, and adaptive functioning
clinical data predict neurodevelopmental outcome better than head ultrasound in extremely low birth weight infants
infant outcomes after periviable birth: external validation of the neonatal research network estimator with the beam trial
clinical risk index for babies score for the prediction of neurodevelopmental outcomes at years of age in infants of very low birthweight
nsw and act neonatal intensive care units audit group. can the early condition at admission of a high-risk infant aid in the prediction of mortality and poor neurodevelopmental outcome? a population study in australia
autism spectrum disorders in extremely preterm children
snap-ii and snappe-ii and the risk of structural and functional brain disorders in extremely low gestational age newborns: the elgan study
early postnatal illness severity scores predict neurodevelopmental impairments at years of age in children born extremely preterm
high prevalence/low severity language delay in preschool children born very preterm
identification of extremely premature infants at high risk of rehospitalization
screening for autism spectrum disorders in extremely preterm infants
perinatal risk factors for neurocognitive impairments in preschool children born very preterm
correlation between initial neonatal and early childhood outcomes following preterm birth
bronchopulmonary dysplasia and perinatal characteristics predict -year respiratory outcomes in newborns born at extremely low gestational age: a prospective cohort study
the international neonatal network. the crib (clinical risk index for babies) score: a tool for assessing initial neonatal risk and comparing performance of neonatal intensive care units
score for neonatal acute physiology: a physiologic severity index for neonatal intensive care
neonatal therapeutic intervention scoring system: a therapy-based severity-of-illness index
prediction of death for extremely premature infants in a population-based cohort
parental perspectives regarding outcomes of very preterm infants: toward a balanced approach
risk of developmental delay increases exponentially as gestational age of preterm infants decreases: a cohort study at age years
preterm birth-associated neurodevelopmental impairment estimates at regional and global levels for
late respiratory outcomes after preterm birth
respiratory health in pre-school and school age children following extremely preterm birth
preterm delivery and asthma: a systematic review and meta-analysis
preterm birth, infant weight gain, and childhood asthma risk: a meta-analysis of , european children
preterm birth: risk factor for early-onset chronic diseases
preterm heart in adult life: cardiovascular magnetic resonance reveals distinct differences in left ventricular mass, geometry, and function
right ventricular systolic dysfunction in young adults born preterm
elevated blood pressure in preterm-born offspring associates with a distinct antiangiogenic state and microvascular abnormalities in adult life
preterm birth and the metabolic syndrome in adult life: a systematic review and meta-analysis
prevalence of diabetes and obesity in association with prematurity and growth restriction
prematurity: an overview and public health implications
measurement of quality of life of survivors of neonatal intensive care: critique and implications
quality of life assessment in preterm children: physicians' knowledge, attitude, belief, practice - a kabp study
health-related quality of life and emotional and behavioral difficulties after extreme preterm birth: developmental trajectories
prognostic factors for poor cognitive development in children born very preterm or with very low birth weight: a systematic review
prognostic factors for cerebral palsy and motor impairment in children born very preterm or very low birthweight: a systematic review
evidence for catch-up in cognition and receptive vocabulary among adolescents born very preterm
the economic burden of prematurity in canada
changing definitions of long-term follow-up: should "long term" be even longer?
functional outcomes of very premature infants into adulthood
social competence of preschool children born very preterm
prediction of cognitive abilities at the age of years using developmental follow-up assessments at the age of and years in very preterm children
predicting the outcomes of preterm neonates beyond the neonatal intensive care unit
perinatal risk factors of adverse outcome in very preterm children: a role of initial treatment of respiratory insufficiency?
the relationship between behavior ratings and concurrent and subsequent mental and motor performance in toddlers born at extremely low birth weight
prognostic factors for behavioral problems and psychiatric disorders in children born very preterm or very low birth weight: a systematic review
neurodevelopmental outcomes of extremely low birth weight infants < weeks' gestation between
neighborhood influences on the academic achievement of extremely low birth weight children
mental health outcomes in us children and adolescents born prematurely or with low birthweight
measurement of socioeconomic status in health disparities research
family income trajectory during childhood is associated with adiposity in adolescence: a latent class growth analysis
family income trajectory during childhood is associated with adolescent cigarette smoking and alcohol use
machine learning in medicine: a primer for physicians
predicting discharge dates from the nicu using progress note data
a life course approach to chronic diseases epidemiology, nd edn
a life course approach to adult health
scientists rise up against statistical significance
the asa's statement on p-values: context, process, and purpose
time for clinicians to embrace their inner bayesian? reanalysis of results of a clinical trial of extracorporeal membrane oxygenation
semi-competing risks data analysis: accounting for death as a competing risk when the outcome of interest is nonterminal
beyond composite endpoints analysis: semi-competing risks as an underutilized framework for cancer research
use and misuse of the receiver operating characteristic curve in risk prediction
assessing the performance of prediction models: a framework for traditional and novel measures
novel metrics for evaluating improvement in discrimination: net reclassification and integrated discrimination improvement for normal variables and nested models
multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors
limitations of the odds ratio in gauging the performance of a diagnostic, prognostic, or screening marker
just health: on the conditions for acceptable and unacceptable priority settings with respect to patients' socioeconomic status
key: cord- - zf xr authors: singer, professor donald rj title: health policy implications of the links between cardiovascular risk and covid-19 date: - - journal: health policy technol doi: . /j.hlpt. . . sha: doc_id: cord_uid: zf xr
people who would normally protect themselves against influenza by immunization now have reduced confidence in leaving home. they are therefore much less likely to be prepared to access community health services to receive an influenza immunization. an epidemic of simultaneous influenza and covid-19 is therefore a serious concern: it would result in higher morbidity and mortality in vulnerable people and greater pressure on acute medical services. approaches to improving outcomes of covid-19 include the development of effective vaccines. in the meantime, public health measures are the mainstay for containing the spread of infection with sars-cov-2, complemented by access to high-quality supportive treatment and efforts to develop targeted approaches to reduce infection and disease severity in people at high risk of serious morbidity and death from covid-19.
however, eight months since this new respiratory syndrome was first reported to international authorities, effective test and trace systems have not yet been implemented internationally, even across well-developed healthcare systems. for example, in the uk, reporting of test results within hours has fallen to below %, and one in seven home testing kits is reported to fail to yield a result [ ] . there are major global efforts underway to develop vaccines against covid-19, with candidates as of july entered into clinical studies, including phase 2 and 3 trials [ ] . however, their short- and long-term effectiveness and safety remain to be established, and the usual questions for a new vaccine remain to be answered. will vaccines prevent covid-19, or at least improve prognosis from the infection? will groups at higher risk from covid-19 respond as well as the often healthier volunteers in clinical trials? the timeline also remains uncertain for widespread public protection if and when safe and effective vaccines become available. international networks for pharmacovigilance against adverse effects of covid-19 vaccines are needed; for example, utrecht university in the netherlands has been commissioned by the european medicines agency as a hub for a europe-wide network [ ] . people with comorbidities are more likely to be infected with sars-cov-2, especially those with hypertension, coronary heart disease, diabetes mellitus and obesity. they are also more likely to have worse outcomes from covid-19, with similar associations in reports from, for example, china, the usa and italy [ , ] . people with cardiovascular risk factors or established cardiovascular disease also experience a high case-fatality rate from covid-19 [ , ] . for example, hypertension was reported in % of patients who died [odds ratio for death, . ( % ci: . - . )] in a meta-analysis of over , confirmed covid-19 patients in china [ ] . in the same report, cardiovascular disease [cvd] was associated with a -fold increase in risk of death from covid-19 [ ] . although the elderly are at greater risk of infection and death, younger adults are also at risk, especially those who are obese [ ] and/or from black and asian ethnic minorities [ ] . a recent meta-analysis of almost , subjects [ ] reported that patients with a bmi over kg/m² were ~ % more likely to develop covid-19 and, among those with covid-19, over twice as likely to be admitted to hospital for treatment, ~ % more likely to be admitted to an intensive care unit, and had a ~ % greater mortality than the less overweight. for patients from bame groups, a lower bmi threshold of over kg/m² appeared to be associated with worse severity of covid-19. in addition to being at increased risk of covid-19, obese patients also appear less likely to respond effectively to influenza immunization [ ] . there are therefore concerns that obese people may also respond less well to immunization against sars-cov-2. however, as an example of the global health challenge, and despite international efforts, including the sustainable development goals for health adopted by g countries [ ] , obesity remains an international epidemic. this is despite obesity being recognized as a disease by many organisations [ ] , including the american medical association since 2013, and despite its long-established role as a major contributor to serious disorders of the heart, brain and circulation, as well as many cancers, joint disease and poor mental health. the who estimates that obesity has tripled since 1975 and that by 2016 there were 650 million obese people globally (1.9 billion adults overweight) [ ] .
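as a point of reference for the odds ratios quoted above, the short sketch below shows how an odds ratio and its approximate 95% confidence interval are derived from a 2x2 table using the standard woolf (logit) method; the counts and the function name are hypothetical illustrations, not figures from any of the cited meta-analyses.

    import math

    def odds_ratio_ci(a, b, c, d, z=1.96):
        """odds ratio and approximate 95% confidence interval from a 2x2
        table (woolf/logit method; assumes no zero cells).
           a = exposed cases      b = exposed non-cases
           c = unexposed cases    d = unexposed non-cases"""
        or_ = (a * d) / (b * c)
        se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lower = math.exp(math.log(or_) - z * se_log_or)
        upper = math.exp(math.log(or_) + z * se_log_or)
        return or_, lower, upper

    # hypothetical counts: deaths among hypertensive vs normotensive covid-19 patients
    print(odds_ratio_ci(a=120, b=880, c=60, d=940))

an odds ratio whose confidence interval excludes 1 indicates an association, but, as noted earlier in this document's companion discussion of prediction models, an odds ratio alone says little about predictive performance.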
the reasons why black and ethnic minorities (bame) are more at risk of infection with sars-cov-2 and of worse outcomes from covid-19 are unclear [ ] . for example, in one study in the uk, one third of patients admitted to icu due to covid-19 were from an ethnic minority [ ] , with similar reports from the usa. possible reasons include a higher prevalence in bame populations of cardiovascular risk factors (e.g., hypertension, diabetes mellitus, insulin resistance and obesity), socioeconomic, cultural, or lifestyle factors, and genetic predisposition. there may also be pathophysiological differences in susceptibility or response to infection, due, for example, to an increased prevalence of vitamin d deficiency. an increased inflammatory burden may also contribute to worse outcomes. ace-2 (angiotensin-converting enzyme ii) is the key docking protein by which the covid-19 virus binds to cells [ ] ; it is also the key cell entry receptor used by the initial sars-cov [ ] . ace-2 is mainly found on vascular endothelial cells, the renal tubular epithelium, and the leydig cells of the testis. copies of the ace-2 protein are present in increased numbers in patients with risk factors for heart disease. ace-2 could thus be a therapeutic target in the treatment of covid-19. however, enzymatic activity of ace controls activation of the renin-angiotensin-aldosterone system (raas), a current therapeutic target in cardiovascular and renal disease. there were concerns that common medicines such as ace inhibitors (acei) or angiotensin receptor blockers (arbs), used to treat hypertension or heart failure by inhibiting the renin-angiotensin system, could adversely affect ace-2 expression. however, studies to date in sars-cov-2-infected patients do not suggest that these raas modulators influence susceptibility to the infection or cause more severe covid-19. indeed, in a meta-analysis of almost , patients with covid-19, use of raas inhibitors for any condition showed a trend toward a lower risk of death or critical events (odds ratio . , % ci . to . , p = . ). within the hypertensive cohort, treatment with acei or arbs was associated with one third lower mortality from covid-19 (odds ratio . , ci . to . , p = . ) and a one third reduction in the combined end-point of death and critically severe outcomes (odds ratio . , ci . to . , p = . ) [ ] . this was, however, an observational study, and there is as yet no evidence as to whether adding an acei or arb to treatment would influence the outcome of covid-19. myocardial injury is found in > % of critical cases of covid-19 and presents in two patterns: acute myocardial injury and dysfunction on presentation, and delayed myocardial injury that develops as illness severity intensifies [ ] . there are also potentially serious drug-cardiac disease interactions affecting patients with covid-19 and associated cardiovascular disease, for example from empirical anti-inflammatory treatments [ ] . sars-cov-2 may also cause hypercoagulability, resulting in unexpectedly severe lung damage from widespread thromboses and disseminated intravascular coagulation adding to the lung injury from covid-19 pneumonia [ ] . these features suggest complement-mediated thrombotic microangiopathy as a contributory factor and may give clues to treatment beyond anticoagulation to prevent life-threatening microangiopathy [ , ] . an indirect factor in covid-19-related increased severity of cardiovascular disease is malnutrition in patients self-isolating at home.
this may directly increase the risk of falls, heart attack and stroke, especially when patients continue diuretics and other blood pressure-lowering medicines despite reduced oral intake of food and drink, a recognized cause of hypotension and falls. other indirect reasons for concern about increased prevalence and severity of cardiovascular disease because of the covid-19 pandemic include poorer recognition and control of cardiovascular risk factors and of established serious disorders of the heart, brain and circulation, owing to reduced access to medical services. particularly in less developed countries, public transport is vital for access to health care facilities. both public transport services and medical facilities have been severely limited during covid-19 restrictions, and the availability of funds to pay for medical services has been severely reduced. for example, in india, over % of the country's substantial workforce of million migrant workers lost their jobs overnight, public transport services were critically reduced, and many healthcare facilities closed [ ] . increasing recognition of these links between cardiovascular risk and disease and the severity of covid-19, including mortality, offers opportunities to improve outcomes of covid-19 in the large number of patients with these common disorders. understanding the pathophysiology, and exploring potential solutions and treatments to reverse worse outcomes in patients at increased cardiovascular risk, is a priority for health researchers and clinical health services around the world. this is all the more pressing given the international epidemic of the preventable cardiovascular risk factors that have been linked to increased severity of covid-19. health policy makers also need to take steps to extend influenza immunization to all groups now recognized to be at risk of more serious covid-19, including the obese, others with increased cardiovascular risk, and people from black and other at-risk ethnic minorities. policy makers will need to make extra efforts to ensure that these vulnerable people take part in influenza immunization programmes. this requires measures to ensure that accessing points of care will not put people at risk of acquiring covid-19. policy makers also need to build public awareness of the current extra importance of influenza immunization, and confidence in the safety of accessing medical services. the involvement of policy makers in ensuring sustained financial and social solutions for covid-19 is urgently needed, to complement the efforts against covid-19 of health professionals, regulators, and the pharmaceutical and biotechnology industries. these efforts will not succeed without also addressing the cardiovascular and other factors that contribute to higher risk from covid-19. links to the severity of covid-19 make it all the more pressing for policy makers and public health agencies to address underlying causes and to reduce the incidence and severity of preventable cardiovascular risk.
references
situation updates. website for the european centre for disease prevention and control. accessed th
a pneumonia outbreak associated with a new coronavirus of probable bat origin
uk coronavirus live: daily cases tally jumps by nearly to reach , . uk guardian news website
vaccines and treatment of covid-19. european centre for disease prevention and control
ema to monitor real world use of covid-19 treatments and vaccines
covid-19 and cardiovascular disease: from basic mechanisms to clinical perspectives
clinical determinants for fatality of , patients with covid-19. crit care
individuals with obesity and covid-19: a global perspective on the epidemiology and biological relationships
asian and minority ethnic groups in england are at increased risk of death from covid-19: indirect standardisation of nhs mortality data
obesity impairs the adaptive immune response to influenza virus
soft power and global health: the sustainable development goals (sdgs) era health agendas of the g , g and brics. bmc public health
regarding obesity as a disease: evolving policies and their implications
is ethnicity linked to incidence or outcomes of covid-19?
ace as a therapeutic target for covid-19; its role in infectious processes and regulation by modulators of the raas system
effect of renin-angiotensin-aldosterone system inhibitors in patients with covid-19: a systematic review and meta-analysis
covid-19 cytokine storm: the interplay between inflammation and coagulation
emerging evidence of a covid-19 thrombotic syndrome has treatment implications
the author has no conflict of interest to declare. he is the president of the fellowship of postgraduate medicine, for which health policy and technology is an official journal. during he was a physician and pharmacologist in rwanda within the us aid and us cdc human resources for health program.
key: cord- - v brjf authors: nicholson, felicity title: infectious diseases: the role of the forensic physician date: journal: clinical forensic medicine doi: . / - - - : sha: doc_id: cord_uid: v brjf
infections have plagued doctors for centuries, in both the diagnosis of specific diseases and the identification and subsequent management of the causative agents. there is a constant need for information as new organisms emerge, existing ones develop resistance to current drugs or vaccines, and changes in epidemiology and prevalence occur. in the 21st century, obtaining this information has never been more important. population migration and the relatively low cost of flying mean that unfamiliar infectious diseases may be brought into industrialized countries. an example of this was the outbreak of severe acute respiratory syndrome (sars), first recognized in 2003. despite modern technology and a huge input of money, it took months for the agent to be identified, a diagnostic test to be produced, and a strategy for disease reporting and isolation to be established. there is no doubt that other new and fascinating diseases will continue to emerge.
for the forensic physician, dealing with infections presents two main problems. the first is managing detainees or police personnel who have contracted a disease and may be infectious or unwell. the second is handling assault victims, including police officers, who have potentially been exposed to an infectious disease. the latter can be distressing for those involved, compounded in part by an inconsistency of management guidelines, where such guidelines exist at all. with the advent of human rights legislation, increasing pressure is being placed on doctors regarding the consent and confidentiality of the detainee. it is therefore prudent to preempt such situations before the consultation begins by obtaining either written or verbal consent from the detainee to allow certain pieces of information to be disclosed. if the detainee does not agree, the doctor must decide whether withholding relevant details will endanger the lives or health of those working within custody, or of others with whom the detainee may have had close contact (whether or not deliberate). consent and confidentiality issues are discussed in detail in chapter . adopting a universal approach with all detainees will decrease the risk to staff of acquiring such diseases and will help to prevent unnecessary overreaction and unjustified disclosure of sensitive information. for victims of violent or sexual assault, a more open-minded approach is needed (see also chapter ). if the assailant is known, it may be possible to make an informed assessment of the risk of certain diseases by ascertaining his or her lifestyle. if the assailant is unknown, it is wise to assume the worst. this chapter highlights the most common infections encountered by the forensic physician. it dispels "urban myths" and provides a sensible approach for achieving effective management. the risk of exposure to infections, particularly blood-borne viruses (bbvs), can be minimized by adopting measures that are considered good practice in the united kingdom, the united states, and australia ( ) ( ) ( ). forensic physicians and other health care professionals should wash their hands before and after contact with each detainee or victim. police officers should be encouraged to wash their hands after exposure to body fluids or excreta. all staff should wear gloves when exposure to body fluids, mucous membranes, or nonintact skin is likely. gloves should also be worn when cleaning up body fluids or handling clinical waste, including contaminated laundry. only single-use gloves should be used, and they must conform to the requirements of the relevant european standard or equivalent ( ) ( ) ( ). a synthetic alternative conforming to the same standards should also be available for those who are allergic to latex. all staff should cover any fresh wounds (< hours old), open skin lesions, or breaks in exposed skin with a waterproof dressing. gloves cannot prevent percutaneous injury but may reduce the chance of acquiring a blood-borne viral infection by limiting the volume of blood inoculated. gloves should be worn when taking blood, provided this does not reduce manual dexterity and thereby increase the risk of accidental percutaneous injury.
ideally, a designated person should be allocated to ensure that the clinical room is kept clean and that sharps containers and clinical waste bags are removed regularly. clinical waste must be disposed of in hazard bags, which should never be overfilled. after use, clinical waste should be double-bagged and sealed with hazard tape. the bags should be placed in a designated waste disposal area (preferably outside the building) and removed by a professional company. when cells are contaminated with body fluids, a professional cleaning company should be called to attend as soon as possible; until then, the cell should be deemed "out of action." there is a legal requirement in the united kingdom, under the environmental protection act and the control of substances hazardous to health regulations, to dispose of sharps in an approved container. in the united states, the division of healthcare quality promotion on the centers for disease control and prevention (cdc) web site provides similar guidance. in custody, where sharps containers are transported off site, they must be of an approved type; in the united kingdom, such a requirement is contained within the carriage of dangerous goods (classification, packaging and labelling) and use of transportable pressure receptacles regulations. these measures help to minimize the risk of accidental injury. further precautions include wearing gloves when handling sharps and never bending, breaking, or resheathing needles before disposal. sharps bins should never be overfilled, left on the floor, or placed above the eye level of the smallest member of staff. any bedding that is visibly stained with body fluids should be handled with gloves, and there are only three acceptable ways of dealing with contaminated bedding (described under the relevant subheading). the bbvs that present the greatest cross-infection hazard to staff or victims are those associated with persistent viral replication and viremia: hbv, hcv, hepatitis d virus (hdv), and hiv. in general, the risk of transmission of bbvs arises from possible exposure to blood or other body fluids; the degree of risk varies with the virus concerned and is discussed under the relevant sections. the accompanying figure illustrates the immediate management after a percutaneous injury, mucocutaneous exposure, or exposure through contamination of fresh cuts or breaks in the skin. hbv is endemic throughout the world, with populations showing varying degrees of prevalence. approximately two thousand million people have been infected with hbv, more than million of whom have chronic infection. worldwide, hbv kills about million people each year. with the development of a safe and effective vaccine in , the world health organization (who) recommended that hbv vaccine be incorporated into national immunization programs by in those countries with a chronic infection rate of % or higher, and into all countries by . although countries had achieved this goal by the end of , the poorest countries, often those with the highest prevalence, have been unable to afford it; these include, in particular, china, the indian subcontinent, and sub-saharan africa. people in the early stages of infection or with chronic carrier status (defined by persistence of hepatitis b surface antigen [hbsag] beyond months) can transmit infection. in the united kingdom, the overall prevalence of chronic hbv is approx . - . % ( , ); a detailed breakdown is shown in table . the incubation period is approx weeks to months. as the name suggests, the virus primarily affects the liver.
typical symptoms include malaise, anorexia, nausea, mild fever, and abdominal discomfort, and these may last from days to weeks before the insidious onset of jaundice. joint pain and skin rashes may also occur as a result of immune complex formation. infections in the newborn are usually asymptomatic. (* in the united kingdom, written consent from the contact must be sent with the sample, countersigned by the health care practitioner and, preferably, an independent police officer.) the majority of patients with acute hbv make a full recovery and develop immunity. after acute infection, approx in patients develop liver failure, which may result in death. chronic infection develops in approx % of neonates, approx % of children, and between and % of adults. neonates and children are usually asymptomatic; adults may have only mild symptoms or may also be asymptomatic. approximately - % of chronically infected individuals (depending on age of acquisition) will develop cirrhosis over a number of years. this may also result in liver failure or other serious complications, including hepatocellular carcinoma, although the latter is rare. the overall mortality rate of hbv is estimated at less than %. a person is deemed infectious if hbsag is detected in the blood. in the acute phase of the illness, this can be for as long as months; by definition, if hbsag persists beyond this time, the person is deemed a carrier. carriers are usually infectious for life. the degree of infectivity depends on the stage of disease and the markers present (see table).
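the logic linking serological markers to infectivity and immunity can be summarized in a simplified lookup. the sketch below (python) encodes standard textbook interpretations of the common hbv markers; the function name and structure are illustrative assumptions, and real interpretation requires igm/igg anti-hbc, timing, and clinical context.

    def interpret_hbv_serology(hbsag: bool, anti_hbs: bool, anti_hbc: bool,
                               hbeag: bool = False) -> str:
        """simplified, illustrative interpretation of hbv serology;
        not a substitute for specialist assessment."""
        if hbsag:
            # surface antigen present: potentially infectious; e antigen
            # indicates a higher degree of infectivity
            return "infectious (high infectivity)" if hbeag else "infectious"
        if anti_hbs and anti_hbc:
            return "immune through past infection"
        if anti_hbs:
            return "immune through vaccination"
        if anti_hbc:
            return "past or occult infection; specialist interpretation needed"
        return "susceptible"

    print(interpret_hbv_serology(hbsag=True, anti_hbs=False, anti_hbc=True, hbeag=True))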
approximately % of babies born to mothers who have either acute or chronic hbv become infected, and most will develop chronic hbv. this has been limited by the administration of hbv vaccine to the neonate. in industrialized countries, all prenatal mothers are screened for hbv. vaccine is given to the neonate ideally within the first hours of birth and at least two more doses are given at designated intervals. the who recommends this as a matter of course for all women in countries where prevalence is high. however, the practicalities of administering a vaccine that has to be stored at the correct temperature in places with limited access to medical care means that there is a significant failure of vaccine uptake and response. in industrialized countries, hbv vaccination is recommended for those who are deemed at risk of acquiring the disease. they include the following: . through occupational exposure. . homosexual/bisexual men. . intravenous drug users. . sexual partners of people with acute or chronic hbv. . family members of people with acute or chronic hbv. . newborn babies whose mothers are infected with hbv. if the mother is hbsag positive, then hepatitis b-specific immunoglobulin (hbig) should be given at the same time as the first dose of vaccine. . institutionalized patients and prisoners. ideally, hbv vaccine should be administered before exposure to the virus. the routine schedule consists of three doses of the vaccine given at , , and months. antibody levels should be checked - weeks after the last dose. if titers are greater than miu/ml, then an adequate response has been achieved. in the united kingdom, this is considered to provide protection for - years. in the united states, if an initial adequate response has been achieved, then no further doses of vaccine are considered necessary. vaccine administration after exposure varies according to the timing of the incident, the degree of risk involved, and whether the individual has already been partly or fully vaccinated. an accelerated schedule when the third dose is given months after the first dose with a booster year later is used to prevent postnatal transmission. where risks are greatest, it may be necessary to use a rapid schedule. the doses are given at , , and - days after presentation, again with a booster dose at - months. this schedule is currently only licensed with engerix b. hbig may also be used either alone or in conjunction with vaccine. the exact dose given is age dependent but must be administered by deep intramuscular injection in a different site from the vaccine. in an adult, this is usually into the gluteus muscle. hbig is given in conjunction with the first dose of vaccine to individuals who are deemed at high risk of acquiring disease and the incident occurred within hours of presentation. it is also used for neonates born to mothers who are hbeag-positive. between and % of adults fail to respond to the routine schedule of vaccine. a further full course of vaccine should be tried before deeming the patients as "nonresponders." such individuals involved in a high-risk exposure should be given two doses of hbig administered mo apart. ideally, the first dose should be given within hours after exposure and no later than weeks after exposure. other measures include minimizing the risk of exposure by adopting the safe working practices outlined in subheading . any potential exposures should be dealt with as soon as possible. 
in industrialized countries, blood, blood products, and organs are routinely screened for hbv. intravenous drug users should be encouraged to be vaccinated and to avoid sharing needles or any other drug paraphernalia (see the relevant subheading). for staff or victims in contact with disease, it is wise to have a procedure in place for immediate management and risk evaluation; an example is shown in fig. . although forensic physicians are not expected to administer treatment, it is often helpful to inform the persons concerned what to expect. tables and outline treatment protocols as used in the united kingdom. detainees with disease can usually be managed in custody. if the detainee is bleeding, the cell should be deemed out of action after the detainee has left, until it can be professionally cleaned. contaminated bedding should be dealt with as described earlier. if the detainee has chronic hbv and is taking an antiviral agent (e.g., lamivudine), the treatment course should be continued if possible. hcv is endemic in most parts of the world. approximately % ( million) of the world's population is infected with hcv ( ). for many countries, no reliable prevalence data exist. seroprevalence studies conducted among blood donors have shown that the highest prevalence exists in egypt ( - %); this has been ascribed to contaminated needles used in the treatment of schistosomiasis between the s and the s ( ). intermediate prevalence ( - %) exists in eastern europe, the mediterranean, the middle east, the indian subcontinent, and parts of africa and asia. in western europe, most of central america, australia, and limited regions of africa, including south africa, the prevalence is low ( . - . %). previously, america was included in the low-prevalence group, but a report published in ( ) indicated that almost million americans (i.e., . % of the population) have antibody to hcv, representing either ongoing or previous infection. it also states that hcv accounts for approx % of acute viral hepatitis in america. the lowest prevalence ( . - . %) has been found in the united kingdom and scandinavia. however, within any country there are certain groups with a higher chance of carrying hcv; united kingdom figures are given in table . after an incubation period of - weeks, the acute phase of the disease lasts approx - years. unlike hepatitis a (hav) or hbv, the patient is usually asymptomatic; therefore, the disease is often missed unless the individual has reported a specific exposure and is being monitored. other cases are found by chance, when raised liver enzymes are detected on a routine blood test. a "silent phase" follows the acute phase, when the virus lies dormant and liver enzymes are usually normal; this period lasts approx - years. reactivation may then occur. subsequent viral replication damages the hepatocytes, and liver enzymes rise to moderate or high levels. eighty percent of individuals who are hcv antibody-positive are infectious, regardless of their liver enzyme levels. approximately % of people develop chronic infection, one-fifth of whom progress to cirrhosis. there is a much stronger association with hepatocellular carcinoma than for hbv: an estimated . - . % of patients with hcv-related cirrhosis develop liver cancer ( ). less than % of chronic cases resolve spontaneously. approximately % of cases are parenteral (e.g., needlestick, etc.) ( ).
transmission through the sexual route is not common and only appears to be significant if there is repeated exposure with one or more people infected with hcv. mother-to-baby transmission is considered uncommon but has been reported ( ). theoretically, household spread is also possible through sharing contaminated toothbrushes or razors. because the disease is often silent, there is a need to raise awareness among the general population of how to avoid infection and to encourage high-risk groups to be tested. health care professionals should also be educated to avoid occupationally acquired infection; an example of good practice is outlined earlier. blood or blood-stained body fluids need to be involved for a risk to occur; saliva alone is not deemed a risk. the risk from a single needlestick incident is . % (range - %). contact through a contaminated cut is estimated at %. for penetrating bite injuries there are no data, but a risk is considered to exist only if blood is involved. for transmission through mucous membrane exposure, blood or blood-stained body fluids have to be involved; this may account for the lower-than-expected prevalence among the gay population. follow the immediate management flow chart, making sure all available information is obtained, and inform the designated hospital and/or specialist as soon as possible. if the contact is known, is believed to be immunocompromised, and has consented to provide a blood sample, it is important to tell the specialist, because antibody tests may be spuriously negative. in this instance, a different test should be used (polymerase chain reaction [pcr], which detects viral rna). the staff member/victim will be asked to provide a baseline sample of blood, with further samples at - weeks and again at weeks. if tests are negative at weeks but the risk was deemed high, follow-up may continue for up to weeks. if any of the follow-up samples is positive, the original baseline sample will be tested to ascertain whether the infection was acquired through the particular exposure. it is important to emphasize the need for prompt initial attendance and continued monitoring, because treatment is now available. a combination of ribavirin (an antiviral agent) and interferon α-2b ( ), or the newer pegylated interferons ( ), may be used; this treatment is most effective when started early in the course of infection. unless they are severely ill, detainees can be managed in custody. special precautions are required only if they are bleeding. custody staff should wear gloves if contact with blood is likely. contaminated bedding should be handled appropriately, and the cell cleaned professionally after use. hdv, a defective transmissible virus, was discovered in and requires hbv for its own replication. it has a worldwide distribution in association with hbv, with approx million people infected. the prevalence of hdv is higher in southern italy, the middle east, and parts of africa and south america, occurring in more than % of hbv carriers who are asymptomatic and more than % of those with chronic hbv-related liver disease. despite the high prevalence of hbv in china and southeast asia, hdv in these countries is rare. hdv is associated with acute (coinfection) and chronic (superinfection) hepatitis and can exacerbate pre-existing liver damage caused by hbv. the routes of transmission and at-risk groups are the same as for hbv. staff/victims in contact with a putative exposure, and detainees with disease, should be managed as for hbv.
interferon-α (e.g., roferon) can be used to treat patients with chronic hbv and hdv ( ), although it would not be practical to continue this treatment in the custodial setting. hiv was first identified in 1983, 2 years after the first reports were made to the cdc in atlanta, ga, of an increased incidence of two unusual diseases (kaposi's sarcoma and pneumocystis carinii pneumonia) among the gay population in san francisco. the scale of the virus gradually emerged over the years, and by the end of there were an estimated million people throughout the world living with hiv or acquired immunodeficiency syndrome (aids). more than % of the world's population lives in africa and india. a report by the joint united nations programme on hiv/aids and the who stated that one in five adults in lesotho, malawi, mozambique, swaziland, zambia, and zimbabwe has hiv or aids. there is also expected to be a sharp rise in cases of hiv in china, papua new guinea, and other countries in asia and the pacific over the next few years. in the united kingdom, by the end of , cumulative data recorded , individuals with hiv and aids (including deaths from aids), though this is likely to be an underestimate ( ). from these data, the group still considered at greatest risk of acquiring hiv in the united kingdom is homosexual/bisexual men, with , of the cumulative total falling into this category. among intravenous drug users, the overall estimated prevalence is %, but in london the figure is higher, at . % ( , ). in the s, up to % of users in edinburgh and dundee were reported to be hiv positive, but the majority have now died. individuals arriving from africa or the indian subcontinent must also be deemed a risk group, because % of the world's total cases occur in these areas. the predominant mode of transmission is unprotected heterosexual intercourse. the incidence of mother-to-baby transmission has been estimated at % in europe and approx % in africa. the transmission rates among african women are believed to be much higher owing to a combination of more women with end-stage disease and a higher viral load, and concomitant placental infection, which renders the placenta more permeable to the virus ( , ). the use of antiretroviral therapy during pregnancy, together with advice to avoid breastfeeding, has proven efficacious in reducing both vertical and horizontal transmission among hiv-positive women in the western world. for those in third-world countries, the reality is stark: access to treatment is limited, and there is no realistic substitute for breast milk, which provides a valuable source of antibodies against other life-threatening infections. patients receiving blood transfusions, organs, or blood products where screening is not routinely carried out must also be included. the incubation period is estimated at weeks to months after exposure, depending to some extent on the ability of current laboratory tests to detect hiv antibodies or viral antigen; the development of pcr for viral rna has improved sensitivity. during the acute phase of infection, approx % experience a seroconversion "flu-like" illness. the individual is infectious at this time, because viral antigen (p24) is present in the blood. as antibodies start to form, the viral antigen disappears and the individual enters the latent phase. he or she is noninfectious and remains well for a variable period of time ( - years). development of aids marks the terminal phase of disease.
viral antigen reemerges, and the individual is once again infectious. the onset of aids has been considerably delayed by antiretroviral treatment. parenteral transmission includes needlestick injuries, bites, unscreened blood transfusions, tattooing, acupuncture, and dental procedures where equipment is inadequately sterilized. the risk of transmission is increased with deep penetrating injuries from hollow-bore needles that are visibly bloodstained, especially when the device has previously been in the source patient's (contact's) artery or vein. other routes include mucous membrane exposure (eyes, mouth, and genital mucous membranes) and contamination of broken skin. the higher the viral load in the contact, the greater the risk of transmission; this is more likely at the terminal stage of infection. hiv is transmitted mainly through blood or other body fluids that are visibly bloodstained, with the exception of semen, vaginal fluid, and breast milk. saliva alone is most unlikely to transmit infection; therefore, people who have sustained penetrating bite injuries can be reassured that they are not at risk, provided the contact was not bleeding from the mouth at the time. the risk from a single percutaneous exposure from a hollow-bore needle is low, and a single mucocutaneous exposure is even less likely to result in infection. the risk from sexual exposure varies, although there appears to be a greater risk with receptive anal intercourse than with receptive vaginal intercourse ( ). high-risk fluids include blood, semen, vaginal fluid, and breast milk. there is little or no risk from saliva, urine, vomit, or feces unless they are visibly bloodstained. other fluids that constitute a theoretical risk include cerebrospinal, peritoneal, pleural, synovial, and pericardial fluid. management in custody of staff/victims in contact with disease includes following the immediate management flow chart (fig. ) and contacting the designated hospital/specialist with details of the exposure. where possible, obtain a blood sample from the contact. as with hbv and hcv, blood samples in the united kingdom can be taken only with informed consent. there is no need for the forensic physician to go into details about the meaning of the test, but the contact should be encouraged to attend the genitourinary department (or similar) of the designated hospital to discuss the test results. should the contact refuse to provide a blood sample, any information about his or her lifestyle, ethnic origin, state of health, etc., may help the specialist decide whether postexposure prophylaxis (pep) should be given to the victim. where only saliva is involved in a penetrating bite injury, there is every justification to reassure the victim that he or she is not at risk. if in doubt, always refer. in the united kingdom, the current recommended regimen for pep is combivir ( mg of zidovudine twice daily plus mg of lamivudine twice daily) and a protease inhibitor ( mg of nelfinavir twice daily) given for weeks ( ). pep is given only after a significant exposure to a high-risk fluid, or to any fluid that is visibly bloodstained, where the contact is known or highly likely to be hiv positive. ideally, treatment should be started within an hour of exposure, although it will be considered for up to weeks. it is usually given for weeks, unless the contact is subsequently identified as hiv negative, or the "victim" develops tolerance or toxicity.
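the criteria for offering pep described above can be restated as a simple boolean check. the sketch below (python) is an illustrative restatement of the text with hypothetical function and parameter names; it is not a clinical decision tool, and real decisions rest with the designated specialist.

    def pep_indicated(significant_exposure: bool,
                      fluid_high_risk: bool,
                      fluid_visibly_bloodstained: bool,
                      contact_hiv_positive_or_likely: bool) -> bool:
        """pep is considered only after a significant exposure to a
        high-risk fluid (or any visibly bloodstained fluid) from a
        contact known or highly likely to be hiv positive."""
        risky_fluid = fluid_high_risk or fluid_visibly_bloodstained
        return significant_exposure and risky_fluid and contact_hiv_positive_or_likely

    # saliva-only penetrating bite from a contact of unknown status -> no pep
    print(pep_indicated(True, False, False, False))  # prints: False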
weekly examinations of the "victim" should occur during treatment to improve adherence, monitor drug toxicity, and deal with other concerns. other useful information that may influence the decision whether to treat with the standard regimen or use alternative drugs includes interactions with other medications that the "victim" may be taking (e.g., phenytoin or antibiotics), whether the contact has been on antiretroviral therapy, and whether the "victim" is pregnant. during the second or third trimester, only combivir would be used, because there is limited experience with protease inhibitors. no data exist regarding the efficacy of pep beyond occupational exposure ( ). pep is not considered for exposure to low- or no-risk fluids through any route, or where the source is unknown (e.g., a discarded needle). despite the appropriate use and timing of pep, failures have been reported ( , ). unless they are severely ill, detainees can be kept in custody. every effort should be made to continue any treatment they may be receiving. apply universal precautions when dealing with the detainee, and ensure that contaminated cells and/or bedding are managed appropriately. cases of chicken pox, a highly infectious disease, occur throughout the year but are more frequent in winter and early spring, although this seasonal endemicity is blurring with global warming. in the united kingdom, the highest prevalence occurs in the - year age group, and ninety percent of the population over the age of is immune ( ). a similar prevalence has been reported in other parts of western europe and the united states. in southeast asia, varicella is mainly a disease of adulthood ( ); therefore, people born in these countries who have moved to the united kingdom are more likely to be susceptible to chicken pox. there is a strong correlation between a history of chicken pox and serological immunity ( - %). most adults born and living in industrialized countries with an uncertain or negative history of chicken pox are also seropositive ( - %). in march 1995, a live-attenuated vaccine was licensed for use in the united states, and a policy of vaccinating children and susceptible health care personnel was introduced. in summer , in the united kingdom, glaxosmithkline launched a live-attenuated vaccine called varilrix. in december , the uk department of health, following advice from the joint committee on vaccination and immunisation, recommended the vaccine for nonimmune health care workers who are likely to have direct contact with individuals with chicken pox. any health care worker with no previous history of chicken pox should be screened for immunity and, if no antibodies are found, should receive two doses of vaccine - weeks apart. the vaccine is not currently recommended for children and should not be given during pregnancy. following an incubation period of - days (which may be shorter in the immunocompromised), there is usually a prodromal "flu-like" illness before the onset of the rash; this coryzal phase is more likely in adults. the lesions typically appear in crops, rapidly progressing from red papules through vesicles to open sores that crust over and separate by days. the distribution of the rash is centripetal (i.e., more over the trunk and face than on the limbs), the converse of smallpox. in adults, the disease is often more severe, with lesions involving the scalp and mucous membranes of the oropharynx.
in children, the disease is often mild, unless they are immunocompromised, and they are unlikely to experience complications. in adults (defined as those aged years or older), the picture is rather different ( ). secondary bacterial infection is common but rarely serious, although there is an increased likelihood of permanent scarring. hemorrhagic chicken pox typically occurs on the second or third day of the rash; usually this is limited to bleeding into the skin, but life-threatening melena, epistaxis, or hematuria can occur. varicella pneumonia ranges from patchy lung consolidation to overt pneumonitis and occurs in in cases ( ). it can occur in previously healthy individuals (particularly adults), but the risk is increased in those who smoke; immunocompromised people are at the greatest risk of developing this complication. it runs a fulminating course and is the most common cause of varicella-associated death. fibrosis and permanent respiratory impairment may occur in those who survive. any suspicion of lung involvement is an indication for immediate treatment, and any affected detainee or staff member should be sent to the hospital. involvement of the central nervous system includes several conditions, including meningitis, guillain-barré syndrome, and encephalitis; the latter is more common in the immunocompromised and can be fatal. the infectious period is taken as running from days before the first lesions appear to the end of new vesicle formation, when the last vesicle has crusted over. this is typically - days after onset but may last up to days. the primary route of transmission is direct contact with open lesions of chicken pox; however, it is also spread through aerosols or droplets from the respiratory tract. chicken pox may also be acquired through contact with open lesions of shingles (varicella zoster), but this is less likely, because shingles is less infectious than chicken pox. nonimmune individuals are at risk of acquiring disease; approximately % of the adult population born in the united kingdom and less than % of adults in the united states fall into this category. therefore, if chicken pox is encountered in the custodial setting, it is more likely to involve people born outside the united kingdom (particularly southeast asia) or individuals who are immunocompromised and have lost immunity. nonimmune pregnant women are at risk of developing complications: pneumonia can occur in up to % of pregnant women with chicken pox, and its severity is increased in later gestation ( ). they can also transmit infection to the unborn baby ( ). if infection is acquired in the first weeks of pregnancy, there is a less than % chance of it leading to congenital varicella syndrome. infection in the last trimester can lead to neonatal varicella, unless more than days elapse between the onset of the maternal rash and delivery, when antibodies have time to cross the placenta, leading to mild or inapparent infection in the newborn. in this situation, varicella-zoster immunoglobulin (vzig) should be administered to the baby as soon as possible after birth ( ). staff with chicken pox should stay off work until the end of the infective period (approx - days). those in contact with disease who are known to be nonimmune, or who have no history of disease, should contact the designated occupational health physician. detainees with the disease should not be kept in custody if at all possible (especially pregnant women). if detention is unavoidable, nonimmune or immunocompromised staff should avoid entering the cell or having close contact with the detainee.
nonimmune, immunocompromised, or pregnant individuals exposed to chickenpox should seek expert medical advice regarding the administration of vzig. aciclovir (or a similar antiviral agent) should be given as soon as possible to people who are immunocompromised with chicken pox. it should also be considered for anyone over years old, because they are more likely to develop complications. anyone suspected of severe complications should be sent straight to the hospital. after chicken pox, the virus lies dormant in the dorsal root or cranial nerve ganglia but may re-emerge, typically involving one dermatome ( ). the site of involvement depends on the sensory ganglion initially involved. shingles is more common in individuals over the age of years, except in the immunocompromised, in whom attacks can occur at an earlier age. the latter are also more susceptible to secondary attacks and involvement of more than one dermatome. bilateral zoster is even rarer but is associated with a higher mortality. in the united kingdom, there is an estimated incidence of . - . per person-years ( ). there may be a prodromal period of paraesthesia and burning or shooting pains in the involved segment. this is usually followed by the appearance of a band of vesicles. rarely, the vesicles fail to appear and only pain is experienced; this is known as zoster sine herpete. in individuals who are immunocompromised, disease may be prolonged and dissemination may occur, but it is rarely fatal. shingles in pregnancy is usually mild. the fetus is only affected if viremia occurs before maternal antibody has had time to cross the placenta. the most common complication of shingles is postherpetic neuralgia, occurring in approx % of cases. it is defined as pain lasting more than days from rash onset ( ). it is more frequent in people over years and can lead to depression. it is rare in children, including those who are immunocompromised. involvement of the central nervous system includes encephalitis and involvement of motor neurons, leading to ptosis, paralysis of the hand, facial palsy, or contralateral hemiparesis. involvement of the ophthalmic division of the trigeminal ganglion can cause serious eye problems, including corneal scarring. shingles is far less infectious than chicken pox and is only considered to be infectious up to days after lesions appear. shingles is only infectious after prolonged contact with lesions; unlike chickenpox, airborne transmission is not a risk. individuals who are immunocompromised may reactivate the dormant virus and develop shingles. people who have not had primary varicella are at risk of developing chickenpox after prolonged direct contact with shingles. contrary to popular belief, people who are immunocompetent and have had chicken pox do not develop shingles merely through contact with either chicken pox or shingles; such occurrences are coincidental, unless immunity is lowered. staff with shingles should stay off work until the lesions are healed, unless they can be covered. staff who have had chickenpox are immune (including pregnant women) and are therefore not at risk. if they are nonimmune (usually accepted as those without a history of chicken pox), they should avoid prolonged contact with detainees with shingles. pregnant nonimmune women should avoid contact altogether. detainees with the disease may be kept in custody, and any exposed lesions should be covered.
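the exposure rules for chicken pox and shingles above reduce to a small decision table. the following is a minimal python sketch of how that guidance could be encoded for quick reference; the function name, categories, and return strings are illustrative assumptions of ours, not part of any published tool, and durations elided in the source are deliberately not filled in.

```python
# Hypothetical decision helper encoding the varicella-zoster exposure
# guidance described above. All names and strings are illustrative.

def varicella_contact_advice(immune: bool, immunocompromised: bool,
                             pregnant: bool, source: str) -> str:
    """Advice for a staff member exposed to 'chickenpox' or 'shingles'.

    immune: prior chicken pox or documented antibodies (the text notes a
    strong correlation between a history of chicken pox and seropositivity).
    """
    if source not in ("chickenpox", "shingles"):
        raise ValueError("source must be 'chickenpox' or 'shingles'")

    if source == "chickenpox":
        # Nonimmune, immunocompromised, or pregnant contacts of chicken pox
        # should seek expert advice regarding VZIG.
        if not immune or immunocompromised or pregnant:
            return "seek expert medical advice regarding VZIG"
        return "immune: no specific action needed"

    # Shingles: risk only after prolonged direct contact with lesions,
    # and only for the nonimmune; airborne transmission is not a risk.
    if immune and not immunocompromised:
        return "immune: not at risk"
    if pregnant:
        return "nonimmune and pregnant: avoid contact altogether"
    return "nonimmune: avoid prolonged contact with lesions"


if __name__ == "__main__":
    # Example: a nonimmune pregnant staff member exposed to chicken pox.
    print(varicella_contact_advice(False, False, True, "chickenpox"))
```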
it is well documented that prompt treatment attenuates the severity of the disease, reduces the duration of viral shedding, hastens lesion healing, and reduces the severity and duration of pain. it also reduces the likelihood of developing postherpetic neuralgia ( ). prompt treatment with famciclovir (e.g., mg three times a day for days) should be initiated if the onset of the rash was days or less before presentation. it should also be considered after this time if the detainee is over age years. pregnant detainees with shingles can be reassured that there is minimal risk for both the mother and the unborn child. expert advice should be sought before initiating treatment for the mother. scabies is caused by a tiny parasitic mite (sarcoptes scabiei) that has infested humans for more than years. experts estimate that in excess of million cases occur worldwide each year. the female mite burrows into the skin, especially around the hands, feet, and male genitalia, in approx . min. eggs are laid and hatch into larvae that travel to the skin surface as newly developed mites. the mite causes intense itching, which is often worse at night and is aggravated by heat and moisture. the irritation spreads outside the original point of infection as a result of an allergic reaction to mite feces. this irritation may persist for approx weeks after treatment but can be alleviated by antihistamines. crusted scabies is a far more severe form of the disease. large areas of the body may be involved. the crusts hide thousands of live mites and eggs, making them difficult to treat. this so-called norwegian scabies is more common in the elderly or the immunocompromised, especially those with hiv. after a primary exposure, it takes approx - weeks before the onset of itching. however, further exposures reduce the incubation time to approx - days. without treatment, the period of infectivity is assumed to be indefinite. with treatment, the person should be considered infectious until the mites and eggs are destroyed, usually - days. crusted scabies is highly infectious. because transmission is through direct skin-to-skin contact with an infected individual, gloves should be worn when dealing with individuals suspected of infestation. usually prolonged contact is needed, unless the person has crusted scabies, where transmission occurs more easily. the risk of transmission is much greater in households where repeated or prolonged contact is likely. because mites can survive in bedding or clothing for up to hours, gloves should also be worn when handling these items. bedding should be treated using one of the methods in subheading . . professional cleaning of the cell is only warranted in cases of crusted scabies. the preferred treatment for scabies is either permethrin cream ( %) or aqueous malathion ( . %) ( ). either treatment has to be applied to the whole body and should be left on for at least hours in the case of permethrin and hours for malathion before being washed off. lindane is no longer considered the treatment of choice, because there may be complications in pregnancy ( ). treatment in custody may not be practical but should be considered when the detainee is believed to have norwegian scabies. like scabies, head lice occur worldwide and are found in the hair close to the scalp. the eggs, or nits, cling to the hair and are difficult to remove, but they are not harmful. if nits are seen, lice are certain to be present as well; the latter are best seen when the hair is wet. the lice bite the scalp and suck blood, causing intense irritation and itching.
head lice can only be passed by direct hair-to-hair contact. it is only necessary to wear gloves when examining the head. the cell does not need to be cleaned after use, because the lice live on or near the skin. bedding may be contaminated with shed skin, so it should be handled with gloves and laundered or incinerated. the presence of live lice is an indication for treatment, by either physical removal with a comb or the application of an insecticide. the latter may be more practical in custody. treatment using . % aqueous malathion should be applied to dry hair and washed off after hours. the hair should then be shampooed as normal. crab or body lice are more commonly found in the pubic, axillary, chest, and leg hair; however, eyelashes and eyebrows may also be involved. they are associated with people who do not bathe or change clothes regularly. the person usually complains of intense itching or irritation. the main route is from person to person by direct contact, but eggs can stick to fibers, so clothing and bedding should be handled with care (see subheading . . ). staff should always wear gloves if they are likely to come into contact with any hirsute body part. clothing or bedding should be handled with gloves and either laundered or incinerated. treatment of a detainee in custody is good in theory but probably impractical, because the whole body has to be treated. fleas lay eggs on floors, carpets, and bedding. in the united kingdom, most flea bites come from cats or dogs. flea eggs and larvae can survive for months and are reactivated in response to animal or human activity. because animal fleas jump off humans after biting, most detainees with flea bites will not have fleas, unless the fleas are human fleas. treatment is only necessary if fleas are seen. after use, the cell should be vacuumed and cleaned with a proprietary insecticide. any bedding should be removed wearing gloves, bagged, and either laundered or incinerated. bedbugs live and lay eggs on walls, floors, furniture, and bedding. if one looks carefully, fecal tracks may be seen on hard surfaces. if bedbugs are present for long enough, they emit a distinct odor. they are rarely found on the person but may be brought in on clothing or other personal effects. bedbugs bite at night and can cause sleep disturbance. the detainee does not need to be treated, but the cell should be deemed out of use until it can be vacuumed and professionally cleaned with an insecticide solution. any bedding or clothing should be handled with gloves and disposed of as appropriate. staphylococcus aureus is commonly carried on the skin or in the nose of healthy people. approximately - % of the population is colonized with the bacteria but remains well ( ). from time to time, the bacteria cause minor skin infections that usually do not require antibiotic treatment. however, more serious problems can occur (e.g., infection of surgical wounds, drug injection sites, osteomyelitis, pneumonia, or septicemia). during the last years, the bacteria have become increasingly resistant to penicillin-based antibiotics ( ), and in the last years, they have become resistant to an increasing number of alternative antibiotics. these multiresistant bacteria are known as methicillin-resistant s. aureus (mrsa). mrsa is prevalent worldwide. like nonresistant staphylococci, it may remain undetected as a reservoir in colonized individuals but can also produce clinical disease.
it is more common in individuals who are elderly, debilitated, or immunocompromised or those with open wounds. clusters of skin infections with mrsa have been reported in america among injecting drug users (idus) since ( , ), and more recently, similar strains have been found in the united kingdom among idus in the community ( ). this may have particular relevance for the forensic physician when dealing with the sores of idus. people who are immunocompetent rarely get mrsa and should not be considered at risk. the bacteria are usually spread via the hands of staff after contact with colonized or infected detainees or devices, items (e.g., bedding, towels, and soiled dressings), or environmental surfaces that have been contaminated with mrsa-containing body fluids. with either known or suspected cases (consider all abscesses/ulcers of idus as infectious), standard precautions should be applied. staff should wear gloves when touching mucous membranes, nonintact skin, blood or other body fluids, or any items that could be contaminated. they should also be encouraged to wash their hands with an antimicrobial agent, regardless of whether gloves have been worn. after use, gloves should be disposed of in a yellow hazard bag and not allowed to touch surfaces. masks and gowns should only be worn when conducting procedures that generate aerosols of blood or other body fluids. because this is an unlikely scenario in the custodial setting, masks and gowns should not be necessary. gloves should be worn when handling bedding or clothing, and all items should be disposed of appropriately. any open wounds should be covered as soon as possible. the cell should be cleaned professionally after use if there is any risk that it has been contaminated. during the last decade, there has been an increasing awareness of the bacterial flora colonizing injection sites that may potentially lead to life-threatening infection ( ). in , a sudden increase in needle abscesses caused by a clonal strain of group a streptococcus was reported among hospitalized idus in berne, switzerland ( ). a recent uk study showed that the predominant isolate is s. aureus, with streptococcus species forming just under one-fifth ( % β-hemolytic streptococci) ( ). there have also been reports of both non-spore-forming and spore-forming anaerobes (e.g., bacteroides and clostridium species, including clostridium botulinum) ( , ). in particular, in , laboratories in glasgow were reporting isolates of clostridium novyi among idus with serious unexplained illness. by june , , a total of cases ( definite and probable) had been reported. a definite case was defined as an idu with both severe local and systemic inflammatory reactions. a probable case was defined as an idu who presented to the hospital with an abscess or other significant inflammation at an injecting site and had either a severe inflammatory process at or around an injection site or a severe systemic reaction with multiorgan failure and a high white cell count ( ). in the united kingdom, the presence of c. botulinum in infected injection sites is a relatively new phenomenon. until the end of , there were no cases reported to the public health laboratory service. since then, the number has increased, with a total of cases in the united kingdom and ireland being reported since the beginning of . it is believed that these cases are associated with contaminated batches of heroin. simultaneous injection of cocaine increases the risk by encouraging anaerobic conditions.
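because the glasgow surveillance definitions quoted above are essentially boolean criteria, they can be expressed as a short classifier. the sketch below is illustrative only; the function and parameter names are our own shorthand, and the authoritative wording remains the original report.

```python
# Illustrative classifier for the surveillance case definitions quoted
# above (serious unexplained illness in injecting drug users, IDUs).
# Parameter names are hypothetical shorthand for the clinical criteria.

def classify_idu_case(is_idu: bool,
                      severe_local_inflammation: bool,
                      severe_systemic_reaction: bool,
                      abscess_or_inflammation_at_injection_site: bool,
                      multiorgan_failure_with_high_wcc: bool) -> str:
    """Return 'definite', 'probable', or 'not a case'."""
    if not is_idu:
        return "not a case"
    # Definite: both severe local AND severe systemic inflammatory reactions.
    if severe_local_inflammation and severe_systemic_reaction:
        return "definite"
    # Probable: presented with an abscess or other significant inflammation
    # at an injecting site, AND either a severe inflammatory process at or
    # around an injection site OR a severe systemic reaction with
    # multiorgan failure and a high white cell count.
    if abscess_or_inflammation_at_injection_site and (
            severe_local_inflammation or multiorgan_failure_with_high_wcc):
        return "probable"
    return "not a case"
```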
anaerobic flora in wounds may have serious consequences for the detainee, but the risk of transmission to staff is virtually nonexistent. staff should be reminded to wear gloves when coming into contact with detainees with infected skin sites exuding pus or serum, and any old dressings found in the cell should be disposed of into the yellow bag marked "clinical waste" in the medical room. likewise, any bedding should be bagged and laundered or incinerated after use. the cell should be deemed out of use and professionally cleaned after the detainee has gone. the health care professional managing the detainee should clean and dress open wounds as soon as possible to prevent the spread of infection. it may also be appropriate to start a course of antibiotics if there is abscess formation or signs of cellulitis and/or the detainee is systemically unwell. however, infections can often be low grade, because the skin, venous, and lymphatic systems have been damaged by repeated penetration of the skin. in these cases, signs include lymphedema, swollen lymph glands, and darkly pigmented skin over the area. fever may or may not be present, but septicemia is uncommon unless the individual is immunocompromised (e.g., hiv positive). co-amoxiclav is the treatment of choice because it covers the majority of staphylococci, streptococci, and anaerobes (the dose depends on the degree of infection). necrotizing fasciitis and septic thrombophlebitis are rare but life-threatening complications of intravenous drug use. any detainee suspected of either of these needs hospital treatment. advice about harm reduction should also be given. this includes encouraging drug users to smoke rather than inject or, at least, advising them to avoid injecting into muscle or skin. although most idus are aware of the risk of sharing needles, they may not realize that sharing any drug paraphernalia could be hazardous. advice should be given to use the minimum amount of citric acid to dissolve the heroin, because the acid can damage the tissue under the skin, allowing bacteria to flourish. drugs should be injected at different sites using fresh works for each injection. this is particularly important when "speedballing," because crack cocaine creates an anaerobic environment. medical help should be requested if any injection site becomes painful and swollen or shows signs of pus collecting under the skin. because intravenous drug users are at increased risk of acquiring hbv and hav, they should be informed that vaccination against both diseases is advisable. another serious but relatively rare problem is the risk from broken needles in veins. embolization can take anywhere from hours to days, or even longer if the needle is not removed. complications may include endocarditis, pericarditis, or pulmonary abscesses ( , ). idus should be advised to seek medical help as soon as possible; should such a case present in custody, the detainee should be sent straight to the hospital. the forensic physician may encounter bites in four circumstances; a detailed forensic examination of bites is given in chapter . with any bite that has penetrated the skin, the goals of therapy are to minimize soft tissue deformity and to prevent or treat infection. in the united kingdom and the united states, dog bites represent approximately three-quarters of all bites presenting to accident and emergency departments ( ). a single dog bite can produce up to psi of crush force, in addition to the torsional forces generated as the dog shakes its head.
this can result in massive tissue damage. human bites may cause classical bite wounds or puncture wounds (e.g., from the impact of fists on teeth) resulting in crush injuries. an estimated - % of dog bites and - % of human bites lead to infection; compare this with an estimated - % of nonbite wounds managed in accident and emergency departments. the risk of infection is increased with puncture wounds, hand injuries, full-thickness wounds, wounds requiring debridement, and those involving joints, tendons, ligaments, or fractures. comorbid medical conditions, such as diabetes, asplenia, chronic edema of the area, liver dysfunction, the presence of a prosthetic valve or joint, and an immunocompromised state, may also increase the risk of infection. infection may spread beyond the initial site, leading to septic arthritis, osteomyelitis, endocarditis, peritonitis, septicemia, and meningitis. inflammation of the tendons or the synovial lining of joints may also occur. if enough force is used, bones may be fractured or the wounds may be permanently disfiguring. assessment regarding whether hospital treatment is necessary should be made as soon as possible. always refer if the wound is bleeding heavily or fails to stop when pressure is applied. penetrating bites involving arteries, nerves, muscles, tendons, the hands, or feet; moderate to serious facial wounds; and crush injuries also require immediate referral. if management within custody is appropriate, ask about current tetanus vaccine status, hbv vaccination status, and known allergies to antibiotics. wounds that have breached the skin should be irrigated with . % (isotonic) sodium chloride or ringer's lactate solution rather than antiseptics, because the latter may delay wound healing. a full forensic documentation of the bite should be made as detailed in chapter . note whether there are clinical signs of infection, such as erythema, edema, cellulitis, purulent discharge, or regional lymphadenopathy. cover the wound with a sterile, nonadhesive dressing. wound closure is not generally recommended, because data suggest that it may increase the risk of infection. this is particularly relevant for nonfacial wounds, deep puncture wounds, bites to the hand, clinically infected wounds, and wounds occurring more than - hours before presentation. head and neck wounds in cosmetically important areas may be closed if less than hours old and not obviously infected. the organisms and transmissible infections to consider are as follows:
• dog bites-pasteurella canis, pasteurella multocida, s. aureus, other staphylococci, streptococcus species, eikenella corrodens, corynebacterium species, and anaerobes, including bacteroides fragilis and clostridium tetani.
• human bites-streptococcus species, s. aureus, e. corrodens, and anaerobes, including bacteroides (often penicillin resistant), peptostreptococcus species, and c. tetani. tuberculosis (tb) and syphilis may also be transmitted.
• dog bites-outside of the united kingdom, australia, and new zealand, rabies should be considered. in the united states, domestic dogs are mostly vaccinated against rabies ( ), and police dogs have to be vaccinated, so the most common source is from raccoons, skunks, and bats.
• human bites-hbv, hcv, hiv, and herpes simplex.
antibiotics are not generally needed if the wound is more than days old and there is no sign of infection, or for superficial noninfected wounds evaluated early that can be left open to heal by secondary intention in compliant people with no significant comorbidity ( ); a decision sketch combining this rule with the high-risk indications described next follows below.
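as flagged above, the general antibiotic rule combines with the high-risk indications listed in the next passage. the following python sketch puts the two together; the threshold for an "old" wound is elided in the source and is therefore exposed as a parameter, and every name here is an illustrative assumption of ours rather than guideline wording.

```python
# Hypothetical helper for deciding when antibiotics are worth considering
# after a bite wound, per the guidance in the surrounding text.

HIGH_RISK_SITES = {"hand", "foot", "face", "tendon", "ligament", "joint"}
HIGH_RISK_CONDITIONS = {"diabetes", "asplenia", "cirrhosis", "immunosuppressed"}

def antibiotics_indicated(site: str, penetrating: bool, infected: bool,
                          wound_age_days: float, comorbidities: set,
                          suspected_fracture: bool = False,
                          old_wound_threshold_days: float = 2.0) -> bool:
    """Return True if antibiotics should be considered.

    old_wound_threshold_days is an assumed placeholder; the exact figure
    is elided in the source text.
    """
    # Old, clean wounds generally do not need antibiotics.
    if wound_age_days > old_wound_threshold_days and not infected:
        return False
    # High-risk anatomical involvement or a suspected fracture.
    if site in HIGH_RISK_SITES or suspected_fracture:
        return True
    # Any penetrating bite in a person with a high-risk comorbidity.
    if penetrating and (comorbidities & HIGH_RISK_CONDITIONS):
        return True
    # Otherwise antibiotics are driven by clinical signs of infection;
    # superficial, noninfected wounds evaluated early can be left open.
    return infected
```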
antibiotics should be considered for high-risk wounds that involve the hands, feet, face, tendons, ligaments, or joints, or suspected fractures, or for any penetrating bite injury in a person with diabetes, asplenia, or cirrhosis or who is immunosuppressed. co-amoxiclav (amoxycillin and clavulanic acid) is the first-line treatment for mild to moderate dog or human bite infections managed in primary care. for adults, the recommended dose is / mg three times daily, and for children the recommended dose is mg/kg three times daily (based on the amoxycillin component). treatment should be continued for - days. it is also the first-line drug for prophylaxis, when the same dose regimen should be prescribed for - days. if the individual is known or suspected to be allergic to penicillin, a tetracycline (e.g., doxycycline mg twice daily) plus metronidazole ( mg three times daily), or a macrolide (e.g., erythromycin) plus metronidazole, can be used. in the united kingdom, doxycycline use is restricted to those older than years and in the united states to those older than years. specialist advice should be sought for pregnant women. anyone with severe infection or who is clinically unwell should be referred to the hospital. tetanus vaccine should be given if the primary course or last booster was more than years ago. human tetanus immunoglobulin should be considered for tetanus-prone wounds (e.g., soil contamination, puncture wounds, or signs of devitalized tissue) or for wounds sustained more than hours previously. if the person has never been immunized or is unsure of his or her tetanus status, a full three-dose course, spaced at least month apart, should be given. penetrating bite wounds that involve only saliva may present a risk of hbv if the perpetrator belongs to a high-risk group. for management, see subheadings . . and . . . hcv and hiv are only a risk if blood is involved; the relevant management is dealt with in subheadings . . and . . . respiratory tract infections are common, usually mild, and self-limiting, although they may require symptomatic treatment with paracetamol or a nonsteroidal anti-inflammatory. these include the common cold ( % rhinoviruses and % coronaviruses), adenoviruses, influenza, parainfluenza, and, during the summer and early autumn, enteroviruses. special attention should be given to detainees with asthma or those who are immunocompromised, because infection in these people may be more serious, particularly if the lower respiratory tract is involved. the following section covers respiratory pathogens of special note, because they may pose a risk to the detainee and/or staff who come into close contact. there are five serogroups of neisseria meningitidis: a, b, c, w , and y. the prevalence of the different types varies from country to country. there is currently no available vaccine against type b, but three other vaccines (a+c, c, and acwy) are available. overall, % of the uk population carry n. meningitidis ( % in the - age group) ( ). in the united kingdom, most cases of meningitis are sporadic, with less than % occurring as clusters (outbreaks) among school children. between and , % of cases were group b, % were group c, and w and a accounted for %. there is a seasonal variation, with a high level of cases in winter and a low level in the summer. the group at greatest risk is children under years old, with a peak incidence under year of age. a secondary peak occurs in the -to- -year-old age group.
in sub-saharan africa, the disease is more prevalent in the dry season, but in many countries there is background endemicity year-round. the most prevalent serogroup is a. routine vaccination against group c was introduced in the united kingdom in november for everybody up to the age of years and for all first-year university students. this has since been extended to include everyone under the age of years. as a result of the introduction of the vaccination program, there has been a % reduction of group c cases in those younger than years and an % reduction in those under year old ( , ). an outbreak of serogroup w meningitis occurred among pilgrims on the hajj in . cases were reported from many countries, including the united kingdom. in the united kingdom, there is now an official requirement to be vaccinated with the quadrivalent vaccine (acwy vax) before going on a pilgrimage (hajj or umra), but illegal immigrants who have not been vaccinated may enter the country ( ). after an incubation period of - days ( , ), disease onset may be either insidious, with mild prodromal symptoms, or florid. early symptoms and signs include malaise, fever, and vomiting. severe headache, neck stiffness, photophobia, drowsiness, and a rash may develop. the rash may be petechial or purpuric and characteristically does not blanch under pressure. meningitis in infants is more likely to be insidious in onset and to lack the classical signs. in approx - % of cases, septicemia is the predominant feature. even with prompt antibiotic treatment, the case fatality rate is - % in meningitis and - % in those with septicemia ( ). a person should be considered infectious until the bacteria are no longer present in nasal discharge; with treatment, this is usually approx hours. the disease is spread through infected droplets or direct contact from carriers or those who are clinically ill. it requires prolonged and close contact, so the risk is greater for people who share accommodation and utensils and those who kiss. it must also be remembered that unprotected mouth-to-mouth resuscitation can transmit disease. it is not possible to tell whether a detainee is a carrier. nevertheless, the risk of acquiring infection even from an infected and sick individual is low, unless the staff member has carried out mouth-to-mouth resuscitation. any staff member who believes he or she has been placed at risk should report to the occupational health department (or equivalent) or the nearest emergency department at the earliest opportunity for vaccination. if the staff member has performed mouth-to-mouth resuscitation on the detainee, prophylactic antibiotics should be given before vaccination. rifampicin, ciprofloxacin, and ceftriaxone can be used; however, ciprofloxacin has numerous advantages ( ). only a single dose of mg is needed (for adults and children older than years), and it has fewer side effects and contraindications than rifampicin. ceftriaxone has to be given by injection and is therefore best avoided in the custodial setting. if the staff member is pregnant, advice should be sought from a consultant obstetrician, because ciprofloxacin is not recommended in pregnancy ( ). anyone dealing regularly with illegal immigrants (especially from the middle east or sub-saharan africa), such as immigration services, custody staff at designated stations, medical personnel, and interpreters, should consider being vaccinated with acwy vax; a single injection provides protection for years. detainees suspected of disease should be sent directly to the hospital.
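the staff-exposure advice above is a short sequence of conditional steps, which the following python sketch makes explicit. the function name and return strings are our own illustrative choices, and doses and durations elided in the source are deliberately not filled in.

```python
# Illustrative sketch of the meningococcal exposure advice above;
# not a clinical tool. All names and strings are hypothetical.

def meningococcal_exposure_advice(gave_mouth_to_mouth: bool,
                                  pregnant: bool) -> list:
    """Ordered advice for a staff member exposed to suspected disease."""
    steps = ["report to occupational health (or equivalent) or the nearest "
             "emergency department at the earliest opportunity for vaccination"]
    if gave_mouth_to_mouth:
        if pregnant:
            # Ciprofloxacin is not recommended in pregnancy; a consultant
            # obstetrician should advise on chemoprophylaxis.
            steps.insert(0, "seek advice from a consultant obstetrician "
                            "before chemoprophylaxis")
        else:
            # Single-dose ciprofloxacin is preferred over rifampicin;
            # ceftriaxone is injectable and best avoided in custody.
            steps.insert(0, "give prophylactic antibiotics (single-dose "
                            "ciprofloxacin preferred) before vaccination")
    return steps


if __name__ == "__main__":
    for step in meningococcal_exposure_advice(True, False):
        print(step)
```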
human tb is caused by infection with mycobacterium tuberculosis, mycobacterium bovis, or mycobacterium africanum. it is a notifiable disease under legislation specific to individual countries; for example, in the united kingdom, this comes under the public health (control of disease) act of . in , the who declared tb to be a global emergency, with an estimated - million new cases and million deaths occurring each year, the majority of which were in asia and africa. however, these statistics are likely to be an underestimate, because they depend on the accuracy of reporting, and in poorer countries the surveillance systems are often inadequate because of lack of funds. even in the united kingdom, there has been inconsistency in reporting, particularly where an individual has concomitant infection with hiv. some physicians found themselves caught in a dilemma of confidentiality until , when the codes of practice were updated to encourage reporting with patient consent ( ). with the advent of rapid identification tests and treatment and the use of bacillus calmette-guérin (bcg) vaccination for prevention, tb declined during the first half of the th century in the united kingdom. however, since the early s, numbers have slowly increased, with some cases reported in ( ). in , % of reported cases were in people born outside the united kingdom and % were associated with hiv infection ( , ). london has been identified as an area with a significant problem. this has been attributed to its highly mobile population, the variety of ethnic groups, a high prevalence of hiv, and the emergence of drug-resistant strains ( . % in ) (phls, unpublished data-mycobnet). a similar picture was initially found in the united states, where a long-standing downward trend reversed in ; between and , the number of cases increased from , to , ( ). there were also serious outbreaks of multidrug-resistant tb (mdr-tb) in hospitals in new york city and miami ( ). factors pertinent to the overall upswing included the emergence of hiv, the increasing numbers of immigrants from countries with a high prevalence of tb, and, perhaps more significantly, the cessation of categorical federal funding for control activities in , which led to a failure of the public health infrastructure for tb control. since , the trend has reversed as the cdc transferred most of its funds to tb surveillance and treatment programs in states and large cities. from to , the annual decline averaged . % ( ), but the following year this was reduced to %, indicating that there was no room for complacency. the who has been proactive and is redirecting funding to those countries most in need. in october , a global partnership called stop tb was launched to coordinate every aspect of tb control, and by , the partnership had more than member states. a target was set to detect at least % of infectious cases by . the acquisition of tb infection is not necessarily followed by disease, because the infection may heal spontaneously. it may take weeks or months before disease becomes apparent, or infection may remain dormant for years before reactivation in later life, especially if the person becomes debilitated or immunocompromised. contrary to popular belief, the majority of cases of tb infection in people who are immunocompetent pass unnoticed.
of the reported cases, % involve the lung, whereas nonrespiratory disease (e.g., bone, heart, kidney, and brain) or disseminated disease (miliary tb) is more common in immigrant ethnic groups and individuals who are immunocompromised ( ). the latter are also more likely to develop resistant strains. in the general population, there is an estimated % lifetime risk of tb infection progressing to disease ( ). there has been an increase in the number of cases of tb associated with hiv, owing to either new infection or reactivation. tb infection is more likely to progress to active tb in hiv-positive individuals, with a greater than % lifetime risk ( ). tb can also lead to a worsening of hiv, with an increase in viral load ( ). therefore, the need for early diagnosis is paramount, but diagnosis can be more difficult because pulmonary tb may present with nonspecific features (e.g., bilateral, unilateral, or lower lobe shadowing) ( ). after an incubation period of - weeks, symptoms may develop (see table ). the main route of transmission is airborne, through infected droplets, but prolonged or close contact is needed. nonrespiratory disease is not considered a risk unless the mycobacterium is aerosolized under exceptional circumstances (e.g., during surgery) or there are open abscesses. a person is considered infectious as long as viable bacilli are found in induced sputum. untreated or incompletely treated people may be intermittently sputum positive for years. after weeks of appropriate treatment, the individual is usually considered noninfectious. this period is often extended for treatment of mdr-tb or for those with concomitant hiv. patient compliance is also an important factor. the risk of infection is directly proportional to the degree of exposure. more severe disease occurs in individuals who are malnourished, immunocompromised (e.g., with hiv), or substance misusers. people who are immunocompromised are at special risk of mdr-tb or mycobacterium avium-intracellulare (mai). staff with disease should stay off work until the treatment course is complete and serial sputum samples no longer contain bacilli. staff in contact with disease who have been vaccinated with bcg are at low risk of acquiring disease but should minimize their time spent in the cell. those who have not received bcg or who are immunocompromised should avoid contact with the detainee wherever possible. detainees with mai do not pose a risk to a staff member, unless the latter is immunocompromised. any staff member who is pregnant, regardless of bcg status or type of tb, should avoid contact. anyone performing mouth-to-mouth resuscitation on a person with untreated or suspected pulmonary tb should be regarded as a household contact and should report to occupational health or, if no such route exists, to his or her own physician. they should also be educated regarding the symptoms of tb. anyone who is likely to come into repeated contact with individuals at risk of tb should receive bcg (if he or she has not already done so), regardless of age, even though there is evidence to suggest that bcg administered in adult life is less effective. this does not apply to individuals who are immunocompromised or to pregnant women; in the latter case, vaccination should preferably be deferred until after delivery. detainees with disease (whether suspected or diagnosed) who have not been treated or whose treatment is incomplete should be kept in custody for the minimum time possible.
individuals with tb who are immunocompromised are usually too ill to be detained; if they are detained, they should be considered at greater risk of transmitting disease to staff. any detainee with disease should be encouraged to cover his or her mouth and nose when coughing and sneezing. staff should wear gloves when in contact with the detainee and when handling clothing and bedding. any bedding should be bagged after use and laundered or incinerated. the cell should be deemed out of action until it has been ventilated and professionally decontaminated, although there is no hard evidence that there is a risk of transmission from this route ( ). on march , , the who issued a global warning to health authorities about a new atypical pneumonia called sars. the earliest case was believed to have originated in the guangdong province of china on november , . the causative agent was identified as a new coronavirus, sars-cov ( , ). by the end of june , cases had been reported from different countries, with a total of deaths. approximately % of cases occurred in china (including hong kong, taiwan, and macao). the case fatality rate varied from less than % in people younger than years to % in persons aged - years, % in those aged - years, and more than % in persons years or older. on july , , the who reported that the last human chain of transmission of sars had been broken and lifted the restrictions on all countries. however, it warned that everyone should remain vigilant, because a resurgence of sars was possible. the warning was well founded, because in december a new case of sars was detected in china; at the time of this writing, three more cases have been identified. knowledge about the epidemiology and ecology of sars-cov and the disease remains limited; however, the experience gained from the previous outbreak enabled the disease to be contained rapidly, which is reflected in the few cases reported since december . there is still no specific treatment, and no preventative vaccine has been developed. the incubation period is short, approx - days (maximum days), and, despite the media frenzy surrounding the initial outbreak, sars is less infectious than influenza. the following clinical case definition of sars has been developed for public health purposes ( ); a person with a history of all of the following should be examined for sars:
• fever (at least °c); and
• one or more symptoms of lower respiratory tract illness (cough, difficulty in breathing, or dyspnea); and
• radiographic evidence of lung infiltrates consistent with pneumonia or respiratory distress syndrome, or postmortem findings of these with no identifiable cause; and
• no alternative diagnosis that can fully explain the illness.
laboratory tests have been developed that include detection of viral rna by pcr from nasopharyngeal secretions or stool samples, detection of antibodies in the blood by enzyme-linked immunosorbent assay or immunofluorescent antibody, and viral culture from clinical specimens. available information suggests that close contact via aerosol or infected droplets from an infected individual provides the highest risk of acquiring the disease. most cases occurred in hospital workers caring for an index case or in the close family members of a case. despite the re-emergence of sars, it is highly unlikely that a case will be encountered in the custodial setting in the near future. however, forensic physicians must remain alert for sars symptoms and keep up-to-date with recent outbreaks.
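the clinical case definition above is four conjoined criteria, which makes it straightforward to express as a single predicate. in the python sketch below, the fever threshold is exposed as a parameter because the figure is elided in the source text (the who definition is commonly cited as 38 °c, used here only as a labeled default); the function name is ours.

```python
# Minimal encoding of the clinical case definition for SARS quoted above.
# FEVER_THRESHOLD_C is an assumption: the source elides the number.

def meets_sars_case_definition(temp_c: float,
                               lower_resp_symptoms: bool,
                               radiographic_evidence: bool,
                               alternative_diagnosis: bool,
                               fever_threshold_c: float = 38.0) -> bool:
    """All four criteria must hold: fever AND one or more lower respiratory
    symptoms AND radiographic evidence of pneumonia/respiratory distress
    syndrome AND no alternative diagnosis fully explaining the illness."""
    return (temp_c >= fever_threshold_c
            and lower_resp_symptoms
            and radiographic_evidence
            and not alternative_diagnosis)


if __name__ == "__main__":
    # Example: febrile patient with cough and infiltrates, no other cause.
    print(meets_sars_case_definition(38.6, True, True, False))  # True
```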
information can be obtained from the who on a daily basis from its web site. if sars is suspected, medical staff should wear gloves and a surgical mask when examining the case; however, masks are not usually available in custody. anyone suspected of having sars must be sent immediately to the hospital, and staff who have had prolonged close contact should be alerted to the potential symptoms. the most consistent feature of diseases transmitted through the fecal-oral route is diarrhea (see table ). infective agents include bacteria, viruses, and protozoa. because the causes are numerous, it is beyond the remit of this chapter to cover them all. it is safest to treat all diarrhea as infectious, unless the detainee has a proven noninfectious cause (e.g., crohn's disease or ulcerative colitis). all staff should wear gloves when in contact with the detainee or when handling clothing and bedding, and contaminated articles should be laundered or incinerated. the cell should be professionally cleaned after use, paying particular attention to the toilet area. hepatitis a (hav) occurs worldwide, with variable prevalence. prevalence is highest in countries where hygiene is poor, where infection occurs year-round. in temperate climates, the peak incidence is in autumn and winter, but the trend is becoming less marked. all age groups are susceptible if they are nonimmune or have not been vaccinated. in developing countries, the disease occurs in early childhood, whereas the reverse is true in countries where the standard of living is higher. in the united kingdom, there has been a gradual decrease in the number of reported cases from to ( , ). this results, in part, from improved standards of living and the introduction of an effective vaccine. the highest incidence occurs in the -to- -year-old age group. approximately % of people older than years have natural immunity, leaving the remainder susceptible to infection ( ). small clusters occur from time to time, associated with a breakdown in hygiene. there is also an increasing incidence of hav in gay or bisexual men and their partners ( ). an unpublished study in london in showed a seroprevalence of % among gay men (young y et al., unpublished). the clinical picture ranges from asymptomatic infection through a spectrum to fulminant hepatitis. unlike hbv and hcv, hav does not persist or progress to chronic liver damage. infection in childhood is often mild or asymptomatic, but in adults it tends to be more severe. after an incubation period of - days (mean days), symptomatic infection starts with an anicteric phase; jaundice then has an abrupt onset anything from days to weeks later, lasts for approximately the same length of time, and is often accompanied by a sudden onset of fever. hav infection can lead to hospital admission in all age groups, and both the likelihood of admission and the duration of stay increase with age. the overall mortality is less than %, but % of people will have a prolonged or relapsing illness within - months (cdc fact sheet). fulminant hepatitis occurs in less than % of people but is more likely to occur in individuals older than years or in those with pre-existing liver disease. in patients who are hospitalized, the case fatality ranges from % in -to- -year-olds to nearly % in those older than years ( ). the individual is most infectious in the weeks before the onset of jaundice, when he or she is asymptomatic. this can make control of infection difficult, because the disease is not recognized.
the main route of transmission is fecal-oral, through the ingestion of contaminated water and food. it can also be transmitted by personal contact, including among homosexual men practicing anal intercourse and fellatio. there is a slight risk from blood transfusions if the donor is in the acute phase of infection. needlestick injuries should not be considered a risk unless clinical suspicion of hav is high. risk groups include homeless individuals, homosexual men, idus, travellers abroad who have not been vaccinated, patients with chronic liver disease and chronic infection with hbv and hcv, employees and residents in daycare centers and hostels, sewage workers, laboratory technicians, and those handling nonhuman primates. several large outbreaks have occurred among idus, some with an epidemiological link to prisons ( , ). transmission occurs during the viremic phase of the illness through sharing injecting equipment and via fecal-oral routes because of poor living conditions ( ). there have also been reports of hav being transmitted through drugs that have been carried in the rectum. a study in vancouver showed that % of idus had past infection with hav, and it also showed an increased prevalence among homosexual/bisexual men ( ). staff with disease should report to occupational health and stay off work until the end of the infective period. those in contact with disease (either through exposure at home or from an infected detainee) should receive prophylactic treatment as soon as possible (see subheading . . ). to minimize the risk of acquiring disease in custody, staff should wear gloves when dealing with the detainee and then wash their hands thoroughly. gloves should be disposed of only in the clinical waste bags. detainees with disease should be kept in custody for the minimum time possible. they should only be sent to the hospital if fulminant hepatitis is suspected. the cell should be quarantined after use and professionally cleaned. any bedding or clothing should be handled with gloves and laundered or incinerated according to local policy. detainees reporting contact with disease should be given prophylactic treatment as soon as possible (see subheading . . ). contacts of hav should receive hav vaccine (e.g., havrix monodose or avaxim) if they have not been previously immunized or had the disease. human normal immunoglobulin (hnig), mg, given by deep intramuscular injection into the gluteal muscle, should be used in certain defined circumstances. the following screening questions and examination help to identify a detainee or staff member who may have acquired an infection abroad:
• has the detainee traveled to africa, south east asia, the indian subcontinent, central/south america, or the far east in the last - months?
• ascertain whether he or she received any vaccinations before travel and, if so, which ones.
• ask if he or she took malaria prophylaxis, what type, and whether he or she completed the course.
• ask if he or she swam in any stagnant lakes during the trip.
• if the answer to any of the above is yes, ask if he or she has experienced any of the following symptoms: fever, hot or cold flushes, or shivering; diarrhea, with or without abdominal cramps, blood, or slime in the stool; a rash; persistent headaches, with or without light sensitivity; nausea or vomiting; aching muscles/joints; a persistent cough (dry or productive) lasting at least weeks.
• take the temperature.
• check the skin for signs of a rash and note its nature and distribution.
• check the throat.
• listen carefully to the lungs for signs of infection/consolidation.
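the screening checklist above translates naturally into data plus a single check. this python sketch mirrors the wording of the questions; the structure, names, and boolean logic are our own illustrative choices, and the elided durations are left as written.

```python
# Sketch of the travel/symptom screen above as data plus a simple gate.
# Wording follows the checklist in the text; the structure is hypothetical.

TRAVEL_QUESTIONS = [
    "travel to Africa, South East Asia, the Indian subcontinent, "
    "Central/South America, or the Far East in recent months?",
    "vaccinations received before travel (and which ones)?",
    "malaria prophylaxis taken, what type, and was the course completed?",
    "swimming in any stagnant lakes during the trip?",
]

SYMPTOMS = [
    "fever / hot or cold flushes / shivering",
    "diarrhea +/- abdominal cramps +/- blood or slime in the stool",
    "rash",
    "persistent headaches +/- light sensitivity",
    "nausea or vomiting",
    "aching muscles/joints",
    "persistent cough (dry or productive)",
]

EXAMINATION = [
    "take temperature",
    "check skin for a rash; note its nature and distribution",
    "check throat",
    "listen to the lungs for signs of infection/consolidation",
]

def needs_symptom_screen(travel_answers: list) -> bool:
    """Per the text: if the answer to any travel question is yes, go on
    to ask about SYMPTOMS and carry out the EXAMINATION steps."""
    return any(travel_answers)
```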
staff at higher risk of coming into contact with hav should consider being vaccinated before exposure. two doses of vaccine given - months apart give at least years of protection. there is no specific treatment for hav, except supportive measures and symptomatic treatment. although the chance of encountering a tropical disease in custody is small, it is worth bearing in mind. it is not necessary for a forensic physician to be able to diagnose the specific disease, but simply to recognize that the detainee/staff member is ill and whether he or she needs to be sent to the hospital (see tables - ). this is best achieved by knowing the right questions to ask and carrying out the appropriate examination. tables - should be used as an aid to avoid missing some of the more unusual diseases.

guidance for clinical health care workers: protection against infection with blood-borne viruses; recommendations of the expert advisory group on aids and the advisory group on hepatitis
guidelines for hand hygiene in health care settings. recommendations of the healthcare infection control practices advisory committee and the hicpac/shea/apic/idsa hand hygiene task force
national model regulations for the control of workplace hazardous substances. commonwealth of australia, national occupational health and safety committee
good practice guidelines for forensic medical examiners
guideline for infection control in health care personnel (hospital infection control practices advisory committee)
report from the unlinked anonymous surveys steering group. department of health
a strategy for infectious diseases-progress report. blood-borne and sexually transmitted viruses: hepatitis. department of health
universal precautions for prevention of transmission of human immunodeficiency virus, hepatitis b virus and other bloodborne pathogens in health-care settings
risk factors for horizontal transmission of hepatitis b in a rural district in ghana
familial clustering of hepatitis b infection: study of a family
intrafamilial transmission of hepatitis b in the eastern anatolian region of turkey
hepatitis b outbreak at glenochil prison during
european network for hiv/aids and hepatitis prevention in prisons. second annual report. the network
prevalence of hiv, hepatitis b and hepatitis c antibodies in prisoners in england and wales: a national survey
the epidemiology of acute and chronic hepatitis c
the role of parenteral antischistosomal therapy in the spread of hepatitis c virus in egypt
chronic hepatitis c: disease management. nih publication no. - . february
department of health. hepatitis c virus: eight years old
laboratory surveillance of hepatitis c virus in england and wales: - (last update)
aids/hiv quarterly surveillance tables provided by the phls aids centre (cdsc) and the scottish centre for infection and environmental health
hiv and aids in the uk in : an update. communicable disease surveillance centre
mode of vertical transmission of hiv- : a meta-analysis of fifteen prospective cohort studies
vertical transmission rate for hiv in the british isles estimated on surveillance data
hiv post-exposure prophylaxis after sexual assault: the experience of a sexual assault service in london
guidance from the uk chief medical officer's expert advisory group on aids. uk health department
failures of zidovudine post-exposure prophylaxis
seroconversion to hiv- following a needlestick injury despite combination post-exposure prophylaxis
department of health. immunisation against infectious disease. united kingdom: her majesty's stationery office
chickenpox-disease predominantly affecting adults in rural west bengal
prevention of varicella: recommendations of the advisory committee on immunization practices
varicella-zoster virus epidemiology: a changing scene?
use of acyclovir for varicella pneumonia during pregnancy
outcome after maternal varicella infection in the first weeks of pregnancy
outcome in newborn babies given anti-varicella zoster immunoglobulin after perinatal maternal infection with varicella zoster virus
varicella-zoster virus dna in human sensory ganglia
epidemiology and natural history of herpes zoster and postherpetic neuralgia
clinical applications for change-point analysis of herpes zoster pain
treatment of scabies with permethrin versus lindane and benzyl benzoate
treatment of ectoparasitic infections: review of the english-language literature
nasal carriage of staphylococcus aureus: epidemiology and control measures
centers for disease control and prevention. community-acquired methicillin-resistant staphylococcus aureus infections-michigan
methicillin-resistant staphylococcus aureus: epidemiologic observations during a community-acquired outbreak
emergence of pvl-producing strains of staphylococcus aureus
bacteriology of skin and soft tissue infections: comparison of infections in intravenous drug users and individuals with no history of intravenous drug use
outbreak among drug users caused by a clonal strain of group a streptococcus. dispatches-emerging infectious diseases
bacteriological skin and subcutaneous infections in injecting drug users-relevance for custody
wound botulism associated with black tar heroin among injecting drug users
isolation and identification of clostridium spp. from infections associated with injection of drugs: experiences of a microbiological investigation team
greater glasgow health board, scieh. unexplained illness among drug injectors in glasgow
embolization of illicit needle fragments
right ventricular needle embolus in an injecting drug user: the need for early removal
departments of emergency medicine and pediatrics, lutheran general hospital of oak brook, advocate health system. emedicine-human bites
prevention and treatment of dog bites
human bites. department of plastic surgery
guidelines for public health management of meningococcal diseases in the uk
planning, registration and implementation of an immunisation campaign against meningococcal serogroup c disease in the uk: a success story
efficacy of meningococcal serogroup c conjugate vaccine in teenagers and toddlers in england
quadrivalent meningococcal immunisation required for pilgrims to saudi arabia
risk of laboratory-acquired meningococcal disease
cluster of meningococcal disease in rugby match spectators
immunisation against infectious disease. her majesty's stationery office
ciprofloxacin as a chemoprophylactic agent for meningococcal disease: low risk of anaphylactoid reactions
joint formulary committee. british national formulary, -
notification of tuberculosis: an updated code of practice for england and wales
statutory notifications to the communicable disease surveillance centre. preliminary annual report on tuberculosis cases reported in england, wales, and
the prevention and control of tuberculosis in the united kingdom: uk guidance on the prevention and control of transmission of hiv-related tuberculosis and of drug-resistant, including multiple drug-resistant, tuberculosis. department of health, scottish office
control and prevention of tuberculosis in the united kingdom: code of practice
epidemiology of tuberculosis in the united states
nosocomial transmission of multi-drug resistant tuberculosis among hiv-infected persons-florida
the continued threat of tuberculosis
tuberculosis-a clinical handbook
the white plague: down and out, or up and coming?
a prospective study of the risk of tuberculosis among intravenous drug users with human immunodeficiency virus infection
influence of tuberculosis on human immunodeficiency virus (hiv- ): enhanced cytokine expression and elevated β -microglobulin in hiv- -associated tuberculosis
the chest roentgenogram in pulmonary tuberculosis patients seropositive for human immunodeficiency virus type
coronavirus as a possible cause of severe acute respiratory syndrome
epidemiological determinants of spread of causal agents of severe acute respiratory syndrome in hong kong
alert, verification and public health management of sars in the post-outbreak period
age-specific antibody prevalence to hepatitis a in england: implications for disease control
phls advisory committee on vaccination and immunisation. guidelines for the control of hepatitis a infection
control of a community hepatitis a outbreak using hepatitis a vaccine
seroprevalence of and risk factors for hepatitis a infection among young homosexual and bisexual men
outbreaks of hepatitis a among illicit drug users
identifying target groups for a potential vaccination program during a hepatitis a community outbreak
multiple modes of hepatitis a transmission among methamphetamine users
past infection with hepatitis a among vancouver street youth, injection drug users and men who have sex with men: implications for vaccination programmes

key: cord- - ngfv u authors: gea-banacloche, juan title: risks and epidemiology of infections after hematopoietic stem cell transplantation date: - - journal: transplant infections doi: . / - - - - _ sha: doc_id: cord_uid: ngfv u infections following hct are frequently related to risk factors caused by the procedure itself. neutropenia and mucositis predispose to bacterial infections. prolonged neutropenia increases the likelihood of invasive fungal infection. gvhd and its treatment create the most important easily identifiable risk period for a variety of infectious complications, particularly mold infections. profound, prolonged t cell immunodeficiency, present after t cell-depleted or cord blood transplants, is the main risk factor for viral problems like disseminated adenovirus disease or ebv-related posttransplant lymphoproliferative disorder. understanding the epidemiology of infections after allogeneic hematopoietic stem cell transplantation (hct) is important to implement appropriate preventive strategies as well as to effectively diagnose and treat individual patients. several groups of experts and professional organizations publish guidelines that provide specific recommendations for prophylaxis and management of infections after hct [ - ], including vaccinations [ , , ]. many of these recommendations are necessarily based on low-quality evidence and rely heavily on expert opinion. guidelines should not be followed blindly, but understood as tools that may help to provide the best possible care.
risk factors for infection include individual characteristics (e.g., indication for hct, prior infections, cmv serostatus, particular genetic traits) and type of transplant (based on conditioning regimen, stem cell source, degree of hla homology, and immunosuppression). the development of graft-versus-host disease (gvhd) is frequently the decisive contributor to infectious morbidity and mortality. different indications for hct are associated with their own infectious risks. primary immunodeficiencies (pid), hemoglobinopathies, and hematologic malignancies present different challenges. even in hematologic malignancies, the risk may vary depending on the specific condition: patients with chronic myelogenous leukemia (cml), acute myeloid leukemia (aml), and chronic lymphocytic leukemia (cll) present different risks based on both the biology of the disease and prior treatment. these factors should be considered when assessing individual patients. prior infections must be considered. a history of infection or colonization with a multidrug-resistant organism (mdro) like carbapenem-resistant enterobacteria (cre), extended-spectrum beta-lactamase (esbl)-producing gram-negative bacteria, vancomycin-resistant enterococcus (vre), or methicillin-resistant staphylococcus aureus (mrsa) has implications regarding optimal management of fever during neutropenia [ , , ], which is a common complication of hct. transplant candidates are routinely screened for serologic evidence of latent infections that may reactivate (hsv, vzv, cmv, ebv, hepatitis b and c, toxoplasmosis); some of these will be discussed later in this chapter. some transplant centers will perform screening for tuberculosis with the tuberculin skin test (tst) or an interferon-gamma release assay (igra), at least for patients who are considered at significant risk for the disease. prior invasive fungal infections may reactivate following transplant, and secondary prophylaxis is required [ - ]. even active fungal infection has been reported to be controllable. there are, however, cases of progression of prior aspergillosis after transplant; myeloablative conditioning, prolonged neutropenia, cytomegalovirus (cmv) disease, and graft-versus-host disease (gvhd) are risk factors [ , ]. as the correlates of innate and adaptive immunity are better understood, genetic associations are coming to light. there is evidence that some donor haplotypes of tlr , the gene that encodes the toll-like receptor protein (tlr ), are associated with increased risk of invasive aspergillosis after hct [ ]. recipient mutations in mbl , the gene that encodes mannose-binding lectin (mbl), have been associated with increased risk of infection after neutrophil recovery following myeloablative transplant [ ]. other polymorphisms of mbl may be important for infection through a direct influence on the risk of developing gvhd [ , ]. different genotypes of activating killer immunoglobulin-like receptors (akir) in the donor have been found to protect from cmv reactivation [ ]. many of these associations are preliminary and require more data to be confirmed, but they hold the promise of a more individualized approach to infectious prophylaxis. from a practical standpoint, it is helpful to consider three distinct periods during transplant: pre-engraftment (until neutrophil recovery), early post-engraftment (from engraftment until day ), and late post-engraftment (after day ). this framework originated with myeloablative transplants and is eminently pragmatic.
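before the phase-by-phase discussion that follows, the three-period framework can be summarized in a short sketch. the numeric day boundary is elided in the source; day 100 is the conventional landmark in the transplant literature and is used here only as an explicit, adjustable assumption, and the function name is ours.

```python
# Sketch of the three post-transplant periods described above. The
# late_boundary_day default of 100 is an assumption; the source elides it.

def transplant_phase(days_post_hct: int, engrafted: bool,
                     late_boundary_day: int = 100) -> str:
    """Classify the post-HCT period used to reason about infection risk."""
    if not engrafted:
        # Neutropenia and mucositis dominate: bacterial infections from
        # gut flora, candidiasis, aspergillosis if neutropenia is long,
        # and HSV reactivation (elaborated in the text below).
        return "pre-engraftment"
    if days_post_hct <= late_boundary_day:
        # Cellular immune defect from conditioning and GVHD prophylaxis;
        # CMV reactivation and acute GVHD are central.
        return "early post-engraftment"
    # Immunosuppression is typically tapered around this landmark; in the
    # absence of GVHD, infections become progressively less common.
    return "late post-engraftment"


if __name__ == "__main__":
    print(transplant_phase(days_post_hct=45, engrafted=True))
```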
the pre-engraftment phase may be accompanied by profound neutropenia and significant mucositis, which results in increased risk of bacterial infections from the resident gastrointestinal flora, candidiasis, aspergillosis (in cases of prolonged neutropenia) and herpes simplex virus reactivation. after engraftment, with neutropenia no longer being a factor, many infections are related to the profound defect in cellular immunity caused by the conditioning regimen and the immunosuppression administered to prevent gvhd. cmv reactivation and the development of acute gvhd and its treatment play a central role during this time. the day landmark derives from the standard time at which immunosuppression (e.g., cyclosporine a or tacrolimus) is frequently tapered. infections after this point would be primarily related to lack of immune reconstitution and, in the absence of gvhd, become progressively less common. not all allogeneic stem cell transplantations are the same. several characteristics of the transplant influence the risk of infection: the conditioning preparative regimen, the source of stem cells, the degree of hla identity between donor and recipient, and the prophylactic strategy adopted to prevent gvhd (use of t cell depletion or immunosuppressive medications). table - summarizes the impact of these factors on infections. matching for ucb transplants focuses on three loci (hla-a, hla-b, and hla-drb ). the majority of ucb transplants are mismatched by at least one locus (often two). among transplants mismatched at two loci, mismatching at hla-c and hla-drb was associated with the highest risk of mortality [ ]. the degree of mismatch between the donor and the recipient affects the infectious risk mainly through the likelihood of gvhd. more gvhd usually results in more infections. to prevent gvhd in a mismatched transplant, more potent immunosuppression may be required, increasing the risk of infection. it is also possible that immune reconstitution proceeds more slowly (even with the same immunosuppressive regimen) after a urd hct. these factors may result in increased risk of infections associated with t cell immunodeficiency, like cmv, pneumocystis jirovecii pneumonia (pcp), and epstein-barr virus (ebv)-related posttransplant lymphoproliferative disorder (ptld). however, provided the number of stem cells administered is the usual (> × kg − ), neutrophil recovery proceeds at the standard pace and there is no increased risk of neutropenia-related infections. the problems with ucb transplants include a markedly decreased stem cell dose (often < × kg − ) which results in prolonged neutropenia (up to weeks), with the attendant risk of bacterial and fungal infections [ ]. in addition, the cord blood does not have antigen-specific memory t cells that can expand in a thymus-independent fashion to provide protection against viruses and opportunistic pathogens. this results in high frequency of late severe infections following cord transplantation, even when the neutropenic period is shortened by coadministration of stem cells from a third-party donor [ ]. stem cells may be given using the bone marrow, g-csf-mobilized peripheral blood stem cells (pbscs), or ucb. frequently bone marrow will result in more prolonged neutropenia compared with pbsc, and increased infections during neutropenia should be expected.
however, a multicenter randomized trial comparing peripheral blood stem cells with the bone marrow from unrelated donors showed no difference in the relapse or infectious mortality between both groups, but confirmed that chronic gvhd is more common with mobilized pbsc [ ]. the particular features of ucb transplants were discussed in the preceding paragraph. manipulation of the stem cells, immunosuppressive drugs, or a combination: gvhd may be prevented by decreasing the amount of donor t cells or by limiting t cell function with immunosuppressive agents. the stem cells, whether from the bone marrow or the periphery, may be administered unmanipulated (sometimes called "t cell replete") or enriched by cd selection (also called "t cell depleted"). if unmanipulated bone marrow or pbscs are used, the dose of cd + t cells administered with the graft varies between × kg − when bone marrow is used and × kg − when pbscs are used [ ]. reductions in the amount of t cells of - log are possible, and in some haploidentical transplant regimens, as few as . × cd + cells are given, which still results in detectable immune reconstitution starting - months after transplant [ ]. t cell depletion may minimize or altogether prevent gvhd but may result in prolonged immunodeficiency, depending on the degree of depletion. if an unmanipulated product is used, t cell depletion may be attained in vivo by using alemtuzumab or atg. these agents produce a profound depletion of t cells in vivo, and their long half-life means they are still present and active in the recipient when the stem cell product is administered. if no in vitro or in vivo t cell depletion is used, one of a variety of immunosuppressive regimens will be given to prevent gvhd (e.g., tacrolimus + methotrexate, tacrolimus plus mycophenolate mofetil, cyclosporine a, sirolimus, posttransplant cyclophosphamide). a randomized controlled trial documented more infections in patients randomized to (moderate) t cell depletion than in the group who received pharmacologic immunosuppression [ ]. t cell depletion in vivo with alemtuzumab has been associated with increased risk of infection [ ]. it is possible that different pharmacological regimens may result in different infectious risks, but this has not been adequately studied. preliminary evidence suggests that a sirolimus-based regimen may result in less cmv reactivation [ ] and that posttransplant cyclophosphamide results in relatively decreased risk of ptld [ ]. the above categories may combine in several ways, compounding the risk of infection. these variations should be considered both when designing a regimen of anti-infective prophylaxis and when considering an individual patient who may have an infection. gvhd is the most important cause of non-relapse mortality following hct, and it is frequently complicated by infection. gvhd is categorized as acute or chronic based on its time of onset. acute gvhd develops before day and is characterized by gastrointestinal disease (secretory diarrhea, nausea, vomiting), liver dysfunction, and skin rash. stages of gvhd in the skin, gut, and liver combine to give a grade (i-iv) of the severity of the disease. acute gvhd grades iii-iv are associated with significant mortality. the treatment of choice is high-dose systemic corticosteroids. gvhd is associated with significant immune dysregulation [ , ] and is frequently accompanied by cmv reactivation [ ].
the combination of disruption of the gi mucosa (and sometimes skin) and high-dose corticosteroids (in addition to the immunosuppressive agents concurrently given, like tacrolimus and mmf) constitutes a high-risk setting for infection. bacterial, fungal, and viral infections are common under these circumstances. chronic graft-versus-host disease (cgvhd) has been traditionally defined chronologically: gvhd starting after day . it has been classified based on its relation to prior gvhd (progressive when acute gvhd continues after day , quiescent when there is a period of time during which the patient is free of gvhd, or de novo when chronic gvhd is the first manifestation of gvhd) and its extension (limited or extensive, reformulated as clinical limited, or clinical extensive). the clinical syndrome of typical chronic gvhd is quite distinct from the acute form, and a new classification focusing on the clinical characteristics of the disease as well as on the timing is being increasingly used [ ]. from the standpoint of infectious diseases, the important consideration is that the presence of chronic gvhd is associated with high risk of infection [ , ]. multiple immune defects have been described during chronic gvhd, involving humoral and cellular immunity [ , ] as well as functional hyposplenism [ , ]. besides these abnormalities, which result in delayed immune reconstitution and poor response to immunizations, the risk of infection is increased by the treatment of extensive cgvhd [ ], which typically includes systemic corticosteroids and a variety of steroid-sparing agents. notably, cgvhd is a well-documented risk for pneumococcal infections [ , ], fungal infections, and late cmv disease. however, all types of infections are more common during cgvhd, particularly during the first few months [ ]. when gvhd is not controlled by corticosteroids, it is called "steroid refractory," and there is currently no universally accepted standard treatment. this situation is important from the infectious disease standpoint because patients are usually treated with a variety of highly immunosuppressive regimens (e.g., atg, cyclophosphamide, mmf, infliximab, daclizumab, alefacept, alemtuzumab, sirolimus, visilizumab, denileukin diftitox, and others) [ ] that result in a wide array of infectious complications. reactivation of cmv is very common, as are fungal infections [ , ], epstein-barr virus-related ptld [ ], as well as human herpesvirus (hhv- ) [ ] and adenovirus [ ]. there are no controlled studies to support any particular infection prevention strategy during this period of increased immunosuppression, but some authors have emphasized that early use of prophylactic antibiotics and antifungals is an essential part of a successful approach to this problem [ ]. unfortunately, this is a condition for which controlled trials are unlikely to be performed, and different centers will have to decide on a particular approach of close monitoring versus prophylaxis based on local experience and published case series. in the following sections, the epidemiology of bacterial, fungal, viral, and parasitic diseases will be discussed. the implications for prophylaxis and management will be mentioned. immunizations for transplant recipients (as well as their caregivers and immediate contacts) are discussed in chap. . risks and epidemiology of bacterial infections after allogeneic hct.
early bacterial infections: pre-engraftment. approximately % of hct recipients will experience at least one episode of bacteremia during the first few weeks, and a similar proportion after engraftment [ ]. these infections are usually related to either neutropenia with subsequent bacterial translocation through the gi mucosa (mucosal barrier injury laboratory-confirmed bloodstream infection or mbi-lcbi) or the intravascular catheter (central line-associated bloodstream infections or clabsis) [ ]. the relative frequency of gram-positive and gram-negative infections during neutropenia varies in different series and with the use of prophylactic antibiotics. in some centers, the most frequent gram-positive isolates are viridans group streptococcus [ ]; this may be a function of the conditioning regimen or the patient population. enterococcus faecium, frequently vre, is another gram-positive organism that tends to cause bloodstream infection relatively early, although this seems to be rather institution dependent [ ]. the gram-negative bacteria are commonly enterobacteriaceae. these infections are generally related to the disruption of the gi mucosa due to the preparative regimen. the role of reduced diversity of the microbiota with subsequent bacterial domination and ultimately bacteremia is an area of intense study [ ]. the risk of bacteremia during neutropenia may be decreased by the use of prophylactic antibiotics [ , ]. this had been shown in multiple studies over the years, but the recommendation of using antibiotics did not become part of practice guidelines until recently. it is not clear whether this recommendation will continue amidst the increasing concern over the role of antibiotic-induced decreased microbiome diversity on the outcome of hct [ ]. in this regard it is of interest that fluoroquinolones seem to have less detrimental effects on biodiversity of the fecal flora than beta-lactams. levofloxacin at a dose of mg/d for patients who are going to be profoundly neutropenic for longer than week is the current recommendation of the idsa [ ]. following engraftment, in a large study from the sloan kettering cancer center, the risk factors for post-engraftment bacteremia included acute gvhd, renal dysfunction, hepatic dysfunction, and neutropenia [ ]. enterococcus (vre) and coagulase-negative staphylococcus were the most common gram-positive isolates. enterobacteriaceae and non-fermentative gram-negative bacteria (including pseudomonas, stenotrophomonas, and acinetobacter, possibly related to the indwelling catheter) were the most common gram-negative isolates. bacteremia following engraftment often happens in the setting of patients with a complicated clinical course, acute gvhd, and multiple medical problems or else is catheter related. daily bathing with chlorhexidine-impregnated washcloths decreased the risk of acquisition of mdros and development of hospital-acquired bloodstream infections in transplant recipients in a randomized trial [ ], and this practice should be considered by every transplant program. the advantages and disadvantages of active screening for colonization by resistant pathogens have not been adequately studied in hct recipients. it is likely that local epidemiology determines whether screening is an efficacious and cost-effective approach to either prevent infection or improve outcomes.
a retrospective study on vre bacteremia from the sloan kettering cancer center showed that vre carriage was predictive of subsequent vre bacteremia, but screening failed to detect the pathogen in many patients [ ]. performing surveillance cultures for resistant organisms in vulnerable patient populations is part of the cdc recommendations "management of multidrug-resistant organisms in healthcare settings" [ ], and has been vigorously advocated by some experts [ , ]. both early and late (beyond day ) pneumococcal disease has been reported, with late infections strongly associated with active cgvhd [ ]. these have been attributed to inadequate antibody production and functional hyposplenism [ , ]. vaccination against s. pneumoniae should be given to all hct recipients, starting - months after transplant and using the -valent conjugate vaccine [ ] (see chap. for details). four doses of the vaccine result in enhanced antibody response and tolerable side effects [ ]. antibiotic prophylaxis against s. pneumoniae for adults with active cgvhd has been recommended [ ], although there is only weak evidence supporting its efficacy. penicillin v-k is safe and well tolerated, but the local patterns of penicillin resistance may make other antibiotics (e.g., trimethoprim-sulfamethoxazole, azithromycin, or levofloxacin) preferable, although their long-term safety is not well established. late bacterial infections often involve the respiratory tract. pneumonia is the most common cause of fatal late infection [ , ]. chronic gvhd is the risk factor most commonly identified. besides s. pneumoniae, multiple other pathogens have been reported. nocardia also tends to occur late and in patients with cgvhd [ , ]. mycobacterial infections are uncommon and difficult to diagnose [ ]. risk factors for the development of active tb include gvhd, corticosteroid treatment, and total body irradiation (tbi) [ ]. the need for universal testing for tuberculosis is controversial, given the unknown sensitivity and specificity of the tests in this population and the fact that tuberculosis is a relatively uncommon complication after hct (albeit still approximately three times higher than in the general population) [ ]. invasive candidiasis follows prior colonization and favorable conditions for the yeast: disruption of the gi mucosa during chemotherapy or acute gvhd, overgrowth in the presence of broad-spectrum antibiotics, and/or presence of indwelling catheters (the catheter seems to be the main risk factor in the case of c. parapsilosis). early studies showed that fluconazole during the pre-engraftment period could decrease the incidence of invasive candidiasis [ , ]. accordingly, fluconazole is recommended as part of the standard prophylactic regimen during the pre-engraftment period. the prevalent use of fluconazole has resulted in a substantial decrease in the incidence of infections caused by c. albicans with relative increases in the incidence of other species of candida with decreased susceptibility to this agent (e.g., c. glabrata, c. krusei) [ ]. invasive aspergillosis occurs during specific "at risk" periods following hct, with a first peak around the time of neutropenia pre-engraftment, a second peak between days and (the time of acute gvhd and its treatment), and a third peak late after transplant, usually in the midst of actively treated cgvhd [ ] (figure - ).
a variety of risk factors for invasive aspergillosis have been identified over the years, but the most consistently found to be significant in multivariate analyses are acute gvhd, chronic extensive gvhd, and cmv disease [ - ]. systemic corticosteroids are almost always present as part of the treatment of acute and chronic gvhd. non-aspergillus mold infections (e.g., fusariosis, mucormycosis, scedosporiosis), sometimes referred to as emerging mold infections, have been reported with increasing frequency [ ]. the increased use of prophylaxis with activity against aspergillus would be expected to result in a relative increase of other opportunistic mycoses like mucormycosis [ ]. considering the diversity of fungal infections after transplant and the current antifungal armamentarium, it is controversial which antifungal prophylaxis is appropriate at what point during transplant. for instance, although fluconazole is a safe and well-established intervention during the pre-engraftment period of myeloablative transplants [ , ], it is reasonable to question how necessary it is in transplants with conditioning regimens that result in shorter neutropenia. micafungin was shown to be equivalent to fluconazole in a randomized controlled trial [ ], and the same question (what kind of transplant patient would benefit most) applies. regarding the duration of antifungal prophylaxis, fluconazole up to day posttransplant was associated with improved survival mainly due to decreased incidence of systemic candidiasis [ ], but it is uncertain whether this strategy should be used for all patients or should be reserved for some selected subgroups considered at higher risk. similarly, it is reasonable to question the indication for fluconazole during periods when the main fungal infection is aspergillosis. several randomized controlled trials have compared fluconazole with another azole with activity against molds (itraconazole [ , ], voriconazole [ ], or posaconazole [ ]) either as standard posttransplant prophylaxis or during periods of increased risk. the general conclusion of these trials is that the aspergillus-active drugs are, indeed, more effective than fluconazole in preventing ia, but the benefit in survival in the context of a clinical trial with careful monitoring of galactomannan antigen is hard to demonstrate [ ]. the asbmt/ebmt guidelines recommend posaconazole or voriconazole as antifungal prophylaxis in the setting of gvhd and micafungin in the setting of prolonged neutropenia [ ]. of note, posaconazole prophylaxis was superior to fluconazole or itraconazole and improved survival in prolonged neutropenia in non-transplant patients [ ]. now, there are even more options of mold-active prophylaxis with posaconazole delayed-release tablets, intravenous posaconazole, and the new agent isavuconazole. viral infections remain a challenge because newer transplant modalities result in severe prolonged t cell immunodeficiency and because the current antiviral armamentarium is very limited. multiple latent viruses may reactivate following hct [ ]. the role of monitoring by pcr is well defined mainly for cmv. latent viral reactivation is of particular concern in recipients of cord [ ] or t cell-depleted transplants. table - presents a summary of this section. members of the herpesvirus family that have caused significant disease after transplant include hsv- , hsv- , vzv, ebv, cmv, and hhv- .
posttransplant complications of hhv- are not well defined, although multiple associations have been described. hhv- infection and disease (primary effusion lymphoma and kaposi's sarcoma) occur only infrequently after hct. hsv- and hsv- may reactivate following the preparative regimen and complicate chemotherapy-induced mucositis, so it is customary to administer prophylaxis with acyclovir or valacyclovir at least until engraftment. in patients with common recurrences, long-term suppression may be appropriate. vzv predictably reactivates following transplant (approximately % in the first year), either as shingles, multidermatomal, disseminated, or even without a rash ("zoster sine herpete"). in patients who are at risk for vzv reactivation, the use of long-term acyclovir safely prevents the occurrence of vzv disease [ , ], and currently it is recommended for at least year following hct. cmv remains latent in a variety of human cells. cmv-seropositive hct recipients are at risk for cmv reactivation and disease after transplant. the term "cmv infection" is used to denote the presence of cmv in the blood detected by pcr or pp antigenemia [ ]. following reactivation, cmv may cause disease typically in the form of pneumonia and/or gastrointestinal disease (most commonly colitis). other cmv diseases like retinitis or cns involvement are rare after hct but have been described: retinitis has been associated with high cmv viral load [ ], sometimes in the context of chronic gvhd, and cns disease (encephalitis and ventriculitis), sometimes with resistant virus in the cns [ , ]. the risk for reactivation may be related to the presence of cmv-specific immunity in the donor. the rate of cmv infection in the donor-recipient (d/r) pairs often follows the progression d−r+ > d+r+ > d+r− > d−r−, suggesting that cmv-specific memory t cells administered with the stem cells may play a role in preventing reactivation and disease. cmv infection or disease in cmv-seronegative recipients of seronegative donors (r−/d−) is rare when leucodepleted or cmv-negative blood products are used [ ]. every transplant program must decide on a strategy to monitor cmv and prevent disease. depending on a variety of factors, either universal prophylaxis with ganciclovir up to day or a preemptive strategy of weekly monitoring and early therapy may be used. both approaches resulted in similar overall mortality when compared in a randomized controlled trial, but universal prophylaxis was followed by more cases of late cmv disease [ , ]. late cmv disease has emerged as a significant problem, as it occurs when patients are no longer under close monitoring by the transplant center. risk factors include lymphopenia and chronic gvhd [ ]. preventing late cmv disease may be accomplished by either prophylaxis with valganciclovir or the preemptive approach with weekly cmv pcr monitoring [ ]. the effect of cmv serostatus of donor and recipient on overall survival is complex (for a review, see [ ] and chap. ). ptld is a spectrum of lymphoid proliferations that may happen after solid organ or allogeneic stem cell transplantation, usually (but not always) driven by ebv [ ]. pathologically the spectrum goes from polymorphic, polyclonal tissue infiltration of lymphocytes to monomorphic involvement with high-grade b cell lymphoma. after allogeneic hct, the proliferating cells may be from donor (most commonly) or recipient origin.
this disorder is typically related to insufficient or abnormal t cell responses against ebv [ ], and accordingly it is more common in the setting of hla-mismatched transplants, t cell depletion, or intense immunosuppression for the treatment of gvhd [ - ]. some cases have followed the use of alemtuzumab for in vivo t cell depletion or gvhd prophylaxis [ ], despite the fact that anti-cd also results in depletion of b cells and earlier had been reported to be associated with relatively less risk. interestingly, the use of posttransplant cyclophosphamide to prevent gvhd seems to be associated with lower risk of ptld [ ]. monitoring of ebv viral load by quantitative pcr is now recommended in those transplants considered at high risk. preemptive management of increasing ebv viral load in patients at risk has been associated with good outcomes [ ], although it is not clear when exactly this treatment should be given. a ct/pet may be useful to localize areas amenable to biopsy (figure - ). hhv- is acquired early in life, when it may cause roseola infantum and nonspecific febrile illnesses. it frequently reactivates following hct. using quantitative pcr, hhv- can often be detected in peripheral blood - weeks after transplant. most of the time the reactivation seems to be asymptomatic [ ], but a number of associations (rash, delayed engraftment, gvhd, thrombocytopenia, increased overall mortality) as well as actual clinicopathological entities (hepatitis, pneumonitis, encephalitis) have been described [ - ]. hhv- is possibly the most common cause of infectious encephalitis after hct [ ]. it seems to be particularly frequent after cord blood transplant. cases of encephalitis tend to be accompanied by higher viral loads of hhv- in plasma [ ], but the role of systematic monitoring of hhv- in plasma is unknown at this time, as reactivation seems much more common than disease [ ] and attempts to use a preemptive strategy using foscarnet have not been successful [ ]. the european conference on infections in leukemia has proposed evidence-based guidelines to address the diagnostic and therapeutic uncertainties related to this infection [ ]. respiratory viruses, a heterogeneous group of viruses responsible for most acute upper respiratory infections in normal hosts, result in significant morbidity and mortality after hct, particularly during the first months following transplant [ ]. even asymptomatic carriage of respiratory viruses at the time of transplant has been reported to result in increased risk of unfavorable outcomes [ ]. besides respiratory syncytial virus (rsv) [ ], influenza, parainfluenza virus (piv) [ ], rhinovirus [ ], and adenovirus, newly identified viruses including metapneumovirus [ ], coronavirus [ ], and bocavirus [ ] have emerged as significant pathogens. these infections present significant risks both acutely and in the long term. during the acute infection, hct recipients are at risk of developing viral pneumonia that sometimes progresses to respiratory insufficiency, mechanical ventilation and death, and also at risk of concomitant or secondary bacterial or fungal infections that are associated with increased mortality [ , , ]. long-term, there seems to be an association between early infection (pre-day ) with some of these viruses (most notably piv and rsv) and later development of chronic airflow obstruction [ ].
the most significant risk factor overall for progression of these infections from the upper respiratory tract to the lungs seems to be lymphopenia [ ]. corticosteroid use seems to contribute to progression to pneumonia in rsv and parainfluenza infections but not so in influenza [ , ] (see table - ). besides its role among the community-acquired respiratory viruses, adenovirus may cause disease in transplant recipients following reactivation in the gastrointestinal tract followed by dissemination and end-organ damage [ ]. de novo acquisition of adenovirus may also result in disseminated disease. there are more than types of human adenovirus, with different tropisms and possibly varying susceptibilities to antiviral agents. they can cause a variety of diseases, including upper and lower respiratory tract infection, colitis, hemorrhagic cystitis (hc), nephropathy, and cns disease. systemic adenovirus disease seems to be more common in children, particularly in recipients of cord blood or t cell-depleted transplants [ - ]. patients with gvhd on treatment with high-dose corticosteroids are also at risk (figure - ). some studies have documented that sustained high levels of adenoviremia are associated with disease [ ]. it is not known yet whether a preemptive approach with cidofovir can successfully prevent disseminated disease and death [ , ]. bk virus infects % of humans by age . it predictably reactivates in most patients following hct and causes hemorrhagic cystitis (hc) in a minority of them [ ]. detection of high levels of bk in the peripheral blood seems to correlate with the presence of bk-induced hc [ , ]. in a large study from the fred hutchinson cancer research center (fhcrc), no association was found between bk virus-associated hc and lymphopenia, corticosteroid use, and gvhd, the typical risk factors for viral infections after hct [ ]. in contrast, other smaller studies have found an association with gvhd. the pathogenesis of this disease remains unexplained. bk-induced nephropathy, a common problem after kidney transplant, remains infrequent after hct and does seem to be related to profound immunosuppression [ ]. bk pneumonitis has also been described, but it is distinctly rare [ ]. jc virus is also acquired by most people during childhood. in immunocompromised hosts, it may cause encephalitis (jc encephalitis, previously called progressive multifocal leukoencephalopathy (pml)) with multiple areas of demyelination without edema detectable by mri. some studies have suggested that detectable viral load after hct may be more common than currently thought [ ]. ascertaining risk factors for this disease is difficult because some transplant recipients may have conditions known to be associated with it and also received medications like mmf, rituximab, or brentuximab, which have been associated with pml even in the absence of allo-hct. pcp is an opportunistic infection of patients with profound cellular immunodeficiency, and prophylaxis is recommended after hct. it is now relatively uncommon: . - . % of patients transplanted in several series [ , ]. most cases seem to occur relatively late, after discontinuing prophylaxis or during periods of intensive immunosuppression for the treatment of gvhd [ ]. hypoxemia is characteristic at presentation. atypical radiological manifestations, including nodular infiltrates and pleural effusions (in contrast to typical interstitial pneumonitis), are described frequently, as is the presence of co-pathogens [ ].
the preferred prophylaxis is trimethoprim/sulfamethoxazole (tmp/smx), and several dosing regimens are effective (one single-strength tablet daily, one double-strength tablet daily, or one double-strength tablet three times/week) [ ]. tmp/smx may be poorly tolerated because of hematologic toxicity, skin rash and/or gastrointestinal toxicity [ ]. it is unclear which is the prophylaxis of choice if tmp/smx cannot be used. aerosolized pentamidine is convenient, obviates the problem of compliance, and is less toxic than dapsone and better tolerated than atovaquone. however, it has been reportedly associated with more failures than dapsone [ ]. dapsone seemed to be effective and well tolerated in one study [ ] but not in another when it was given only three times per week [ ]. dapsone should not be given to patients with g pd deficiency. methemoglobinemia is a well-known complication of dapsone [ ] that should be considered in the presence of unexplained shortness of breath. atovaquone suspension mg/d may be used, but published experience in hsct recipients is limited [ , ]. atovaquone is expensive and poor tolerance has made compliance for some patients difficult. absorption is better in the presence of a significant amount of fat, and breakthroughs are well documented (figure - ). pcp prophylaxis is recommended at least until all immunosuppression has been stopped but it is unclear how much longer to continue it [ ]. most cases of toxoplasmosis after hct represent reactivation, although rare cases of transmission with bone marrow transplant have been suspected [ ]. recipients should be tested for anti-toxoplasma igg antibody, and if they are found to be positive, prophylaxis is recommended. rare cases of toxoplasmosis after hct have occurred in seronegative recipients [ , ]. the disease tends to occur within the first months after transplant, but it can happen later in the presence of persistent immunosuppression [ - ]. the risk of toxoplasmosis varies with the type of transplant and the immunosuppression: cord blood and use of atg were found to be risk factors for disease in a prospective study [ ]; most cases in another series occurred in urd or mismatched transplants [ ]. tmp/smx as given for pcp prophylaxis is considered adequate to prevent toxoplasmosis, although there have been cases in hct recipients who were receiving it [ ]. the best alternative for patients who are intolerant to tmp/smx is unknown. dapsone and atovaquone showed some efficacy in hiv-infected patients and there is increasing experience after hct [ ], although failures have been reported. other regimens include clindamycin with pyrimethamine and leucovorin, pyrimethamine with sulfadiazine, or pyrimethamine and sulfadoxine and leucovorin [ ]. if a reliable quantitative pcr assay is available, frequent monitoring and preemptive treatment may be appropriate, since pcr-detected reactivation seems to precede symptoms by - days [ ]. retrospective data suggest this strategy may result in improved outcome [ ]. in summary, infections following hct are frequently related to risk factors caused by the procedure itself. neutropenia and mucositis predispose to bacterial infections. prolonged neutropenia increases the likelihood of invasive fungal infection. gvhd and its treatment create the most important easily identifiable risk period for a variety of infectious complications, particularly mold infections.
profound, prolonged t cell immunodeficiency, present after t cell-depleted or cord blood transplants, is the main risk factor for viral problems like disseminated adenovirus disease or ebv-related ptld. besides all these "procedure-related" risk factors, there are individual characteristics that only now are starting to be investigated and understood. future epidemiological and basic studies will likely result in truly personalized prophylactic regimens that will increase the unquestionable benefits of antimicrobial prophylaxis and reduce the cost, both direct and indirect, associated with this life-saving practice. guidelines for preventing infectious complications among hematopoietic cell transplantation recipients: a global perspective fourth european conference on infections in leukaemia (ecil- ): guidelines for diagnosis, prevention, and treatment of invasive fungal diseases in paediatric patients with cancer or allogeneic haemopoietic stem-cell transplantation european guidelines for antifungal management in leukemia and hematopoietic stem cell transplant recipients: summary of the ecil - update european guidelines for prevention and management of influenza in hematopoietic stem cell transplantation and leukemia patients: summary of ecil- targeted therapy against multiresistant bacteria in leukemic and hematopoietic stem cell transplant recipients: guidelines of the th european conference on infections in leukemia european guidelines for empirical antibacterial therapy for febrile neutropenic patients in the era of growing resistance: summary of the th european conference on infections in leukemia management of hsv, vzv and ebv infections in patients with hematological malignancies and after sct: guidelines from the second european conference on infections in leukemia european guidelines for diagnosis and treatment of adenovirus infection in leukemia and stem cell transplantation: summary of ecil- idsa clinical practice guideline for vaccination of the immunocompromised host vaccination of allogeneic haematopoietic stem cell transplant recipients: report from the international consensus conference on clinical practice in chronic gvhd clinical practice guideline for the use of antimicrobial agents in neutropenic patients with cancer: update by the infectious diseases society of america guideline for the management of fever and neutropenia in children with cancer and/or undergoing hematopoietic stem-cell transplantation secondary antifungal prophylaxis with voriconazole to adhere to scheduled treatment in leukemic patients and stem cell transplant recipients hematopoietic stem cell transplantation in patients with active fungal infection: not a contraindication for transplantation risk factors for recurrence of invasive fungal infection during secondary antifungal prophylaxis in allogeneic hematopoietic stem cell transplant recipients impact of the intensity of the pretransplantation conditioning regimen in patients with prior invasive aspergillosis undergoing allogeneic hematopoietic stem cell transplantation: a retrospective survey of the infectious diseases working party of the european group for blood and marrow transplantation lack of b cells precursors in marrow transplant recipients with chronic graft-versus-host disease infectious morbidity in long-term survivors of allogeneic marrow transplantation is associated with low cd t cell counts functional asplenia after bone marrow transplantation.
a late complication related to extensive chronic graft-versus-host disease pneumococcal arthritis and functional asplenia after allogeneic bone marrow transplantation chronic graft versus host disease is associated with long-term risk for pneumococcal infections in recipients of bone marrow transplants chronic graft-versus-host disease: a prospective cohort study secondary treatment of acute graft-versus-host disease: a critical review tumor necrosis factor-alpha blockade for the treatment of acute gvhd infliximab use in patients with severe graft-versus-host disease and other emerging risk factors of non-candida invasive fungal infections in allogeneic hematopoietic stem cell transplant recipients: a cohort study a humanized non-fcr-binding anti-cd antibody, visilizumab, for treatment of steroid-refractory acute graft-versus-host disease post-transplant acute limbic encephalitis: clinical features and relationship to hhv the successful use of alemtuzumab for treatment of steroid-refractory acute graft-versus-host disease in pediatric patients improved survival in steroid-refractory acute graft versus host disease after non-myeloablative allogeneic transplantation using a daclizumab-based strategy with comprehensive infection prophylaxis pre- and post-engraftment bloodstream infection rates and associated mortality in allogeneic hematopoietic stem cell transplant recipients the burden of mucosal barrier injury laboratory-confirmed bloodstream infection among hematology, oncology, and stem cell transplant patients colonization, bloodstream infection, and mortality caused by vancomycin-resistant enterococcus early after allogeneic hematopoietic stem cell transplant intestinal domination and the risk of bacteremia in patients undergoing allogeneic hematopoietic stem cell transplantation levofloxacin to prevent bacterial infection in patients with cancer and neutropenia antibiotic prophylaxis for bacterial infections in afebrile neutropenic patients following chemotherapy the effects of intestinal tract bacterial diversity on mortality following allogeneic hematopoietic stem cell transplantation effect of daily chlorhexidine bathing on hospital-acquired infection the changing epidemiology of vancomycin-resistant enterococcus (vre) bacteremia in allogeneic hematopoietic stem cell transplant (hsct) recipients healthcare infection control practices advisory committee. management of multidrug-resistant organisms in health care settings what to think if the results of the national institutes of health randomized trial of methicillin-resistant staphylococcus aureus and vancomycin-resistant enterococcus control measures are negative (and other advice to young epidemiologists): a review and an au revoir streptococcus pneumoniae infections in hematopoietic stem cell transplantation recipients: clinical characteristics of infections and vaccine-breakthrough infections early and late invasive pneumococcal infection following stem cell transplantation: a european bone marrow transplantation survey immunogenicity, safety, and tolerability of -valent pneumococcal conjugate vaccine followed by -valent pneumococcal polysaccharide vaccine in recipients of allogeneic hematopoietic stem cell transplant aged ≥ years: an open-label study development project on criteria for clinical trials in chronic graft-versus-host disease: v.
the ancillary therapy and supportive care working group report late infections after allogeneic bone marrow transplantations: comparison of incidence in related and unrelated donor transplant recipients nocardiosis after bone marrow transplantation: a retrospective study systemic nocardiosis following allogeneic bone marrow transplantation mycobacterial infection: a difficult and late diagnosis in stem cell transplant recipients tuberculosis after hematopoietic stem cell transplantation: incidence, clinical characteristics and outcome. spanish group on infectious complications in hematopoietic transplantation risk factors for invasive fungal infections in haematopoietic stem cell transplantation a controlled trial of fluconazole to prevent fungal infections in patients undergoing bone marrow transplantation efficacy and safety of fluconazole prophylaxis for fungal infections after marrow transplantation - a prospective, randomized, double-blind study candidemia in allogeneic blood and marrow transplant recipients: evolution of risk factors after the adoption of prophylactic fluconazole epidemiology of aspergillus infections in a large cohort of patients undergoing bone marrow transplantation invasive aspergillosis in allogeneic stem cell transplant recipients: changes in epidemiology and risk factors invasive fungal infections in recipients of allogeneic hematopoietic stem cell transplantation after nonmyeloablative conditioning: risks and outcomes risks, diagnosis and outcomes of invasive fungal infections in haematopoietic stem cell transplant recipients epidemiology and outcomes of invasive fungal infections in allogeneic haematopoietic stem cell transplant recipients in the era of antifungal prophylaxis: a single-centre study with focus on emerging pathogens zygomycosis in a tertiary-care cancer center in the era of aspergillus-active antifungal therapy: a case-control observational study of recent cases micafungin versus fluconazole for prophylaxis against invasive fungal infections during neutropenia in patients undergoing hematopoietic stem cell transplantation prolonged fluconazole prophylaxis is associated with persistent protection against candidiasis-related death in allogeneic marrow transplant recipients: long-term follow-up of a randomized, placebo-controlled trial intravenous and oral itraconazole versus intravenous and oral fluconazole for long-term antifungal prophylaxis in allogeneic hematopoietic stem-cell transplant recipients. a multicenter, randomized trial itraconazole versus fluconazole for prevention of fungal infections in patients receiving allogeneic stem cell transplants randomized, double-blind trial of fluconazole versus voriconazole for prevention of invasive fungal infection after allogeneic hematopoietic cell transplantation posaconazole or fluconazole for prophylaxis in severe graft-versus-host disease second- versus first-generation azoles for antifungal prophylaxis in hematology patients: a systematic review and meta-analysis posaconazole vs.
fluconazole or itraconazole prophylaxis in patients with neutropenia large-scale multiplex polymerase chain reaction assay for diagnosis of viral reactivations after allogeneic hematopoietic stem cell transplantation intensive strategy to prevent cytomegalovirus disease in seropositive umbilical cord blood transplant recipients long-term acyclovir for prevention of varicella zoster virus disease after allogeneic hematopoietic cell transplantation - a randomized double-blind placebo-controlled study use of long-term suppressive acyclovir after hematopoietic stem-cell transplantation: impact on herpes simplex virus (hsv) disease and drug-resistant hsv disease cytomegalovirus pp antigenemia-guided early treatment with ganciclovir versus ganciclovir at engraftment after allogeneic marrow transplantation: a randomized double-blind study risk factors for cytomegalovirus retinitis in patients with cytomegalovirus viremia after hematopoietic stem cell transplantation cmv central nervous system disease in stem-cell transplant recipients: an increasing complication of drug-resistant cmv infection and protracted immunodeficiency cytomegalovirus ventriculoencephalitis with compartmentalization of antiviral-resistant cytomegalovirus in a t cell-depleted haploidentical peripheral blood stem cell transplant recipient transfusion-transmitted cytomegalovirus infection after receipt of leukoreduced blood products successful modification of a pp antigenemia-based early treatment strategy for prevention of cytomegalovirus disease in allogeneic marrow transplant recipients late cytomegalovirus disease and mortality in recipients of allogeneic hematopoietic stem cell transplants: importance of viral load and t-cell immunity valganciclovir for the prevention of complications of late cytomegalovirus infection after allogeneic hematopoietic cell transplantation: a randomized trial the role of cytomegalovirus serostatus on outcome of hematopoietic stem cell transplantation post-transplant lymphoproliferative disorders prophylaxis of toxoplasmosis infection with pyrimethamine/sulfadoxine (fansidar) in bone marrow transplant recipients risk factors for lymphoproliferative disorders after allogeneic hematopoietic cell transplantation ebv-associated post-transplant lymphoproliferative disorder after umbilical cord blood transplantation in adults with hematological diseases ebv-associated post-transplant lymphoproliferative disorder following in vivo t-cell-depleted allogeneic transplantation: clinical features, viral load correlates and prognostic factors in the rituximab era impact of epstein barr virus-related complications after high-risk allo-sct in the era of pre-emptive rituximab human herpesvirus infections after bone marrow transplantation: clinical and virologic manifestations high levels of human herpesvirus dna in peripheral blood leucocytes are correlated to platelet engraftment and disease in allogeneic stem cell transplant patients clinical outcomes of human herpesvirus reactivation after hematopoietic stem cell transplantation human herpesvirus reactivation on the th day after allogeneic hematopoietic stem cell transplantation can predict grade - acute graft-versus-host disease human herpesvirus- encephalitis after allogeneic hematopoietic cell transplantation: what we do and do not know human herpesvirus (hhv- ) reactivation and hhv- encephalitis after allogeneic hematopoietic cell transplantation: a multicenter, prospective study frequent human herpesvirus- viremia but low incidence of encephalitis in
double-unit cord blood recipients transplanted without antithymocyte globulin foscarnet against human herpesvirus (hhv)- reactivation after allo-sct: breakthrough hhv- encephalitis following antiviral prophylaxis management of cmv, hhv- , hhv- and kaposi-sarcoma herpesvirus (hhv- ) infections in patients with hematological malignancies and after sct the challenge of respiratory virus infections in hematopoietic cell transplant recipients with respiratory virus detection before allogeneic hematopoietic stem cell transplantation respiratory syncytial virus lower respiratory disease in hematopoietic cell transplant recipients: viral rna detection in blood, antiviral treatment, and clinical outcomes parainfluenza virus infections after hematopoietic stem cell transplantation: risk factors, response to antiviral therapy, and effect on transplant outcome rhinovirus infections in hematopoietic stem cell transplant recipients with pneumonia mortality rates of human metapneumovirus and respiratory syncytial virus lower respiratory tract infections in hematopoietic cell transplantation recipients human rhinovirus and coronavirus detection among allogeneic hematopoietic stem cell transplantation recipients disseminated bocavirus infection after stem cell transplant influenza infections after hematopoietic stem cell transplantation: risk factors, mortality, and the effect of antiviral therapy human parainfluenza virus infection after hematopoietic stem cell transplantation: risk factors, management, mortality, and changes over time airflow decline after myeloablative allogeneic hematopoietic cell transplantation: the role of community respiratory viruses respiratory syncytial virus in hematopoietic cell transplant recipients: factors determining progression to lower respiratory tract disease adenovirus infection and disease in pediatric hematopoietic stem cell transplant patients: clues for antiviral preemptive treatment adenovirus infection rates in pediatric recipients of alternate donor allogeneic bone marrow transplants receiving either antithymocyte globulin (atg) or alemtuzumab (campath) adenovirus infections following allogeneic stem cell transplantation: incidence and outcome in relation to graft manipulation, immunosuppression, and immune recovery invasive adenoviral infections in t-cell-depleted allogeneic hematopoietic stem cell transplantation: high mortality in the era of cidofovir quantitative real-time polymerase chain reaction for detection of adenovirus after t cell-replete hematopoietic cell transplantation: viral load as a marker for invasive disease cidofovir for adenovirus infections after allogeneic hematopoietic stem cell transplantation: a survey by the infectious diseases working party of the european group for blood and marrow transplantation polyomavirus bk infection in blood and marrow transplant recipients bk dna viral load in plasma: evidence for an association with hemorrhagic cystitis in allogeneic hematopoietic cell transplant recipients kidney and bladder outcomes in children with hemorrhagic cystitis and bk virus infection after allogeneic hematopoietic stem cell transplantation bk nephropathy in pediatric hematopoietic stem cell transplant recipients pneumonitis post-haematopoietic stem cell transplant - cytopathology clinches diagnosis jc polyomavirus reactivation is common following allogeneic stem cell transplantation and its preemptive detection may prevent lethal complications
pneumocystis carinii pneumonitis following bone marrow transplantation occurrence of pneumocystis jiroveci pneumonia after allogeneic stem cell transplantation: a -year retrospective study late onset pneumocystis carinii pneumonia following allogeneic bone marrow transplantation influence of type of cancer and hematopoietic stem cell transplantation on clinical presentation of pneumocystis jiroveci pneumonia in cancer patients a randomized trial of daily and thrice-weekly trimethoprim-sulfamethoxazole for the prevention of pneumocystis carinii pneumonia in human immunodeficiency virus-infected persons. terry beirn community programs for clinical research on aids (cpcra) aerosolized pentamidine as pneumocystis prophylaxis after bone marrow transplantation is inferior to other regimens and is associated with decreased survival and an increased risk of other infections toxicity and efficacy of daily dapsone as pneumocystis jiroveci prophylaxis after hematopoietic stem cell transplantation: a case-control study high rates of pneumocystis carinii pneumonia in allogeneic blood and marrow transplant recipients receiving dapsone prophylaxis acquired methemoglobinemia: a retrospective series of cases at teaching hospitals a prospective randomized trial comparing the toxicity and safety of atovaquone with trimethoprim/sulfamethoxazole as pneumocystis carinii pneumonia prophylaxis following autologous peripheral blood stem cell transplantation atovaquone suspension compared with aerosolized pentamidine for prevention of pneumocystis carinii pneumonia in human immunodeficiency virus-infected subjects intolerant of trimethoprim or sulfonamides regionally limited or rare infections: prevention after hematopoietic cell transplantation transmission of toxoplasmosis by bone marrow transplant associated with campath- g disseminated toxoplasmosis in marrow recipients: a report of three cases and a review of the literature. bone marrow transplant team disseminated toxoplasmosis after allogeneic stem cell transplantation in a seronegative recipient toxoplasmosis after hematopoietic stem cell transplantation: report of a -year survey from the infectious diseases working party of the european group for blood and marrow transplantation early detection of toxoplasma infection by molecular monitoring of toxoplasma gondii in peripheral blood samples after allogeneic stem cell transplantation atovaquone for prophylaxis of toxoplasmosis after allogeneic hematopoietic stem cell transplantation molecular diagnosis of toxoplasmosis in immunocompromised patients: a three-year multicenter retrospective study key: cord- -ans d oa authors: arimond, alexander; borth, damian; hoepner, andreas; klawunn, michael; weisheit, stefan title: neural networks and value at risk date: - - journal: nan doi: nan sha: doc_id: cord_uid: ans d oa utilizing a generative regime switching framework, we perform monte-carlo simulations of asset returns for value at risk threshold estimation. using equity markets and long term bonds as test assets in the global, us, euro area and uk setting over an up to , weeks sample horizon ending in august , we investigate neural networks along three design steps relating (i) to the initialization of the neural network, (ii) its incentive function according to which it has been trained and (iii) the amount of data we feed.
first, we compare neural networks with random seeding with networks that are initialized via estimations from the best-established model (i.e. the hidden markov). we find the latter to outperform in terms of the frequency of var breaches (i.e. the realized return falling short of the estimated var threshold). second, we balance the incentive structure of the loss function of our networks by adding a second objective to the training instructions so that the neural networks optimize for accuracy while also aiming to stay in empirically realistic regime distributions (i.e. bull vs. bear market frequencies). in particular this design feature enables the balanced incentive recurrent neural network (rnn) to outperform the single incentive rnn as well as any other neural network or established approach by statistically and economically significant levels. third, we halve our training data set of , days. we find our networks when fed with substantially less data (i.e. , days) to perform significantly worse, which highlights a crucial weakness of neural networks in their dependence on very large data sets ...
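to make the simulation-based threshold estimation described above concrete, the following is a minimal numpy sketch of a generative two-regime monte-carlo procedure: returns are simulated under a regime switching model, the var threshold is read off the left tail of the simulated distribution, and breaches are counted against realized returns. all regime parameters, the transition matrix and the series lengths are hypothetical placeholders, not the paper's calibrated estimates.

```python
import numpy as np

# Hypothetical weekly parameters of a two-state (bull/bear) regime model;
# illustrative placeholders only, not the paper's calibrated values.
MU = np.array([0.0020, -0.0030])        # mean return per regime
SIGMA = np.array([0.015, 0.035])        # volatility per regime
TRANS = np.array([[0.98, 0.02],         # P(next regime | current regime)
                  [0.05, 0.95]])

def simulate_returns(n_paths, start_regime, rng):
    """Draw one-step-ahead returns from the generative regime model."""
    out = np.empty(n_paths)
    for i in range(n_paths):
        s = rng.choice(2, p=TRANS[start_regime])   # regime transition
        out[i] = rng.normal(MU[s], SIGMA[s])       # Gaussian emission
    return out

rng = np.random.default_rng(0)
sims = simulate_returns(n_paths=10_000, start_regime=0, rng=rng)

# The 99% VaR threshold is the 1% quantile of the simulated distribution.
var_99 = np.quantile(sims, 0.01)

# A breach occurs when the realized return falls short of the threshold;
# the breach frequency should be near 1% if the model is well specified.
realized = rng.normal(0.001, 0.02, size=500)       # placeholder realized data
breach_rate = np.mean(realized < var_99)
print(f"VaR(99%) = {var_99:.4f}, breach frequency = {breach_rate:.2%}")
```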
that said, while classic ols is statistically significantly weaker than all other models, nn beats all others but not always at statistically significant levels. gkx finally confirm their results via monte carlo simulations. they show that if one generated two hypothetical security price datasets, one linear and un-interacted and one nonlinear and interactive, ols and glm would dominate in the former, while nns dominate in the latter. they conclude by attributing the "predictive advantage [of neural networks] to accommodation of nonlinear interactions that are missed by other methods." (p. ) following gkx, an extensive literature on machine learning in finance is rapidly emerging. chen, pelger and zhu (cpz in the following, ) introduce more advanced (i.e. recurrent) neural networks and estimate a (i) non-linear asset pricing model (ii) regularized under no-arbitrage conditions operationalized via a stochastic discount factor (iii) while considering economic conditions. in particular they attribute the time varying dependency of the stochastic discount factor of about ten thousand us stocks to macroeconomic state processes via a recurrent long short term memory (lstm) network. in cpz's ( : ) view "it is essential to identify the dynamic pattern in macroeconomic time series before feeding them into a machine learning model". avramov et al. ( ) replicate the approaches of gkx ( ), cpz ( ), and two conditional factor pricing models: kelly, pruitt, and su's ( ) linear instrumented principal component analysis (ipca) and gu, kelly, and xiu's ( ) nonlinear conditional autoencoder in the context of real-world economic restrictions. while they find strong fama french six factor (ff ) adjusted returns in the original setting without real world economic constraints, these returns reduce by more than half if microcaps or firms without credit ratings are excluded. in fact, avramov et al. ( : ) find that when "[e]xcluding distressed firms, all deep learning methods no longer generate significant (value-weighted) ff -adjusted return at the % level." they confirm this finding by showing that the gkx ( ) and cpz ( ) machine learning signals perform substantially weaker in economic conditions that limit arbitrage (i.e. low market liquidity, high market volatility, high investor sentiment). curiously though, avramov et al. ( : ) find that the only linear model they analyse - kelly et al.'s ( ) ipca - "stands out … as it is less sensitive to market episodes of high limits to arbitrage." their finding as well as the results of cpz ( ) imply that economic conditions have to be explicitly accounted for when analysing the abilities and performance of neural networks. furthermore, avramov et al. ( ) as well as gkx ( ) and cpz ( ) make anecdotal observations that machine learning methods appear to reduce drawdowns. while their manuscripts focused on return predictability, we devote our work to risk predictability in the context of market wide economic conditions. the covid- crisis as well as the density of economic crises in the previous three decades imply that catastrophic 'black swan' type risks occur more frequently than predicted by symmetric economic distributions. consequently, underestimating tail risks can have catastrophic consequences for investors. hence, the analysis of risks with the ambition to avoid underestimations deserves, in our view, attention equivalent to the analysis of returns with its ambition to identify investment opportunities resulting from mispricing.
more specifically, since a symmetric approach such as the "mean-variance framework implicitly assumes normality of asset returns, it is likely to underestimate the tail risk for assets with negatively skewed payoffs" (agarwal & naik, : ). empirically, equity market indices usually exhibit, not only since covid- , negative skewness in their return payoffs (albuquerque, ; kozhan et al., ). consequently, it is crucial for a post covid- world with its substantial tail risk exposures (e.g. second pandemic wave, climate change, cyber security) that investors are provided with tools which avoid the underestimation of risks as best as possible. naturally, neural networks with their near unlimited flexibility in modelling non-linearities appear suitable candidates for such conservative tail risk modelling that focuses on avoiding underestimation. we regard giglio & xiu ( ) and kozak, nagel & santosh ( ) as also noteworthy, as are efforts by fallahgouly and franstiantoz ( ) and horel and giesecke ( ) to develop significance tests for neural networks. our paper investigates if basic and/or more advanced neural networks have the capability of underestimating tail risk less often at common statistical significance levels. we operationalize tail risk as value at risk, which is the most used tail risk measure in both commercial practice as well as academic literature (billio et al., ; billio and pellizon, ; jorion, ; nieto & ruiz, ). specifically, we estimate var thresholds using classic methods (i.e. mean/variance, hidden markov model) as well as machine learning methods (i.e. feed forward, convolutional, recurrent), which we advance via initialization of input parameters and regularization of the incentive function. recognizing the importance of economic conditions (avramov et al., ; chen et al., ), we embed our analysis in a regime-based asset allocation setting. specifically, we perform monte-carlo simulations of asset returns for value at risk threshold estimation in a generative regime switching framework. using equity markets and long term bonds as test assets in the global, us, euro area and uk setting over an up to , weeks sample horizon ending in august , we investigate neural networks along three design steps relating (i) to the initialization of the neural network's input parameters, (ii) its incentive function according to which it has been trained and which can lead to extreme outputs if it is not regularized, as well as (iii) the amount of data we feed. first, we compare neural networks with random seeding with networks that are initialized via estimations from the best-established model (i.e. the hidden markov). we find the latter to outperform in terms of the frequency of var breaches (i.e. the realized return falling short of the estimated var threshold). second, we balance the incentive structure of the loss function of our networks by adding a second objective to the training instructions so that the neural networks optimize for accuracy while also aiming to stay in empirically realistic regime distributions (i.e. bull vs. bear market frequencies). this design feature leads to better regularization of the neural network, as it substantially reduces extreme outcomes that can result from a single incentive function. in particular this design feature enables the balanced incentive recurrent neural network (rnn) to outperform the single incentive rnn as well as any other neural network or established approach by statistically and economically significant levels. third, we halve our training data set of , days.
we find our networks when fed with substantially less data (i.e. , days) to perform significantly worse, which highlights a crucial weakness of neural networks in their dependence on very large data sets. our contributions are fivefold. first, we extend the currently return focused literature of machine learning in finance (avramov et al., ; chen et al., ; gu et al., ) to also focus on the estimation of risk thresholds. assessing the advancements that machine learning can bring to risk estimation potentially offers valuable innovation to asset owners such as pension funds and can better protect the retirement savings of their members. second, we advance the design of our three types of neural networks by initializing their input parameters with the best established model. while initializations are a common research topic in core machine learning fields such as image classification or machine translation (glorot & bengio, ), we are not aware of any systematic application of initialized neural networks in the field of finance. hence, demonstrating the statistical superiority of an initialized neural network over itself non-initialized appears a relevant contribution to the community. third, while cpz ( ) regularize their neural networks via no arbitrage conditions, we regularize via balancing the incentive function of our neural networks on multiple objectives (i.e. estimation accuracy and empirically realistic regime distributions). this prevents any single objective from leading to extreme outputs and hence balances the computational power of the trained neural network in desirable directions. in fact, our results show that amendments to the incentive function may be the strongest tool available to us in engineering neural networks. fourth, we also hope to make a marginal contribution to the literature on value at risk estimation. whereas our paper is focused on advancing machine learning techniques and is therefore, following billio and pellizon ( ), anchored in a regime based asset allocation setting to account for time varying economic states (cpz, ), we still believe that the nonlinearity and flexible form especially of recurrent neural networks may be of interest to the var (forecasting) literature (billio et al., ; nieto & ruiz, ; patton et al., ). fifth, our final contribution lies in the documentation of weaknesses of neural networks as applied to finance. while avramov et al. ( ) subject neural networks to real world economic constraints and find these to substantially reduce their performance, we expose our neural networks to data scarcity and document just how much data these new approaches need to advance the estimation of risk thresholds. naturally, such a long data history may not always be available in practice when estimating asset management var thresholds, and therefore established methods and neural networks are likely to be used in parallel for the foreseeable future. in section two, we will describe our testing methodology including all five competing models (i.e. mean/variance, hidden markov model, feed forward neural network, convolutional neural network, recurrent neural network). section three describes data, model training, monte carlo simulations and baseline results. section four then advances our neural networks via initialization and balancing the incentive functions and discusses the results of both features. section five conducts robustness tests and sensitivity analyses before section six concludes.
we acknowledge that most recent statistical advances in value at risk estimation have concentrated on jointly modelling value at risk and expected shortfall and were therefore naturally less focused on time varying economic states (patton et al., ; taylor, ).

value at risk estimation with mean/variance approach

when modelling financial time series related to investment decisions, the asset return of portfolio (p) at time (t) as defined in equation ( ) below is the focal point of interest instead of the asset price, since investors earn on the difference between the prices at which they bought and sold: $r_{p,t} = (P_{p,t} - P_{p,t-1}) / P_{p,t-1}$. value-at-risk (var) metrics are an important tool in many areas of risk management. our particular focus is on var measures as a means to perform risk budgeting in asset allocation. asset owners such as pension funds or insurances as well as asset managers often incorporate var measures into their investment processes (jorion, ). value at risk is defined as in equation ( ) as the lower bound of a portfolio's return, which the portfolio or asset is not expected to fall short of with a certain probability (a) within the next period of allocation (n): $\Pr(r_{p,t+n} < -\mathrm{VaR}_t(a)) = a$. for example, an investment fund indicates that, based on the composition of its portfolio and on current market conditions, there is a % or % probability it will not lose more than a specified amount of assets over the next trading days. the var measurement can be interpreted as a threshold (billio and pellizon, ). if the actual portfolio or asset return falls below this threshold, we refer to this as a var breach. the classic mean variance approach of measuring var values is based on the assumption that asset returns follow a (multivariate) normal distribution. var thresholds can then be measured by estimating the mean and covariance (μ, Σ) of the asset returns by calculating the sample mean and sample covariance of the respective historical window. the % or % percentile of the resulting normal distribution will be an appropriate estimator of the % or % var threshold. we refer to this way of estimating var thresholds as the "classical" approach and use it as the baseline of our evaluation. this classic approach, however, does not sufficiently reflect the skewness of real world equity markets and the divergences of return distributions across different economic regimes. in other words, the classic approach does not take into account longer term market dynamics, which express themselves as phases of growth or of downside, also commonly known as bull and bear markets. for this purpose, regime switching models had grown in popularity well before machine learning entered finance (billio and pellizon, ). in this study, we model financial markets inter alia using neural networks while accounting for shifts in economic regimes (avramov et al., ; chen et al., ). due to the generative nature of these networks, they are able to perform monte-carlo simulation of future returns, which could be beneficial for var estimation. in an asset manager's risk budgeting it is advantageous to know about the current market phase (regime) and to estimate the probability that the regime changes (schmeding et al., ). the most common way of modelling market regimes is by distinguishing between bull markets and bear markets. unfortunately, market regimes are not directly observable, but are rather to be derived indirectly from market data. regime switching models based on hidden markov models are an established tool for regime based modelling.
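to make the classical baseline concrete, the following is a minimal python sketch of the mean/variance var estimator described above; the function name, the 5% level and the 5-day horizon are illustrative assumptions rather than the paper's exact (elided) configuration, and a multivariate variant would use the sample covariance analogously.

```python
import numpy as np
from scipy.stats import norm

def classical_var(daily_returns, alpha=0.05, horizon=5):
    """Classical mean/variance VaR: fit a normal distribution to the
    historical window and read off the alpha-percentile of the
    horizon-aggregated return distribution (i.i.d. normality assumed)."""
    mu = daily_returns.mean()            # sample mean of daily returns
    sigma = daily_returns.std(ddof=1)    # sample standard deviation
    mu_n = horizon * mu                  # n-day mean under i.i.d. normality
    sigma_n = np.sqrt(horizon) * sigma   # n-day standard deviation
    # VaR is the loss threshold with Pr(r < -VaR) = alpha
    return -(mu_n + sigma_n * norm.ppf(alpha))
```

a realized horizon return below the negative of this threshold would then count as a var breach in the back test.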
hidden markov models (hmm) - which are based on markov chains - are models that allow for analysing and representing characteristics of time series such as negative skewness (ang and bekaert, ; timmerman, ). we employ the hmm for the special case of two economic states called 'regimes' in the hmm context. specifically, we model asset returns $y_t \in \mathbb{R}^n$ (we are looking at n assets) at time t to follow an n-dimensional gaussian process with hidden states $s_t \in \{1, 2\}$ as shown in equation ( ): $y_t \mid s_t = i \sim \mathcal{N}(\mu_i, \Sigma_i)$. the returns are modelled to have state dependent expected returns $\mu_i \in \mathbb{R}^n$ as well as covariances $\Sigma_i \in \mathbb{R}^{n \times n}$. the dynamic of $s_t$ follows a homogeneous markov chain with transition probability matrix $P$, with $p_{11} = \Pr(s_t = 1 \mid s_{t-1} = 1)$ and $p_{22} = \Pr(s_t = 2 \mid s_{t-1} = 2)$. this definition describes if and how states are changing over time. it is also important to note the 'markov property': the probability of being in any state at the next point in time only depends on the present state, not the sequence of states that preceded it. furthermore, the probability of being in a state at a certain point in time is given as $\pi_t = \Pr(s_t = 1)$ and $(1 - \pi_t) = \Pr(s_t = 2)$. this is also called the smoothed state probability. by estimating the smoothed probability $\pi_t$ of the last element of the historical window as the present regime probability, we can use the model to start from there and perform monte-carlo simulations of future asset returns for the coming days. this is outlined for the two-regimes case in figure below.

figure : algorithm for the hidden markov monte-carlo simulation (for two regimes); step : estimate $\theta = (P, \pi, \mu, \Sigma)$ from history.

when graves [ ] successfully made use of a long short-term memory (lstm) based recurrent neural network to generate realistic sequences of handwriting, he followed the idea of using a mixture density network (mdn) to parametrize a gaussian mixture predictive distribution (bishop, ). compared to standard neural networks (multi-layer perceptron) as used by gkx ( ), this network does not only predict the conditional average of the target variable as a point estimate (in gkx's case expected risk premia), but rather estimates the conditional distribution of the target variable. given the autoregressive nature of graves' approach, the output distributions are not assumed to be static over time, but dynamically conditioned on previous outputs, thus capturing the temporal context of the data. we consider both characteristics as being beneficial for modelling financial market returns, which experience a low signal to noise ratio as highlighted by gkx's results due to inherently high levels of intertemporal uncertainty. the core of the proposed neural network regime switching framework is a (swappable) neural network architecture, which takes as input the historical sequence of daily asset returns. at the output level, the framework computes regime probabilities and provides learnable gaussian mixture distribution parameters, which can be used to sample new asset returns for monte-carlo simulation. a multivariate gaussian mixture model (gmm) is a weighted sum of k different components, each following a distinct multivariate normal distribution, as shown in equation ( ): $p(y_t) = \sum_{i=1}^{k} \varphi_i \, \mathcal{N}(y_t; \mu_i, \Sigma_i)$. a gmm by its nature does not assume a single normal distribution, but naturally models a random variable as being the interleave of different (multivariate) normal distributions. in our model, we interpret k as the number of regimes and φi explains how much each regime contributes to the current output. in other words, φi can be seen as the probability that we are in regime i.
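before turning to the mixture formulation, here is a minimal numpy sketch of the hidden markov monte-carlo algorithm outlined in the figure above, assuming a fitted two-regime model; all names are hypothetical, and the parameters (mu, cov, trans, pi_last) would come from the baum-welch estimation on the historical window.

```python
import numpy as np

def simulate_hmm_paths(mu, cov, trans, pi_last, n_days=5, n_paths=10_000, seed=0):
    """Monte-Carlo simulation from a fitted two-regime Gaussian HMM.
    mu: (2, n_assets) regime means; cov: (2, n_assets, n_assets) regime
    covariances; trans: (2, 2) row-stochastic transition matrix;
    pi_last: smoothed probability of regime 0 at the window's last element."""
    rng = np.random.default_rng(seed)
    k, n_assets = mu.shape
    paths = np.empty((n_paths, n_days, n_assets))
    # draw starting regimes from the smoothed state probability
    s = rng.choice(k, size=n_paths, p=[pi_last, 1.0 - pi_last])
    for t in range(n_days):
        for i in range(k):
            idx = np.where(s == i)[0]    # paths currently in regime i
            if idx.size:
                paths[idx, t, :] = rng.multivariate_normal(mu[i], cov[i], size=idx.size)
        # propagate each path's regime one step via the transition matrix
        s = np.array([rng.choice(k, p=trans[si]) for si in s])
    return paths
```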
in this sense the gmm output provides a suitable level of interpretability for the use case of regime based modelling. with regard to the neural network regime switching model, we extend the notion of a gaussian mixture by conditioning φi via a yet undefined neural network f on the historic asset returns within a certain window of a certain size. we call this window the receptive field and denote its size by r: $\varphi_i(t) = f_i(y_{t-r}, \ldots, y_{t-1})$. this extension makes the gaussian mixture weights dependent on the (recent) history of the time varying asset returns. note that we only condition φ on the historical returns. the other parameters of the gaussian mixture, $(\mu_i, \Sigma_i)$, are modelled as unconditioned, yet optimizable parameters of the model. this basically means we assume the parameters of the gaussians to be constant over time (per regime). this is in contrast to the standard mdn, where $(\mu_i, \Sigma_i)$ are also conditioned on the input and therefore can change over time. keeping these remaining parameters unconditional is crucial to allow for a fair comparison between the neural networks and the hmm, which also exhibits time invariant parameters $(\mu_i, \Sigma_i)$ in its regime shift probabilities. following graves ( ), we define the probability given by the network and the corresponding sequence loss as shown in equations ( ) and ( ), respectively: $\Pr(y) = \prod_t \sum_{i=1}^{k} \varphi_i(t) \, \mathcal{N}(y_t; \mu_i, \Sigma_i)$ and $\mathcal{L}(y) = -\log \Pr(y)$. since financial markets operate in weekly cycles with many investors shying away from exposure to substantial leverage during the illiquid weekend period, we are not surprised to observe that model training is more stable when choosing the predictive distribution to not only be responsible for the next day, but for several days ahead (hann and steuer, ). we call this forward looking window the lookahead. this is also practically aligned with the overall investment process, in which we want to appropriately model the upcoming allocation period, which usually spans multiple days. it also fits with the intuition that regimes do not switch daily but have stability at least for a week. the extended sequence probability and sequence loss are denoted accordingly in equations ( ) and ( ). an important feature of the neural network regime model is how it simulates future returns. we follow graves' ( ) approach and conduct sequential sampling from the network. when we want to simulate a path of returns for the next n business days, we do this according to the algorithm displayed in figure . in accordance with gkx ( ) we first focus our analysis on traditional "feed-forward" neural networks before engaging in more sophisticated neural network architectures for time series analysis within the neural network regime model. the traditional model of neural networks, also called the multi-layer perceptron, consists of an "input layer" which contains the raw input predictors, one or more "hidden layers" that combine input signals in a nonlinear way, and an "output layer", which aggregates the output of the hidden layers into a final predictive signal. the nonlinearity of the hidden layers arises from the application of nonlinear "activation functions" on the combined signals. we visualise the traditional feed forward neural network and its input layers in figure . we set up our network structure in alignment with gkx's ( ) best performing neural network 'nn '. the setup of our network is thus given with hidden layers with decreasing numbers of hidden units ( , , ). since we want to capture the temporal aspect of our time series data, we condition the network output on at least a receptive field of days.
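the following pytorch sketch illustrates how such a regime-switching mixture output can be wired up: a swappable backbone produces the regime probabilities φ(t) from the receptive field, while (μ_i, Σ_i) are unconditioned learnable parameters, and the loss is the negative log-likelihood over the lookahead. class and argument names are hypothetical, and the covariance parametrization is simplified (positivity of the cholesky diagonal is not enforced in this sketch).

```python
import torch
import torch.nn as nn

class RegimeMDN(nn.Module):
    """Regime-switching mixture model: phi(t) is conditioned on the
    receptive field via a swappable backbone; (mu_i, Sigma_i) are
    time-invariant learnable parameters, mirroring the HMM."""
    def __init__(self, backbone, n_assets, k=2):
        super().__init__()
        self.backbone = backbone                           # maps window -> (batch, k) logits
        self.mu = nn.Parameter(torch.zeros(k, n_assets))
        # covariance via a lower-triangular factor per regime
        self.scale_tril = nn.Parameter(torch.eye(n_assets).repeat(k, 1, 1))

    def forward(self, window):                             # window: (batch, r, n_assets)
        return torch.softmax(self.backbone(window), dim=-1)

    def nll(self, window, future):                         # future: (batch, lookahead, n_assets)
        phi = self.forward(window)                         # (batch, k) regime weights
        comp = torch.distributions.MultivariateNormal(
            self.mu, scale_tril=torch.tril(self.scale_tril))
        logp = comp.log_prob(future.unsqueeze(2))          # (batch, lookahead, k)
        mix = torch.logsumexp(logp + torch.log(phi).unsqueeze(1), dim=-1)
        return -mix.sum(dim=1).mean()                      # sequence loss over the lookahead
```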
even though the receptive field of the network is not very high in this case, the dense structure of the network results in a very high number of parameters ( in total, including the gmm parameters). in between layers, we make use of the tanh activation function. convolutional neural networks (cnns) can also be applied within the proposed neural network regime switching model. recently, cnns gained popularity for time series analysis; for example, van den oord et al. ( ) successfully applied convolutional neural networks on time series data for generating audio waveforms, achieving state-of-the-art text-to-speech and music generation. their adaptation of convolutional neural networks - called wavenet - has been shown to capture long ranging dependencies on sequences very well. in its essence, a wavenet consists of multiple layers of stacked convolutions along the time axis. crucial features of these convolutions are that they have to be causal and dilated. causal means that the output of a convolution only depends on past elements of the input sequence. dilated convolutions are ones that exhibit "holes" in their respective kernel, which effectively means that their filter size increases while being dilated with zeros in between. wavenet typically is constructed with an increasing dilation factor (doubling in size) in each (hidden) layer. by doing so, the model is capable of capturing an exponentially growing number of elements from the input sequence depending on the number of hidden convolutional layers in the network. the number of captured sequence elements is called the receptive field of the network (and in this sense is equal to the receptive field defined for the neural network regime model). the convolutional neural network (cnn), due to its structure of stacked dilated convolutions, has a much greater receptive field than the simple feed forward network and needs far fewer weights to be trained. we restricted the number of hidden layers to illustrate the idea. our network structure has hidden layers. each hidden layer furthermore exhibits a number of channels, which are not visualized here. figure illustrates the network's basic structure as a combination of stacked causal convolutions with a dilation factor of d = . the backing model presented in this investigation is inspired by wavenet; we restrict the model to the basic layout, using a causal structure and increasing dilation between layers. the output layer comprises the regime predictive distributions by applying a softmax function to the hidden layers' outputs. our network consists of hidden layers, each layer having channels. the convolutions each have a kernel size of . in total, the network exhibits weights (including gmm parameters), and the receptive field has a size of days. as graves ( ) was very successful in applying lstm for generating sequences, we also adapt this approach for the neural network regime switching model. originally introduced by hochreiter and schmidhuber ( ), a main characteristic of lstms - which are a subclass of recurrent neural networks - is their purpose-built memory cells, which allow them to capture long range dependencies in the data. from a model perspective, lstms differ from other neural network architectures in that they are applied recurrently (see figure ). the output from a previous sequence of the network function serves - in combination with the next sequence element - as input for the next application of the network function.
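a minimal pytorch sketch of such a causal, dilated convolution stack follows; the layer count, channel width and class name are assumptions, since the exact figures are elided in the extraction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConvBackbone(nn.Module):
    """WaveNet-inspired backbone: stacked 1-d convolutions with kernel
    size 2 and doubling dilation, left-padded so that each output only
    depends on past inputs; receptive field = 2 ** n_layers elements."""
    def __init__(self, n_assets, channels=8, n_layers=5, k=2):
        super().__init__()
        self.convs = nn.ModuleList([
            nn.Conv1d(n_assets if i == 0 else channels, channels,
                      kernel_size=2, dilation=2 ** i)
            for i in range(n_layers)])
        self.head = nn.Linear(channels, k)       # regime logits

    def forward(self, window):                   # window: (batch, r, n_assets)
        x = window.transpose(1, 2)               # Conv1d expects (batch, channels, time)
        for conv in self.convs:
            # causal left-padding by the dilation keeps the length fixed
            x = torch.tanh(conv(F.pad(x, (conv.dilation[0], 0))))
        return self.head(x[:, :, -1])            # last time step -> (batch, k)
```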
in this sense, the lstm can be interpreted as being similar to an hmm, in that there is a hidden state which conditions the output distribution. however, the lstm hidden state not only depends on its previous states, but also captures long term sequence dependencies through its recurrent nature. maybe most notably, the receptive field size of an lstm is not bounded architecture-wise as in the case of the simple feed forward network and the cnn. instead, the lstm's receptive field depends solely on the lstm's ability to memorize the past input. in our architecture we have one lstm layer with a hidden state size of . in total, the model exhibits parameters (including the gmm parameters). the potential of lstms was noted by cpz ( : ), who state that "lstms are designed to find patterns in time series data and … are among the most successful commercial ais".

assessment procedure

we obtain daily price data for stock and bond indices globally and for three major markets (i.e. eu, uk, us) to study the presented regime based neural network approaches on a variety of stock markets and bond markets. for each stock market, we focus on one major stock index. for bond markets, we further distinguish between long term bond indices ( - years) and short term bond indices ( - years). the markets in scope are thus the global, us, euro area and uk markets. the data dates back to at least january and ends with august , which means covering almost years of market development. hence, the data also accounts for crises like the dot-com bubble in the early 2000s as well as the financial crisis of 2008. this is especially important for testing the regime based approaches. the price indices are given as total return indices (i.e. dividends treated as being reinvested) to properly reflect market development. the data is taken from refinitiv's datastream. descriptive statistics are displayed in table , whereby panel a displays a daily frequency and panel b a weekly frequency. mean returns for equities exceed the returns for bonds, whereby the longer bonds return more than the shorter ones. equities naturally have a much higher standard deviation and a far worse minimum return. in fact, equity returns in all four regions lose substantially more money than bond returns even at the th percentile, which highlights that the holy grail of asset allocation is the ability to predict equity market drawdowns. furthermore, equity markets tend to be quite negatively skewed as expected, while short bonds experience a positive skewness, which reflects previous findings (albuquerque, ; kozhan et al., ) and the inherent differential in the riskiness of both assets' payoffs.

[insert table about here]

the back testing is done on a weekly basis via a moving window approach. at each point in time, the respective model is fitted by providing the last , days (which is roughly years) as training data. we choose this long range window because neural networks are known to need big datasets as inputs, and it is reasonable to assume that a span of over eight years simultaneously includes times of (at least relative) crisis and times of market growth. covering both bull and bear markets in the training sample is crucial to allow the model to "learn" these types of regimes. for all our models we set the number of regimes to k = 2. as we back test an allocation strategy with a weekly re-allocation, we set the lookahead for the neural network regime models to 5 days. we further configured the back testing dates to always align with the end of a business week (i.e. fridays).
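to complement the dense and convolutional variants, a minimal pytorch sketch of the lstm backbone described above; the hidden state size is an assumption, as the exact figure is elided in the extraction.

```python
import torch.nn as nn

class LSTMBackbone(nn.Module):
    """Recurrent backbone: one LSTM layer whose final hidden state is
    mapped to regime logits; the receptive field is not fixed by the
    architecture but by what the LSTM memorizes."""
    def __init__(self, n_assets, hidden_size=16, k=2):
        super().__init__()
        self.lstm = nn.LSTM(n_assets, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, k)

    def forward(self, window):                   # window: (batch, r, n_assets)
        _, (h, _) = self.lstm(window)            # h: (num_layers, batch, hidden)
        return self.head(h[-1])                  # (batch, k) regime logits
```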
the classic approach does not need any configuration; model fitting is the same as computing the sample mean and sample covariance of the asset returns within the respective window. the hmm also does not need any more configuration; the baum-welch algorithm is guaranteed to converge the parameters into a local optimum with respect to the likelihood function (baum, ). for the neural network regime models, additional data processing is required to learn network weights that lead to meaningful regime probabilities and distribution parameters. an important pre-processing step is input normalization, as it is considered good practice for neural network training (bishop, ). for this purpose, we normalize the input data by $y' = (y - \mathrm{mean}(y)) / \mathrm{var}(y)$. in other words, we demean the input data and scale them by their variance, but without removing the interactions between the assets. we train the network by using the adamax optimizing algorithm (kingma & ba, ) and at the same time applying weight decay to reduce overfitting (krogh & hertz, ). the learning rate and number of epochs configured for training vary depending on the model. in general, estimating the parameters of a neural network model is a non-convex optimization problem. thus, the optimization algorithm might become stuck in an infeasible local optimum. in order to mitigate this problem, it is common practice to repeat the training multiple times, starting off having different (usually randomly chosen) parameter initializations, and then averaging over the resulting models or picking the best in terms of loss. in this paper, we follow a best-out-of-five approach; that means each training is done five times with varying initialization and the best one is selected for simulation. the initialization strategy, which we will show in chapter . , further mitigates this problem by starting off from an economically reasonable parameter set. we observe that the in-sample regime probabilities learned by the neural network regime switching models generally show results comparable to those estimated by the hmm based regime switching model in terms of distribution and temporal dynamics. when we set k = 2, the model fits two regimes, nearly invariably with one having a positive corresponding equity mean and low volatility, and the other experiencing a low or negative equity mean and high volatility. these regimes can be interpreted as bull and bear market, respectively. the respective in-sample regime probabilities over time also show strong alignment with growth and drawdown phases. this holds true for the vast majority of seeds and hence indicates that the neural network regime model is a valid practical alternative for regime modelling when compared to a hidden markov model. after training the model for a specific point in time, we start a monte carlo simulation of asset returns for the next 5 days (one week - monday to friday). for the purpose of calculating statistically solid quantiles of the resulting distribution, we simulate , paths for each model. we do this for at least (emu) and at most (globally) points in time within the back-test history window. as soon as we have simulated all return paths, we calculate a total (weekly) return for each path. the generated weekly returns follow a non-trivial distribution, which arises from the respective model and its underlying temporal dynamics. based on the simulations we compute quantiles for value at risk estimation; for example, the . and . percentiles of the resulting distribution represent the % and % -day var metrics, respectively.
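a minimal sketch of turning the simulated paths into weekly var thresholds, assuming the path array layout from the earlier sampling sketch; the 5% level is illustrative.

```python
import numpy as np

def weekly_var_from_paths(paths, alpha=0.05):
    """paths: simulated daily returns of shape (n_paths, n_days, n_assets).
    Compound each path into a total weekly return, then take the
    alpha-quantile of the simulated distribution per test asset."""
    weekly = np.prod(1.0 + paths, axis=1) - 1.0     # (n_paths, n_assets)
    return np.quantile(weekly, alpha, axis=0)       # one threshold per asset
```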
we evaluate the quality of our value at risk estimations by counting the number of breaches of the asset returns. in case the actual return is below the estimated var threshold, we count this as a breach. assuming an average performing model, it is e.g. reasonable to expect % breaches for a % var measurement. we compared the breaches of all models with each other. we classify a model as being superior to another model if its number of var breaches is less than that of the compared model. a comparison value comp = . (= . ) indicates that the row model is superior (inferior) to the column model. we performed significance tests by applying paired t-tests. we further evaluated a dominance value, which is defined as shown in equation ( ). in our view the three most crucial design features of neural networks in finance, where the sheer number of hidden layers appears less helpful due to the low signal to noise ratio (gkx, ), are: the amount of input data, initializing information and the incentive function. big input data is important for neural networks, as they need to consume sufficient evidence also of rarer empirical features to ensure that their nonlinear abilities in fitting virtually any functional form are used in a relevant instead of an exotic manner. similarly, the initialization of input parameters should be based as much as possible on empirically established estimates to ensure that the gradient descent inside the neural network takes off from a suitable point of departure, thereby substantially reducing the risk that a neural network confuses itself into irrelevant local minima. on the output side, every neural network is trained according to an incentive (i.e. loss) function. it is this particular loss function which determines the direction of travel for the neural network, which has no other ambition than to minimize its loss as best as possible. hence, if the loss function only represents one of several practically relevant parameters, the neural network may produce results with bizarre outcomes for those parameters not included in its incentive function. in our case, for instance, the baseline incentive is just estimation accuracy, which could lead to forecasts dominated much more by a single regime than ever observed in practice. in other words, after a long bull market, the neural network could "conclude" that bear markets do not exist. metaphorically spoken, a unidimensional loss function in a neural network has little decency (marcus, ). commencing with the initialization and the incentive functions, we will assess our three neural networks in the following vis-à-vis the classic and hmm approaches, where each of the three networks is once displayed with an advanced design feature and once with a naïve design feature. if no specific initialization strategy for neural networks is defined, initialization occurs entirely at random, normally via computer generated random numbers. where established econometric approaches use naïve priors (i.e. the mean), neural networks originally relied on brute force computing power and a bit of luck. hence, it is unsurprising that initializations are a common research topic in core machine learning fields such as image classification or machine translation (glorot & bengio, ) nowadays. however, we are not aware of any systematic application of initialized neural networks in the field of finance.
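the breach counting and model comparison described earlier in this section can be sketched as follows; the dominance value's exact definition is elided in the extraction, so only the breach rates and the paired t-test are shown, with hypothetical names.

```python
import numpy as np
from scipy.stats import ttest_rel

def compare_breaches(realized, var_a, var_b):
    """realized weekly returns and two models' VaR thresholds, all of
    shape (n_weeks,): a breach occurs when the realized return falls
    below the threshold; the model with fewer breaches is judged
    superior, with significance assessed via a paired t-test."""
    breach_a = (np.asarray(realized) < np.asarray(var_a)).astype(float)
    breach_b = (np.asarray(realized) < np.asarray(var_b)).astype(float)
    t_stat, p_value = ttest_rel(breach_a, breach_b)
    return {"rate_a": breach_a.mean(), "rate_b": breach_b.mean(),
            "t_stat": t_stat, "p_value": p_value}
```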
hence, we compare naïve neural networks, which are not initialized, with neural networks that have been initialized with the best available prior. in our case, the best available prior for (μ, Σ) of the model is the equivalent hmm estimation based on the same window. such initialization is feasible, since the structure of the neural network - due to its similarity with respect to (μ, Σ) - is broadly comparable with the hmm. in other words, we make use of already trained parameters from the hmm training as starting parameters for the neural network training. in this sense, initialized neural networks are not only flexible in their functional form, they are also adaptable to "learn" from the best established model in the field if suitably supervised by the human data scientists. metaphorically spoken, our neural networks can stand on the shoulders of the giant that the hmm is for regime based estimations. table presents the results by comparing breaches between the two classic approaches (mean/variance, hmm) and the non-initialized and hmm initialized neural networks across all four regions. panels a and b display the % var threshold for equities and long bonds, respectively, while panels c and d show the equivalent comparison for % var thresholds. note that for model training we apply a best-out-of-five strategy as described in section . ; that means we repeat the training five times, starting off with random parameter initializations each time. in case of the presented hmm initialized model, we apply the same strategy, with the exception that (μ, Σ) of the model are initialized the same for each of the five iterations. all residual parameters are initialized randomly as fits best according to the neural network part of the model. three findings are observable. first, not a single var threshold estimation process in a single region and in either of the two asset classes was able to uphold its promise that an estimated % var threshold should be breached no more than % of the time. this is very disappointing and quite alarming for institutional investors such as pension funds and insurances, since it implies that all approaches - established and machine learning based - fail to sufficiently capture downside tail risks and hence underestimate % var thresholds. the vast majority of approaches estimate var thresholds that are breached in more than % of the cases, and the lstm fails entirely if not initialised. in fact, even the best method, the hmm for us equities, estimates var thresholds which are breached in . % of the cases. second, when inspecting the ability of our eight methods to estimate % var thresholds, the result remains bad but is less catastrophic. the mean/variance approach, the hmm and the initialised lstm display cases where their var thresholds were breached in less than the expected % of cases. the mean/variance and hmm approaches make their thresholds in out of cases and the initialised lstm in out of . overall, this is still a disappointing performance, especially for the feed forward neural network and the cnn. even though we initialize (μ, Σ) from hmm parameters, we still have weights to be initialized arising from the temporal neural network part of the model. we do this on a per layer level by sampling uniformly from $\mathcal{U}(-1/\sqrt{i}, 1/\sqrt{i})$, where i is the number of input units for this layer. we focus our discussion of results on the equities and long bonds since these have more variation, lower skewness and hence more risk. results for the short bonds are available upon request from the contact author.
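a sketch of this hmm initialization, assuming a model laid out like the earlier RegimeMDN sketch: the fitted hmm's (μ, Σ) are copied into the unconditioned gaussian parameters, and the residual linear-layer weights are drawn from U(-1/√i, 1/√i).

```python
import math
import torch

def init_from_hmm(model, hmm_mu, hmm_cov):
    """Copy the fitted HMM's (mu, Sigma) into the model's unconditioned
    Gaussian parameters, then initialize the remaining (temporal)
    network weights per layer from U(-1/sqrt(i), 1/sqrt(i)), where
    i is the layer's number of input units (linear layers shown;
    other layer types would be handled analogously)."""
    with torch.no_grad():
        model.mu.copy_(torch.as_tensor(hmm_mu, dtype=torch.float32))
        # store covariances via their (batched) Cholesky factors
        model.scale_tril.copy_(torch.linalg.cholesky(
            torch.as_tensor(hmm_cov, dtype=torch.float32)))
        for layer in model.backbone.modules():
            if isinstance(layer, torch.nn.Linear):
                bound = 1.0 / math.sqrt(layer.in_features)
                layer.weight.uniform_(-bound, bound)
                if layer.bias is not None:
                    layer.bias.zero_()
```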
third, when comparing the initialised with the non-initialised neural networks, the difference in performance is like night and day. the non-initialised neural networks always perform worse, and the lstm performs entirely dismally without a suitable prior. when comparing across all eight approaches, the hmm appears most competitive, which means that we either have to further advance the design of our neural networks or their marginal value added beyond classic econometric approaches appears nonexistent. to advance the design of our neural networks further, we aim to balance their incentive function to avoid the extreme, unrealistic results possible in the single-objective case.

[insert table about here]

whereas cpz ( ) regularize their neural networks via no arbitrage conditions, we regularize via balancing the incentive function of our neural networks on multiple objectives. specifically, we extend the loss function to not only focus on the accuracy of point estimates but also give some weight to eventually achieving empirically realistic regime distributions (i.e. in our data sample across all four regions no regime displays more than % frequency on a weekly basis). this balanced extension of the loss function prevents the neural networks from arriving at bizarre outcomes such as the conclusion that bear markets (or even bull markets) barely exist. technically, such bizarre outcomes result from cases where the regime probabilities φi(t) tend to converge globally either into 0 or 1 for all t, which basically means the neural network only recognises one regime. to balance the incentive function of the neural network and facilitate balancing between regime contributions, we introduced an additional regularization term reg into the loss function which penalizes unbalanced regime probabilities. the regularization term is displayed in equation ( ) below. if bear and bull market have equivalent regime probabilities the term converges to 0.5, while it converges towards 1 the larger the imbalance between the two regimes. substituting equation ( ) into our loss function of equation ( ) leads to equation ( ) below, which doubles the point estimation based standard loss function in case of total regime balance inaccuracy but adds only 50% of the original loss function in case of full balance. conditioning the extension of the loss function on its origin is important to avoid biases due to diverging scales. setting the additional incentive function to initially have half the marginal weight of the original function also seems appropriate for comparability. the outcomes of balancing the incentive functions of our neural networks are displayed in table , where panels a-d are distributed as previously in table . the results are very encouraging, especially with regard to the lstm. the regularized lstm is in all cases (i.e. thresholds, asset classes, regions) better than the non-regularized lstm. for the % var thresholds, it reaches realized occurrences of less than % in half the cases. this implies that the regularized lstm can even be more cautious than required. the regularized lstm also sets a new record for the % var threshold.

[insert table about here]

to measure how much value the regularized lstm can add compared to alternative approaches, we compute the annual accumulated costs of breaches as well as the average cost per breach. they are displayed in table for the % var threshold. the regularized lstm is for both numbers in any case better than the classic approaches (mean/variance and hmm) and the difference is economically meaningful.
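the balanced incentive function described above can be sketched as follows; the exact functional form of the reg term is elided in the extraction, so the two-regime imbalance measure below is one possible form consistent with the stated limits (0.5 under full balance, 1 under total imbalance), with the combined loss doubling under total imbalance and adding 50% under full balance.

```python
import torch

def balanced_loss(base_nll, phi):
    """phi: (batch, 2) regime probabilities over the training window.
    reg ranges from 0.5 when bull/bear probabilities are balanced on
    average to 1.0 when one regime dominates; the accuracy-based loss
    is then scaled by (1 + reg)."""
    mean_phi = phi.mean(dim=0)                     # average regime frequency
    imbalance = (mean_phi - 0.5).abs().sum()       # 0 balanced ... 1 one-regime
    reg = 0.5 + 0.5 * imbalance
    return base_nll * (1.0 + reg)
```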
for equities the regularized lstm results in annual accumulated costs of - basis points less than the classic mean/variance approach, which would amount to over one billion us$ of avoided losses per annum for the > us$ billion equity portfolios of pension funds such as calpers or pggm. compared to the hmm approach, the regularized lstm avoids annual accumulated costs of - basis points, which is still a substantial amount of money for the vast majority of asset owners. with respect to long bonds, where total returns are naturally lower, the regularized lstm's avoided annual costs against the mean/variance and the hmm approach range between - basis points, which is high for bond markets.

[insert table about here]

these statistically and economically attractive results have been achieved, however, based on , days of training data. such "big" amounts of data may not always be available for newer investment strategies. hence, it is natural to ask if the performance of the regularized neural networks drops when fed with just half the data (i.e. , days). apart from reducing statistical power, a shorter period also may comprise less information on downside tail risks. indeed, the results displayed in table show that in all contexts of var thresholds and asset classes, the regularized networks trained on , days substantially outperform and usually dominate their equivalently designed neural networks with half the training data. hence, the attractive risk management features of hmm initialised, balanced incentive lstms are likely only available for established discretionary investment strategies where sufficient historical data is available, or for entirely rules-based approaches whose history can be replicated ex-post with sufficient confidence.

[insert table about here]

we further conduct an array of robustness tests and sensitivity analyses to challenge our results and the applicability of neural network based regime switching models. as a first robustness test, we extend the regularization in a manner that the balancing incentive function of equation ( ) has the same marginal weight as the original loss function instead of just half the marginal weight. the performance of both types of regularized lstms is essentially equivalent. second, we study higher var thresholds such as % and find the results to be very comparable to the % var results. third, we estimate monthly instead of weekly var. accounting for the loss of statistical power in comparison tests due to the lower number of observations, the results are equivalent again. we conduct two sensitivity analyses. first, we set up our neural networks to be regularized by the two balancing incentive functions but without hmm initialisation. the results show the regularization enhances performance compared to the naïve non-regularized and non-initialized models, but that both design features are needed to achieve the full performance. in other words, initialization and regularization seem additive design features in terms of neural network performance. second, we run analytical approaches with k > 2 regimes. adding a third or even fourth regime when asset prices only know two directions leads to substantial instability in the neural networks and tends to depreciate the quality of results. inspired by gkx ( ) and cpz ( ), we investigated neural networks for value at risk estimation along three design steps. first, initializing the networks with estimations from the best-established model (the hidden markov model) reduces the frequency of var breaches. second, balancing the incentive structure of the loss function enables the balanced incentive recurrent neural network (rnn) to outperform the single incentive rnn as well as any other neural network or established approach by statistically and economically significant levels. third, we halve our training data set of , days.
we find our networks, when fed with substantially less data (i.e. , days), to perform significantly worse, which highlights a crucial weakness of neural networks in their dependence on very large data sets. hence, we conclude that well designed neural networks - i.e. a recurrent lstm neural network initialized with best current evidence and balanced incentives - can potentially advance the protection offered to institutional investors by var thresholds through a reduction in threshold breaches. however, such advancements rely on the availability of a long data history, which may not always be available in practice when estimating asset management var thresholds.

table : descriptive statistics of the daily returns of the main equity index (equity), the main sovereign bond with (short) - years maturity (sb - y) and the main sovereign bond (long) with - year maturity (sb - ). descriptive statistics include sample length, the first three moments of the return distribution and thresholds along the return distribution.

references:
risks and portfolio decisions involving hedge funds
skewness in stock returns: reconciling the evidence on firm versus aggregate returns
can machines learn capital structure dynamics? working paper
international asset allocation with regime shifts
machine learning, human experts, and the valuation of real assets
machine learning versus economic restrictions: evidence from stock return predictability
a maximization technique occurring in the statistical analysis of probabilistic functions of markov chains
bond risk premia with machine learning
value-at-risk: a multivariate switching regime approach
econometric measures of connectedness and systemic risk in the finance and insurance sectors
neural networks for pattern recognition
deep learning in asset pricing
subsampled factor models for asset pricing: the rise of vasa
microstructure in the machine age
towards explaining deep learning: significance tests for multi-layer perceptrons
asset pricing with omitted factors
how to deal with small data sets in machine learning: an analysis on the cat bond market
understanding the difficulty of training deep feedforward neural networks
generating sequences with recurrent neural networks
autoencoder asset pricing models
much ado about nothing? exchange rate forecasting: neural networks vs. linear models using monthly and weekly data
long short-term memory
towards explainable ai: significance tests for neural networks
improving earnings predictions with machine learning. working paper
jorion, p. value at risk
characteristics are covariances: a unified model of risk and return
adam: a method for stochastic optimization
shrinking the cross-section
the skew risk premium in the equity index market
a simple weight decay can improve generalization
advances in financial machine learning
deep learning: a critical appraisal
frontiers in var forecasting and backtesting
dynamic semiparametric models for expected shortfall (and value-at-risk)
maschinelles lernen bei der entwicklung von wertsicherungsstrategien [machine learning in the development of capital protection strategies]. zeitschrift für das gesamte kreditwesen
deep learning for mortgage risk
forecasting value at risk and expected shortfall using a semiparametric approach based on the asymmetric laplace distribution
forecast combinations for value at risk and expected shortfall
moments of markov switching models
verstyuk, s. modeling multivariate time series in economics: from auto-regressions to recurrent neural networks. working paper
fixup initialization: residual learning without normalization.
international conference on learning representations (iclr) paper.

acknowledgments: we are grateful for comments from theodor cojoianu, james hodson, juho kanniainen, qian li, yanan, andrew vivian, xiaojun zeng and participants at the financial data science association conference in san francisco and the international conference on fintech and financial data science at university college dublin (ucd). the views expressed in this manuscript are not necessarily shared by sociovestix labs, the technical expert group of dg fisma or warburg invest ag. authors are listed in alphabetical order, whereby hoepner serves as the contact author (andreas.hoepner@ucd.ie). any remaining errors are our own.

key: cord- -pv doe authors: novossiolova, tatyana title: twenty-first century governance challenges in the life sciences date: - - journal: governance of biotechnology in post-soviet russia doi: . / - - - - _ sha: doc_id: cord_uid: pv doe

the chapter explores the rapid advancement of biotechnology over the past few decades, outlining an array of factors that drive innovation and, at the same time, raise concerns about the extent to which the scope and pace of novel life science developments can be adequately governed. from 'dual-use life science research of concern' through the rise of amateur biology to the advent of personalised medicine, the chapter exposes the limitations of the existing governance mechanisms in accommodating the multifaceted ethical, social, security, and legal concerns arising from cutting-edge scientific and technological developments.

some observers have gone so far as to suggest that the 'life sciences knowledge, materials and technologies are advancing worldwide with moore's law-like speed.' and whilst some commentators have questioned the extent to which the ongoing progress of biotechnology has translated into practical applications and novel products, there is some consensus that the biotechnology landscape has been fundamentally transformed over the recent decades with the possibilities now unlocked holding revolutionary potential. indeed, rapid advances in the field have produced a knowledge base and set of tools and techniques that enable biological processes to be understood, manipulated and controlled to an extent never possible before; they have found various applications in numerous spheres of life, generating enormous benefits and offering bright prospects for human betterment; and they have come to be regarded as a key driver of economic development with potential to close the gap between resource-rich and resource-poor countries. the progress of biotechnology has been largely driven by three sets of forces, namely social, political and economic. the social dynamics at work in this context are understood as the efforts to improve public health and the overall wellbeing of individuals both in the global north and global south, boost agricultural yields and encourage environment-friendly practices to mitigate the adverse effects of climate change. several factors account for the significant value attached to the life sciences in the context of intense globalisation and continuous change. surging population numbers and extended life expectancy are augmenting the demand for developing effective and affordable medications, novel approaches for the treatment of chronic diseases and additional cost-effective sources of energy and food production.
at the same time, rising global trade and travel, coupled with increased urbanisation, and an uneven distribution of wealth are creating optimal conditions for disease outbreaks, pandemics and environmental degradation. against this backdrop, biotechnology appears full of promise and critical to tackling social and natural concerns; enhancing disease prevention, preparedness and surveillance; promoting development; and alleviating human suffering. economic dynamics include national expenditure on research and development, purchasing power, trends in consumerism and market pressures and fluctuations. besides public funding for r&d, which remains a key factor in the growth and flourishing of bioindustry in developed and emerging economies alike, private investment from venture capital firms, start-up companies and transnational corporations (tncs) has also played an indispensable role in capturing new markets and further facilitating the extension of the bioeconomy on a global scale. dupont's significant footprint in india is indicative in this regard, not least because of the depth and diversity of the activity that the company has undertaken via its offshore r&d centres, ranging from crop science to biofuels. likewise, merck has outlined a . billion dollar commitment to expand r&d in china, as part of which it intends to establish an asia headquarters for innovative drug discovery in beijing. political dynamics are triggered by states' increasing commitment to support the progress of biotechnology as a way of maximising their power and boosting their status in the international arena. in the aftermath of 9/11 and the 'anthrax letters' attack of october 2001, substantial effort has been given to harnessing life science research for the purposes of national security. biodefence and bioterrorism preparedness are thus considered high-priority areas for national investment by government agencies and the military alike. an illustrative example of this two-tiered approach is the funding policy in the usa, where biodefence research is financed by the nih, the department of homeland security (dhs) and the defense advanced research projects agency (darpa), to name a few. under the synergistic influence of these three sets of forces - social, economic and political - biotechnology has been transformed into a truly global, fast-evolving enterprise encompassing a multitude of stakeholders, delivering considerable benefits and holding out still greater promise, with profound and far-reaching implications for virtually every aspect of human well-being and social life. the pharmaceutical industry is a case in point, for its steady expansion would hardly be possible were it not for the vast array of techniques and methods enabled by the progress of the life sciences. worth roughly billion dollars, the global pharmaceutical market dominates the life sciences industry and arguably determines the trajectory of life sciences-related technological development and global spread. gene cloning, dna sequencing and recombinant construction of cell lines, to name a few, are all deemed indispensable for the development of novel medicines and therapeutics. it suffices to mention that more than half of the top selling commercially available drugs in the usa would not exist without those methods.
agriculture, too, has been heavily influenced by the ongoing biotechnology revolution, as evidenced in the rapid growth and dispersion of commercialised transgenic crops (biotech crops) and the efforts to use gmos (both animals and plants) for the production of vaccine antigens and other biologically active proteins ('biopharming'). indeed, the area of farmland planted with transgenic crops rose dramatically from . hectares in to about million hectares in and is still growing. in addition, technological convergence between biotechnology, nanotechnology, information technologies and cognitive science has unlocked a broad scope of opportunities for maximising public (and private) welfare, offering substantial benefits in wide-ranging areas such as medicine, pharmacy, crime investigation and national security by ensuring precision and reliability while, at the same time, reducing the amount of time previously required for the performance of certain tasks. several key features of biotechnology make it so appealing to the majority of stakeholders involved. first, biotechnology innovation is characterised by duality, whereby research yields results that simultaneously lead to advances in basic knowledge and stimulate product development. second, the output that the life sciences generate, in the form of new medicines, improved nutrition products, enhanced yields and novel materials, is 'strongly positive'. the increasing utility of tools and strategies for human enhancement, whether in professional sport, for cosmetic and aesthetic purposes, or on the battlefield, vividly reflects the firm conviction that the transformative capacity of biotechnology, even at the most fundamental level, is something to be welcomed and vigorously embraced. third, biotechnology possesses proven economic viability, as illustrated in the burgeoning industries and new markets it has spurred. against this backdrop, the high rate of biotechnology expansion is anything but surprising, since every increment in biological capability pays back the researcher and the researcher's sponsors in short order. payback comes in the esteem of peers, in promotions, and in increases in the academic or corporate salaries of the researchers whose work generates knowledge and new therapies. payback comes in the form of profits for the manufacturers of kits to perform the manipulations, royalties for the writers of the methods manuals, and profits for the drug industry. payback comes for the public in the form of new drugs and therapies. fourth, besides being cost-effective, many of the benefits that biotechnology offers are easy to obtain and disseminate. in other words, many of the various prospects for public (and private) betterment are not situated at some distant moment in the future but can be realised immediately, as a result of which pressing problems can be alleviated, if not fully resolved, and substantial revenue can be generated in the short term. last but not least, while there are some risks and concerns associated with the advancement of biotechnology, few of those are deemed urgent or significant enough to impact on the pace of innovation. as the actual manifestation of such risks is often contingent upon the interplay of a variety of factors, this renders the likelihood of a major crisis unfolding as a result of the progress of biotechnology low.
moreover, there is a genuine belief that any challenges that may arise from the proliferation of novel technologies can either be foreseen or dealt with on a case-by-case basis. given the enormous potential of biotechnology for addressing societal, economic and environmental challenges, it is unsurprising that most states have readily endorsed scientific and technological innovation and embarked on large-scale, generously funded r&d programmes in the life sciences. given the powerful multifaceted impetus for biotechnology advancement, it is possible to identify at least five key trends in the governance of biotechnology that are common to highly industrialised and developing countries alike. those include: high-level coordination, facilitation and funding; synergies within and between the public and private sectors; emphasis on strategic and competitive interests at the expense of precaution; regulations that seek to promote rather than restrict scientific and technological progress; and overreliance on technical solutions. at the international level, the ongoing expansion of biotechnology has been hailed not only as an inherently positive development but also as an essential prerequisite for enhancing human welfare and addressing various socio-economic, environmental and health concerns. in its world health report, the who called for 'increased international and national investment and support in [life science] research aimed specifically at improving coverage of health services within and between countries'. the who has also strived to promote research on specific diseases, such as hiv/aids, cancer, pandemic influenza, tuberculosis and malaria, with the goal of improving methods for prevention and diagnostics and facilitating the development of effective therapeutics and vaccines. in a similar fashion, the un food and agriculture organisation (fao) has highlighted the positive impact that biotechnology could have on the development of agriculture: 'biotechnology could be a major tool in the fight against hunger and poverty, especially in developing countries. because it may deliver solutions where conventional breeding approaches have failed, it could greatly assist the development of crop varieties able to thrive in the difficult environments where many of the world's poor live and farm.' it is not difficult to see how those assertions have been translated into national policies and practical steps across the globe. the us nih, which provides the bulk of financial support for medical and health-oriented r&d in the usa, spent tens of billions of dollars during a recent fiscal year, about a third of which was allocated to funding biotechnology and bioengineering projects. within its sixth framework programme for research and technological development, spanning the period 2002-2006, the european union (eu) distributed more than 2 billion euro for projects under the theme 'life sciences, genomics and biotechnology for health'. developing countries, too, are increasingly investing in 'red' biotechnology as part of their efforts to address public health concerns. according to a recent who report, support for biotechnology, and particularly for cancer research, in cuba has soared in recent years, amounting to over one billion dollars. as a result, the cuban biotechnology industry is burgeoning, holding a sizeable portfolio of international patents and exporting vaccines and pharmaceuticals to dozens of countries.
the prospect of climate change, coupled with rising population numbers, has compelled governments in the global north and south alike to explore 'green' biotechnology as a means of ensuring food security. the usa remains by far the largest commercial producer of gm crops. several eu member states (france, germany, spain, poland, romania, the czech republic, portugal and slovakia), canada and australia further feature in the list of industrialised nations that have embarked on growing gm plant breeds. more and more emerging economies are striving to expand their agrobiotechnology sector, most notably brazil, india, argentina, south africa, mexico, burkina faso, myanmar and chile. in 2008, the chinese government launched a major r&d initiative worth several billion dollars to develop, by 2020, new plant varieties that will enhance yields, have improved nutritional value and be resistant to pests. public-private partnerships, underpinned by access to early-stage risk capital and strong linkages between business, universities and entrepreneurial support networks, constitute an important vehicle for promoting innovation and fostering technology transfer and product development. for instance, the chinese government has launched a major initiative mobilising billions of dollars in venture capital to support start-ups in the immense zhangjiang science park outside shanghai; russia's rusnano has entered a multi-million dollar partnership with the us venture capital firm domain associates to fund 'emerging life science technology companies and establish manufacturing facilities in russia for production of advanced therapeutic products'; and cleveland's university hospital has allocated millions of dollars for setting up a 'non-profit entity to fund and advise physician-scientists on transitional research and a related for-profit accelerator that will develop selected compounds to proof of concept.' the kauffman foundation in the usa, a wealthy philanthropic establishment dedicated exclusively to the goal of entrepreneurship, has been particularly zealous in its quest to promote university-based entrepreneurial activities nationwide. its kauffman campuses initiative enjoyed so much popularity among universities that, following the initial round of grants totalling tens of millions of dollars, the foundation announced its resolve to leverage a far larger investment for the creation of new interdisciplinary education programmes. university-industry partnerships, while not a novel phenomenon in the area of biotechnology, have considerably intensified over the past several decades, thus facilitating the widespread commercialisation of life science research. indeed, a substantial share of the us companies surveyed by blumenthal et al. in the mid-1990s had relationships with an academic institution in the survey year, and in more than half of those cases industry provided financial support for research in such institutions. according to another study, the total industry investment in academic life science research in the usa tripled over the period examined, reaching billions of dollars, and has been growing ever since. against this backdrop, some commentators have put forward the 'triple helix' model, which serves both as a conceptual tool and a policy blueprint.
in the former case, it is used to elucidate the academic-industry-government relationships that underpin the institutional arrangements and changing practices in the processes of production, transfer and application of knowledge in post-industrial societies; in the latter, it is promoted as a framework for economic development through state investment and knowledge sharing between academia and industry. others, however, have remained sceptical of the close integration of universities and the private sector, voicing concerns about the possible deleterious effects arising therefrom: as in other activities, when big money flows fast, temptations and opportunities arise for risky behaviour and stealthy or even brazen wrongdoing in pursuit of personal or institutional advantage. the new world of academic-commercial dealings is characterised by some grey areas and evolving rules for permissible and impermissible conduct. the people who manage and conduct research in scientific organisations are not immune to the weaknesses and foibles so plentiful elsewhere, despite the accolades for probity that science bestows upon itself. with more and more universities joining the biotechnology 'gold rush' and corporate values and goals steadily penetrating professional academic cultures, scholarship turns into a result-oriented activity subject to the priorities and interests of business partners and industrial sponsors. strategy and careful planning, deemed essential to the pursuit of for-profit knowledge, can have a restraining effect on the spontaneous vigour characteristic of academic research, limiting the range of problems that could be studied to those defined by the market. at the same time, scientists often find themselves under tremendous pressure, striving to satisfy the demands of their industrial clients without utterly neglecting their academic duties, ranging from mentorship through filing grant applications to publishing. the extensive workload, coupled with the bright prospects of securing long-term research funding and achieving some individual gain and prominence, provides a favourable environment in which instances of dubious, sometimes fraudulent, behaviour, conflicts of interest and lack of transparency, unless too severe, are unlikely to encounter widespread opprobrium and may even go unnoticed. in the race for patents and venture capital, the business mentality dulls scientific rigour and the ethics threshold appears not too difficult to cross.

emphasis on strategic and competitive interests at the expense of precaution

given the tremendous benefits that biotechnology is expected to generate in virtually any sphere of human activity, it is not difficult to understand why its progress is predominantly viewed through an explicitly positive lens by policy-makers. since the opportunities for achieving public betterment and enhancing state prestige and international standing are too tempting and too abundant, there is a powerful urge to dedicate both will and resources to promoting the large-scale expansion of the life sciences. for one thing, the prospect of conquering disease and maximising human well-being provides solid justification for a deliberate and sustained investment in fostering scientific and technological prowess. lack of commitment and reluctance to support r&d in the life sciences thus becomes an unfavourable option in the political calculations of states, regardless of their level of economic development and international status.
within the context of a political calculus pervaded by realist fears, competition and power, the perceived risks of inaction with regard to scientific and technological development justify vast expenditure and lower regulatory barriers to innovation and product development. political choices concerning biotechnology support are therefore frequently made at the expense of calls for caution and of potential social, environmental and ethical concerns. the regulation of genetic engineering is a case in point. as discussed in the previous chapter, from the outset the attempts of governments to impose strict controls on research involving rdna faced a severe backlash from academic scientists and business executives alike. by the 1980s, the various legislative initiatives put forward in the usa had been abandoned in favour of the regime established by the nih guidelines, which virtually exempted the biotechnology industry from formal regulation. while the leading us-based companies pledged to 'voluntarily comply' with the guidelines, behind the scenes they craftily continued to push for a system that would insulate them from governmental and public scrutiny. indeed, during the 1990s, when the states parties to the btwc strived to strengthen the treaty by negotiating a binding verification mechanism, corporate interests proved too big and too important to be ignored. both the pharmaceutical research and manufacturers of america (phrma), which represented the country's major research-based pharmaceutical and biotechnology companies, and the biotechnology industry organisation (bio), which at that time represented hundreds of biotechnology firms, became vocal opponents of any measures designed to promote international arms control that seemed to hinder in any way the protection of proprietary information and intellectual property. over the course of the negotiations, the associations invested considerable effort, time and ingenuity in lobbying the us government and influencing the diplomatic talks in geneva to secure an outcome that was in line with the demands of their constituencies. of course, it would be naive to ascribe the us resolve in 2001 to reject both the text of the protocol and its utility in general for providing adequate verification and enhancing confidence among states parties solely to the activity of the biotechnology industry; nevertheless, it would be equally naive to suppose that corporate interests played no significant role in the process. besides economic priorities, national security and military calculations can also provide a compelling rationale for downplaying the potential risks associated with biotechnology expansion. following the 'anthrax letters' attack of october 2001, the us government embarked on a massive financial investment to boost its bioterrorism preparedness and enable the prevention, early detection, monitoring and emergency response to biological threats. as outlined in biodefense for the 21st century, a presidential directive that set out a comprehensive framework for national biodefence policy, in the years that followed the federal government provided billions of dollars 'to state and local health systems to bolster their ability to respond to bioterrorism and major public health crises'.
along with the highly controversial vaccination programme that the government envisaged, another important development designed to enhance america's biodefence preparedness and capability was the drastic increase both in the number of high-containment labs (bsl-3 and bsl-4) and in the number of researchers with access to some of the most dangerous pathogens known to mankind, including the causative agents of ebola, plague and q fever. some commentators have questioned the logic behind this policy, highlighting the heightened risk of accidental or deliberate release of pathogens. far from being ill-founded or hypothetical, such fears stemmed from a range of high-profile cases that occurred after 2001 across the usa, in which lack of proper training and professional negligence resulted in scientists being exposed to or infected with deadly microbes. real-life horror stories about vials of plague being transported in the hand-luggage of researchers on passenger aircraft without the required authorisation, and deadly cultures gone missing from what appeared to be secure laboratories, further fuelled the criticism of us biodefence policy, raising difficult questions about its appropriateness and actual goals even before the 'anthrax letters' investigation revealed that the attack was an 'insider's business'. life science research, just as any other sphere of professional activity, is subject to a range of institutional, national and international regulations. along with the more general rules, such as those related to occupational health and safety, fair pay and job competition, conflicts of interest, labour rights and professional liability, there are also specific ones addressing particular aspects of the research process, including project clearing (e.g. review by local biosafety committees), safe laboratory practice and transport of pathogens (e.g. the international health regulations), exchange of viral strains (e.g. the pandemic influenza preparedness framework of 2011), handling of dangerous pathogens (e.g. the us select agent programme) and ethical treatment of human subjects and samples obtained therefrom (e.g. the human tissue act 2004 in the uk). while hardly exhaustive, this list suffices to convey the idea that the regulatory regime governing the practice of life science research is dense and comprehensive. with dozens of international organisations overseeing biotechnology from various perspectives, there is a prima facie reason to assume that the regime in its current form is sufficiently flexible to accommodate novel advances and hold at bay any potential risks they may pose. yet in reality, over the past decade the opposite trend has prevailed; that is, the existing governance mechanisms have struggled to respond adequately to the proliferation of new scientific developments with multiple adaptive uses and to the multiplicity of cutting-edge developments posing profound ethical quandaries. how can this discrepancy be accounted for? part of the problem stems from the fact that since at least the late 1970s the regulation of biotechnology has been streamlined so as to be compatible with, and not a restriction on, continued technological change and economic growth. as such, it rests upon the barely questioned assumption that the progress of biotechnology is inherently good and needs to be harnessed and vigorously promoted. needless to say, any measures that seem to slow down or restrain its advancement are deemed undesirable and even detrimental to socio-economic development.
hence, when developing regulations, policy-makers have generally pursued a twofold objective: first, to promote the safe practice of life science research by reducing any risks arising therefrom both to scientists and to the general public; and second, to ensure that any issues that may hinder the expansion of biotechnology are not subject to restrictive legislation. a vivid manifestation of this approach is the way in which the ongoing debate on 'dual use research of concern' (benignly intended research that seeks to maximise human welfare by responding to health, societal and environmental ills but could also facilitate the development of more sophisticated and potent biological weapons and enable bioterrorism) has been handled. for more than a decade, researchers, journal editors, security experts and policy-makers have strived to devise oversight mechanisms and governance initiatives that could adequately tackle the challenge of dual use without stifling innovation. unfortunately, to date their efforts have met with little success, as a result of which virtually each experiment of dual use concern is dealt with separately on a case-by-case basis. this is not to say that there are no similarities across studies of this kind. on the contrary, a few of the most notable examples follow a similar paradigm, including the creation of a vaccine-resistant strain of the mousepox virus, the artificial synthesis of the polio virus, the recreation of the 1918 spanish influenza virus and, most recently, the production of a mammalian-transmissible h5n1 avian influenza virus (the first three cases are described below; the h5n1 episode is discussed later).

in early 2001, the journal of virology published a report of the creation of a highly virulent strain of the ectromelia virus, the causative agent of mousepox. the work described in the report was carried out by a group of australian scientists based in canberra. its original goal was the development of an infectious immunocontraceptive that could be used against wild mice for the purpose of pest control. to achieve this, the group drew upon previously published work. during the course of the experiment, the researchers unexpectedly discovered that the newly engineered strain of the mousepox virus killed a far higher proportion of mice than the parent virus, including mice that had been vaccinated or that had natural immunity. when the research was published, concerns were raised that it could potentially be misapplied for hostile purposes, or even that the same technique could be utilised for creating a more virulent strain of the variola virus, which causes smallpox in humans.

in 2002, a team of scientists led by dr eckard wimmer from the state university of new york at stony brook announced that they had successfully created a polio virus 'from scratch'. to carry out the research, the scientists 'followed a recipe they downloaded from the internet and used gene sequences from a mail-order supplier'. once the virus was created, it was tested on mice, as a result of which the infected animals became paralysed and died. the study spurred a wide-ranging debate, not least because it drew attention to the possibility of using synthetic biology for constructing de novo viruses for the purposes of bioterrorism.

in 2005, it was announced that cdc scientists, together with colleagues from several research institutions across the usa, had successfully recreated the influenza virus responsible for the 1918 pandemic, which killed tens of millions of people worldwide. using dna from tissue of a flu victim buried in the permafrost in alaska, the researchers managed to reconstruct the virus and thus study its pathogenesis and the properties that contributed to its virulence. despite the scientific justification that was put forward, critics have argued that the study is 'a recipe for disaster', not least because the availability of the virus' full-genome sequence and detailed method for its reconstruction on the internet may facilitate its synthesis by those with malicious intent.

all four of these experiments were performed in strict compliance with the rules and procedures in place for laboratory biosafety, biosecurity and biorisk management and under appropriate physical containment conditions; all had passed thorough review by the respective local biosafety and bioethics committees; and all of them were deemed essential in terms of public health benefits. above all, the ethical and security concerns that the studies have raised go far beyond the laboratory door, posing fundamental questions about how life science research is reviewed, conducted and communicated. yet none of these high-profile experiments of concern has proved critical enough to provoke a radical change in the way dual-use research is governed. three points merit consideration in this regard. the first pertains to the manner in which the dominant discourse on dual use is framed, that is, in purely ethical terms as a dilemma. while bioethics undoubtedly has a role to play in the discussions on dual use, the language of 'dual-use dilemmas' is too abstract to offer appropriate analytical tools for dealing with the issues at play. as discussed above, the questions that dual-use research poses, such as data sharing, research funding and project planning, are far from hypothetical: they feature explicitly in everyday professional practice. however, the 'dilemma framework' automatically strips them of the complex socio-technical arenas in which they have actually presented themselves by laying an emphasis on what action should ideally be taken, rather than on what is practically feasible given the circumstances. moreover, such issues are typically structural in nature, for they constitute fundamental elements of the life sciences professional culture and, as such, could hardly be adequately addressed solely at the level of individual researchers. yet framing social, legal and security concerns in terms of moral dilemmas allows structural issues to be omitted from the discussion, rendering life scientists the chief, if not the only, moral agents expected to reach what is deemed to be the 'right' answer. assigning abstract duties then comes to be regarded as an appropriate 'solution', even if those duties are virtually impossible to fulfil given the complexities of the working environment within which researchers operate. the second point is related to the reductionist view that dominates the discourse of what counts as a risk in life science research. perhaps one of the most significant legacies of the asilomar conference on rdna (see the previous chapter) is the emphasis on laboratory risk that can be effectively managed by dint of physical containment and rules and procedures for safe laboratory practice.
it suffices to mention that the bulk of guidelines and formal regulations published by the who focus exclusively on promoting and refining measures that aim to maximise laboratory biosafety and prevent the accidental release of pathogens. hence, it is hardly surprising that the concept of dual use, and the idea of risks beyond the laboratory door implicit in it, seem alien to the majority of practising researchers. striking as it may appear, even though dual use research has been debated for more than a decade now, the level of awareness among life scientists of the broader social, security and legal implications of their work remains low. the third point deals with the way in which risks in life science research are assessed and mitigated. given the narrow definition of risk, encompassing technical particulars, physical containment and biosafety, risk assessment is considered an appropriate and reliable tool for ensuring research safety. the heavy reliance upon risk-assessment tools is underpinned by two underlying assumptions. one is that it is possible to foresee and calculate most, if not all, things that could potentially go wrong, both during the development phase of a project and after its completion. the other is that it is then possible to use the resulting data as a basis for devising measures and strategies for eradicating, or at least mitigating, the risks likely to occur. attractive as it may seem, this 'new alchemy where body counting replaces social and cultural values' presupposes a clear distinction between the risk-assessment 'experts' and the general public, whereby the former are granted a licence to make decisions about risk on behalf of the latter. likewise, the cost-benefit analysis on the basis of which research proposals are screened for potential risks and security concerns has attracted some serious criticism. in the view of some commentators, besides being sometimes deeply inaccurate, cost-benefit analysis is 'ethically wrong', since 'applying narrow quantitative criteria to human health and human life' is unacceptable. but there are other problems, too. as pointed out by dickson, cost-benefit analysis distorts political decision-making by omitting any factors that cannot be quantified, thus obscuring questions of equity, justice, power and social welfare behind a technocratic haze of numbers. as a result, complex and politically charged decisions are reduced to a form that fits neatly into the technocratic ways of making regulatory decisions, whereby calculations and approximations made by the few substitute for the judgements of many. the wide-ranging controversy that unravelled in late 2011, when two teams of scientists working independently in the netherlands and the usa managed to produce an airborne strain of the h5n1 avian influenza virus, a highly pathogenic microbe with a mortality rate of over 50 per cent in reported human cases, arguably constituted the pinnacle of the deliberation on dual use research. both studies set alarm bells ringing for the security community, who almost immediately jumped into the debate voicing concerns over the possibility of biological proliferation and bioterrorism. some commentators even argued that the experiments ran counter to the spirit, if not the letter, of the btwc.
against this backdrop, the resultant controversy was deemed, at least initially, to offer a timely opportunity to evaluate the existing governance mechanisms, determine their gaps and weaknesses, and broaden the scope of deliberation by inviting the participation of a wide range of stakeholders. unfortunately, the outcome of the debate proved far more moderate, signalling a preference for preserving the status quo without disrupting the established systems for governance and oversight. despite the extensive mass media coverage of the controversy, only a few public consultations were held, and none of those was designed as a platform for making policy proposals or developing action plans. moreover, the densely packed agenda, prepared duly in advance, left very limited scope for posing 'tricky' questions which the participating 'experts' might have struggled to answer. needless to say, all consequential decisions were made behind closed doors, away from public scrutiny, and on some occasions the people with the greatest vested interest in the publication of the studies were also the ones with the greatest say in the process. there were no significant changes in terms of governance initiatives, either. far from being ground-breaking developments, the us government policy for oversight of life sciences dual use research of concern and the decision of the dutch government to invoke export control legislation before allowing the publication of the study conducted within its jurisdiction were little more than desperate moves that aimed to obscure the inadequacy and shortcomings of the measures already in place. overall, the manner in which the h5n1 debate was handled could be treated as a missed opportunity, whereby those in charge of the decision-making process did little to address or even acknowledge the broader issues underpinning dual-use research of concern but simply 'kicked the can down the road to the next manuscript', waiting for the next controversy to erupt. technology seems to play a significant role in the governance of life science research. high-containment laboratories, well-equipped biosafety cabinets, sophisticated waste management systems, enhanced personal protective equipment and secure containers for the safe storage and transportation of biohazard materials are just a few of the tools and systems in place that allow the safe handling of dangerous pathogens and toxins and, at the same time, protect both laboratory personnel and the general public from exposure to deadly microbes. that said, the effectiveness of technical solutions should not be overstated, if only for the fact that 'problems' of governance are hardly technical matters per se but rather constitute complex issues of human relatedness. nevertheless, the attractiveness of technological fixes as offering reliable risk mitigation and reassurance in the safety of biotechnology is ever growing. it suffices to mention that the h5n1 controversy discussed above was in part resolved after the lead researchers in the netherlands and the usa, respectively, agreed to add a detailed section on the technical specificities and the laboratory biosafety and biosecurity measures taken during the experiments. the strategy has proven effective in diverting attention from the rather inconvenient questions regarding the utility, and significant potential for hostile misuse, of so-called 'gain-of-function' (gof) research and concentrating it on more mundane issues dealing with in-house precautions and safety procedures.
once the latter were deemed adequately resolved, the former were effectively forgotten. still, the value of technical means in ensuring reliable risk management should not be taken for granted. for one thing, laboratory biosafety precautions, however sophisticated, are far from perfect, and accidents do occur. such is the case with the pirbright site in the uk, which was at the centre of a major outbreak of foot-and-mouth disease in 2007, as a result of which large numbers of animals were slaughtered. in 2012, the bioterrorism bsl-3 laboratory at the us cdc in atlanta suffered repeated problems with airflow systems designed to help prevent the release of infectious agents. the faulty system could perhaps be regarded as an exception had it not been for the authoritative investigation report of the us government accountability office (gao). according to the report, the cost of building and maintaining high-containment laboratories, coupled with the absence of national standards for their design, construction, operation and maintenance, 'exposes the nation to risk'. far more critical is the situation in the developing world and emerging economies, where lax regulations and technical failures have significantly heightened the risk of accidental release of pathogens, as demonstrated by the numerous laboratory 'escapes' of the severe acute respiratory syndrome (sars) virus. but even if technology functions impeccably, this hardly reduces the likelihood of human error or inappropriate behaviour. unlocked doors in high-containment facilities hosting deadly pathogens, eating and drinking in laboratories and poor waste disposal practices are just a small part of the otherwise long list of mundane mishaps that may result in severe consequences. it is worth mentioning that the us cdc came under the spotlight after internal e-mail correspondence revealed that doors in the bsl-3 block, where experiments involving the causative agents of anthrax, sars and influenza were performed, were left unlocked on numerous occasions, thus increasing the risk of unauthorised access or theft. given the chance of technical flaws and the potential for human error, some life scientists have begun to question the reliability of existing laboratory precautions and to demand thorough review and evaluation. in a recent letter to the european commission, the foundation for vaccine research has asked for 'a rigorous, comprehensive risk-benefit assessment' of gof research that 'could help determine whether the unique risks posed by these sorts of experiments are balanced by unique public health benefits which could not be achieved by alternative, safe scientific approaches'.

engines that drive biotechnology momentum

by and large, the ongoing progress of biotechnology is viewed and assessed through an explicitly positive lens, which allows focusing almost exclusively on the benefits likely to be accrued, notwithstanding the risks, actual and potential. the resultant distorted image is problematic, not least because it precludes any comprehensive discussion of the potential side effects and negative implications of novel life science advances. above all, it sustains the barely questioned assumption that the existing governance mechanisms are adequate and sufficient to cope with the stresses and strains of the rapidly evolving biotechnology landscape.
yet given the complex and multifaceted dynamics shaping the life science enterprise, the rapid pace of innovation, and the limits to predicting the synergistic and cumulative effects of the proliferation of new technologies, the uncritical acceptance of such assumptions is at best naïve and at worst dangerous. arguably, the advancement of the life sciences has greatly benefited from the fascinating breakthroughs made in other areas of study, such as chemistry, engineering, computing, informatics, robotics, mathematics and physics. some commentators even talk about a third revolution in biotechnology underpinned by scientific and technological convergence: convergence does not simply involve a transfer of tool sets from one science to another; fundamentally different conceptual approaches from physical science and engineering are imported into biological research, while life science's understanding of complex evolutionary systems is reciprocally influencing physical science and engineering. convergence is the result of true intellectual cross-pollination. the resultant 'new biology' has opened up a range of marvellous possibilities, enabling the manipulation of living matter at the full range of scales, as well as the application of biological-systems principles to the development of novel materials, processes and devices. as such, it has been largely hailed as possessing the 'capacity to tackle a broad range of scientific and societal problems.' this is not an exaggeration. as noted by a recent report of the us nas, the precipitous decline in the cost of genome sequencing would not have been possible without a combination of engineering of equipment, robotics for automation, and chemistry and biochemistry to make the sequencing accurate. likewise, it is the combination of expertise from fields as diverse as evolutionary biology, computer science, mathematics and statistics that has allowed both the analysis of raw genomic data and the subsequent use of these data in other fields. at the same time, advances in nanoscience and nanotechnology have considerably enhanced drug delivery, making it more accurate by targeting specific parts of the body. yet the transformative potential of scientific and technological convergence comes at a price, not least because, parallel to the benefits it offers, there are risks whose effects could be truly devastating. take drug delivery, for instance. thanks to the technological breakthroughs of the past decade, doctors have gained unprecedented access to the human body, which, in turn, has facilitated the treatment of previously incurable diseases and conditions (e.g. some forms of cancer). nanoparticles and aerosols are now utilised for delivering a precise dose of therapeutics to tissues and cells via novel pathways, circumventing the body's natural defences and evading immune response. it is not difficult to imagine how such knowledge could be misapplied for malicious ends, including incapacitating and killing. research on bioregulators is a case in point. bioregulators are natural chemicals in the human body that play a vital role in the maintenance of homeostasis but, when administered in large quantities or to healthy individuals, can be toxic and lead to serious disorders, even death. given their properties, bioregulators constitute the perfect bioweapon: efficient and virtually impossible to detect.
and if in the past security analysts discounted the risk of their weaponisation due to the instability of the compounds when released into the atmosphere, the emergence of novel drug delivery techniques has significantly altered the security calculus. this is but one example of the challenges that the increasing convergence between biology and chemistry poses to the integrity of the international biological and chemical non-proliferation regimes. even though some effort has been made over recent years to address those and other areas of concern and to strengthen the international prohibition against biological and chemical warfare, in practical terms little has been achieved, as a result of which the risk of the hostile exploitation of novel scientific developments remains far from hypothetical. along with the risk of misuse of new knowledge, there is the risk posed by the lack of sufficient scientific knowledge. cross-disciplinary convergence opens a multitude of opportunities for manipulation and modification of living matter but, at the same time, it precludes almost any sensible assessment of the potential interactions likely to occur in the process. nano-based medicine is but one area that has attracted criticism in this regard. since some elements behave differently at the nano-scale, it becomes extremely difficult to assess their level of toxicity or the other negative side effects that they may exert. such is the case with long carbon nanotubes which, having been initially praised for their potential to improve implant development, were later blamed for exhibiting asbestos-like behaviour that could lead to cancer. another area of converging science with far-reaching implications is synthetic biology, a cross-disciplinary field that draws upon strategies and techniques from molecular biology, chemistry, engineering, genomics and nanotechnology and thus enables the design and modification of biological systems at a fundamental level. empowered by the tools of synthetic biology, in 2002 scientists managed to assemble a polio virus 'from scratch' in the absence of a natural template. and in 2010 craig venter and his team announced the construction of the first self-replicating synthetic cell which, in their view, was 'a proof of the principle that genomes can be designed in the computer, chemically made in the laboratory and transplanted into a recipient cell to produce a new self-replicating cell controlled only by the synthetic genome.' the controversial work has attracted criticism on several grounds, including the potential negative effects of the accidental or deliberate release of the novel organism into the environment and the perceived arrogance of scientists seeking to 'play god'. more broadly, both the polio and the synthetic cell studies have exposed the obstacles to the regulation of synthetic biology. while some commentators dismiss the risk of bioterrorism, underscoring the key role of tacit skills and knowledge and the difficulties that the lack thereof poses to the replication of the experiments, other issues still merit attention. consider the question of access to commercially available genomic sequences. even though the oversight system for screening base pair orders has improved since the guardian report that exposed the lax regulations under which virtually anyone could order gene sequences, gaps still remain, leaving scope for abuse by those with malign intent.
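to make the screening problem concrete, the following minimal sketch (in python) shows one naive way a synthesis order might be checked for exact overlaps against a watchlist of sequences of concern; the watchlist sequence, window size and function names are all invented for illustration and correspond to no real vendor's or consortium's protocol. it also anticipates the 'split orders' problem discussed below: an order divided into fragments shorter than the screening window triggers no match at all.

# illustrative toy only: real screening pipelines rely on similarity search
# against curated pathogen databases plus expert human review; this
# exact-match k-mer screen merely demonstrates the logic and one way it fails.

WINDOW = 20  # assumed screening window length (not a real standard)

def kmers(seq, k=WINDOW):
    # yield every substring of length k (none if the sequence is shorter than k)
    return (seq[i:i + k] for i in range(len(seq) - k + 1))

def build_index(watchlist):
    # index all window-length substrings of every watchlisted sequence
    index = set()
    for seq in watchlist:
        index.update(kmers(seq))
    return index

def flags_order(order, index):
    # True if the order shares any window-length substring with the watchlist
    return any(km in index for km in kmers(order))

# made-up 'sequence of concern': 42 arbitrary bases, not a real gene
concern = "ATGGCTAGCTAGGACCTTGAACGTTACGGATCCAGTCCTGAA"
index = build_index([concern])

print(flags_order("CCCC" + concern + "GGGG", index))  # True: intact order is caught

# the same sequence ordered as three 14-base fragments, each shorter than
# the screening window (perhaps from different suppliers), matches nothing
fragments = [concern[i:i + 14] for i in range(0, 42, 14)]
print([flags_order(f, index) for f in fragments])     # [False, False, False]

the point is not that real screens are this naive, but that any matching rule with a finite window can in principle be gamed by distributing an order across fragments or suppliers, which is one reason commentators treat split orders as a governance problem rather than a purely technical one.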
for example, schmidt and giersch have outlined at least three areas of emerging challenges that the existing governance regimes would struggle to accommodate, including 'split orders', 'outsourcing' and the potential for non-natural biological systems. the human genome project, completed in 2003, lasted over ten years and cost close to three billion dollars; by contrast, about a decade later, whole-genome sequencing can be performed within hours at a price of roughly a thousand dollars or less. while still in their infancy, personalised medicine and individual genetic testing are steadily gaining popularity. indeed, 'up to 100,000 people in england are expected to have their entire genetic makeup mapped in the first stage of an ambitious public health programme' launched by the national health service that aims to 'revolutionise the treatment and prevention of cancer and other disease.' according to its proponents, genomic testing offers numerous advantages vis-à-vis traditional evidence-based medicine, including the possibility of early diagnosis of disease, of individually tailored treatment and, perhaps most importantly, of disease prevention, as illustrated in the resolve of the hollywood actress angelina jolie to undergo a double mastectomy after discovering she had an inherited genetic mutation that put her at high risk of breast and ovarian cancer. but this is just the beginning. in 2012, scientists managed to sequence a foetus's entire genome using a blood sample from the mother and a saliva specimen from the father, a development that could potentially allow a range of genetic disease conditions to be detected prenatally. and laboratory experiments have already demonstrated the efficacy of genetic therapy to cure mitochondrial disease by creating an embryo with genetic material from both parents and a third person acting as a donor. while truly breathtaking, the advances outlined above raise a host of thorny issues of ethical, social and legal concern that merit public scrutiny and extensive deliberation before decisions regarding their widespread application are made. at a very basic level, there is the question of whether and to what extent we as individuals are capable of assimilating the information that our own genetic makeup may reveal. are we sufficiently resilient to cope with the emotional distress, anxiety, shame, stigma and guilt that may follow from the awareness of severe medical conditions that we or our close ones suffer from or are likely to develop? far from hypothetical, this question has prompted the establishment of a novel profession, that of the genetics counsellor, whose task is to help patients overcome any negative effects, stress or psychological trauma that the disclosure of their genomic map may create. this is just a partial solution though, for the crux of the matter lies in finding a way to deal effectively with risk and probabilities, and we as humans are yet to demonstrate a capacity for understanding them or relating them to our own lives; a toy calculation below illustrates the difficulty. individual emotional turmoil, however significant, constitutes only the tip of the iceberg. according to daniel kevles, the torrent of new genetic information has already begun to fundamentally reconfigure social practices and inter-personal relations: it has been rightly emphasised that employers and medical or life insurers may seek to learn the genetic profiles of, respectively, prospective employees or clients.
employers might wish to identify workers likely to contract disorders that allegedly affect job performance, while both employers and insurers might wish to identify people likely to fall victim to diseases that result in costly medical or disability payouts. whatever the purpose, such genetic identification would brand people with what an american union official has called a life-long 'genetic scarlet letter' or what some europeans term a 'genetic passport'. linking genetic makeup with human identity would ultimately set the scene for the proliferation of technologies aimed at human enhancement: after all, if a gene therapy could allow one to stand a chance in a job competition, boosting one's capabilities would potentially make one a more desirable candidate. other issues of more immediate concern are also likely to arise. one is privacy. gene-sequencing companies usually hold the genetic data of their clients in digital format on online platforms, which automatically creates a risk that personal information may be leaked, hacked or stolen. further, there is the question of ownership. consider, for instance, the controversial issue of human gene patenting, whereby patented genes are treated as research tools and, as such, are controlled by the patent holder, who may restrict and charge for their use. thus created, the system often operates to the detriment of patients by hindering research practice, elevating diagnostics prices and denying access to a second, independent medical opinion. gene identification alone has a potential 'dark side' too, for it could enable the development of weapons targeted at group-specific gene markers (e.g. ethnicity). pre-natal genetic testing is yet another significant bone of contention, not least because it evokes notions of state-mandated eugenic programmes and assaults on human rights and dignity. while a nazi-like campaign for a superior race seems improbable in the twenty-first century, this is not to say that other forms of eugenics may not be encouraged. indeed, some commentators have highlighted the rise of 'homemade eugenics', whereby individual families can make decisions on the attributes of their progeny: the lure of biologically improving the human race, having tantalised brilliant scientists in the past, could equally seduce them in the future, even though the expression of the imperatives may differ in language and sophistication. objective, socially unprejudiced knowledge is not ipso facto inconsistent with eugenic goals of some type. such knowledge may, indeed, assist in seeking them, especially in the consumer-oriented, commercially driven enterprise of contemporary biomedicine. it is plausible to assume that, when presented with the opportunity of having their future child tested for genetic disorders, many parents would barely hesitate to accept. such a resolve could have far-reaching implications, though. for instance, some genetic therapies entail the use of donor dna different from that of the parents, whereby any genetic modifications in the embryo will pass down to future generations. despite the government support for 'three-parent babies' in the uk, local religious organisations have protested vociferously against the legalisation of the technique. at the same time, there are certain genetic disorders that can be diagnosed at an early stage but, as yet, cannot be cured, which inevitably poses the tough choice between raising an unhealthy child and abortion.
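to see why probabilistic genetic risk information is so easy to misread, as noted above, consider a toy bayes'-rule calculation in python; all figures are assumed for illustration and describe no actual test or disorder.

# toy numbers, purely illustrative: a rare disorder and a very accurate test
prevalence = 1 / 10_000   # assumed frequency of the disorder in the population
sensitivity = 0.99        # assumed P(positive result | affected)
specificity = 0.999       # assumed P(negative result | not affected)

# total probability of a positive result, over affected and unaffected people
p_positive = (sensitivity * prevalence
              + (1 - specificity) * (1 - prevalence))

# bayes' rule: probability of actually being affected given a positive result
ppv = sensitivity * prevalence / p_positive
print(f"{ppv:.1%}")  # about 9%: most positive results are false alarms

even under these generous assumptions, roughly nine positive results in ten are false alarms, because the condition is so rare; a counsellee told only that they 'tested positive' could easily draw the opposite conclusion, which is precisely the interpretive burden the genetics counsellor described above is meant to carry.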
to be sure, such questions constitute more than individual parents' dilemmas, for they touch upon established social and cultural values, something evident in the profound differences across national reproductive policies. more broadly, there are concerns that reproductive genomics may remain a prerogative of those affluent enough to afford it, thus further exacerbating the divide between the global rich and the global poor. the growth of life science capacity across the globe over the past few decades has been truly astonishing, leading to the emergence of a vibrant research community that brings together researchers from various parts of the world. indeed, a nas report highlights the extension of both north-south and south-south partnerships, which has played a key role in synergising strengths and maximising competitiveness by improving the quality and effectiveness of research and facilitating data sharing. at the same time, increasing collaboration in the biotechnology industry has offered companies situated in emerging economies access to the global market, thus contributing to economic development and growth. recent advances in technology and in laboratory and experimental equipment have further impacted on the practice of life science research in profound ways. improvements in dna sequencing technology have significantly shortened the time required for the preparation of nucleic acid sequences, thus relieving scientists of the burden of completing the task themselves and allowing them to focus on their actual project instead. studies and experiments once performed by senior researchers with extensive experience are now carried out by master's students. aided by specially designed genetic engineering toolkits, children as young as ten start exploring the realm of biology in an interactive and engaging manner. needless to say, their notion of science, and of the world in general, will differ significantly from that of their parents, whose primary sources of knowledge used to be textbooks and encyclopaedias. indeed, the increasing commercialisation of synthetic biology offers anyone curious enough to fiddle with biological systems the chance of doing so in the comfort of their own home. such modern gene hackers often lack a formal background in biology and come from various walks of life. driven by an insatiable appetite for knowledge and the vision of a ground-breaking discovery that could be turned into a multimillion-dollar profit, they take up the rather unusual hobby of biohacking, which entails the redesign of existing, and the creation of novel, biological systems. for just a few hundred dollars, bio-enthusiasts set up laboratories, easily obtaining all essential requisites and equipment through online sales. and if to some biohacking equates to little more than an unusual hobby, others highlight its potential to generate substantial revenue and fuel economic development. contrary to popular expectation, biohackers are not just eccentric individuals who work in solitude away from public attention. rather, they are members of a wide global movement dedicated to the ideal of do-it-yourself biology (diy), which has branches in numerous locations on four continents. the movement has been partially institutionalised through the establishment of the biobricks and international genetically engineered machine (igem) foundations, which seek to promote the open and ethical conduct of biological engineering and to stimulate innovation and creativity.
to this end, igem holds an annual competition open to high-school students, university undergraduates and entrepreneurs from all over the world. with hundreds of participating teams, the competition constitutes the premier forum at which biohackers can showcase their skills through project presentation. exciting as it may seem, the ongoing diffusion of life science expertise poses an array of governance conundrums. at the level of professional practice, the proliferation of research facilities around the world has exposed the urgent need for laboratory biosafety and biosecurity training, especially in developing states where a tradition of handling dangerous pathogens is lacking. the issue is further complicated because such countries often lack the required legal and institutional infrastructure to ensure that professional practice complies with relevant international regulations. foreign aid has gone some way towards helping overcome those deficiencies, but it has given rise to new problems, too. for instance, it is far from unlikely for a donor state to provide material support for the construction of a state-of-the-art laboratory, eventually leaving its maintenance to the local government, which can hardly afford the subsequent costs. a similar trend is observed in the area of capacity building and human resource development. most projects that aim to promote biorisk management and a biological security culture tend to be severely constrained in terms of time and funding and overly ambitious in terms of agenda and expected outcomes. lack of adequate mechanisms for quality assessment hinders progress evaluation and sometimes leads to duplication of effort and resources. the emergence of diy biologists in the life science arena has further added to the challenge of ensuring that novel scientific and technological developments are utilised in a safe and ethical manner. even at the level of everyday practice, difficulties still persist. for instance, many amateur scientists have complained of the lack of manuals and guidelines regarding the safe operation and maintenance of home laboratories. issues such as waste disposal, safe handling and storage of biological material, and prevention of contamination pervade the work of biohackers, who, unlike professional researchers, conduct experiments in a much more volatile environment. potential security concerns are also present. with more and more individuals gaining access to biological engineering technologies, ensuring appropriate oversight of what goes on in garage laboratories becomes increasingly difficult. the experience of the us fbi is a case in point. back in 2004, the fbi arrested steven kurtz, a professor at the university at buffalo, under the suspicion of plotting a bioterrorist attack. the subsequent investigation revealed that all laboratory and dna extraction equipment found in kurtz's house had been legitimately obtained and was used in his artwork. in an attempt to avoid mistakes of this kind, the fbi has drastically changed its approach to dealing with the diy movement, launching a series of outreach activities that seek to raise awareness of the potential security implications of biohacking. while undoubtedly necessary, such initiatives may well be seen as too little, too late in light of the wide availability of materials, tools and devices that could facilitate the malign misuse of the life sciences.
indeed, it is worth noting that as early as the late 1990s the us defence threat reduction agency (dtra) managed to build a research facility that simulated the manufacture of weaponised anthrax using only commercially available materials and equipment.

the role of states: both a poacher and gamekeeper

structural factors have an important bearing on the development and growth of biotechnology. economic considerations, power interests and realist fears generate potent dynamics that shape, influence and sometimes direct the life science trajectory. within this context, states assume a dual role. on the one hand, they are expected to act as gamekeepers and to regulate, monitor and control the process of life science research and the dissemination of novel technologies. on the other hand, though, they also have powerful incentives to act as 'poachers', not least because of the fascinating opportunities for enhancing their prosperity, prestige and security that scientific and technological development opens up. the following passage effectively outlines states' dual function: government has an important role in setting long-term priorities and in making sure a national environment exists in which beneficial innovations will be developed. there must be a free and rational debate about the ethical and social aspects of potential uses of technology, and government must provide an arena for these debates that is most conducive to results that benefit humans. at the same time, government must ensure economic conditions that facilitate the rapid invention and deployment of beneficial technologies, thereby encouraging entrepreneurs and venture capitalists to promote innovation. given that the agent (i.e. state governments) in charge of initiating ethical debates on the progress of biotechnology is also the one expected to provide the conditions that would allow this progress to generate outcomes likely to contribute to economic growth and political superiority, it is hardly surprising that any issues likely to slow down or otherwise hinder the enormous momentum of the life sciences are omitted from public discussion. this duality further informs how risks are perceived, framed and addressed. for instance, even though most developing countries lack the capacity to manage dual use research of concern, they do not see this as an immediate priority and prefer to invest effort and resources in improving their laboratory biosafety and laboratory biosecurity infrastructure and capacity. in the view of their governments, the dangers of naturally occurring and circulating diseases constitute a far greater worry than the potential for misuse of cutting-edge research. by contrast, some developed countries, most notably the usa, have embarked on building up their biological defence systems, highlighting the grave threat posed by the potential use of bioweapons by non-state actors. their activities have encountered severe opprobrium, as some analysts see them as a contravention of the norms embedded in the btwc. the evolution of the chemical and biological non-proliferation regime epitomises the attempts of states to avert the hostile exploitation of the life sciences whilst promoting their use for 'peaceful, prophylactic and protective purposes'. the entry into force of the btwc and the chemical weapons convention (cwc), in 1975 and 1997 respectively, is indicative both of states' renunciation of chemical, biological and toxin weapons and of their commitment to the goals of arms control and disarmament.
that said, the imperfections and shortcomings of these treaties signify the influence of realist fears and political calculations that pervade international negotiations. in the case of the btwc, two points merit attention. the first pertains to the lack of a verification mechanism when the treaty was first agreed back in the early 1970s. subsequent revelations of secret state-led offensive biological programmes in the former soviet union, south africa and iraq up until the early 1990s have significantly undermined the convention. second, the failure to negotiate a binding protocol in 2001 has further dimmed the prospects for strengthening the regime and thus ensuring universal compliance with its prescriptions. less acute but just as worrying is the situation regarding the cwc. even though the convention is exemplary in many respects, not least because of its verification system, almost universal membership and implementing body, the organisation for the prohibition of chemical weapons (opcw), it still faces serious challenges that need to be considered. for instance, while the treaty bans the development, production, acquisition, and retention of chemical weapons, the definition of 'purposes not prohibited under th[e] convention' entails 'law enforcement including domestic riot control purposes' (article ii.9(d)). some commentators have argued that, given the lack of a universally agreed definition of what kinds of activities count as 'law enforcement', this text opens a major loophole in the convention. several states parties to the convention have voiced concerns in this regard. australia has noted that: the weaponisation of [central nervous system] acting chemicals for law enforcement purposes is of concern to australia due to the health and safety risks and the possibility of their deliberate misuse, both of which have the potential to undermine the global norm against the use of toxic chemicals for purposes prohibited by the convention. [ . . . ] australia's position is that it is not possible for a state party to disseminate anaesthetics, sedatives or analgesics by aerial dispersion in an effective and safe manner for law enforcement purposes. critics highlight the possibility of the deployment of novel chemical weapons for the purposes of countering terrorism, something evident in the 2002 moscow theatre siege (dubrovka), when the russian security forces used a fentanyl-derivative agent, as a result of which about a sixth of the hostages and all of the terrorists involved died. in 2011 the european court of human rights ruled with regard to the dubrovka operation that: there had been no violation of article 2 (right to life) of the european convention on human rights concerning the decision to resolve the hostage crisis by force and use of gas. the court, nonetheless, noted that: even if the gas had not been a 'lethal force' but rather a 'non-lethal incapacitating weapon', it had been dangerous and even potentially fatal for a weakened person [ . . . ]. the court further confirmed some of the earlier criticisms that were levelled against the government, particularly in terms of preparedness and provision of medical assistance. according to the ruling, russia had to pay damages to all the applicants, representatives of siege victims. to date, russian officials have withheld information concerning the exact formula of the gas used during the dubrovka operation, on security grounds.
given the lack of an internationally agreed definition of what constitutes 'terrorism', on the one hand, and the rise of irregular/asymmetric warfare and sporadic conflicts, on the other, some commentators have warned against the possibility of a 'grey area' which may enable states to utilise non-traditional methods of war to gain advantage.

deliberative systems encompass a vast array of practices, processes and mechanisms, both formal and informal, whereby a polity considers the 'acceptability, appropriateness and control of novel developments in, or impacting on, shared social and physical arenas'. by design, they reflect and are informed by the values, beliefs and standards shared among the group, or in other words, by the prevalent culture. as such, deliberative systems vary across societies, with their intensity, inclusiveness and structure depending on the established political and social norms. yet their chief purpose and function remain virtually the same, namely to help societies adapt to the changing circumstances of their milieu in a way that ensures stability, sustainability and safety. public deliberation requires time; and wide-ranging life science advances, current and planned, offer profound challenges to shared ideas and ideals about the foundations of human relatedness and of social coherence, justice, human dignity and many other norms, both formal and informal. yet given the ruminative nature of deliberative processes, on the one hand, and the fast speed at which biotechnology innovation is evolving, on the other, the danger of the former being steadily outpaced and overburdened by the latter is far from hypothetical. consider the following passage sketching the scale of social changes likely to arise from the increasing convergence between nanotechnology, biotechnology, cognitive neuroscience and information technology: in the foreseeable future, we will be inundated with new inventions, new discoveries, new start-ups, and new entrepreneurs. these will create new goods and new services. [ . . . ] as expectations change, the process of politics and government will change. people's lives will be more complex and inevitably overwhelming. keeping up with the changes that affect them and their loved ones exhausts most people. they focus most of their time and energy on tasks of everyday life. in the future, when they achieve success in their daily tasks, people will turn to the goods and services, the new job and investment opportunities, and the new ideas inherent in the entrepreneurial creativity of the age of transitions. no individual and no country will fully understand all of the changes as they occur or be able to adapt to them flawlessly during this time. this vision of a 'brave new world' merits attention on two important grounds. first, it implies that the changes likely to occur in the not too distant future as a result of the rapid progress of science and technology are imminent and unavoidable, in the sense that their advent hardly depends on or even requires extensive public deliberation. second, given that our capacity for adaptation to and grasp of those changes will be considerably impaired, the age of transitions leaves little space for public deliberation. to add to this gloomy picture, there is already some evidence that the progress of the life sciences is overwhelming the existing deliberative mechanisms. for instance, kelle et al.
argue that the rapidity of biotechnology advancement, coupled with the immensity and complexity of the knowledge accumulated therefrom, complicates efforts to deal with potential risks, something evident in the regulatory gap that the convergence of chemistry and biology has created in the area of arms control. this is problematic, for the reduced resilience of deliberative systems provides favourable conditions in which scientific and technological innovation can continue unabated. a vicious circle is thus created in which the inability of deliberative systems to cope with the strain exerted by biotechnology advancement fuels the latter, turning it into a self-propelling force. the proliferation of contentious 'gain-of-function' research is a case in point. even though the h5n1 controversy discussed in the preceding sections exposed the limitations of existing governance mechanisms for addressing the potential security, ethical, and legal implications arising from such studies, it hardly precluded scientists from conducting similar experiments. indeed, less than four months after the moratorium on research involving contagious h5n1 virus was lifted, a team of chinese researchers announced the creation of a hybrid of the h5n1 strain and the h1n1 virus that caused the 2009 flu pandemic. and it was not long until the newly emerged h7n9 influenza virus became airborne, as well. if anything, those examples indicate that, in light of the rapid pace of life science progress, addressing governance concerns on a case-by-case basis is not only self-defeating but, given the number and variety of conundrums, likely to become unsustainable in the long run. given the significant potential of biotechnology to bring about multifaceted changes in different spheres of life and generate considerable benefits in the form of new products, enhancement of public and private capital and alleviation of social ills, there is a powerful urge to allow the ongoing expansion of the life sciences to proceed largely unfettered. risks are carefully calculated and, where possible, downplayed as hypothetical at the expense of comprehensive deliberation. and even when proposals for risk mitigation measures are entertained, preference is usually given to those unlikely to hinder the progress of the life sciences. by and large, there is a genuine belief that the existing governance mechanisms in the area of biotechnology can accommodate and cope with the wide-ranging pressures exerted by scientific innovation and the rapid diffusion of technologies with multiple uses, by offering 'solutions' and handling concerns on a case-by-case basis. in particular, the technology of safety is still 'celebrated as an unadulterated improvement for society as a whole'. yet there are reasons for scepticism toward the adequacy and effectiveness of the governance approaches currently in place. much of the discussion in the preceding sections has focused on the ways in which the increasing pace, growth and global diffusion of biotechnology advances are beginning to expose the limits of the existing measures for control and risk management by challenging accepted values and beliefs and redefining established norms of practice. as the multifaceted dynamics driving the biotechnology momentum continue to intensify and multiply, it becomes more and more difficult to comprehend, let alone foresee, the various impacts that the large-scale deployment and proliferation of novel scientific and technological advances have both on our social systems and the environment.
given the tight coupling between human-made and natural systems and their complex, often unanticipated interactions with catastrophic potential, the existing narrow definitions of risk are rendered inadequate. at the same time, the advent of new technologies with multiple adaptive applications opens up an array of possibilities for hostile exploitation, thus compelling governments to make tough decisions in an attempt to reconcile the benefits of biotechnology with the potential security concerns arising therefrom. while the advancement of biotechnology promises tremendous public health benefits, it also holds considerable catastrophic potential, as the case of 'gain-of-function' experiments illustrates. as scientific capabilities and work involving dangerous pathogens proliferate globally, so do risks and the prospects of failures, whether technical or arising from human error. indeed, assessing the rapidly evolving life science landscape, some security commentators argue that 'current genetic engineering technology and the practices of the community that sustains it have definitively displaced the potential threat of biological warfare beyond the risks posed by naturally occurring epidemics'. laboratories, however well equipped, do not exist in isolation but are an integral part of a larger ecological system. as such, they constitute a 'buffer zone' between the activities carried out inside and the wider environment. and despite being technically advanced and designed to ensure safety, this 'buffer zone', just like other safety systems, is far from infallible. for one thing, mechanical controls leave room for human error and personal judgement, both of which are factors that could be highly consequential but which could hardly be modelled or predicted with exact certainty. the speed at which the transformation of the life sciences is taking place is yet another factor that adds to the complexity of life science governance. stability is a fundamental condition for the development and preservation of human and natural systems alike. in social systems, culture is the primary source of stability, for it determines what values, beliefs, practices and modes of behaviour are deemed acceptable and, as such, lays the foundations of order. all forms of governance therefore are cultural artefacts and manifestations of culture. culture also provides the tacit standards whereby change is assessed and treated as acceptable or unacceptable. hence, any state of affairs in which the rate of change precludes regulation disrupts the ordinary functioning of the system and jeopardises its preservation: the breakdown of human regulation does not extinguish regulation of a simpler sort. [ . . . ] the system formed by men and the rest of the natural world will continue to regulate itself after a fashion, even if human regulation wholly fails at all levels above the primary group. but the resulting 'order' would exclude all those levels of human order which man-made stability makes possible. to be sure, a world characterised by a runaway biotechnology would be far different from the one we know. the main challenge to averting this prospect lies in ensuring that the systems of governance are in sync with the progress of the life sciences. history has shown that even highly developed, long-standing systems of governance can fail for reasons as diverse as disasters; loss of authority/legitimacy of governing bodies; and pervasive corruption.
one further source of failure is the inability of a society to adapt to its changing milieu: men are adaptable; they can learn to live even in harsh and hostile environments, so long as the environment remains constant enough to give them time to learn. [ . . . ] if they form the habit of adapting by constantly changing that to which they are trying to adapt, they build uncertainty into the very structure of their lives. they institutionalise cluelessness. the process of adaptation is closely connected to cultural patterns, and any serious disruptions in the latter could have detrimental effects and impair it severely. the extent to which change is taking place within the framework of the prevalent culture defines the borderline between system evolution and system disintegration. the governance mechanisms currently in place, both formal and informal, are all a function of historical, cultural, and sociopolitical contingencies. as such, their capacity for adaptation largely depends on our ability to comprehend and assimilate the complex changes that the progress of biotechnology brings about. they can only evolve as fast as our shared standards, values, routines and perceptions allow them to. and that is why governance can hardly be reduced to a technocratic exercise; on the contrary, to be effective, it requires extensive deliberation and full appreciation of the far-reaching implications of novel life science advances.
key: cord- -uvjjmt p authors: shi, yong; zheng, yuanchun; guo, kun; jin, zhenni; huang, zili title: the evolution characteristics of systemic risk in china's stock market based on a dynamic complex network date: - - journal: entropy (basel) doi: . /e sha: doc_id: cord_uid: uvjjmt p the stock market is a complex system with unpredictable stock price fluctuations. when the positive feedback in the market amplifies, the systemic risk will increase rapidly. over its years of development, the mechanism and governance system of china's stock market have been constantly improving, but irrational shocks have still appeared suddenly in the last decade, making investment decisions risky.
therefore, based on the daily returns of all a-shares in china, this paper constructs a dynamic complex network of individual stocks, and represents the systemic risk of the market using the average weighted degree, as well as the adjusted structural entropy, of the network. in order to eliminate the influence of disturbance factors, empirical mode decomposition (emd) and grey relational analysis (gra) are used to decompose and reconstruct the sequences to obtain the evolution trend and periodic fluctuation of systemic risk. the results show that the systemic risk of china's stock market as a whole shows a downward trend, and the periodic fluctuation of systemic risk has a long-term equilibrium relationship with the abnormal fluctuation of the stock market. further, each rise of systemic risk corresponds to external factor shocks and internal structural problems. the stock market is a typical complex system with multiple stock prices fluctuating from equilibrium to deviation and back to equilibrium again. a large number of heterogeneous investors buy and sell stocks frequently, making the relationships between different stocks unpredictable. in most scenarios, owing to factors like the herd effect, investors' investment strategies converge [ , ]; when some investors buy a stock, other investors tend to buy the same one, and furthermore, when the vast majority of investors buy or sell stocks, other investors usually follow this action. at the same time, different listed companies are another heterogeneous agent in the stock market. on the one hand, the economic exchanges between listed companies will lead to the linkage of their stock prices. on the other hand, similar actions by investors on similar stocks can cause herd behavior between different stock prices. when the prices of a large number of stocks in the market tend to be consistent, it means that the herd effect in the market is stronger, and the stock market is more likely to fluctuate excessively and consistently, leading to higher market systemic risk [ , ]. in former studies, the capital asset pricing model (capm) framework was usually used as a basic theory to analyze financial systematic risks [ ] [ ] [ ] [ ]. according to capm, risks can be divided into systematic risk (or market risk) and non-systematic risk, while the latter can be diminished through investment portfolios. systematic risk often refers to pervasive, far-reaching, perpetual market risk, which can be measured by the sensitivity of a portfolio's return to market movements (beta). therefore, most studies on systematic risk are based on beta values. although this theory is widely adopted, it usually comes with a number of assumptions, such as homogeneous investors in capital markets. however, in modern financial markets, different investors generally have different degrees of rationality, ability to obtain information, and sensitivity to prices, that is, investors are usually heterogeneous. hence, capm may not be a reasonable model in the real complex world [ , ]. more importantly, this paper focuses on systemic risk, which reflects the stability of the system and the characteristics of risk transmission among individuals in a certain complex system. a complex network, which is based on physics and mathematics theory, can tackle complicated practical problems [ ]. it is especially suitable for modeling, analysis, and calculation in complex finance systems [ ].
nowadays, the literature on applying complex networks to finance is growing in size, and complex networks have become important tools in the finance field [ ]. after years of development, china's stock market is growing in scale and vitality, while the market operation mechanism and management system are constantly improving. nevertheless, there have been several typical bear and bull markets in recent years, and systemic risk in the stock market has risen periodically. therefore, a dynamic complex network of individual stocks in china's stock market is constructed in this paper to measure the dynamic systemic risk of china's stock market. then, the tendency evolution and cycle change characteristics of systemic risk are explored. the structure of this paper is as follows. section 2 summarizes the applications of complex networks in the field of economy and finance; section 3 introduces the data and methodology used in this paper; section 4 presents the empirical results and analysis; and the conclusions and some discussion are given in section 5.

construction of the network consists of two important steps, defining nodes and defining edges. in previous studies, nodes are usually represented by different agents in the financial market, that is, stocks or bonds, and edges are symbolized by the relationship between such agents. pearson's correlation coefficient is the most common and easiest way to measure the correlation between two entities in the financial market [ ] [ ] [ ] [ ] [ ] [ ] [ ]. for example, mcdonald et al. used the pearson correlation coefficient to construct a currency-related network in the global foreign exchange market and obtained temporary dominant or dependent currency information [ ]. in addition, other correlation measures, such as the spearman rank-order correlation coefficient [ ], multifractal detrended cross-correlation analysis (mfcca) [ , ], multifractal detrended fluctuation analysis (mf-dfa) [ ], and the cophenetic correlation coefficient (ccc) [ ], have also been put forward. furthermore, correlation can also be defined by some econometric methods, such as the granger causality test [ , ], the cointegration test [ ], the dynamic conditional correlation garch model (dcc-garch) [ ], and so on. after the definition of edges, some filter methods for choosing the important edges should be applied. otherwise, the complex network will be very large and complicated, which is not conducive to subsequent analysis. minimum spanning tree (mst) can be used for this purpose. after the mst operation, the complex network will retain only n − 1 edges, where n is the number of nodes, which greatly facilitates the study of the network topology. at present, mst is most commonly used to simplify financial complex networks [ , , [ ] [ ] [ ], [ ] [ ] [ ]]. for example, in 1999, mantegna first proposed that mst could be used to search for important edges in the stock market network, and a stock market topology with economic significance could be obtained [ ]. besides mst, other filtering methods have also been proposed.

in this paper, the network was built over a sliding window on the daily return series, with a fixed window length in days and one day per step. then, the average weight and structural entropy of the network in each day can be obtained. the ratio of the average weight of the stocks whose weights rank in the top group to the overall average weight in each window period can also be calculated and defined as the concentration ratio of important stocks. therefore, four network indexes could be derived. next, these four network indexes were combined with the stock market index and 0–1 standardized before empirical mode decomposition (emd) was performed.
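to make the mst filtering step concrete, the following is a minimal sketch (not taken from the paper) of converting a pearson correlation matrix into mantegna's distance metric and extracting the minimum spanning tree with scipy; the simulated returns and the matrix size are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

# illustrative data: 250 daily returns for 20 stocks
rng = np.random.default_rng(0)
returns = rng.normal(size=(250, 20))

# pearson correlation matrix between the stock return series
rho = np.corrcoef(returns, rowvar=False)

# mantegna's metric: d_ij = sqrt(2 * (1 - rho_ij)), so perfectly
# correlated stocks sit at distance 0 and anti-correlated ones at 2
dist = np.sqrt(2.0 * (1.0 - rho))
np.fill_diagonal(dist, 0.0)

# the mst keeps only n - 1 of the n * (n - 1) / 2 possible edges
mst = minimum_spanning_tree(dist)
rows, cols = mst.nonzero()
print(f"nodes: {rho.shape[0]}, mst edges: {len(rows)}")  # 20 nodes, 19 edges
```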
through the above process, the original sequences were divided into a number of intrinsic mode functions (imfs). then, the results were reconstructed with grey relational analysis (gra), making each sequence have three items, that is, tendency, cycle, and disturbance. finally, the statistical analysis of the three components was conducted in order to explore the development of china's stock market and the evolution characteristics of systemic risk. modeling of the complex network, emd, and gra is introduced as follows.

a complex network consists of several nodes and edges linking them. the node is the basic element of a complex network, which is the abstract expression of an "individual" in the real world. the edge is an expression of the relationship between the elements and can be given a weight according to the strength of the relationship. here, $w_{ij}$ represents the weight of the edge linking node $i$ and node $j$, where $i, j = 1, 2, \dots, n$ and $n$ is the number of nodes in a certain network. for an undirected network, $w_{ij} = w_{ji}$. we can also use the weighted degree to represent the importance of nodes, which is defined as

$$dw_i = \sum_{j \in V(i)} w_{ij}$$

where $V(i)$ is the set of nodes linking to node $i$. the larger the weighted degree, the stronger the degree of correlation with other nodes and the more important the node. we use the return rates of a-share stocks on china's stock market as the network nodes and construct the network using the correlation coefficient $\rho_{ij}$ as the edge weight:

$$\rho_{ij} = \frac{\langle x_i x_j \rangle - \langle x_i \rangle \langle x_j \rangle}{\sqrt{\left(\langle x_i^2 \rangle - \langle x_i \rangle^2\right)\left(\langle x_j^2 \rangle - \langle x_j \rangle^2\right)}}$$

here, $\{x_{it}, i = 1, 2, \cdots, n; t = 1, 2, \cdots, T\}$ is the original stock return rate data and $\langle \cdot \rangle$ indicates a time-average over the $T$ data points for each time series.
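a minimal sketch of the weighted degree computation just defined, assuming (as the thresholding step later suggests) that the edge weight is the absolute correlation $|\rho_{ij}|$; the window length and the simulated data are placeholders, not the paper's settings.

```python
import numpy as np

def weighted_degrees(window_returns: np.ndarray) -> np.ndarray:
    """weighted degree dw_i = sum over j in V(i) of w_ij.

    window_returns: (T, n) array, T daily returns for n stocks.
    the fully connected network uses w_ij = |rho_ij|, so dw_i is
    the row sum of the absolute correlation matrix (no self-loop).
    """
    rho = np.corrcoef(window_returns, rowvar=False)
    w = np.abs(rho)
    np.fill_diagonal(w, 0.0)
    return w.sum(axis=1)

# sliding-window use: one network per day, stepping one day at a time
rng = np.random.default_rng(1)
returns = rng.normal(size=(500, 30))   # placeholder return series
window = 250                           # assumed window length
avg_weight = [weighted_degrees(returns[s:s + window]).mean()
              for s in range(returns.shape[0] - window + 1)]
```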
after we get $w_{ij}$, we calculate the average weight, top nodes weight, and concentration ratio below:

$$\text{average weight} = \frac{1}{n}\sum_{i=1}^{n} dw_i, \qquad \text{top nodes weight} = \frac{1}{|Top|}\sum_{i \in Top} dw_i,$$
$$\text{concentration ratio} = \frac{\text{top nodes weight}}{\text{average weight}}$$

where $Top$ denotes the set of nodes $i$ with the top weighted degrees ($dw_i$). furthermore, we calculate the network's structural entropy, which is often used to measure the complexity of a complex network system [ ]. however, as the structural entropy of the fully connected network is constant, it is meaningless for our analysis, so we need to remove the edges of weak correlation to get a non-fully connected network for calculating the structural entropy. a threshold value is set for the correlation coefficient; if the absolute value of the correlation coefficient $\rho_{ij}$, that is, $w_{ij}$, is less than the threshold, the edge is cut off, and we get a non-fully connected network for calculating the structural entropy $E_{deg}$ under each window [ ]:

$$E_{deg} = -k \sum_{i=1}^{n} p_i \ln p_i$$

where $n$ is the total number of nodes in the network; $k$ is boltzmann's constant; and $p_i$ can be calculated from the number of edges connecting to node $i$, namely, the degree $k_i$ of node $i$:

$$p_i = \frac{k_i}{\sum_{j=1}^{n} k_j}$$

combining the three network indexes with china's stock market index gives four input series, named $\{y_{kt}, k = 1, 2, 3, 4; t = 1, 2, \cdots, T\}$. $y_{kt}$ have to be 0–1 standardized, owing to significant differences at the numerical level, that is,

$$y'_{kt} = \frac{y_{kt} - \min_t y_{kt}}{\max_t y_{kt} - \min_t y_{kt}}$$

for the signal $z(t)$, the upper and lower envelopes are determined by cubic spline interpolation through the local maxima and minima. $m_1$ is the mean of the envelopes. subtracting $m_1$ from $z(t)$ yields a new sequence $h_1$. if $h_1$ is steady (does not have a negative local maximum or positive local minimum), it is denoted as the intrinsic mode function $IMF_1$. if $h_1$ is not steady, it is decomposed again until a steady series is attained, which is then denoted as $IMF_1$. then, the residual $z(t) - IMF_1$ replaces the original $z(t)$ and is similarly decomposed. repeating these processes $K$ times gives $IMF_K$; finally, let $res$ denote the residual of $z(t)$ after all $IMF$s are removed:

$$z(t) = \sum_{k=1}^{K} IMF_k(t) + res(t)$$

the $IMF$s and $res$ could then be extracted for the gra process. grey relational analysis was first put forward by deng j l in 1982 [ ]. his grey relational degree model, which is usually called the grey relative correlation degree, mainly focuses on the influence of the distance between points in the system. in its standard form, the grey relative correlation degree is

$$r_{rel}(d_i, d_j) = \frac{1}{T}\sum_{t=1}^{T} \frac{\min_{j}\min_{t} |d_i(t) - d_j(t)| + \rho \max_{j}\max_{t} |d_i(t) - d_j(t)|}{|d_i(t) - d_j(t)| + \rho \max_{j}\max_{t} |d_i(t) - d_j(t)|}$$

where $d_i(t)$ is the reference series; $d_j(t)$ is the compared series; and $\rho$ is the distinguishing coefficient, which is usually equal to 0.5. in order to overcome the weakness of the grey relative correlation degree, the absolute correlation degree $r_{abs}$ was proposed by mei [ ]. considering their respective weaknesses and strengths, we used the grey comprehensive relational degree to classify the noise terms and market fluctuation terms:

$$r_{comp} = \beta\, r_{rel} + (1 - \beta)\, r_{abs}$$

where $\beta$ is the weight of the grey relative relational degree, which is valued at 0.5.
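the structural entropy and deng's relational degree can be sketched as follows; the threshold value is an assumption (the exact figure is not recoverable from the text), boltzmann's constant is set to 1 since it only rescales the entropy, and ρ = 0.5 follows the convention stated above.

```python
import numpy as np

def structural_entropy(w: np.ndarray, threshold: float = 0.5) -> float:
    """degree-distribution entropy of the thresholded network.

    w: absolute-correlation weight matrix with a zero diagonal.
    edges with w_ij below the (assumed) threshold are cut; then
    p_i = k_i / sum_j k_j over node degrees and
    e = -sum_i p_i * ln(p_i), with boltzmann's constant taken as 1.
    """
    degrees = (w >= threshold).sum(axis=1).astype(float)
    degrees = degrees[degrees > 0]     # isolated nodes contribute nothing
    p = degrees / degrees.sum()
    return float(-(p * np.log(p)).sum())

def deng_relative_degree(d_ref: np.ndarray, d_cmp: np.ndarray,
                         rho: float = 0.5) -> float:
    """deng's grey relative correlation degree in its standard form,
    computed for one reference/compared pair of standardized series."""
    diff = np.abs(d_ref - d_cmp)
    lo, hi = diff.min(), diff.max()
    xi = (lo + rho * hi) / (diff + rho * hi)   # pointwise grey coefficients
    return float(xi.mean())                    # average over time
```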
the concentration of risk is relatively low when the systemic risk is high, which means the risk is relatively decentralized. furthermore, the concentration ratio and the average weight are significantly negatively correlated with a correlation coefficient of − . . in this way, we will focus on using the index of the average weight to measure the systemic risk of the chinese stock market. it can also be seen from figure a that, although there is a correlation between two average weight indexes (all and top ) and the stock index, the coefficients, − . and − . , are relatively small. this proved that the level of systemic risk is not determined by the move of overall price trend. in order to further investigate the relationship between the systemic risk represented by the average weight, the beta value (β) obtained by the capm model, and the stock average variance (v), we estimated β t and v t as follows: where n is the total number of stocks; t is the length of the sliding window; r f is the risk-free interest rate, which was set to %; x kt is the return of the kth stock in the sliding window t; y t is the return of the stock index, which is symbolized for market return and is represented by .sh; β kt is calculated by mls with y t and x kt ; e kt is the error term; β t is the average of all individual stocks' beta; and v t is the average variance of all stocks in sliding window t. in figure b , we compare the systemic risk with beta and stock variance, finding that these three have different moving trends, which shows that our systemic risk index can catch unique market fluctuations. furthermore, the systemic risk index was ahead of beta in several stages, such as from june to july or from july to august , which shows that our systemic risk index has a certain risk pre-warning ability. it can also be seen from figure a that, although there is a correlation between two average weight indexes (all and top ) and the stock index, the coefficients, − . and − . , are relatively small. this proved that the level of systemic risk is not determined by the move of overall price trend. in order to further investigate the relationship between the systemic risk represented by the average weight, the beta value ( ) obtained by the capm model, and the stock average variance ( ), we further compared the systemic risk represented by average weight with the volatility index (vix) of china and the u.s. stock market. considering the chinese vix cannot cover the above research range, the u.s. vix was selected for comparison purposes. the correlation coefficient between the two vix in this range is significantly positive, but the coefficient is only . . figure a presents the great differences in the trend of vix between china and the united states. it can be seen that the correlation coefficient between average weight and chinese vix is . during the interval since the chinese vix launched. it is noteworthy that the volatility index leads the systemic risk index to a certain extent. this is confirmed by the results obtained from the cross-correlation analysis with the maximum coefficient of . , corresponding to lags of days (which means current systemic risk is highly related to the vix from days prior). however, this is mainly because the systemic risk index constructed in this paper was compiled using the sliding window method, with the window length of days, so the systemic risk index of a certain time, t actually represents the systemic risk of the previous days. 
in fact, the complex network characteristics of individual stocks are effective at reflecting the systemic risk of the market. to verify this, we calculated moving averages of the vix, which are shown in figure b. it can be seen that the systemic risk index constructed in this paper is consistent with the moving-average trend of china's vix, and the systemic risk leads china's vix after a certain point in the sample and is more sensitive, which proves the effectiveness of the systemic risk index derived from the complex network. figure shows the comparison between the structural entropy and the number of nodes in the complex network. it can be seen that the structural entropy is highly correlated with the number of nodes. in other words, the increase in the system complexity of china's stock market is mainly caused by the increase in the number of listed companies. nevertheless, we can also find that, in addition to the overall upward trend, structural entropy also has periodic fluctuations. therefore, multi-scale analysis is required to determine whether the system complexity represented by structural entropy is related to systemic risk.
it can be seen that the two original sequences are divided into seven imfs and one residual term, among which the residual term can represent the overall trend of indexes' evolution to a certain extent, while the imf of lower frequency can describe the periodic fluctuation of indexes in different time scales, and the imf of highest frequency represents the stochastic perturbation. figure presents the emd results of the standardized systemic risk index, structural entropy, and stock price index, respectively. it can be seen that the two original sequences are divided into seven imfs and one residual term, among which the residual term can represent the overall trend of indexes' evolution to a certain extent, while the imf of lower frequency can describe the periodic fluctuation of indexes in different time scales, and the imf of highest frequency represents the stochastic perturbation. through emd, it can be found that the residual term, also known as the trend term, decomposed by structural entropy, represents the growth in the number of network nodes, and the correlation coefficient between this residual term and the number of network nodes can be further improved to . . when removing the trend term from the original sequence and comparing it to the systemic risk series represented by the average weight, as shown in figure , the highly consistent fluctuations between the two series can be seen, and the correlation coefficient of the two reaches . . therefore, adjusted structural entropy, that is, removing the trend term of the network size, can also measure the systemic risk. nevertheless, owing to the high correlation between these two series, the following analysis only focuses on the systemic risk represented by average weight. . . when removing the trend term from the original sequence and comparing it to the systemic risk series represented by the average weight, as shown in figure , the highly consistent fluctuations between the two series can be seen, and the correlation coefficient of the two reaches . . therefore, adjusted structural entropy, that is, removing the trend term of the network size, can also measure the systemic risk. nevertheless, owing to the high correlation between these two series, the following analysis only focuses on the systemic risk represented by average weight. in order to further observe the systemic risk evolution of the chinese stock market, several imfs and residual terms obtained from emd decomposition were combined using the method of grey correlation degree. figures and present the trend term, cycle term, and random term of systemic risk (average weight) and the stock price index. then, we focused on the overall trend change and cycle fluctuation of systemic risk in china's stock market. in order to further observe the systemic risk evolution of the chinese stock market, several imfs and residual terms obtained from emd decomposition were combined using the method of grey correlation degree. figures and present the trend term, cycle term, and random term of systemic risk (average weight) and the stock price index. then, we focused on the overall trend change and cycle fluctuation of systemic risk in china's stock market. . . when removing the trend term from the original sequence and comparing it to the systemic risk series represented by the average weight, as shown in figure , the highly consistent fluctuations between the two series can be seen, and the correlation coefficient of the two reaches . . 
for the long-term tendency, we found that the overall trend of the stock price rose steadily, while systemic risk has been declining slowly throughout the evolution of the chinese stock market since . this means that, although phased systemic risk still occurs in the chinese stock market, the overall level of systemic risk is declining as the operating mechanism and related regulations continue to improve. for the cycle fluctuation, a rise in systemic risk is usually caused by the joint action of external shocks and internal operations, which is manifested in excessive rises and falls in the stock market; therefore, the cyclical characteristics of systemic risk have no direct relationship with the fluctuations of the stock market themselves. thus, we converted the cycle fluctuation of the stock market into the absolute difference from the price mean using ( ). considering that cycle_abs_stock and cycle_risk are both non-stationary, we calculated their first-order differences. the results of augmented dickey-fuller (adf) tests show that both variables are integrated of order one, so cointegration tests can be performed on the original sequences (a sketch of these tests follows below). the results of johansen trace tests show that there are at least two cointegration relationships between the two variables, which confirms a long-term equilibrium relationship between stock price volatility and systemic risk. in the estimated equilibrium equation, all the coefficients are significant at the % significance level, so the volatility of the stock market is positively related to systemic risk from the perspective of long-term equilibrium; that is, when the stock price deviates from its theoretical equilibrium value, systemic risk will be at a high level.
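the unit-root and cointegration checks described above follow a standard statsmodels recipe; the sketch below is an assumed reconstruction, with `cycle_abs_stock` and `cycle_risk` as placeholder arrays for the two cycle series.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def check_cointegration(cycle_abs_stock: np.ndarray, cycle_risk: np.ndarray) -> None:
    # 1) ADF on levels and first differences: I(1) means the level is
    #    non-stationary while the first difference is stationary.
    for name, x in (("cycle_abs_stock", cycle_abs_stock),
                    ("cycle_risk", cycle_risk)):
        p_level = adfuller(x)[1]
        p_diff = adfuller(np.diff(x))[1]
        print(f"{name}: ADF p-value, level={p_level:.3f}, first diff={p_diff:.3f}")

    # 2) Johansen trace test on the original (level) sequences.
    data = np.column_stack([cycle_abs_stock, cycle_risk])
    res = coint_johansen(data, det_order=0, k_ar_diff=1)
    # res.lr1 holds the trace statistics; res.cvt the 90/95/99% critical values.
    for rank, (stat, crit) in enumerate(zip(res.lr1, res.cvt)):
        print(f"rank <= {rank}: trace stat = {stat:.2f}, 95% cv = {crit[1]:.2f}")
```

a trace statistic above its critical value rejects the null of at most that cointegration rank, which is how the cointegration relationships reported in the text are counted.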
in figure , when the blue line is above zero, the systemic risk is large, while when the blue line is below zero, the systemic risk is small. the red line represents the absolute value of stock price movements, and the red line is clearly ahead of the above-zero parts of the blue line. [figure (caption): cycle evolution of systemic risk.] figure shows the cycle evolution of systemic risk; the lead-lag relationship between systemic risk and stock volatility is dynamic owing to the sliding window processing. from the perspective of the whole cycle evolution, we found several periods of high systemic risk in the chinese stock market since , as described below. - : the stock market was in a shock stage during this period. on the one hand, the chinese stock market was impacted by external factors such as the asian financial crisis; on the other hand, the operating mechanism at that time was not perfect, with frequent insider trading and market manipulation. systemic risk was at a high level during this period, and the stock market therefore began a comprehensive reform of its trading mechanism in . although the market fluctuation was not violent from the current perspective, it actually contained many factors causing systemic risk. - : the stock market was in a declining bear stage during this period. owing to the poor performance of high-tech companies resulting from the burst of the global internet bubble, and the launch of the policy reducing the state-owned shareholdings of listed companies, the chinese stock market had a big crash. related departments issued a series of favorable strategies such as reducing interest rates and trading commissions; however, the imperfection of the market led to a number of "black markets", which brought high systemic risk. - : this period included both an excessive rise and an excessive fall of the market. the reform of non-tradable shares in , together with a series of positive policies such as the entry of insurance funds and the appreciation of the renminbi (rmb), promoted the rise of the chinese stock market. however, heavy speculation by inexperienced individual investors caused an increasingly serious herding effect, and systemic risk was maintained at a high level for a long time. following the global financial crisis brought about by the u.s. subprime crisis, and with the launch of stock index futures, the chinese stock market reversed into a bear stage, and systemic risk in this stage also remained at a high level.
- : this stage was another volatile bear market; the fluctuation of the stock price was much smaller than in the previous stage, but systemic risk remained at a similar level. even though the chinese economy maintained a high growth rate during this period, the stock market was influenced by the global financial markets as well as the european debt crisis. the low volatility of the stock market still contained large systemic risks, which were reinforced by the frequent occurrence of black swan events such as the rear-end collision of bullet trains, the clenbuterol incident, and so on. - : the market price was rising rapidly in and the systemic risk was also in a climbing stage.
however, a high level of risk still appeared in , which was a stage of rapid and frequent fluctuations. the issuing scale of new stocks increased significantly, driving frequent market shocks such as thousands of shares rising or falling together and the circuit breaker being triggered twice in a single day. as a result, overall capital showed a large-scale net outflow, and investor sentiment fluctuated abnormally. to summarize, the systemic risk of the stock market increases significantly in irrational stages of rising, falling, and frequent shocks; however, extremely high systemic risk is more likely in cases of collapse and frequent shocks. complex networks have been widely used in the field of socio-economic analysis. most studies focus on the risk contagion of banks and on international economic or trade exchanges, while studies of the stock market are limited. in fact, a complex network provides an important tool for the study of the stock market, which is a self-organizing complex system with multi-agent interactions. the average weight of the complex network can be used to measure the aggregation of positive feedback in the market, and thus to measure overall systemic risk. on the basis of data on all a-shares in china, this paper constructs a dynamic complex network of stock correlations, and the change in the average weight, as well as the adjusted structural entropy of the network, is used to measure the evolution of systemic risk in china's stock market. although, owing to the use of a sliding window, the average weight or structural entropy in fact presents the average systemic risk level over the past days, it also reflects the evolution of systemic risk in china's stock market over more than years as a whole. the results show that the systemic risk of china's stock market exhibits a downward trend on the whole, which is closely related to the continuous improvement of the management system and operating mechanism of the financial market. in addition, there is a long-term equilibrium relationship between the cycle fluctuation of systemic risk and the excessive fluctuation of the stock market. since , the stages with high systemic risk have appeared with excessive rises, excessive falls, and frequent fluctuations of the stock market. meanwhile, it can also be seen from figure that the global stock market began to fluctuate significantly under the influence of the novel coronavirus pneumonia. the chinese stock market is relatively stable at present, but systemic risk has been climbing rapidly since the beginning of february. therefore, we must be alert to further expansion of the systemic risk of the chinese stock market under the double impact of internal and external factors.
references:
herd behavior and investment
herd behavior in financial markets
the low-volatility anomaly: market evidence on systematic risk vs. mispricing. financ
unobservable systematic risk, economic activity and stock market
variance and lower partial moment measures of systematic risk: some analytical and empirical results
systematic risk in emerging markets: the d-capm
time varying capm betas and banking sector risk
determinants of systematic risk
an introduction to econophysics: correlations and complexity in finance
physical approach to complex systems
analyzing and modeling real-world phenomena with complex networks: a survey of applications
complex networks in finance
networks in economics and finance in networks and beyond: a half century retrospective
hierarchical structure in financial markets
dynamics of market correlations: taxonomy and portfolio analysis
detecting a currency's dominance or dependence using foreign exchange network trees
characteristic analysis of complex network for shanghai stock market
a network perspective of the stock market
systemic risk and hierarchical transitions of financial networks
the dynamic evolution of the characteristics of exchange rate risks in countries along "the belt and road" based on network analysis
degree stability of a minimum spanning tree of price return and volatility
minimum spanning tree filtering of correlations for varying time scales and size of fluctuations
fuzzy entropy complexity and multifractal behavior of statistical physics financial dynamics
characterizing emerging european stock markets through complex networks: from local properties to self-similar characteristics
structure and response in the world trade network
time and frequency structure of causal correlation networks in the china bond market
econometric measures of connectedness and systemic risk in the finance and insurance sectors
cointegration-based financial networks study in chinese stock market
does network topology influence systemic risk contribution? a perspective from the industry indices in chinese stock market
analysis of a network structure of the foreign currency exchange market
topology of correlation-based minimal spanning trees in real and model markets
a global network of stock markets and home bias puzzle
complex networks in a stock market
an approach to hang seng index in hong kong stock market based on network topological statistics
explaining what leads up to stock market crashes: a phase transition model and scalability dynamics
pathways towards instability in financial networks
singular cycles and chaos in a new class of d three-zone piecewise affine systems
introduction to grey system theory
the concept and computation method of grey absolute correlation degree
the authors declare no conflict of interest. the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
key: cord- -mvq a authors: ascherio, alberto; munger, kassandra l. title: epidemiology of multiple sclerosis: environmental factors date: - - journal: nan doi: . /b - - - - . - sha: doc_id: cord_uid: mvq a
this chapter discusses the environmental factors associated with the epidemiology of multiple sclerosis. the epidemiologic evidence points to three environmental risk factors (infection with the epstein-barr virus [ebv], low levels of vitamin d, and cigarette smoking) whose association with multiple sclerosis (ms) seems to satisfy in varying degrees most of the criteria that support causality, including temporality, strength, consistency, biologic gradient, and plausibility. none of these associations, however, has been tested experimentally in humans, and only one, vitamin d deficiency, is presently amenable to experimental interventions. the evidence, albeit more sparse and inconsistent, linking other environmental factors to ms risk is summarized. epidemiologic clues to the hypothetical role of infection in ms are complex and often seem to point in opposite directions.
the ecological studies, database/linkage analyses, and longitudinal studies of sunlight exposure and vitamin d are reviewed. biologic mechanisms for smoking and increased risk of ms could be neurotoxic, immunomodulatory, vascular, or they could involve increased frequency and duration of respiratory infections. some other possible risk factors include diet and hepatitis b vaccine. data. such has been the case, for example, with interventions to reduce lung cancer incidence by reducing exposure to tobacco smoke. as discussed in this chapter, epidemiologic evidence points to three environmental risk factors (infection with the epstein-barr virus [ebv], low levels of vitamin d, and cigarette smoking) whose association with multiple sclerosis (ms) seems to satisfy in varying degrees most of the criteria that support causality, including temporality (i.e., the cause must precede the effect), strength, consistency, biologic gradient, and plausibility. none of these associations, however, has been tested experimentally in humans, and only one (vitamin d deficiency) is presently amenable to experimental interventions. this chapter also summarizes the evidence, albeit more sparse and inconsistent, linking other environmental factors to ms risk. for many years, it appeared that the "who, where, and when" of ms epidemiology was well understood. however, some aspects of ms epidemiology may be changing, notably the observations of an attenuation of the latitude gradient , and the increasing female-to-male ratio. in this section, we discuss the "classic" view of ms epidemiology, some of which has been known for more than years, and then some recent developments that may provide new clues to the etiology of ms. ms is the most common neurologic disease in young adults. incidence rates are low in childhood and adolescence (< / , /year), high in the middle to late twenties and early thirties ( to / , /year in high-risk populations), and gradually decline thereafter, with rates less than / , /year among those older than years of age. , women are approximately twice as likely as men to develop ms, , and the lifetime risk among white women is about in . , ms exhibits a worldwide latitude gradient, with high prevalence and incidence in northern europe, canada, , the northern united states, , , and southern australia and decreasing prevalence and incidence in regions closer to the equator. exceptions to the latitude gradient exist and include a lower than expected prevalence in japan and higher than expected prevalence and incidence in the mediterranean islands of sardinia and sicily. kurtzke summarized the early descriptive studies by depicting areas of high (≥ / , ), medium ( to / , ), and low (< / , ) prevalence of ms; we have updated his figures with more recent prevalence estimates , , [ ] [ ] [ ] [ ] [ ] [ ] [ ] (fig. - ) . a more comprehensive review of ms incidence and prevalence worldwide was published in . it is important to note that differences in estimated incidence across countries or time periods can result from differences in study design, case ascertainment, or diagnostic criteria, rather than from real changes in disease occurrence. differences in prevalence are even more difficult to interpret, because they may reflect increased survival or earlier diagnosis, both of which can occur even if the incidence is the same (a simple identity below makes this concrete). in spite of these limitations, the collective data do support a higher risk of ms at higher latitudes, both north and south of the equator.
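as a hedged aside, the interpretive caution about prevalence can be stated with a standard epidemiologic identity that is not given in the chapter itself: for a rare disease in a steady-state population, prevalence $P$, incidence $I$, and mean disease duration $D$ are linked approximately by

$$ P \;\approx\; I \times D $$

so two regions with identical incidence $I$ will report different prevalences whenever survival or time to diagnosis (and hence the effective duration $D$) differs, which is why prevalence contrasts are harder to interpret than incidence contrasts.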
the existence of the latitude gradient alone is not enough to support an environmental component, because it could be explained by genetic differences. , however, studies of ms incidence and prevalence among migrant populations also support a role for environmental factors. these studies have limitations, in that migrants may be different from nonmigrants in socioeconomic and health status, may not utilize local health care resources, and therefore may be less likely to be diagnosed; in addition, enumeration of the immigrant population for disease statistics may be difficult or impossible. , nevertheless, migrant studies on ms collectively support a decreased prevalence of ms among those who migrate from high- to low-risk areas, particularly if the migration occurs before years of age. moreover, one study found a decreased prevalence of ms in all age groups among immigrants from europe to australia, suggesting that the protective effect may extend into adulthood as well. studies within the united states have also supported a decreased risk of ms among migrants from northern (> ° to ° n), high-risk parts of the country to southern (< ° n), low-risk regions. , [figure - (caption): worldwide prevalence estimates of multiple sclerosis. blue, more than cases per , population; purple, to / , ; green, to / , ; orange, to / , ; yellow, fewer than / , ; white, insufficient data. an asterisk indicates that data for that region or country are older and should be interpreted cautiously.] the study of u.s. veterans is particularly compelling because of its large sample size and rigorous design. in this study, kurtzke observed that individuals who were born in the northern united states but migrated to the southern part of the country before joining the military had a % reduced risk of ms compared with those who did not migrate ( fig. - ) . fewer studies have been conducted among migrants from low- to high-risk areas. in general, these studies have found that a low risk of ms is retained after migration, but that the offspring of migrants have a higher risk of ms, similar to that in the host country. , [ ] [ ] [ ] [ ] [ ] in the u.s. veterans study, individuals who were born in the southern part of the country and migrated to northern states before entering the military had a % increased risk of ms, and those migrating from the middle tier of states to northern regions had a % increased risk (see fig. - ). more recently, in a study conducted in the french west indies (a low-risk area), an increased risk of ms was found among individuals who had moved to france (a high-risk area) and then returned to the west indies. the increase in risk was greatest among those who migrated to france before the age of years. the incidence of ms appears to have been relatively stable over the past years in several high-risk areas, including denmark and the northern united states, but there is some evidence that ms may be increasing in japan and in parts of southern europe, most notably in sardinia. interestingly, the island of malta has continued to experience low, stable rates of ms despite its proximity to sardinia and sicily and a high frequency of the ms-associated hla-drb * allele. there is also evidence of an increased female-to-male ratio in ms incidence. in canada, the female-to-male ratio apparently has increased from approximately : among individuals born in the s and s to approximately : among those born in the s.
this change is strongly correlated with, and could be at least in part explained by, a sharp increase in the female-to-male ratio in smoking behavior (unpublished data), because smoking is a strong risk factor for ms (see later discussion). an attenuation of the latitude gradient was observed independently in a population of u.s. nurses and in u.s. military veterans. among nurses born between and and among veterans of world war ii or the korean conflict, those living in the northern tier of states (> ° to ° n) had a greater than threefold increased risk of ms compared to those in the southern tier (< ° n). among vietnam and gulf war veterans, however, this gradient was attenuated to less than twofold, and among nurses born between and it completely disappeared ( fig. - ) . because the methods used to determine rates of ms in the early and later cohorts were the same, and because the individuals in the cohorts had similar socioeconomic status or access to health care, this attenuation was unlikely to be due to artifact. a change of this magnitude over such a short period of time argues for an environmental, rather than a genetic, explanation of the latitude gradient; as discussed later, this environmental factor may involve changes in patterns of infection or sun exposure, or both. further, the attenuation was probably caused by an increase in ms incidence in the southern united states, because incidence rates in the northern states, based at least on data from the longitudinal study in olmsted county, minnesota, seem to have remained relatively stable. an attenuation of the latitude gradient in europe has also been observed; however, no systematic studies have assessed this gradient within the same population over time, and the attenuation therefore may be due to improved study methodology and case ascertainment, particularly within the united kingdom. the possibility of an infectious cause was considered early in ms history, and numerous viruses and bacteria were, at different times, implicated as likely etiologic agents. the results of early studies, based on microscopic examination of pathologic material and attempts to transmit the disease to animals, often were null or spuriously positive because of contamination and could not be replicated. later, numerous serologic studies were conducted, often demonstrating significantly elevated antibody titers against several viruses in ms patients compared with healthy controls, but these differences were probably an epiphenomenon of the immune activation rather than being of etiologic significance. in part as a consequence of these investigations, many researchers became skeptical about the existence of an infectious agent causing ms, and this skepticism persists today. epidemiologic clues to the hypothetical role of infection in ms are complex and often seem to point in opposite directions. on the one hand, results of family studies, including investigations of half-siblings, adopted children, and spouses of individuals with ms, support a strong genetic component as the leading explanation of ms clustering within families and provide little evidence of person-to-person transmission. on the other, there are well-documented, albeit controversial, reports of epidemics of ms, most notably in the faroe islands, that are most easily explained by the introduction and transmission of an infectious agent. 
to reconcile these findings, it has been postulated that ms is a rare complication of a common infection, with the disease occurring in genetically or otherwise predisposed individuals. in this scenario, the epidemics would be a consequence of the introduction of the ms-causing agent for the first time in remote, previously naïve populations. two hypotheses as to the nature of this infection have been proposed: ( ) the responsible microorganism is more common in areas of high ms prevalence (the "prevalence" hypothesis), and ( ) the ms-causing agent is ubiquitous and more easily transmitted in areas of low ms prevalence, where infection occurs predominantly in infancy, when it would be less harmful and more likely to confer protective immunity. the latter proposal is called the "poliomyelitis" hypothesis, by analogy with the epidemiology of poliomyelitis before vaccination. the poliomyelitis hypothesis is also consistent with the higher prevalence of ms in communities with better hygiene, in individuals with higher education, , and in those with late age at infection with common viruses, as well as the general lack of increase in ms incidence among individuals migrating from low- to high-prevalence areas. however, the poliomyelitis hypothesis cannot explain the reduced risk of ms among migrants from high- to low-risk areas and, in fact, would predict an increase in ms risk in this circumstance, whereas the prevalence hypothesis is consistent with the observations. failure to identify a specific microbe as the cause of ms, despite evidence that is consistent with some role for infection in at least modulating ms risk, has strengthened support for a third, more general, "hygiene" hypothesis, according to which exposure to multiple infections in childhood primes the immune responses later in life toward a less inflammatory and a less autoimmunogenic profile. the hygiene hypothesis can explain all the features of ms epidemiology that are explained by the original formulation of the poliomyelitis hypothesis. in addition, the protective effect of migration from high- to low-ms areas, which is paradoxical under the poliomyelitis hypothesis, could be explained by increased exposure of migrants to parasitic and other infections in the low-risk area. at the population level, prevalence of ms is positively correlated with high levels of hygiene, as measured, for example, by prevalence of intestinal parasites. the improving hygienic conditions in southern europe in the last few decades could explain the increased prevalence of ms reported in multiple surveys (although whether there was a true increase in ms incidence remains unsettled). it is also interesting that infection with intestinal helminths, which is highly prevalent in developing countries, has been reported to cause an immune deviation with attenuation of helper t-cell cellular immune responses and remission of ms. finally, the hygiene hypothesis provides a convincing explanation for the observations that infectious mononucleosis (im) is associated with an increased risk of ms (relative risk [rr] = . ; p < . ) and that the epidemiology of im is strikingly similar to that of ms (table - ). because im is common in individuals who are first infected with ebv in adolescence or adulthood but rare when ebv infection occurs in childhood, it is a strong marker of age at ebv infection, which is itself strongly correlated with socioeconomic development across populations and with socioeconomic status within populations.
an exception to this pattern is seen in asia, where ebv infection occurs uniformly early in life and im is thus rare. it is noteworthy that the incidence of ms remains relatively low in asian countries, including japan, despite the fast industrialization and reduction of infectious diseases, although there is evidence that the incidence may be increasing in japan. according to the hygiene hypothesis, the association between im and ms risk does not reflect a causal effect of ebv but rather the indirect manifestation of a common cause; that is, both ms and im are the result of high hygiene and a resulting low burden of infection during childhood. an important prediction of this hypothesis is that ms risk will be high among individuals reared in a highly hygienic environment, even if they do not happen to be infected with ebv later in life, whereas, if ebv has a causal role in ms, individuals who are not infected with ebv would have a low risk of ms. the data on this point are unequivocal: individuals who are not infected with ebv, even though they have the same hygienic upbringing as those with im, have an extremely low risk of ms (odds ratio [or] from meta-analysis = . ; p < . ) ( table - ). the contrast could not be sharper or more consistent: ms risk among individuals who are not infected with ebv is at least -fold lower than that of individuals who are ebv-positive, and -fold lower than that of individuals with a history of im. because studies in pediatric ms , rule out a common genetic resistance to ms and ebv infection, we can conclude either that ebv itself or some other factor closely related to ebv is a strong causal risk factor for ms or that ms itself strongly predisposes to ebv infection. temporality is the only truly necessary criterion for causality. the association between ebv infection and ms is strong and consistent across multiple studies in different populations, and there is to some extent a biologic gradient (higher risk associated with severity of infection, as indicated by history of im). until recently, all studies on ms and infection used a cross-sectional design and could not completely rule out the possibility that ebv infection was a consequence rather than a cause of ms. however, the results of four longitudinal serologic studies have now been published (table - ) . [ ] [ ] [ ] [ ] the most consistent finding across these studies is that, among individuals who will develop ms, there is an elevation of serum antibodies against the ebv nuclear antigen (ebna ) that precedes the onset of ms symptoms by many years. the presence of anti-ebna antibodies is a marker of past infection with ebv, because titers typically rise only weeks after the acute infection. further, there is no evidence in clinical studies of acute primary ebv infection in individuals with ms. taken together, these results indicate that ms is a consequence rather than a cause of ebv infection. until recently, ebv had not been found in ms lesions, , and therefore the link between ebv and ms was postulated to be mediated by indirect mechanisms. the leading hypothesis was that the immune response to ebv infection in genetically susceptible individuals cross-reacts with myelin antigens (molecular mimicry).
the discovery that ms patients have an increased frequency and broadened specificity of cd -positive t cells recognizing ebna and the identification of two ebv peptides (one of which is from ebna ) as targets of the immune response in the cerebrospinal fluid of ms patients provided support to the molecular mimicry theory. other proposed hypotheses included the activation of superantigens, an increased expression of alpha b-crystallin, and infection of autoreactive b lymphocytes. however, in a recent, rigorous pathologic study, large numbers of ebv-infected b cells were found in the brains of most ms patients. these cells were more numerous in areas with active inflammatory infiltrates, where cytotoxic cd -positive t cells displaying an activated phenotype were seen contiguous to the ebv-infected cells. alone, these pathologic findings provide only suggestive evidence for a causal role of ebv in ms, because the infiltration of ebv-infected b cells could be secondary to the inflammatory process that is the hallmark of ms, but their convergence with the epidemiologic evidence described earlier is so striking that noncausal explanations become improbable. however, independent replication of these findings is needed before any conclusion can be drawn. the strong increase in ms risk after ebv infection and (if confirmed) the presence of ebv in ms lesions suggest that antiviral drugs or a vaccine against ebv could contribute to ms treatment and prevention. although antiviral drugs have been tried in the past for ms treatment with borderline results, - none of the treatment regimens used was sufficiently effective against latent ebv infection. several aspects of ms epidemiology cannot be explained by ebv infection, indicating that other factors must contribute. genes are clearly important, and it is of interest that the association between anti-ebna titers and ms risk has been found in both hla-drb * -positive and hla-drb * -negative individuals. variations in ebv strains could also play a role, although evidence in support of this hypothesis remains limited. , many other infectious agents have been hypothesized to be related to ms, mostly because of pathologic studies or their role in animal models. recent candidates include chlamydia pneumoniae, - human herpesvirus , - retroviruses, , and coronaviruses, but there are no convincing epidemiologic studies linking these infections to ms risk. noninfectious factors may also be important, and prominent among them are vitamin d and cigarette smoking. one of the strongest correlates of latitude is the duration and intensity of sunlight, which in ecologic studies is inversely correlated with ms prevalence. [ ] [ ] [ ] because exposure to sunlight is for most people the major source of vitamin d, average levels of vitamin d also display a strong latitude gradient. ultraviolet b (uv-b) radiation ( to nm) converts cutaneous -dehydrocholesterol to previtamin d . previtamin d spontaneously isomerizes to vitamin d , which is then hydroxylated to (oh)d ( -hydroxyvitamin d ), the main circulating form of the vitamin, and then to , (oh) d ( , -dihydroxyvitamin d ), the biologically active hormone. however, during the winter months at latitudes greater than ° n (e.g., boston, ma), even prolonged sun exposure is insufficient to generate vitamin d, and levels decline.
, use of supplements or high consumption of fatty fish (a good source of vitamin d) or vitamin d-fortified foods (mostly milk in the united states) may partially compensate for this decline, but few people consume large enough amounts of vitamin d, and seasonal deficiency is common. a link between vitamin d deficiency and ms was proposed more than years ago as a possible explanation of the latitude gradient and of the lower prevalence of ms in fishing communities with high levels of fish intake ; however, the immunomodulatory effects of vitamin d were not known, and the hypothesis did not generate much interest at the time. after the discovery that the vitamin d receptor is expressed in several cells in the immune system and is a potent immunomodulator, a series of experiments revealed a protective role of , (oh) d in several autoimmune conditions and in transplant rejection. the effects in experimental autoimmune encephalomyelitis, an animal model of ms, were particularly striking: injection of , (oh) d was found to completely prevent the clinical and pathologic signs of disease, , whereas vitamin d deficiency accelerated the disease onset. , with vitamin d deficiency becoming a biologically plausible risk factor for ms, several epidemiologic studies were conducted to determine whether exposure to sunlight or vitamin d intake is associated with ms risk. the main results of these studies are shown in table - , and their strengths and limitations are discussed in the following paragraphs. as mentioned earlier, the results of ecologic studies support an inverse association between sunlight exposure and ms risk. however, because people living in the same area share many characteristics other than the level of sunlight, the consensus is that evidence from these studies is weak. in an exploratory investigation based on death certificates, working outdoors was associated with a significantly lower ms mortality in areas of high, but not low, sunlight. in a separate study in the united kingdom, the skin cancer rate, a marker of sunlight exposure, was found to be about % lower than expected among individuals with ms (p = . ). although the results of these investigations are consistent with a protective effect of uv light exposure, they could also represent "reverse causation" (i.e., individuals with ms could reduce their exposure to sunlight after disease onset). the results of case-control studies comparing history of sun exposure in childhood (presumed to be a critical period, mostly from the results of studies in migrants) between ms cases and controls have been conflicting. the results of one study were contrary to a protective effect of vitamin d, and no association between sun exposure in childhood and ms risk was found in another. in contrast, results consistent with a protective effect of sun exposure were reported in a study in tasmania in which information on time spent in the sun was complemented by measurement of skin actinic damage, a biomarker of uv light exposure, as well as an investigation in norway and a study of monozygotic twins in the united states. in the norway study, an inverse association was also found between consumption of fish and ms risk. selection and recall biases are potential problems in case-control studies, but recall bias cannot explain the inverse association observed in tasmania with actinic damage, and selection bias is unlikely in the twin study. 
the strongest evidence relating vitamin d levels to ms risk has been provided by two longitudinal studies, one based on assessment of dietary vitamin d intake, and one on serum levels of (oh)d. the relation between vitamin d intake and ms risk was studied in more than , women in the nurses' health study and nurses' health study ii cohorts. dietary vitamin d intake was assessed from comprehensive and previously validated semiquantitative food frequency questionnaires administered every years during the follow-up of the cohorts. , total vitamin d intake at baseline was inversely associated with risk of ms: the age-adjusted pooled relative risk (rr) comparing the highest with the lowest quintile of consumption was . ( % confidence interval [ci], . to . ; p for trend = . ). intake of iu/day of vitamin d from supplements only was associated with a % lower risk of ms. these rrs did not materially change after further adjustment for pack-years of smoking and latitude at birth. confounding by other micronutrients cannot be excluded, but adjustments for them in the analyses did not change the results. because dietary vitamin d is only one component contributing to total vitamin d status (the other being sun exposure), a determination of whether serum levels of vitamin d are associated with ms risk in healthy individuals would strengthen the evidence in favor of a causal role for vitamin d. the serum level of (oh)d is a marker of vitamin d status and bioavailability; therefore, if vitamin d is protective, high serum levels of (oh)d would be expected to predict a lower risk of ms in healthy individuals. this question was recently addressed in a collaborative, prospective case-control study using the department of defense serum repository (dodsr). the study included military personnel with confirmed ms and at least two serum samples collected before the onset of ms symptoms. risk of ms was % lower among white individuals with (oh)d levels of nmol/l or higher, compared with those with levels lower than nmol/l, and the reduction in ms risk associated with (oh)d levels ≥ nmol/l compared with levels < nmol/l was considerably stronger before the age of years ( to years) than at ages or older. an important question concerning vitamin d and ms is the age intervals during which vitamin d may be important. the results of migration studies suggest that more pronounced changes in ms risk are likely to occur among individuals who migrate in childhood. the age of years, chosen as an arbitrary cutoff point in early studies, is usually quoted in the literature, but the reality is that data are insufficient to identify a meaningful threshold above which migration would not alter ms risk, and in at least one study a reduction in risk was also observed among individuals who migrated as adults. the results of the case-control study in tasmania suggest that exposure to sunlight is mostly protective in childhood. further, vitamin d exposure in utero has been proposed as a possible explanation for the peak in ms incidence among individuals born in may (whose mothers were not pregnant during the summer, when uv light levels are higher) and the dip among those born in november, according to recent data from canada and sweden. on the other hand, the results of the longitudinal studies support a protective effect of vitamin d also later in life.
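the relative risks quoted above (a given percent lower or higher risk comparing two exposure groups) come from counts like the following. this sketch shows the standard cohort calculation with a wald confidence interval; the counts are purely hypothetical placeholders, not data from the studies discussed.

```python
import math

def relative_risk(cases_exposed: int, n_exposed: int,
                  cases_unexposed: int, n_unexposed: int,
                  z: float = 1.96):
    """Relative risk with a Wald 95% confidence interval on the log scale."""
    risk_e = cases_exposed / n_exposed
    risk_u = cases_unexposed / n_unexposed
    rr = risk_e / risk_u
    # standard error of log(RR) for cohort (cumulative incidence) data
    se = math.sqrt((1 / cases_exposed - 1 / n_exposed)
                   + (1 / cases_unexposed - 1 / n_unexposed))
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# hypothetical counts: 18 MS cases among 10,000 in a high-vitamin-D group,
# 30 among 10,000 in a low-vitamin-D group -> RR ~ 0.60 with its 95% CI
print(relative_risk(18, 10_000, 30, 10_000))
```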
both the lower risk of ms among women taking vitamin d supplements and the lower risk among men and women with higher levels of (oh)d would be difficult to explain by a protective effect of vitamin d solely in utero or during childhood. therefore, it seems likely that, if vitamin d effectively protects against ms, levels during early adult life are also important. overall, the epidemiologic evidence of a causal association between vitamin d and ms is strong but not compelling, mainly because there are few studies based on prospective measurement of levels of exposure to sunlight, vitamin d intake, or serum (oh)d concentration. however, the public health implications of a possible causal association are enormous. if vitamin d reduces the risk of ms, supplementation in adolescents and young adults could be used effectively for prevention. based on studies among individuals with low sun exposure, supplements providing between and iu/day of vitamin d would increase serum (oh)d to the optimal levels. [ ] [ ] [ ] [ ] there is an urgent need to conduct further longitudinal studies, preferably a large, randomized controlled clinical trial assessing whether vitamin d supplementation in the general population prevents ms. the trial would have to be very large, because ms is a rare disease, but the sample size could be reduced by oversampling individuals who are at high risk, such as those with first-degree relatives who have ms. alternative study designs might include national or multinational studies based on randomization of school districts or other suitable units. cigarette smoking was found to increase the risk of ms in some , but not all , case-control studies. a cross-sectional survey of the general population in hordaland county, norway, found an increased risk of ms in ever-smokers compared with never-smokers (rr = . ; % ci, . to . ). four prospective studies on smoking and ms have been conducted. among , british women in the oxford family planning association study, those who smoked or more cigarettes per day had an % increased risk of ms compared with never-smokers (rr = . ; % ci, . to . ). a total of , women from across the united kingdom were enrolled in the royal college of general practitioners' oral contraception study, which found that women smoking or more cigarettes per day had a % increased risk of ms (rr = . ; % ci, . to . ), compared with never-smokers. the nurses' health study and nurses' health study ii cohorts included more than , u.s. women; those who smoked or more pack-years had a % increased risk (rr = . ; % ci, . to . ; p < . ) compared with never-smokers. in a prospective case-control study in the general practice research database, which included both men and women, ever-smokers had a % increased risk of ms, compared with never-smokers (rr = . ; % ci, . to . ). the suggestion of an increased risk of ms among smokers was consistent across all four studies, and pooled estimates of the relative risk were highly statistically significant when never-smokers were compared with past and current smokers (fig. - a) or with moderate and heavy smokers (see fig. - b). additional support for a role of smoking includes a twofold increase in risk of pediatric ms among children exposed to parental smoking and an increased risk of transition to secondary progressive ms among individuals with relapsing-remitting ms ; however, the latter finding was not confirmed in a recent investigation.
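pooled estimates like those just mentioned are typically obtained by inverse-variance weighting of the log relative risks from each study. the sketch below shows a fixed-effect version of that calculation; the input values are illustrative placeholders, not the estimates from the four studies.

```python
import math

def pool_fixed_effect(rrs, cis, z: float = 1.96):
    """Fixed-effect (inverse-variance) pooling of relative risks.
    Each study contributes log(RR) weighted by 1/SE^2, with SE recovered
    from the reported 95% confidence interval."""
    num = den = 0.0
    for rr, (lo, hi) in zip(rrs, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        w = 1.0 / se ** 2
        num += w * math.log(rr)
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

# placeholder study-level estimates (RR with 95% CI), purely illustrative
print(pool_fixed_effect([1.8, 1.3, 1.7, 1.3],
                        [(0.9, 3.6), (0.8, 2.1), (1.2, 2.4), (1.0, 1.7)]))
```

a random-effects version would additionally inflate the weights for between-study heterogeneity; the fixed-effect form shown here is the simplest case.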
biologic mechanisms for smoking and increased risk of ms could be neurotoxic, immunomodulatory, , or vascular (i.e., increased permeability of the blood-brain barrier), or they could involve increased frequency and duration of respiratory infections, which may then contribute to increased ms risk. smoking also appears to increase the risk of other autoimmune diseases, including rheumatoid arthritis [ ] [ ] [ ] [ ] [ ] and systemic lupus erythematosus, arguing for a more general effect of cigarette smoking on autoimmunity. although several foods or nutrients were found to be related to ms risk in ecologic or case-control studies, the results overall were inconsistent and unconvincing. in ecologic studies, positive correlations were found between ms and intake of animal fat [ ] [ ] [ ] and saturated fat, as well as consumption of meat, milk, and butter, , and inverse correlations were found with intake of fat from fish , and nuts (sources of polyunsaturated fat). an increased risk of ms with increasing animal or saturated fat intake and a protective effect of increasing polyunsaturated fat intake were also reported in a case-control study, but otherwise the results of case-control studies have largely not supported an association between increased ms risk and milk or meat consumption, , [ ] [ ] [ ] [ ] or a decreased risk with consumption of sources of polyunsaturated fat such as fish or nuts. , however, in a recent study in norway, fish consumption or more times per week among individuals living at latitudes between ° and ° n was inversely related to ms risk. other results have included an inverse association of risk with intake of vitamin c and juice, but no association with other antioxidant vitamins or with fruits and vegetables , , , has been reported. it is important to note that ecologic studies are prone to confounding and in general provide only very weak evidence of the potential effects of diet on disease risk. retrospective case-control studies are also prone to bias due to both control selection and differential recall. the latter effect is particularly problematic, because even a modest difference in diet recall between cases and controls can cause a large bias in relative risk estimates. this problem is compounded in ms by changes in diet that may occur in the early clinical or preclinical phases of the disease. therefore, although these studies have been important in drawing attention to several aspects of diet as potentially important risk factors for ms, their results, whether in favor of or against a hypothetical association, should be interpreted extremely cautiously. understanding of the relation between diet and ms will require the conduct of large longitudinal investigations, with repeated assessment of diet using rigorous and validated methods and possibly measurements of biomarkers of nutrient intakes. so far, the only prospective studies of diet and ms were those conducted among women in the two nurses' health study cohorts. in this population, neither animal fat nor saturated fat was associated with ms risk, but there was a suggestion of an inverse association with intake of the n- polyunsaturated fat linolenic acid. there were also no significant associations between intakes of dairy products, fish, meat, vitamins c or e, carotenoids, or fruits and vegetables and ms risk. however, participants in these studies were already to years of age at time of recruitment, and therefore they shed little light on the possible effect of diet earlier in life on ms risk.
studies have also examined whether intake of polyunsaturated fats affects ms progression. n- polyunsaturated fat supplementation in doses ranging from . to . g/day administered for periods of to months did not have significant effects on disability levels in two randomized controlled trials that included a total of patients with relapsing-remitting ms, , although trends were in favor of the supplemented groups in both studies. results of three randomized controlled trials examining the effects of n- polyunsaturated fat supplementation ( to g/day for to months) on ms progression, including a total of patients with relapsing-remitting ms, [ ] [ ] [ ] and a meta-analysis of these studies suggested that supplementation may reduce the severity and duration of relapses. in summary, there is no compelling evidence that dietary factors other than vitamin d play a causal role in ms, but neither can such a role be excluded, particularly for diet during adolescence or childhood, which may be important periods in the etiology of ms. estrogen has been hypothesized to protect against ms, because at high levels it appears to promote the non-inflammatory type immune response, rather than the pro-inflammatory type response predominantly seen in ms, and because during pregnancy, when estrogen levels are high, women with ms experience fewer relapses than during the puerperium. in prospective studies, , , , neither oral contraceptive use, parity, nor age at first birth was associated with ms risk. a decreased risk of ms during pregnancy followed by an increased risk during the first months after delivery was shown in a study based on a general practice database in the united kingdom. in the same study, recent use of oral contraceptives was also associated with a reduced risk. collectively, these studies suggest that short-term exposure to estrogen may be protective against ms, but that this protection is transient. concerns that the hepatitis b vaccine may increase the risk of ms were raised after widespread administration of the vaccine in france, but the results of most studies have not supported a causal association. studies in the united states conducted among subjects included in a health care database, among nurses, and among participants in three health maintenance organizations found no association between hepatitis b vaccination and risk of ms. further, in studies of children and adolescents, no association was found between hepatitis b vaccination and ms risk or risk of conversion to ms among children with a first demyelinating event. however, a case-control study conducted in the general practice research database in the united kingdom did find a threefold increased risk associated with receipt of the vaccine within years before ms onset, and a french case-control study reported a nonsignificant increased risk of ms among individuals with clinically isolated syndrome after vaccination. among individuals with ms, the vaccine does not appear to increase the risk of relapses. overall, there is no convincing evidence that hepatitis b vaccination increases ms risk. other environmental factors have been associated with ms, but the available evidence is sparse, and the relevance of these factors to ms etiology remains uncertain.
an increased risk of ms has been reported in relation to exposure to organic solvents, [ ] [ ] [ ] [ ] [ ] physical trauma, and psychological stress from the loss of a child (bereavement), whereas a decreased risk has been observed for use of penicillin and antihistamines, high levels of serum uric acid, [ ] [ ] [ ] and tetanus toxoid vaccination.
references:
epidemiology: principles and methods
the contribution of changes in the prevalence of prone sleeping position to the decline in sudden infant death syndrome in tasmania
geographic variation of ms incidence in two prospective studies of us women
multiple sclerosis in us veterans of the vietnam era and later military service: race, sex, and geography
sex ratio of multiple sclerosis in canada: a longitudinal study
the danish multiple sclerosis registry: a -year follow-up
the epidemiology of multiple sclerosis in europe
epidemiology of multiple sclerosis in u.s. veterans: i. race, sex, and geographic distribution
epidemiology of multiple sclerosis: incidence and prevalence rates in denmark - based on the danish multiple sclerosis registry
the frequency and geographic distribution of multiple sclerosis as indicated by mortality statistics and morbidity surveys in the united states and canada
multiple sclerosis in new orleans, louisiana, and winnipeg, manitoba, canada: follow-up of a previous survey in new orleans, and comparison between the patient populations in the two communities
latitude, migration, and the prevalence of multiple sclerosis
the incidence and prevalence of reported multiple sclerosis
multiple sclerosis in australia and new zealand: are the determinants genetic or environmental?
ms epidemiology world wide: one view of current status
multiple sclerosis in the japanese population
epidemiologic evidence for multiple sclerosis as an infection
the distribution of multiple sclerosis
clinical and epidemiological profile of multiple sclerosis in a reference center in the state of bahia, brazil
multiple sclerosis in latin america
multiple sclerosis in kwazulu natal, south africa: an epidemiological and clinical study
prevalence of multiple sclerosis in texas counties
incidence and prevalence of multiple sclerosis in olmsted county
multiple sclerosis in isfahan, iran
multiple sclerosis
the dissemination of multiple sclerosis: a viking saga? a historical essay
migrant studies in multiple sclerosis
the age-range of risk of developing multiple sclerosis: evidence from a migrant population in australia
multiple sclerosis and age at migration
epidemiology of multiple sclerosis in us veterans: iii. migration and the risk of ms
- epidemiology of multiple sclerosis in us veterans: iii. migration and the risk of ms
- multiple sclerosis among immigrants in greater london
- motor neurone disease and multiple sclerosis among immigrants to britain
- motor neuron disease and multiple sclerosis among immigrants to england from the indian subcontinent, the caribbean, and east and west africa
- multiple sclerosis among the united kingdom-born children of immigrants from the west indies
- multiple sclerosis among united kingdom-born children of immigrants from the indian subcontinent, africa and the west indies
- role of return migration in the emergence of multiple sclerosis in the french west indies
- incidence and prevalence of multiple sclerosis in olmsted county
- multiple sclerosis prevalence among sardinians: further evidence against the latitude gradient theory
- multiple sclerosis in malta in : an update
- hla-drb and multiple sclerosis in malta
- the possible viral etiology of multiple sclerosis
- genetics of multiple sclerosis
- analysis of the 'epidemic' of multiple sclerosis in the faroe islands
- multiple sclerosis in the faroe islands: i. clinical and epidemiological features
- multiple sclerosis in the faroe islands: an epitome
- multiple sclerosis and poliomyelitis
- epidemiological study of multiple sclerosis in israel: ii. multiple sclerosis and level of sanitation
- multiple sclerosis in australia: socioeconomic factors
- epidemiology of multiple sclerosis in us veterans: vii. risk factors for ms
- part iii: selected reviews. common childhood and adolescent infections and multiple sclerosis
- the effect of infections on susceptibility to autoimmune and allergic diseases
- the hygiene hypothesis and multiple sclerosis
- association between parasite infection and immune responses in multiple sclerosis
- infectious mononucleosis and risk for multiple sclerosis: a meta-analysis
- multiple sclerosis and epstein-barr virus
- epstein-barr virus
- epstein-barr virus
- the prevalence and clinical characteristics of ms in northern japan
- environmental risk factors for multiple sclerosis: part i. the role of infection
- epstein-barr virus in pediatric multiple sclerosis
- high seroprevalence of epstein-barr virus in children with multiple sclerosis
- epstein-barr virus antibodies and risk of multiple sclerosis: a prospective study
- an altered immune response to epstein-barr virus in multiple sclerosis: a prospective study
- temporal relationship between elevation of epstein barr virus antibody titers and initial onset of neurological symptoms in multiple sclerosis
- epstein-barr virus and multiple sclerosis: evidence of association from a prospective study with long-term follow-up
- association between clinical disease activity and epstein-barr virus reactivation in ms
- absence of epstein-barr virus rna in multiple sclerosis as assessed by in situ hybridisation
- is epstein-barr virus present in the cns of patients with ms?
- increased frequency and broadened specificity of latent ebv nuclear antigen- -specific t cells in multiple sclerosis
- identification of epstein-barr virus proteins as putative targets of the immune response in multiple sclerosis
- an epstein-barr virus-associated superantigen
- ebv-induced expression and hla-dr-restricted presentation by human b cells of alpha b-crystallin, a candidate autoantigen in multiple sclerosis
- infection of autoreactive b lymphocytes with ebv, causing chronic autoimmune diseases
- dysregulated epstein-barr virus infection in the multiple sclerosis brain
- acyclovir treatment of relapsing-remitting multiple sclerosis: a randomized, placebo-controlled, double-blind study
- a randomized, double-blind, placebo-controlled mri study of anti-herpes virus therapy in ms
- a randomized clinical trial of valacyclovir in multiple sclerosis
- integrating risk factors: hla-drb * and epstein-barr virus in multiple sclerosis
- a single subtype of epstein-barr virus in members of multiple sclerosis clusters
- epstein-barr virus genotypes in multiple sclerosis
- multiple sclerosis associated with chlamydia pneumoniae infection of the cns
- intrathecal antibody production against chlamydia pneumoniae in multiple sclerosis is part of a polyspecific immune response
- infection with chlamydia pneumoniae and risk of multiple sclerosis
- ioannidis a: chlamydia pneumoniae infection and the risk of multiple sclerosis: a meta-analysis
- human herpesvirus and multiple sclerosis: systemic active infections in patients with early disease
- intrathecal antibody (igg) production against human herpesvirus type occurs in about % of multiple sclerosis patients and might be linked to a polyspecific b-cell response
- human herpesvirus and multiple sclerosis: a one-year follow-up study
- a putative new retrovirus associated with multiple sclerosis and the possible involvement of the epstein-barr virus in this disease
- the danish multiple sclerosis registry: history, data collection and validity
- human coronavirus oc infection induces chronic encephalitis leading to disabilities in balb/c mice
- some comments on the relationship of the distribution of multiple sclerosis to latitude, solar radiation, and other variables
- the prevalence of multiple sclerosis in australia
- geographical considerations in multiple sclerosis
- regional variation in multiple sclerosis prevalence in australia and its association with ambient ultraviolet radiation
- sunlight and vitamin d for bone health and prevention of autoimmune diseases, cancers, and cardiovascular disease
- influence of season and latitude on the cutaneous synthesis of vitamin d : exposure to winter sunlight in boston and edmonton will not promote vitamin d synthesis in human skin
- safety and efficacy of increasing wintertime vitamin d and calcium intake by milk fortification
- serum -hydroxyvitamin d concentrations of new zealanders aged years and older
- multiple sclerosis: vitamin d and calcium as environmental determinants of prevalence: a viewpoint. part : sunlight, dietary factors and epidemiology
- the immunological functions of the vitamin d endocrine system
- , -dihydroxyvitamin d prevents the in vivo induction of murine experimental autoimmune encephalomyelitis
- , -dihydroxyvitamin d reversibly blocks the progression of relapsing encephalomyelitis: a model of multiple sclerosis
- treatment of experimental autoimmune encephalomyelitis in rat by , -dihydroxyvitamin d( ) leads to early effects within the central nervous system
- mortality from multiple sclerosis and exposure to residential and occupational solar radiation: a case-control study based on death certificates
- skin cancer in people with multiple sclerosis: a record linkage study
- epidemiologic study of multiple sclerosis in israel: i. an overall review of methods and findings
- epidemiological study of multiple sclerosis in western poland
- past exposure to sun, skin phenotype and risk of multiple sclerosis: a case-control study
- outdoor activities and diet in childhood and adolescence relate to ms risk above the arctic circle
- childhood sun exposure influences risk of multiple sclerosis in monozygotic twins
- vitamin d intake and incidence of multiple sclerosis
- the use of a self-administered questionnaire to assess diet four years in the past
- food-based validation of a dietary questionnaire: the effects of week-to-week variation in food consumption
- serum -hydroxyvitamin d levels and risk of multiple sclerosis
- the age-range of risk of developing multiple sclerosis: evidence from a migrant population in australia
- timing of birth and risk of multiple sclerosis: population based study
- circulating -hydroxyvitamin d levels indicative of vitamin d sufficiency: implications for establishing a new effective dietary intake recommendation for vitamin d
- estimates of optimal vitamin d status
- vitamin d supplementation, -hydroxyvitamin d concentrations, and safety
- human serum -hydroxycholecalciferol response to extended oral dosing with cholecalciferol
- epidemiologic study of multiple sclerosis in israel
- a case-control study of the association between socio-demographic, lifestyle and medical history factors and multiple sclerosis
- how multiple sclerosis is related to animal illness, stress and diabetes
- environmental risk factors and multiple sclerosis: a community-based, case-control study in the province of ferrara
- smoking is a risk factor for multiple sclerosis
- oral contraceptives and reproductive factors in multiple sclerosis incidence
- the influence of oral contraceptives on the risk of multiple sclerosis
- cigarette smoking and incidence of multiple sclerosis
- cigarette smoking and the progression of multiple sclerosis
- parental smoking at home and the risk of childhood-onset multiple sclerosis in children
- cigarette smoking and progression in multiple sclerosis
- neuropathological changes in chronic cyanide intoxication
- immunomodulatory effects of cigarette smoke
- effects of tobacco glycoprotein (tgp) on the immune system: ii. tgp stimulates the proliferation of human t cells and the differentiation of human b cells into ig secreting cells
- the epidemiology of acute respiratory infections in children and adults: a global perspective
- oral contraceptives, cigarette smoking and other factors in relation to arthritis
- reproductive factors, smoking, and the risk for rheumatoid arthritis
- smoking, obesity, alcohol consumption, and the risk of rheumatoid arthritis
- cigarette smoking increases the risk of rheumatoid arthritis: results from a nationwide study of disease-discordant twins
- smoking and risk of rheumatoid arthritis
- smoking history, alcohol consumption, and systemic lupus erythematosus: a case-control study
- multiple sclerosis and nutrition
- diet and the geographical distribution of multiple sclerosis
- nutrition, latitude, and multiple sclerosis mortality: an ecologic study
- the risk of multiple sclerosis in the u.s.a. in relation to sociogeographic features: a factor-analytic study
- correlation between milk and dairy product consumption and multiple sclerosis prevalence: a worldwide study
- nutritional factors in the aetiology of multiple sclerosis: a case-control study in montreal, canada
- studies on multiple sclerosis in winnipeg, manitoba, and new orleans, louisiana: ii. a controlled investigation of factors in the life history of the winnipeg patients
- epidemiological study of multiple sclerosis in western poland
- milk consumption and multiple sclerosis-an etiological hypothesis
- risk factors in multiple sclerosis: a population-based case-control study in hautes-pyrenees
- nutritional epidemiology
- dietary fat in relation to risk of multiple sclerosis among two large cohorts of women
- intakes of carotenoids, vitamin c, and vitamin e and ms risk among two large cohorts of women
- a double-blind controlled trial of long chain n- polyunsaturated fatty acids in the treatment of multiple sclerosis
- low fat dietary intervention with omega- fatty acid supplementation in multiple sclerosis patients
- double-blind trial of linoleate supplementation of the diet in multiple sclerosis
- polyunsaturated fatty acids in treatment of acute remitting multiple sclerosis
- linoleic acid in multiple sclerosis: failure to show any therapeutic benefit
- linoleic acid and multiple sclerosis: a reanalysis of three double-blind trials
- rate of pregnancy-related relapse in multiple sclerosis: pregnancy in multiple sclerosis group
- oral contraceptives and the incidence of multiple sclerosis
- recent use of oral contraceptives and the risk of multiple sclerosis
- a shadow falls on hepatitis b vaccination effort
- no increase in demyelinating diseases after hepatitis b vaccination
- hepatitis b vaccination and the risk of multiple sclerosis
- vaccinations and risk of central nervous system demyelinating diseases in adults
- school-based hepatitis b vaccination programme and adolescent multiple sclerosis
- hepatitis b vaccine and risk of relapse after a first childhood episode of cns inflammatory demyelination
- recombinant hepatitis b vaccine and the risk of multiple sclerosis: a prospective study
- hepatitis b vaccination and first central nervous system demyelinating event: a case-control study
- vaccinations and the risk of relapse in multiple sclerosis. vaccines in multiple sclerosis study group
- organic solvents and multiple sclerosis: a synthesis of the current evidence
- exposure to organic solvents and multiple sclerosis
- multiple sclerosis and organic solvents
- organic solvents and the risk of multiple sclerosis
- the risk for multiple sclerosis in female nurse anaesthetists: a register based study
- the relationship of ms to physical trauma and psychological stress: report of the therapeutics and technology assessment subcommittee of the american academy of neurology
- the risk of multiple sclerosis in bereaved parents: a nationwide cohort study in denmark
- antibiotic use and risk of multiple sclerosis
- allergy, histamine receptor blockers, and the risk of multiple sclerosis
- uric acid levels in sera from patients with multiple sclerosis
- serum uric acid and multiple sclerosis
- serum uric acid levels of patients with multiple sclerosis and other neurological diseases
- tetanus vaccination and risk of multiple sclerosis: a systematic review
- epstein-barr virus antibodies in multiple sclerosis
- epstein-barr virus infection and antibody synthesis in patients with multiple sclerosis
- epstein-barr nuclear antigen and viral capsid antigen antibody titers in multiple sclerosis
- increased prevalence and titer of epstein-barr virus antibodies in patients with multiple sclerosis
- viral antibody titers: comparison in patients with multiple sclerosis and rheumatoid arthritis
- the italian cooperative multiple sclerosis case-control study: preliminary results on viral antibodies
- the implications of epstein-barr virus in multiple sclerosis: a review
- altered antibody pattern to epstein-barr virus but not to other herpesviruses in multiple sclerosis: a population based case-control study from western norway
- altered prevalence and reactivity of anti-epstein-barr virus antibodies in patients with multiple sclerosis
- a role of late epstein-barr virus infection in multiple sclerosis
- exposure to infant siblings during early life and risk of multiple sclerosis

key: cord- - fjx s authors: xie, kefan; liang, benbu; dulebenets, maxim a.; mei, yanlan title: the impact of risk perception on social distancing during the covid- pandemic in china date: - - journal: int j environ res public health doi: . /ijerph sha: doc_id: cord_uid: fjx s

social distancing is one of the most widely recommended policies worldwide for reducing diffusion risk during the covid- pandemic. from a risk management perspective, this study explores the mechanism through which risk perception affects social distancing, in order to improve individual physical distancing behavior. the data for this study were collected from chinese residents in may using an internet-based survey. a structural equation model (sem) and hierarchical linear regression (hlr) analyses were conducted to examine the research hypotheses. the results show that risk perception significantly and positively affects perceived understanding and social distancing behaviors. perceived understanding has a significant positive correlation with social distancing behaviors and mediates the relationship between risk perception and social distancing behaviors. furthermore, safety climate positively predicts social distancing behaviors but lessens the positive correlation between risk perception and social distancing.
hence, these findings suggest effective management guidelines for successful implementation of social distancing policies during the covid- pandemic by emphasizing the critical roles of risk perception, perceived understanding, and safety climate.

as the number of global coronavirus cases explodes rapidly, threatening millions of lives, the covid- pandemic has become the fastest-spreading, most extensive, and most challenging public health emergency worldwide since world war ii [ ]. compared to seasonal influenza, this coronavirus appears to be more contagious and to transmit much faster; for example, the basic reproduction rate r for seasonal influenza is approximately . , while for covid- this value is . on average [ ] [ ] [ ]. with no efficacious treatments or vaccines available yet, social distancing measures remain one of the most common approaches to reducing the rate of infection. moreover, given the foreseeable multiple waves of the pandemic, covid- prevention will continue to rely on physical distancing behaviors until safe vaccines or effective pharmacological interventions become accessible. accordingly, social distancing has been implemented by authorities across the globe to prevent diffusion of the disease. facing this global pandemic, each government has issued advice about mobility restriction, the definition of social distancing, and distancing rules; however, the guidance documents differ.

social distancing has received increasing attention in numerous studies over recent decades, especially since the covid- outbreaks. in order to explore the critical points and network patterns of these prior research studies, a co-word analysis was conducted. literature keywords reflect the relationship between study subjects and the concentration of research content [ ]; hence, applying a co-word analysis to the existing literature can reveal generic knowledge and network patterns in studies on social distancing. an integrated search was conducted on the topic of social distancing, including related terms such as "physical distancing", "social isolation", and "lockdown". related papers published from january through june were retrieved from the web of science core database. then, using citespace software, which is designed as a tool for progressive knowledge domain visualization [ ], the co-occurrence matrix of keywords was calculated and visualized, as shown in figure . the size of a keyword represents its frequency of co-occurrence, and the connections show the significance of co-occurrence [ ].
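as a rough illustration of the counting that underlies such a co-occurrence matrix, the sketch below tallies how often pairs of keywords appear together across papers; the keyword lists are invented placeholders rather than the records actually retrieved from the web of science, and citespace performs an equivalent computation internally:

```python
from itertools import combinations
from collections import Counter

# hypothetical keyword lists, one per retrieved paper (placeholders only)
papers = [
    ["social distancing", "covid-19", "mental health"],
    ["social distancing", "lockdown", "covid-19"],
    ["lockdown", "mobility", "covid-19"],
]

cooccurrence = Counter()
for keywords in papers:
    # count each unordered pair of distinct keywords once per paper
    for pair in combinations(sorted(set(keywords)), 2):
        cooccurrence[pair] += 1

# the most frequent pairs correspond to the largest nodes / strongest links
for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```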
based on the co-word analysis, the major research foci and inner bibliometric characteristics of social distancing fall into four perspectives: how social distancing affects the pandemic, the additional effects and challenges caused by social distancing, modeling and simulation of social distancing, and influencing factors. most previous studies [ ] confirmed that social distancing has positive effects on slowing the pandemic, while several studies [ ] did not. some studies argue that social distancing cuts off the transmission path of the virus, thereby reducing r [ ]. moreover, different mathematical models and simulations have displayed a good correlation with the data shown in biomedical studies, offering a high level of evidence for the impact of social distancing measures in containing the pandemic [ , ]. for example, based on simple stochastic simulations, cano et al. [ ] evaluated the efficiency of social distancing measures for tackling the covid- pandemic, and okuonghae and omame [ ] found that if at least % of the population implemented social distancing measures, the pandemic would eventually disappear, according to numerical simulations of their model.
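to make the modeled mechanism concrete, the following minimal discrete-time sir sketch scales the transmission rate down with distancing compliance; all parameter values are assumptions for illustration, not estimates taken from the models cited above. below a critical compliance level the outbreak still takes off, while above it the epidemic fades out, mirroring the threshold behavior just described:

```python
# discrete-time SIR sketch: distancing compliance reduces the contact
# (transmission) rate beta. parameter values are illustrative assumptions.

def sir_outbreak(beta, gamma=0.1, days=300, i0=1e-4):
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        peak = max(peak, i)
    return peak, r  # peak prevalence and final epidemic size (fractions)

for compliance in (0.0, 0.4, 0.8):       # share of contacts removed
    beta = 0.3 * (1.0 - compliance)      # distancing lowers beta, hence R0
    peak, final_size = sir_outbreak(beta)
    print(f"compliance={compliance:.1f}: peak={peak:.3f}, final={final_size:.3f}")
```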
nevertheless, a systematic review and meta-analysis demonstrated that social distancing regulation showed a non-significant protective effect, which may be caused by persisting knowledge gaps across disparate population groups [ ]. although various cohort studies and modeling simulations have found that social distancing regulations can effectively prevent the spread of the pandemic, the additional effects and challenges caused by social distancing cannot be ignored. for instance, anxiety associated with social distancing may have a long-term effect on mental health [ ] and social inequality; furthermore, loneliness pandemics are arising from physical isolation as well [ ]. as a form of reduced movement and face-to-face connection between people, social distancing has changed residents' conventional health behaviors, which may lead to increasing obesity, accidental pregnancies, and other health risks [ , ]. a national survey carried out in italy demonstrated that individual needs shifted towards the three bottom levels of maslow's pyramid (i.e., belongingness and love needs, safety needs, and physiological needs) due to social isolation [ ].

compared with the impact of social distancing, more previous studies have focused on its influencing factors. first, at the national and cultural levels, akim and ayivodji [ ] concluded that certain economic and fiscal interventions were associated with higher compliance with social distancing; huynh [ ] found that countries with a higher "uncertainty avoidance index" show a lower proportion of public gatherings; likewise, moon [ ] explored the role of cultural orientations and showed that vertical collectivism predicted stronger compliance with social distancing norms. then, at the level of public society, aldarhami et al. [ ] conducted a survey indicating that a high level of public awareness affects social distancing implementation; besides, public health authorities and experts alike have pointed out that mass media and information play an important role in developing public awareness and constructing social distancing behaviors among social populations [ ]. lastly, from the perspective of individual behaviors and psychological factors, oconnell et al. [ ] reported that more antisocial individuals may pose a health risk to the public and engage less in social distancing, while yanti et al. [ ], based on a cross-sectional online survey, identified that respondents with sufficient knowledge and a good attitude complied positively with safety behaviors, such as keeping a physical distance from others and wearing face masks in public places.

although the evidence unambiguously supports that implementing social distancing regulations has a crucial effect on restraining the pandemic [ ], recent studies found that mobility restrictions do not lead to the expected reduction of coronavirus cases [ , ]. previous literature has analyzed various factors motivating social distancing behaviors; however, given the current enormous gap between method and practice, limited research has paid attention to the key factors from the perspective of risk management.

because of the significant role that individuals and public awareness play in compliance with social distancing, this study focuses on the mechanism of the risk perception effect on social distancing. the individual's perceived understanding and safety climate are also examined to identify their effectiveness in the relationship between risk perception and social distancing. based on a quantitative online survey with a sample size of participants from china over the period of may , we built a structural equation model (sem) and conducted hierarchical linear regression (hlr) analysis to examine how the selected moderators influence social distancing behavior. the remainder of the paper is organized as follows: section reviews risk perception theories and develops several hypotheses within the conceptual framework; section describes the research methodology, data collection, and measurement of latent variables; we then analyze the data and examine the hypotheses (section ), discuss the implications and limitations of our findings (section ), and draw the main conclusions (section ).

risk exists objectively, but different people make different behavioral decisions when they perceive risk differently [ ]. hence, even though many medical experts have stressed the importance of maintaining physical distancing amid the covid- pandemic, people's risk perception still colors their beliefs about facts. the concept of risk perception differs among disciplines [ ]. in this study, risk perception in the context of the pandemic is defined as the psychological process of subjectively assessing the probability of being infected by the coronavirus, an individual's perceived health risk, and the available protective measures [ , ]. compared to the concept of risk perception in other fields, health risk perception and the severity of the consequences of subsequent behavioral decisions are its most prominent features. empirical evidence has indicated that health risk perception may significantly affect people's self-protective behaviors and, in turn, the negative consequences of health risks [ ]. dionne et al. [ ] found that risk perception associated with medical activities was a critical predictor of epidemic prevention behaviors; accordingly, as reported, underestimation of pandemic knowledge and health risks can lead to decreasing implementation of social distancing. most previous research focused on identifying influencing factors for people's health risk perception, as risk perception largely determines whether individuals take protective measures during the pandemic. there are also various factors that reduce the substantial deviation between the actual objective risk and subjective feelings. perceived understanding is one such crucial factor; it refers to situational awareness for the adoption of healthcare protections when facing the pandemic [ ]. according to the theory of planned behavior, only when people realize that they face a health risk, or even a death risk, will they have the situational awareness to take further healthcare protections. effective and timely perceived understanding greatly helps people translate risk perception into actual actions [ ]; perceived understanding thus plays a vital role in the adoption of healthcare behaviors. therefore, the following four hypotheses were developed, considering the findings from previous studies.
hypothesis (h ). risk perception positively affects social distancing behavior during the covid- pandemic.

hypothesis (h ). risk perception positively affects perceived understanding of the covid- pandemic.

hypothesis (h ). perceived understanding positively affects social distancing behavior.

hypothesis (h ). perceived understanding about the covid- pandemic plays a mediating role between risk perception and social distancing behavior.

facing huge economic pressure and public opinion, many companies and organizations have gradually re-opened; at the same time, these institutions require their employees to implement social distancing policies strictly. similarly, when people go out to eat, shop, and seek entertainment, many public places remind them to maintain a physical distance. whether in a social organization or a public place, this kind of reminder, released through information media, creates a safety climate that leads people to take necessary measures and reduce the spread of the virus. generally, safety climate refers to individuals' perception of safety regulations, procedures, and behaviors in the workplace [ ]. from the perspective of pandemic prevention and control, safety climate relates to a consensus created by the work environment that promotes people, consciously or unconsciously, to take appropriate safety measures; namely, safety climate reflects common awareness among employees of the importance of organizational safety issues [ ]. numerous observations and studies attest to the relationship between safety climate and protective behavior. bosak et al. [ ] found that a good safety climate was negatively related to people's risk behaviors; moreover, another study showed that safety climate completely mediated the effect of risk perception on safety management [ ]. however, few studies have focused on the influence of safety climate on people's self-protection behavior during a pandemic. taking protective measures, such as social distancing, wearing face masks, and other self-prevention behaviors, is instrumental in avoiding the spread of infection. an organization with a good safety climate can carry out relevant safety training and drills, so as to suppress potential risk tendencies and promote its employees' safety behaviors. therefore, if the working environment strengthens education and publicity about pandemic knowledge, people are more willing to take correct protective measures, such as maintaining a social distance. additionally, koetke et al. [ ] pointed out that safety climate (trust in science) played a moderating role in the relationship between conservatism and social distancing intentions. to conclude, based on the above literature review, the conceptual framework of this study is illustrated in figure . our last two hypotheses read as follows:

hypothesis (h ). safety climate positively affects social distancing behavior during the covid- pandemic.

hypothesis (h ). safety climate positively moderates the impact of risk perception on social distancing.

according to the th china statistical report on internet development, announced by the china internet network information center (cnnic), in , there were million internet users in china. several studies exploring physical or psychological influencing mechanisms, such as risk perception, have shown no significant difference between internet users and non-users [ ]. therefore, online questionnaires were randomly collected from internet users through wenjuan.com. a total of completed responses were received, an effective rate of . %, after excluding suspected unreal answers completed in less than s. additionally, before answering the survey questionnaire, participants were directed to review and provide their consent via an online informed consent form pre-approved by a panel of experts and the institutional review board. data collection was conducted anonymously throughout may .
the female participants constituted . % of the sample, while . % were male. among the respondents, most were young people: . % belonged to the age group of - years, while . % belonged to the age group of - years. a total of . % of the participants had a college degree or above, and only % had less than a high school education. of the total sample, . % reported living in rural areas and . % lived in urban communities. it should be noted that . % of the participants lived in hubei province, which was the epicenter of the covid- pandemic in china.

the initial questionnaire contained questions measuring four latent variables: risk perception (rp, items), perceived understanding (pu, items), social distancing (sd, items), and safety climate (sc, items). all measurement items were prepared based on a review of the related literature and methods (table ). for example, initial items for rp were generated following previous questionnaires by dionne et al. [ ] and kim et al. [ ]; measurement items for pu were compiled based on the infectious disease-specific health literacy scale [ ] and the study by qazi et al. [ ]; the sc instrument statements were taken from the literature review and previously completed research [ , , ]; and initial measurement questions for sd were developed based on the studies of swami et al. [ ] and gudi et al. [ ]. additionally, to ensure the validity of the draft questionnaire, the original survey instrument statements were revised based on suggestions from a panel of experts, including risk management professionals, public health specialists, and community managers. necessary modifications were made by simplifying, rewording, and replacing several items after the experts reviewed the survey structure, wording, and item allocation. according to the expert panel's feedback, the item-level content validity index (i-cvi) of all items was greater than . , and the scale-level cvi (s-cvi) was . (> . ), indicating excellent validity of the scale (see supplementary materials; a sketch of the cvi computation follows below). an initial survey with items was first pilot-tested among a randomly selected sample of internet users. after conducting cognitive interviews with the pilot participants and analyzing reliability and correlations, measurement items (rp , rp , rp , and sd ) with an item-to-total correlation below . were removed. finally, a formal questionnaire containing items was developed. the response scale for all survey items was a -point likert scale ranging from = "strongly disagree" to = "strongly agree", and all items were phrased positively, so that a higher score represented stronger agreement. table displays an overview of the scale and questionnaire items; the recoverable fragment of the table is reproduced here.

table . measurement scale items (recovered fragment):
- social distancing (sd): avoid contact with individuals who have influenza; avoid traveling within or between cities/local regions; avoid using public transport due to covid- ; avoid going to crowded places due to covid- *
- safety climate (sc; sources: koetke et al. [ ]; neal et al. [ ]; wu et al. [ ]): the government is concerned about the health of people; i trust the covid- information provided by the government; there is a clearly stated set of goals or objectives for covid- prevention; people consciously follow the pandemic prevention regulations; being able to provide necessary personal protective equipment for workers during the pandemic; offering workers as much safety instruction and training as needed during the pandemic
note: * items removed from the initial questionnaire.
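for reference, the content validity indices mentioned above can be computed as simple proportions; the sketch below uses hypothetical expert ratings (not the study's actual panel data) and the commonly used averaging definition of the scale-level index:

```python
import numpy as np

# hypothetical relevance ratings: rows are items, columns are expert panelists,
# 1 = rated relevant, 0 = rated not relevant
ratings = np.array([
    [1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1],
    [1, 1, 1, 1, 0],
])

i_cvi = ratings.mean(axis=1)   # item-level CVI: share of experts rating the item relevant
s_cvi_ave = i_cvi.mean()       # scale-level CVI, averaging method

print("i-cvi per item:", i_cvi)          # conventionally compared to a 0.78 cutoff
print("s-cvi/ave:", round(s_cvi_ave, 2))
```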
descriptive statistics and correlation analyses of the latent variables were first examined. then, exploratory factor analysis (efa) and confirmatory factor analysis (cfa) were conducted to verify the unidimensionality and reliability of the measurement items. sem can be applied to control for measurement errors as well as to use parameters to identify interdependencies [ , ]; hence, this approach is appropriate for testing the hypotheses through path analyses. in addition, to examine the moderating effect, hlr was carried out to verify hypotheses h and h . amos version . was used for the cfa and sem (hypotheses h - h ), while the remaining analyses, i.e., efa and hlr (hypotheses h and h ), were done using spss . (ibm, armonk, ny, usa).

the means, standard deviations (s.d.), and inter-correlations of all the measures are contained in table . there are significant positive correlations between the four variables: rp has significant positive correlations with sd and pu, suggesting partial support for hypotheses h and h , respectively, and both pu and sc showed a significant positive correlation with sd, indicating that hypotheses h and h were partially supported as well.

reliability can be formally defined as the proportion of observed score variance that is attributable to true score variance. several approaches exist to evaluate the reliability of a measuring instrument, and internal consistency is the most widely used method in research with a cross-sectional design. cronbach's alpha (α) can be used to estimate internal consistency [ ]; a value of . or above indicates strong internal consistency of the adopted scales [ ]. table indicates that all four latent variables have good reliability (cronbach's α > . ), suggesting that the measurement items are appropriate indicators of their respective constructs.

the validity analysis examines the accuracy of the measurement instrument, namely the validity of the scale. it mainly includes content validity and construct validity; content validity has been supported by the expert panel's recommendations and pre-tests, while construct validity requires a combination of efa and cfa. first, the kaiser-meyer-olkin (kmo) test value was . , and the result of the bartlett test (χ² = . , df = , p < . ) was large and significant; hence, the data shown in table were suitable for factor analysis. the measurement items then loaded on four factors that correspond exactly to the four latent variables, and these four factors explained . % of the total variance. similarly, the cfa results confirmed the four-factor model; the goodness-of-fit statistic was χ²/df = . , and the composite reliability (cr) and average variance extracted (ave) of each construct were calculated with formulas ( ) and ( ):

cr = \frac{\left(\sum_{i=1}^{k} \lambda_i\right)^2}{\left(\sum_{i=1}^{k} \lambda_i\right)^2 + \sum_{i=1}^{k} \sigma_{e_i}}

ave = \frac{\sum_{i=1}^{k} \lambda_i^2}{\sum_{i=1}^{k} \lambda_i^2 + \sum_{i=1}^{k} \sigma_{e_i}}

where \lambda_i and \sigma_{e_i} represent the regression weight (factor loading) and the error variance estimate of measurement item i, respectively, and k is the number of measurement items. cr and ave are further effective measures for evaluating construct validity; according to jobson [ ], the acceptable value of cr is . and above, while ave should be . and above. table demonstrates that most of the values of cr and ave met these standards, suggesting an acceptable goodness-of-fit for the further sem analysis.
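the reliability and construct-validity quantities used above can be computed directly; the sketch below derives cronbach's alpha from simulated raw item scores and cr/ave from assumed standardized loadings and error variances, following the two formulas above (all input numbers are invented for illustration):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """item_scores: respondents x items array of raw scale scores."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def composite_reliability_ave(loadings, error_variances):
    """standardized CFA loadings and error variances for one construct."""
    lam = np.asarray(loadings, dtype=float)
    err = np.asarray(error_variances, dtype=float)
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + err.sum())
    ave = (lam ** 2).sum() / ((lam ** 2).sum() + err.sum())
    return cr, ave

# simulate 5-point likert responses driven by one latent factor (fake data)
rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(300, 4))), 1, 5)

print("alpha:", round(cronbach_alpha(items), 2))
print("cr, ave:", composite_reliability_ave([0.82, 0.78, 0.74, 0.70],
                                            [0.33, 0.39, 0.45, 0.51]))
```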
based on the conceptual framework, the sem analysis was conducted to explore the relationships between rp, sd, and pu (as the mediator). the hypothesized model shown in figure was first examined. table summarizes the fit indices of the model, which indicate an excellent goodness-of-fit for the data on the majority of indices. in this model, several path analyses were developed to test hypotheses h , h , and h . as shown in table , rp has significant positive relationships with pu (β = . , c.r. = . , p < . ) and sd (β = . , c.r. = . , p < . ); likewise, pu has a significant positive effect on sd (β = . , c.r. = . , p < . ). thus, hypotheses h , h , and h are supported.

bias-corrected (bc) and percentile (pc) bootstrapping approaches were carried out to verify the mediating effect of pu. previous studies have found that bootstrapping provides a robust test of mediation hypotheses [ ]. accordingly, the effect of risk perception on social distancing through perceived understanding was assessed by bootstrapping sub-samples. as can be seen from table , the lower and upper limits ( % bc and pc bootstrap confidence intervals) for the indirect effect (β = . ) were all greater than zero; moreover, the value of z (the indirect effect divided by its standard error) equals . (> . ). similarly, there were no zero values between the lower and upper limits ( % bc and pc bootstrap confidence intervals) for the direct effect (β = . , z = . ). therefore, perceived understanding partially mediates the positive effect of risk perception on social distancing; in other words, perceived understanding does not completely offset the effect of risk perception, which partially explains social distancing. in summary, these results confirm hypothesis h .

hypothesis h predicted that safety climate positively moderates the impact of risk perception on social distancing. to test the moderation effect, the hlr analysis was conducted: model serves as a baseline with the independent variables rp and sc, and model incorporates the additional interaction term rp×sc. table presents a significant two-way interaction effect of rp and sc on sd (model , rp×sc, β = − . , p < . ). as shown in table , while risk perception is positively associated with social distancing regardless of the value of safety climate, safety climate reduces this positive effect; thus, hypothesis h is only partially supported. additionally, whether in model (β = . , p < . ) or model (β = . , p < . ), sc presents a statistically significant positive relationship with sd, which further supports hypothesis h . note: *** p < . ; vif denotes the variance inflation factor (vif = /tolerance), with vif < considered acceptable.
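the percentile-bootstrap test of the indirect effect reported above can be sketched as follows; the data are simulated with the hypothesized structure rather than drawn from the survey, and each resample refits the a-path and b-path regressions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# simulated data with the hypothesized structure (illustrative, not survey data)
rp = rng.normal(size=n)                          # risk perception
pu = 0.5 * rp + rng.normal(size=n)               # a-path: rp -> pu
sd = 0.3 * pu + 0.4 * rp + rng.normal(size=n)    # b-path plus direct effect

def ols_coefs(y, predictors):
    """OLS coefficients of y on an intercept plus the given predictors."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

indirect = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                  # resample with replacement
    a = ols_coefs(pu[idx], [rp[idx]])[1]         # rp -> pu
    b = ols_coefs(sd[idx], [pu[idx], rp[idx]])[1]  # pu -> sd, controlling rp
    indirect.append(a * b)

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"95% percentile CI for a*b: [{lo:.3f}, {hi:.3f}]")  # excludes zero
```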
this study has continued to demonstrate that social distancing behaviors play a critical role in preventing the diffusion of the covid- pandemic. in identifying influencing factors that lead to social distancing, previous studies have highlighted risk perception as a leading indicator of protective behaviors [ , , ]; people should be encouraged to strengthen risk perception in order to identify and rectify infection risks and health issues related to unprotected behaviors during the covid- pandemic. however, limited research has examined whether different individual risk perceptions affect the interpretation of social distancing regulations in an equivalent manner. by investigating measurement scales for risk perception, perceived understanding, safety climate, and social distancing across a population of internet users in china, this study addressed the mechanism of the risk perception effect on social distancing in order to improve individuals' physical distancing behaviors.

this study provided evidence that risk perception and perceived understanding can significantly affect people's social distancing behaviors during the covid- pandemic. the results of the path analysis supported hypotheses h , h , and h ; it is evident from figure and tables and that the path coefficients are significant and the overall hypothesized model fits the data well. these findings are in line with aldarhami et al. [ ], zhong et al. [ ], and machida et al. [ ]. a key principle of social distancing behavior is that risk perception is a critical condition for protective action, and the results support the finding that higher risk perception motivates people to comply with social distancing. only by enhancing risk perception can people truly remain vigilant against the pandemic and take protective measures. therefore, when the government implements social distancing and other prevention measures, it must take public risk perception into account and improve public awareness through various means, such as social media, press conferences, standard therapy, and guidelines for outbreak response. in particular, it is necessary to rectify pandemic rumors to prevent incorrect information from reducing public risk perception.

besides, we confirmed a dual effect of perceived understanding on social distancing. first, perceived understanding was found to predict social distancing directly; these results are consistent with other studies [ , ] showing that increased perceived understanding can encourage people to gain more knowledge about the pandemic and health risks, so that they engage more with social distancing regulations. second, we identified that perceived understanding showed an incomplete (partial) mediating effect on the relationship between risk perception and social distancing. previous literature on perceived understanding shows that its effect on social distancing behaviors is related to the sources of information [ ]; our results, in turn, confirm an indirect positive effect of risk perception on social distancing through perceived understanding. hence, with the help of authoritative medical experts, scientific knowledge of the pandemic and prevention measures should be promptly popularized among communities to enhance public perceived understanding. in addition, since an increase in risk perception can promote the public's desire to understand the pandemic and pay more attention to their own health risks, the authorities should improve pandemic information release channels.

moreover, we identified that a positive perception of safety climate (β = . , p < . ) promotes adherence to social distancing and that this effect is stronger than that of risk perception (β = . , p < . ); this finding concurs with the study conducted by kouabenan et al. [ ]. the achievement of a consensus on a safe climate requires the joint efforts of organizations and society. first, workplaces such as shops, cafeterias, office spaces, and public transit systems have to strengthen pandemic prevention and control drills.
then, it is necessary to support community propaganda and the popularization of scientific knowledge, and to gather individual consensus on self-protective behaviors. it is also strongly recommended to wear a face mask, keep a m physical distance between workers, and use sanitary measures in public venues.

finally, we demonstrated that safety climate, risk perception, and social distancing are interacting factors, supporting our hypothesis that safety climate moderates the relationship between risk perception and social distancing, as found in kouabenan et al. [ ], bosak et al. [ ], and koetke et al. [ ] (see hypothesis h ). however, we did not find that safety climate increased the degree to which risk perception positively affects social distancing. as shown in figure , risk perception was positively related to social distancing under conditions of a high safety climate as well as under conditions of a low safety climate; more importantly, we found that safety climate lessens the positive correlation between risk perception and social distancing (a simple-slopes sketch of this pattern appears at the end of this section). this moderating effect improves our understanding of the contexts in which risk perception affects social distancing. yet, as described by kouabenan et al. [ ], safety climate was viewed as the key factor because it completely mediated the effect of perceived risk on safety behavior. one potential explanation for this difference in findings is the complex content of the safety climate measurement items, which actually comprise three clauses. compared to previous studies, we regarded safety climate as the whole of social consciousness: the overall promotion of social protection awareness may substitute for the role of risk perception and lead to compliance with social distancing through a public herd effect. therefore, while focusing on the importance of risk perception, we cannot ignore the positive incentives for social distancing brought by a good safety climate. in addition to enhancing employees' consensus on pandemic prevention, qualified organizations can physically separate workspaces and public venues in time and space. for example, people should avoid going out for mass gatherings (lunches, shopping, traveling, education, leisure, etc.), and, as a management commitment, restaurant space, office space, and other public areas should be physically divided to ensure sufficient isolation distance. flexible work scheduling, online office hours, and e-learning are encouraged. conclusively, the application of innovative social distance management technologies (e.g., technologies based on an emerging range of ict [ ], such as bluetooth, radio frequency identification, and mobile cloud services) can assist in achieving accurate measurement of the physical distance between individuals and in momentarily reminding people to maintain a social distance as needed. in public venues, such as dining areas, using multimedia, posters, and ground stickers with social distancing reminders can help create a good safety climate.
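the moderation pattern described above, positive rp slopes at both low and high sc with a flatter slope when sc is high, can be probed with a simple-slopes computation; the data and the negative interaction coefficient below are simulated for illustration only and do not reproduce the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 600
# simulated standardized scores with a negative rp x sc interaction
rp = rng.normal(size=n)
sc = rng.normal(size=n)
sd = 0.4 * rp + 0.5 * sc - 0.15 * rp * sc + rng.normal(size=n)

X = np.column_stack([np.ones(n), rp, sc, rp * sc])
b = np.linalg.lstsq(X, sd, rcond=None)[0]   # [const, b_rp, b_sc, b_interaction]

for label, level in (("low sc (-1 sd)", -1.0), ("high sc (+1 sd)", 1.0)):
    print(label, "rp slope:", round(b[1] + b[3] * level, 3))
# both slopes remain positive, but the slope is flatter when sc is high:
# safety climate lessens, without reversing, the rp -> sd association
```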
although substantial efforts were put into this study to ensure the reliability and validity of the results, a few limitations still exist, which might be explored in further research. first, our sample consists of chinese internet users but may not have all the attributes that perfectly match the characteristics of the current chinese population; without collecting data from other regions and having a representative sample, the generalizability of our findings is limited to a certain extent. a cross-regional, more representative study with a bigger sample size could be used in future work to improve the accuracy and generalizability of the results. second, we measured all the latent variables with a simple one-dimensional factor using a cross-sectional design; the results can neither exclude the possibility of reverse causation nor prove exact cause-and-effect relationships. hence, further study could be extended by collecting longitudinal data through multiple rounds of experiments. furthermore, several previous studies measured risk perception from a multi-dimensional perspective, so it would be meaningful to model risk perception as a multi-dimensional construct and develop a multi-item scale to improve reliability and validity. moreover, this study considers the risk perception that shapes social distancing from a risk management standpoint; other factors, such as knowledge and beliefs about the covid- pandemic, mask-wearing, self-awareness in prevention of covid- , the number of confirmed covid- cases in a given region, the death rate in a given region, and the percentage of elderly population in a given region, could also be included in further research. finally, we considered the mediating and moderating effects of perceived understanding and safety climate; as contingent factors, these effects may interact with other factors, shifting the results of the present study. besides, several control variables associated with population demographics, such as gender, age, and education level, did not show a significant impact on the relationships among the latent variables; this subject, however, is worth exploring in further research.

this study investigated the impact of risk perception on social distancing during the covid- pandemic. based on data collected from an online survey among participants in china throughout may , our analyses indicate that positive changes in social distancing behaviors are associated with increased risk perception, perceived understanding, and safety climate. the individual's perceived understanding partially plays a positive mediating role in the relationship between risk perception and social distancing behaviors. furthermore, safety climate plays a negative role in the relationship between risk perception and social distancing, because safety climate seems to mitigate the effect of risk perception on social distancing. hence, effective health promotion strategies directed at developing or increasing positive risk perception, perceived understanding, and safety climate should be conducted to encourage people to comply with social distancing policies amid these unprecedented times. finally, these results are expected to contribute to management guidelines at the level of individual perception and public opinion, as well as to assist with effective implementation of social distancing policies in countries at high risk from the covid- pandemic.
references:
- pandemic is associated with antisocial behaviors in an online united states sample
- tracking changes in sars-cov- spike: evidence that d g increases infectivity of the covid- virus
- early transmission dynamics in wuhan, china, of novel coronavirus-infected pneumonia
- the reproductive number of covid- is higher compared to sars coronavirus
- does culture matter social distancing under the covid- pandemic?
- social distancing: how religion, culture and burial ceremony undermine the effort to curb covid- in south africa
- airborne or droplet precautions for health workers treating covid- ?
- covid- and the social distancing paradox: dangers and solutions
- physical distancing for coronavirus (covid- ). available online
- how to slow the spread of covid-
- basic policies for novel coronavirus disease control by the government of japan
- staying alert and safe (social distancing)
- what is social distancing and how can it slow the spread of covid- ? available online
- covid- ) advice for the public
- detecting and visualizing emerging trends and transient patterns in scientific literature
- visualizing and exploring scientific literature with citespace: an introduction
- how many ways to use citespace? a study of user interactive events over months
- effectiveness of workplace social distancing measures in reducing influenza transmission: a systematic review
- effectiveness of personal protective measures in reducing pandemic influenza transmission: a systematic review and meta-analysis
- on the role of governmental action and individual reaction on covid- dynamics in south africa: a mathematical modelling study
- social distancing simulation during the covid- health crisis
- impact of social distancing measures for preventing coronavirus disease [covid- ]: a systematic review and meta-analysis protocol
- covid- modelling: the effects of social distancing
- analysis of a mathematical model for covid- population dynamics in
- mental morbidity arising from social isolation during covid- outbreak
- reconceptualizing social distancing: teletherapy and social inequality during the covid- and loneliness pandemics
- social distancing as a health behavior: county-level movement in the united states during the covid- pandemic is associated with conventional health behaviors
- love in the time of covid- : sexual function and quality of life analysis during the social distancing measures in a group of italian reproductive-age women
- a nation-wide survey on emotional and psychological impacts of covid- social distancing
- interaction effect of lockdown with economic and fiscal measures against covid- on social-distancing compliance: evidence from africa
- explaining compliance with social distancing norms during the covid- pandemic: the roles of cultural orientations, trust and self-conscious emotions in the us
- public perceptions and commitment to social distancing during covid- pandemic: a national survey in saudi arabia
- third-person effect and pandemic flu: the role of severity, self-efficacy, method mentions, and message source
- community knowledge, attitudes, and behavior towards social distancing policy as prevention transmission of covid- in indonesia
- changes in contact patterns shape the dynamics of the covid- outbreak in china
- lockdown strategies, mobility patterns and covid-
- the risk concept-historical and recent development trends
- risk assessment and risk management: review of recent advances on their foundation
- risk perception through the lens of politics in the time of the covid- pandemic
- risk perception in fire evacuation behavior revisited: definitions, related concepts, and empirical evidence
- effects of news media and interpersonal interactions on h n risk perception and vaccination intent
- health care workers' risk perceptions and willingness to report for work during an influenza pandemic
- analyzing situational awareness through public opinion to predict adoption of social distancing amid pandemic covid-
- nurses' use of situation awareness in decision-making: an integrative review
- relationships between psychological safety climate facets and safety behavior in the rail industry: a dominance analysis
- safety climate, perceived risk, and involvement in safety management
- safety climate dimensions as predictors for risk behavior
- trust in science increases conservative support for social distancing. osf, cngq
- an assessment of the generalizability of internet surveys
- the effects of risk perceptions related to particulate matter on outdoor activity satisfaction in south korea
- study on the development of an infectious disease-specific health literacy scale in the chinese population
- the impact of organizational climate on safety climate and individual behavior
- core dimensions of the construction safety climate for a standardized safety-climate measurement
- analytic thinking, rejection of coronavirus (covid- ) conspiracy theories, and compliance with mandated social-distancing: direct and indirect relationships in a nationally representative sample of adults in the united kingdom
- knowledge and beliefs towards universal safety precautions during the coronavirus disease (covid- ) pandemic among the indian public: a web-based cross-sectional survey
- coefficient alpha and the internal structure of tests
- introduction to psychometric theory
- applied multivariate data analysis: volume ii. categorical and multivariate methods
- testing mediation and suppression effects of latent variables: bootstrapping with structural equation models
- knowledge, attitudes, and practices towards covid- among chinese residents during the rapid rise period of the covid- outbreak: a quick online cross-sectional survey
- adoption of personal protective measures by ordinary citizens during the covid- outbreak in japan
- social distancing . with privacy-preserving contact tracing to avoid a second wave of covid-

the authors declare no conflict of interest.

key: cord- - kghmzf authors: lai, allen yu-hung; tan, seck l. title: impact of disasters and disaster risk management in singapore: a case study of singapore's experience in fighting the sars epidemic date: - - journal: resilience and recovery in asian disasters doi: . / - - - - _ sha: doc_id: cord_uid: kghmzf

singapore is vulnerable to both natural and man-made disasters alongside its remarkable economic growth. one of the most significant disasters in recent history was the severe acute respiratory syndrome (sars) epidemic in . the sars outbreak was eventually contained through a series of risk-mitigating measures introduced by the singapore government; this would not have been possible without the engagement and responsiveness of the general public. this chapter begins with a description of singapore's historical disaster profiles and the policy and legal framework of its all-hazard management approach. we use a case study to highlight the disaster impacts and the insights drawn from singapore's risk management experience, with specific reference to the sars epidemic.
the implications from sars focus on four areas: staying vigilant at the community level, remaining flexible in a national command structure, the demand for surge capacity, and collaborative governance at the regional level. this chapter concludes with a discussion of the flexible command structure, covering both the way and the extent to which it was utilized.

situated in southeast asia yet outside the pacific ring of fire, singapore has been fortunate enough to be spared major natural disasters such as typhoons, floods, volcanic eruptions, and earthquakes. however, this does not imply that singapore is safe or immune from being affected by disasters. singapore houses a population of . million, ranking third in population density worldwide, and about % of singapore's population resides in high-rise buildings (asian disaster reduction center). a major disaster of any sort could inflict mass casualties and extensive destruction of property in singapore. clearly, like its neighboring countries, singapore is vulnerable to both natural and man-made disasters alongside its remarkable economic growth. the potential risks may result from its dense population, its intricate transportation network, or a transnational communicable disease. moreover, singapore can be affected by conditions in surrounding countries; for example, flooding in thailand and vietnam may affect the price of rice sold in singapore. indeed, in its short history of years, singapore has experienced a small number of disasters, and chief among these, the severe acute respiratory syndrome (sars) epidemic in was the most devastating. the sars outbreak brought about far-reaching public health and economic consequences for the country as a whole. fortunately, the outbreak was eventually contained through a series of risk-mitigating measures introduced by the singapore government and the responsiveness of all singaporeans. it is important to point out that these risk-mitigating measures, along with the public's compliance, were swiftly adjusted to address volatile conditions, such as when more epidemiological cases were uncovered. in this chapter, we introduce singapore's all-hazard management framework as well as the insights drawn from singapore's risk management experience, with specific reference to the sars epidemic.

to achieve our research objective, we utilized a triangulation strategy of various research methodologies. to understand the principles and practices of singapore's approach to disaster risk management, we carried out a historical analysis of official documents obtained from the relevant singapore government agencies as well as international organizations, literature reviews, quantitative analysis of economic impacts, qualitative interviews with key informants (e.g., public health professionals and decision-makers), and email communications with frontline managers from the public sector (e.g., the singapore civil defense force and the communicable disease centre) and non-governmental organizations. the authors also employed the 'cultural insider' approach by participating in epidemic control procedures against sars. in particular, we use the case study method to illuminate singapore's approach to disaster risk management. the rationale for a case study of sars alongside singapore's all-hazard approach is that a case study can best showcase the contextual differences, whether political, economic, or social.
this case study aims to highlight the lessons drawn from past experiences in a specific context and timeframe, through which we are able to focus more on the nature of the risks, and on the processes and impacts of disaster risk management and policy intervention. we also examined relevant literature on risk mitigating measures against communicable diseases in order to establish our conclusions. we evaluated oral accounts provided by key health policy decision-makers and experts for valuable insights. this chapter offers empirical evidence on the role of the whole-of-government approach to risk mitigation of the sars epidemic. applying the approach to a case study, our research enriches the vocabulary of risk management, adding to the body of knowledge on disaster management specific to the region of southeast asia. indeed, the dominant perspective in this field holds that the state must be able to exercise brute force and impose its will on the population (lai and tan ). however, as shown in our paper, this dominant perspective is incomplete, as the exercise of authority and power by the government is not necessarily sufficient to contain the transmission of transnational communicable diseases. success in fighting epidemics, as most would agree, is also contingent on a concerted effort of partnership between governmental authorities and the population at large. as discussed in the first section of this volume, community and family ties along with government responses can mitigate disasters. this chapter has four main sections. following this introduction, we provide an overview of singapore's historical disaster profiles. second, we introduce the policy and legal framework, and budgetary allocations for risk mitigation in singapore. third, we detail a case study of singapore's experience in fighting sars, as well as the impact of sars on singapore in its economic, healthcare, and psychosocial aspects. in the fourth section, we discuss the implications for practice and future research in disaster risk management, followed by conclusions. singapore has experienced a small number of disasters since it was founded in . in this section, we briefly provide an historical account of singapore's disaster risk profiles, including earthquakes, floods, epidemics, civil emergencies, and haze. singapore has a low risk of earthquakes and tsunamis. geographically, singapore is located in a low seismic-hazard region. however, the high-rise buildings that are built on soft soil in singapore are still vulnerable to earthquakes from far afield (asian disaster reduction center ). this is because singapore is at a nearest distance of km from the sumatran subduction zone and km from the sumatra fault, both of which have the potential of generating large-magnitude earthquakes. this geographic vicinity may produce a resonance-like situation within high-rise buildings on soft soil. with regard to potential tsunamis, singapore has developed a national tsunami response plan, a multi-agency government effort comprising an early warning system, tsunami mitigation and emergency response plans, and public education. though singapore does not suffer from flood disasters, thanks to continuous drainage improvement works by the local authorities, the country has a risk of local flooding in some low-lying parts.
the floods take place due to heavy rainfall that accumulates over short periods of time. the worst floods in singapore's history took place on december . the floods claimed seven lives, forced more than , people to be evacuated, and caused total damages of sgd million (tan ). the swift and sudden floods in were caused by a combination of factors including torrential monsoon rains, drainage problems, and high incoming tides. over the following years, singapore saw a series of flash floods hit various parts of the city-state. for example, the - southeast asian floods hit singapore on december as a result of mm of rainfall in h. from onwards, singapore has experienced a series of flash floods due to higher-than-average rainfall. one severe episode occurred on june and flooded shopping malls and basement car parks in the city's most famous shopping area, orchard road. as per the reported historical disaster data from the cred international disaster database, singapore has suffered only two disaster events caused by epidemics. in , singapore experienced its largest known outbreak of hand-foot-mouth disease (hfmd), which affected more than , young children, causing three deaths. later in , sars hit singapore, and it was singapore's most devastating disaster to date. the sars virus infected around , people worldwide and caused around deaths. in singapore, sars infected people, of whom died of this contagious communicable disease. in , novel avian influenza h n struck singapore, which affected , people with deaths. civil emergencies are defined as sudden incidents involving the loss of lives or damage to property on a large scale. they include ( ) civil incidents such as bomb explosions, aircraft hijacks, terrorist hostage-taking, chemical, biological, radiological and explosive (cbre) agents and the release of radioactive materials by warships, and ( ) civil emergencies, for example major fires, structural collapses, air crashes outside the airport boundary, and hazardous material incidents. in singapore, the singapore civil defense force (scdf) is responsible for civil emergencies. since , singapore has experienced several episodes of civil emergencies. for example, the greek tanker spyros explosion at the jurong shipyard in was singapore's worst industrial disaster in terms of lives lost (ministry of labor, singapore ). in , the collapse of the six-storey hotel new world was singapore's deadliest civil disaster, claiming lives. the collapse was due to structural faults. the scdf, together with other rescue forces, spent days on the whole relief operation. after the collapse, the government introduced more stringent regulations on construction building codes, and the scdf went through a series of upgrades in training and equipment (goh ). singapore experienced its first haze between the end of august and the first week of november as a result of prevailing winds. the haze in , called the southeast asian haze, was caused by slash-and-burn techniques adopted by farmers in indonesia. the smoke haze carried particulate matter that caused an increase in acute health effects, including increased hospital visits due to respiratory distress such as asthma and pulmonary infection, as well as eye and skin irritation. the haze also severely affected visibility in addition to increasing health problems. as a result, singapore's health surveillance showed a % increase in outpatient attendance for haze-related conditions (emmanuel ).
apart from healthcare costs, other costs associated with the haze included short-term tourism and production losses. a study of the southeast asian haze by environmental economists indicated a total of usd . million in economic losses in singapore alone. singapore is actively involved in various regional meetings to deal with transboundary smoke haze pollution in order to reduce the risk (singapore institute of international affairs ). the singapore government adopts a cross-ministerial policy framework, whole-of-government integrated risk management (wog-irm), for disaster risk mitigation and disaster management (asia pacific economic cooperation ). this is a framework that aims to improve the risk awareness of all government agencies and the public, and helps to identify the full range of risks systematically. in addition, the framework identifies cross-agency risks that may have fallen through gaps in the system. this framework also includes medical response systems during emergencies, mass casualty management, risk reduction legislation for fire safety and hazardous materials, police operations, information and media management during crises, and public-private partnerships in emergency preparedness. the wog-irm policy framework in singapore functions in peacetime and in times of crisis. it refers to an approach in which all relevant agencies work together in an established framework, with seamless communication and coordination to manage the risk (pereira ). in peacetime, the home team comprises four core agencies at central government level. these four agencies are the strategic planning office, the home front crisis ministerial committee (hcmc), the national security coordination secretariat, and the ministry of finance at the policy layer. among them, the strategic planning office provides oversight and guidance as the main platform to steer and review the overall progress of the wog-irm framework. during peacetime, the strategic planning office convenes quarterly meetings for the permanent secretaries from the various ministries across government. in a crisis, the home front crisis management system provides a "ministerial committee" responsible for all crisis situations in singapore. in the wog-irm structure, the hcmc is led by the ministry of home affairs (mha). in peacetime, mha is the principal policy-making governmental body for safety and security in singapore. in the event of a national disaster, the mha leads at the strategic level of incident management. the incident management system in singapore is known as the home front crisis management system (hcms). under the hcms, the scdf is appointed as the incident manager, taking charge of managing the consequences of disasters and civil emergencies. reporting to the hcmc is an executive group known as the home front crisis executive group (hceg), which is chaired by the permanent secretary for mha. the hceg is in charge of planning and managing all types of disasters in singapore. within the operational layer, there are various functional inter-agency crisis management groups with specific responsibilities, integrated by the various governmental crisis-management units. at the tactical layer, there are the crisis and incident managers who supervise service delivery and coordination. the singapore government holds relevant ministries accountable in accordance with the nature and scope of the disaster.
among those ministries and government agencies, the scdf is the major player in risk mitigation and management for civil emergencies. now, let us look into the scdf in more detail. for civil security and civil incidents, the singapore civil defense force (scdf) is singapore's leading operational authority, the incident manager for the management of civil emergencies. the scdf is responsible for leading and coordinating the multi-agency response under the home front crisis management committee. the scdf operates a three-tier command structure, with headquarters (hq) scdf at the apex commanding four land divisions. these divisions are supported by a network of fire stations and fire posts strategically located around the island. the scdf also serves the following pivotal functions. the scdf provides effective -h fire fighting, rescue and emergency ambulance services. the scdf developed the operations civil emergency (ops ce) plan, a national contingency plan. when ops ce is activated, the scdf is vested with the authority to direct all response forces under a unified command structure, thus enabling all required resources to be pooled. however, the wog-irm policy framework only came into existence when singapore encountered sars. the sars epidemic in was an institutional watershed for singapore's approach to risk mitigation and disaster management (pereira ). prior to the sars epidemic, singapore's executive group mainly focused on crises or disasters that were civil defense in nature. these emergencies were conceived to be well managed by a solitary incident manager, supported by other relevant agencies. a specific multi-sectoral governance structure was not considered necessary to handle the crisis. the sars epidemic challenged the prevailing home front crisis management structure, as the epidemic transcended the management of civil defense incidents. the policymakers realized the necessity to adopt a comprehensive disaster management framework, an all-hazard approach that includes a mechanism for seamless integration at both the strategic and operational levels among various government agencies. to this end, singapore revamped its home front crisis management framework to produce the current inter-agency structure. the main legislation supporting emergency preparedness and disaster management activities in singapore are the civil defense act of , the fire safety act of , and the civil defense shelter act of . the civil defense act provides the legal framework for, amongst other things, the declaration of a state of emergency and the mobilization and deployment of operationally-ready national service rescuers. the fire safety act provides the legal framework to impose fire safety requirements on commercial and industrial premises, as well as the involvement of the management and owners of such premises in emergency preparedness against fires; and the civil defense shelter act provides the legal framework for buildings to be provided with civil defense shelters for use by persons to take refuge during a state of emergency. to tackle disease outbreaks, singapore had earlier promulgated the infectious disease act in . this legislation is jointly administered by the moh and the national environment agency (nea).
unlike most governments that make regular national budgetary provision for potential disaster relief and early recovery purposes, the government of singapore makes no annual budgetary allocations for disaster response because the risks of a disaster are low (global facility for disaster reduction and recovery , p. ). however, the singapore government can swiftly activate the budgetary mechanisms or funding lines in the event of a disaster and ensure these lines are sufficiently resourced with adequate financial capacity. to illuminate singapore's approach to disaster management, we now use a case study of singapore's fight against sars to highlight policy learning and lesson-drawing in a specific context and timeframe. this case study has three sections. we first introduce the epidemiology of sars in singapore. in the second section, we describe the impact caused by the sars epidemic on singapore in its economic, healthcare, and psychosocial aspects. in the third section, we demonstrate singapore's approach to risk mitigation, and detail the government's risk mitigating measures to contain the epidemic. singapore is a small open economy. external shocks can result in high levels of volatility resonating across the domestic economy. these shocks in turn would bring about higher levels of risk and uncertainty in singapore. at the beginning of , singapore's economic outlook was clouded by the iraq war and its impact on oil prices (attorney-general's chambers ). the unexpected outbreak of sars led to greater uncertainty in the singapore economy. singapore's financial markets were severely affected due to the loss of public confidence and reduced floor trading. the impact of sars on the stock market was reflected in the straits times index (sti) (see fig. . ). the market did not react well to the sars epidemic. in the first fortnight of the epidemic, the sti closed down points. even though more cases were reported, the sti climbed progressively up points over the next fortnight, eclipsing the earlier falls. this could be attributed to the strict measures which the singapore government introduced. the sti remained relatively stable over the immediate fortnight as new cases were reported. however, it started a downward plunge over the following fortnight as the number of cases peaked once more. the sti plunged points. however, the resilience of the sti was shown when it climbed back up, surpassing the level reported at the beginning of the sars period. the volatility of the sti demonstrates the vulnerability of a small open economy to exogenous forces, in this case the sars epidemic. sars was the single biggest contributor to the volatility of singapore's gross domestic product (gdp) in . the ministry of trade and industry (mti) revised the forecast for singapore's annual gdp growth down from to . %. this forecast was later revised upwards to . %. there were a number of channels by which the sars epidemic affected the economy. the economic impacts will be discussed in terms of demand and supply shocks. the main economic impact of the sars outbreak was on the demand side, as consumption and the demand for services declined (henderson ). the epidemic caused fear and anxiety among singaporeans and potential tourists to singapore. the hardest and most directly hit were the tourism, retail, hospitality and transport-related industries, for example airline, cruise, hotel, restaurant, travel agent, retail and taxi services, and their auxiliary industries (see fig. . ),
and this had a direct impact on hotel occupancy rates, which declined sharply to % in late april . cancellation or postponement of tourism events increased by about - %. revenues of restaurants dropped by % while revenues of travel agents decreased by %. sars had an uneven impact on various sectors of the economy. a four-tiered framework to assess the impact on the respective sectors showed that tier industries, such as the tourism and travel-related industries, were most severely hit. tier industries account for . % of gdp. the tier industries, such as the restaurant, retail and land transport industries, were significantly hit; these account for . % of gdp. the next two tiers were less directly affected by the sars outbreak. tier industries include real estate and stock broking, which account for close to % of gdp. the remaining % of the domestic economy in tier includes manufacturing, construction and communications. these industries were not directly impacted by the outbreak of sars. all in all, the estimated decline in gdp directly attributable to sars was %, equaling sgd million. singapore experienced a significant drop in tourist arrivals; visitors usually stay for up to days and transit onward to their next destination. in a crisis, visitor inflows fall sharply. this was especially true in the case of singapore, where visitor stays tended to be shorter and high-end visitors stayed away. as a result, tourism and other related industries were nearly crippled due to a significant reduction in both leisure and business travel. visitors from around the world cancelled or postponed their trips to singapore, causing a drastic decrease in total visitor expenditure (see table . ). plummeting visitor arrivals directly impacted hotel occupancy rates, which declined sharply to % in late april (see table . ). the hotel occupancy rate plummeted from to %, compared to the normal level of % or above. the annual averages for hotel occupancy rates were . % in , . % in , and . % in . singapore's national carrier, singapore airlines (sia), faced a record-breaking low passenger capacity of % in april and may . sia cancelled approximately % of its weekly schedules (henderson ). sia laid off employees, of whom were ground staff, as a consequence of a usd million loss in june . the hospitality industry had to resort to cutting budgets, which led to a steep plunge in the number of people employed in the service sector. out of a total of , made unemployed, hotels and restaurants went through the biggest cut, with , employees laid off. the breakdown of total job losses showed % in the service sector, % in construction, and % in manufacturing. additionally, transactions in the retail sector dropped by %. transaction volumes for private condominiums and the private property price index are also good proxies for the impact of sars on the economy. based on quarterly figures between and , transaction volumes dipped to a low in the first quarter of . also, there was a corresponding decline in the price index. transactions recovered steadily by the third quarter, boosted by confidence in market sentiment (see fig. . ). the sti and private property price index seemed to display fairly similar trends, albeit with some observed lag. note also that there is a lagged effect of consumers' deferred purchases after the outbreak of sars in singapore.
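a stylized calculation can make the tiered assessment concrete. the python sketch below is purely illustrative: the tier shares of gdp and the assumed output declines are invented placeholders, since the actual figures are elided in the text above, and this is not the chapter's own methodology.

# hypothetical illustration of a four-tiered sectoral impact assessment;
# all gdp shares and decline rates below are invented placeholders, not
# the chapter's elided figures.
tiers = [
    ("tier 1: tourism and travel-related", 0.05, 0.40),
    ("tier 2: restaurants, retail, land transport", 0.10, 0.15),
    ("tier 3: real estate and stock broking", 0.10, 0.05),
    ("tier 4: manufacturing, construction, communications", 0.75, 0.00),
]

total = 0.0
for name, gdp_share, output_decline in tiers:
    loss = gdp_share * output_decline  # percentage points of gdp lost via this tier
    total += loss
    print(f"{name}: {loss:.2%} of gdp")
print(f"estimated direct gdp decline: {total:.2%}")

the point of such a tiered aggregation is that a severe shock to a small sector (tier 1) can matter less for total gdp than a mild shock to a large one, which is why the chapter reports both the tier shares and the sector-specific impacts.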
demand creates its own supply. therefore, a fall in the demand for goods and services is likely to bring about a fall in the supply of such goods and services. also, the loss of consumer and business confidence would reduce the level of aggregate demand. these effects were observed as the manufacturing industry experienced supply chain disruptions while the singaporean economy and employment market continued to weaken. singapore was taken off the who's list of sars-affected countries on st may , one of the first countries to be removed from the list. with the "fear factor" managed, normal daily activities slowly resumed. sars-affected industries and sectors started to show signs of recovery towards the end of the second quarter of . a more comprehensive analysis of the economic costs of sars will need to consider the direct impact on consumer spending and the indirect repercussions of the shock on trade and investment (asian development bank outlook ). the economic costs of a global disease, such as sars, go beyond the immediate impacts incurred in the affected sectors of disease-inflicted countries. this is not just because the disease spreads quickly across countries through networks related to global travel, but also because any economic shock to one country spreads quickly to other countries through the increased trade and financial linkages associated with globalization. however, just calculating the number of cancelled tourist trips, the declines in retail trade, and some of the factors discussed earlier does not provide a complete picture of the impact of sars. this is because there are close linkages within economies, across sectors, and across economies in both international trade and international capital flows. thus, analyzing the tourism sector alone may not be sufficient for analyzing the overall financial impact of sars. sars inflicted a heavy toll on businesses and immediately and severely impacted business viability. businesses lost employees for long periods of time due to factors such as illness, the need to care for family members, fear of infection at work, or retrenchment. as the workforce shrank due to absenteeism, business operations, for example the supply chain, the flow of goods worldwide and the provision of services, were all affected both locally and internationally. in terms of retrenchment, the job prospects of employees in affected companies appeared bleak. a survey performed during the sars period showed that the jobless rate increased more than . %, the highest for the last decade in singapore (ministry of manpower, singapore ). in absolute numbers, overall employment diminished by , in the second quarter of , the largest quarterly decline since the mid- s recession. unlike previous retrenchments that affected mainly blue-collar labor, sars affected white-collar employees too. the implementation of workplace sars control measures added to operational and administrative costs. for example, the policy of temperature taking was implemented at workplaces in the private sector. numerous private establishments installed thermal scanners at their entrances from day one. however, such precautionary measures were necessary to contain the disease. this helped to restore business confidence and investment potential (a lower level of investment will lead to slower capital growth). but the reduction in an economy's capacity may linger on for a few quarters before it is restored to pre-sars levels.
the loss of productive working days from quarantine, and the implementation costs incurred to monitor movements of employees, contributed to the reduction on the aggregate supply front. some of these economic effects may have worsened the public health situation if strategic planning had not been in place. sars reduced levels of service and care in singapore's healthcare system as the system mobilized its medical resources to deal with the sars epidemic. the influx of influenza patients to hospitals and clinics crowded out many other patients with less urgent medical problems. this particularly affected those seeking elective operations, which had to be postponed until the epidemic ended in singapore. sars also severely impacted singapore's healthcare manpower. during the peak of sars from mid-march to early april , there was a shortage of medical and nursing professionals because ( ) the demand for care of influenza patients substantially increased, and ( ) the supply of healthcare manpower decreased as some were also affected by the epidemic. like other business sectors, hospitals, clinics and other public health providers also faced a high staff-absenteeism rate and encountered difficulties in maintaining normal operations. this resulted in a further reduction in the level of service capacity. the psychosocial impact of sars was mainly caused by limited medical knowledge of sars when it began its insidious spread in singapore. such uncertainty about contracting a highly contagious disease aggravated fears of exposure and public panic (tan ). responding to the uncertainty of disease transmission, the singapore government instituted many draconian public policies, such as social distancing, quarantine and isolation, as risk mitigating measures. all of these control measures created an instinctive withdrawal from society among the general population, prompting the public to avoid crowds and public places involving human interaction. on march , the moh invoked the infectious disease act (ida) to isolate all those who had been exposed to sars patients. after ida was invoked, on march , schools and non-essential public places were closed. public events were cancelled to prevent close contact in crowds. singaporeans with a contact history were asked to stay home for a period of time to prevent transmission. harsh penalties, such as hefty fines of more than usd , or imprisonment, were imposed on those who defied quarantine orders. in a drastic move reminiscent of a police state, closed-circuit cameras were installed in the houses of those ordered to stay home to monitor their compliance with the quarantine order (abc news online ). at the height of sars, , suspected cases were ordered to stay home, all of whom were monitored either by cameras or, in less severe cases, by telephone calls. quarantine, regardless of its effectiveness, received strong criticism from the general public during the outbreak of sars due to the invasive nature of that measure (duncanson ). the impact of social distancing remains unclear, but the who has recommended such control measures depending on the severity of the epidemic, the risk groups affected and the epidemiology of transmission (world health organization ). singapore's moh advocated the practice of social distancing during the outbreak of sars. the sole intention of social distancing was to limit physical interactions and close contact in public areas to slow the rate of disease transmission.
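the logic of slowing transmission by cutting close contact can be illustrated with a minimal susceptible-infectious-recovered (sir) sketch in python. this is not a model used in the chapter; the contact rate (beta), recovery rate (gamma) and initial conditions are arbitrary values chosen for illustration only.

# minimal discrete-day sir model; all parameter values are illustrative assumptions.
def sir_peak(beta, gamma=0.2, days=300, i0=1e-4):
    """return the peak infectious fraction and the day it occurs."""
    s, i = 1.0 - i0, i0
    peak, peak_day = i, 0
    for day in range(1, days + 1):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        if i > peak:
            peak, peak_day = i, day
    return peak, peak_day

# 'distancing' lowers the effective contact rate beta
for label, beta in [("no distancing", 0.50), ("distancing", 0.30)]:
    peak, day = sir_peak(beta)
    print(f"{label}: peak infectious fraction {peak:.1%} on day {day}")

with the lower contact rate the sketch produces a later and lower epidemic peak, which is exactly the effect the distancing measures described above were intended to achieve.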
additionally, social distancing measures in particular have a psychological impact. the practice of social distancing led to a social setback for businesses that suffered economic losses as a result (duncanson ). the psychological impact of sars is longer lasting. the most immediate and tragic impact was the loss of loved ones. in this section, we detail singapore's command structure and legal framework in fighting sars, as well as risk mitigating measures from economic, healthcare, and psychosocial perspectives. one of the most important lessons the singapore government learned from the sars epidemic was the crucial role played by the bureaucracy in disaster management. the bureaucratic structure in place then was severely inadequate in terms of handling a situation that was both fluid and unprecedented; indeed, fighting sars required more than a medical approach because resources had to be drawn from agencies other than the moh. accordingly, a three-tiered national control structure was created in response to sars; these tiers were individually represented by the inter-ministerial committee (imc), the core executive group (ceg) and the inter-ministry sars operations committee (imoc) (tay and mui ). the nine-member imc was chaired by the minister of home affairs (mha) and it fulfilled three major functions: ( ) to develop strategic decisions, ( ) to approve these major decisions, and ( ) to implement control measures. notably, the imc also played the role of an inter-agency coordinator overseeing the activities of other ministries and their subsidiaries. on april ( weeks after the first case of sars was reported), the ceg and a ministerial committee were formed. the ceg was chaired by the permanent secretary of home affairs and consisted of elements from three other ministries: the moh, the ministry of defense (mod) and the ministry of foreign affairs (mfa). in particular, the role of the ceg was to manage the sars epidemic by directing valuable resources to key areas. the imoc, meanwhile, was instrumental in carrying out health control measures issued by the imc (see fig. . below). the moh, at the operational layer, formed an operations group responsible for the planning and coordination of health services and operations in peacetime. during sars, it commanded and controlled all medical resources and served as the main operational linkage between the moh and all the healthcare providers. on march , when the epidemiological nature of sars was still unclear, the moh initiated a sars taskforce to look into the mysterious strain. only days later, after more sars cases were reported and a better epidemiological understanding of the strain was developed, the singapore government swiftly declared sars a notifiable disease under the infectious disease act (ida) (ministry of health, singapore a). in the case of a broad outbreak, the ida made it legally permissible to enforce mandatory health examination and treatment, exchange of medical information and cooperation between healthcare providers and the moh, and the quarantine and isolation of sars patients (infectious disease act ). in particular, the government amended the ida on april , requiring all those who had come into contact with sars patients to remain indoors or report immediately to designated medical institutions for quarantine (ministry of health, singapore b). as a legacy of singapore's british colonial past, the singapore legislature is unique and well-known for passing laws in a swift and efficient manner.
the uniqueness of singapore's legal framework allowed singapore to swiftly amend the ida during health crises to suit volatile conditions (tan ), for instance when more epidemiological cases were uncovered and the virus was better understood. all in all, the ida played an adaptive role in terms of facilitating a swift response to the outbreak of this particular epidemic. on march , the ceg designated the restructured public hospital, tan tock seng hospital (ttsh), as the sars hospital (james et al. ; tan ). that is, once a suspected sars patient was detected at a local clinic or emergency department, he or she would be transferred to ttsh immediately for further evaluation and monitoring. the national healthcare system prioritized life-saving resources such as medicine and medical equipment, and allocated manpower and protective equipment to ttsh. to ease the influx of flu-like patients into ttsh, the government diverted non-flu patients away from ttsh so that the sudden surge in the number of flu cases at ttsh did not paralyze its service delivery. the full impact of sars on the economy by and large depended on how quickly sars was contained, as well as on the course of the sars outbreak in the region and beyond. to mitigate the impact of sars on singapore's economy, the government took every precaution and spared no effort to contain the sars outbreak in singapore. two aspects of sars warranted government intervention to mitigate economic impact. first, the information that needs to be collected and disseminated to effectively assess sars displays the characteristics of a public good. second, there are negative externalities related to contagious diseases, in the sense that they affect third parties in market transactions. public goods and negative externalities are typical areas where government action is needed (fan ). there are three major factors which can explain why some economies are more vulnerable and susceptible to the effects of sars than others (asian development bank outlook ). these factors are structural issues (e.g. the share of tourism in gdp and the composition of consumer spending), initial consumer sentiment, and government responses. as the research shows, the singapore government implemented a usd million (sgd million in ) sars relief package to reduce the costs for tourism operators and their auxiliary services. in addition, an economic relief package worth usd m (sgd m) was created to aid businesses hit by sars. the government also incurred usd m (sgd m) in direct operating expenditure related to sars, and committed another usd m (sgd m) in development expenditure for hospitals, for additional isolation rooms and medical facilities to treat sars and other infectious diseases. the government's economic incentives worked when seeking the cooperation of other healthcare providers (such as public hospitals and local clinics) so that they would absorb additional cases of non-flu illnesses. to help sars-affected firms tide over the crisis and minimize job losses, singapore's national wage council consulted the private sector widely, and recommended that sars-struck companies adopt temporary cost-cutting measures to save jobs. the measures adopted by the private sector included the implementation of a shorter working week, temporary lay-offs and arrangements for workers to take leave or undergo skills training and upgrading provided by the ministry of manpower and associated agencies. when these measures failed to preserve jobs, the last resort was temporary wage cuts.
surveillance and reporting are critical in combating pandemics because they provide early warning and even detection of impending outbreaks. the surveillance process involves looking out for possible virulent strains and disease patterns within a country's borders as well as at major border crossings (jebara ; ansell et al. ; narain and bhatia ). when sars first surfaced, the nature of this virus was largely unknown. as a consequence, health authorities worldwide were mostly unable to detect and monitor suspected cases. health authorities in singapore encountered this same problem. but with the aid of who technical advisors, singapore managed to establish identification and reporting procedures in a timely manner. furthermore, the moh also expanded the who's definitions for suspected cases of sars (to include any healthcare workers with fever and/or respiratory symptoms) in order to widen the surveillance net (goh et al. ). as the pace of sars transmission quickened, the singapore parliament amended the ida on april , requiring all suspected sars cases to be reported to the moh within h from the time of diagnosis. although these control measures were laudable, sars also exposed the weaknesses of singapore's fragmented epidemiological surveillance and reporting systems (goh et al. ). as a major part of lesson-drawing in the post-sars era, a number of novel surveillance measures were introduced to integrate epidemiological data and to identify the emergence of a new virulent strain faster. one of the most notable was the establishment of an infectious disease alert and clinical database system to integrate critical clinical, laboratory and contact tracing information. today, the surveillance system has several major operational components, including community surveillance, laboratory surveillance, veterinary surveillance, external surveillance, and hospital surveillance. to limit the risk of transmission in healthcare institutions once the sars epidemic had broken out, the moh implemented a series of stringent infection-control measures that all healthcare workers (hcws) and visitors to hospitals had to adhere to. the use of personal protective equipment (ppe) was made compulsory. visitors to public hospitals were barred from those areas where transmission and contraction were most likely. the movements of hcws in public hospitals were also heavily restricted. unfortunately, except for ttsh, these critical measures were not enforced in all healthcare sectors until april , and this oversight resulted in a number of intra-hospital infections (goh et al. ). in addition, the policy of restricting the movements of hcws and visitors to hospitals was taken further. more specifically, their movements between hospitals were now restricted. patient movement between hospitals, meanwhile, was strictly limited to medical transfers. the number of visitors to hospitals was also limited, and their particulars were recorded during each visit. it is also important to point out that these somewhat draconian control measures required strong public support and cooperation. indeed, their implementation would not have been successful had these two elements been missing.
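the amended reporting requirement described above is, in effect, a deadline check on each suspected case. the python sketch below illustrates such a check; the 24-hour window is an assumption chosen for illustration, since the statutory number of hours is elided in the text, and the case data are made up.

from dataclasses import dataclass
from datetime import datetime, timedelta

# assumed reporting window; the statutory figure is elided in the text above.
REPORTING_WINDOW = timedelta(hours=24)

@dataclass
class SuspectedCaseReport:
    case_id: str
    diagnosed_at: datetime
    reported_at: datetime

    def within_deadline(self) -> bool:
        """true if the case was reported to the health authority in time."""
        return self.reported_at - self.diagnosed_at <= REPORTING_WINDOW

# hypothetical case: diagnosed in the morning, reported the same evening
report = SuspectedCaseReport("s-001",
                             diagnosed_at=datetime(2003, 4, 25, 9, 0),
                             reported_at=datetime(2003, 4, 25, 20, 30))
print(report.within_deadline())  # True: reported 11.5 h after diagnosis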
public education and communication are two indispensable components of health crisis management (reynolds and seeger ; reddy et al. ). communication difficulties are prone to complicate the challenge, especially when there is no established, high-status organization that can act as a hub for information collation and dissemination. therefore, it is necessary to disseminate essential information to the targeted population in a transparent manner. during the sars outbreak, the moh practiced a high degree of transparency when it shared information with the public. indeed, the clear and distinct messages from the moh contributed significantly to lowering the risk of public panic. the moh worked closely with the media to provide regular, timely updates and health advisories. this information was communicated to the public through every possible medium. in addition to the media (e.g. tv and radio), information pamphlets were distributed to every household, and the moh website provided constant updates and health advisories to the general public. notably, a government information channel dedicated to providing timely updates was created on the same day, march , that the who issued a global alert. a dedicated tv channel called the sars channel was launched to broadcast information on the symptoms and transmission mechanisms of the virus (james et al. ). the importance of social responsibility and personal hygiene was a frequent message heard throughout the sars epidemic. as an example, when tan tock seng hospital was designated as the sars hospital at the peak of the sars epidemic, the government undertook many efforts in public communication and education to seek cooperation and support from other healthcare providers, such as public hospitals and local clinics, so that they would absorb the additional cases of non-flu illnesses. many organizations displayed prominent signs in front of their building entrances that reminded their staff as well as visitors to be socially responsible. school children were instructed to wash their hands and take their body temperature regularly. the public was told to wear masks and postpone non-essential travel to other countries. the moh advocated the practice of social distancing during the outbreak of sars; its sole intention was, of course, to limit physical interactions and close contact in public areas, thereby slowing the rate of transmission. as a result, all pre-school centers, after-school centers, primary and secondary schools, and junior colleges were closed from march to april . school children who had stricken siblings were advised to stay home for at least days. moreover, students who showed flu-like symptoms or had travelled to other affected countries were automatically granted a -day leave of absence, and home-based learning programs were instituted for those affected. extracurricular activities were also scaled down to minimize social contact. meanwhile, the moh also advised businesses to adopt social distancing measures such as allowing staff to work from home and using split-team arrangements. those at higher risk of developing complications if stricken were removed from frontline work to other areas where they were less likely to contract the virus. as mentioned earlier, the practice of social distancing also drew strong criticism from those businesses that suffered economic losses as a result. apart from providing economic compensation, measures to mitigate psychosocial impacts are also important. the government's public health control measures, as mentioned above, drew strong criticism from businesses and the public during the outbreak of sars due to the invasive nature of those actions. besides this, the economic slowdown affected overall employment and personal income. some households required financial assistance.
in response to the public complaints, authorities in singapore provided economic assistance to those individuals and businesses affected by home quarantine orders through a "home quarantine order allowance scheme" (tay and mui ; teo et al. ). at the same time, the moh worked together with various ministerial authorities to provide essential social services to those affected by quarantine orders. for example, housing was offered to those who were unable to stay in their own homes (because of the presence of family members) during their quarantine; ambulance services were freely provided by the singapore civil defense force so that those undergoing quarantine at home could visit their doctors; and high-tech communication gadgets, such as webcams, were provided so that those undergoing quarantine could stay in touch with relatives and friends. impacts on social welfare relate in large part to the economic outlook, especially in the area of consumption patterns. all these risk mitigating measures were not only effective in containing the epidemic, but also carry implications for disaster risk management. in this section, we draw lessons from singapore's experience in fighting the sars epidemic, and discuss implications for future practice and research in disaster risk management. the implications are explained in four aspects: staying vigilant at the community level, remaining flexible in a national command structure, the demand for surge capacity, and collaborative governance at the regional level. it remains questionable whether singapore's draconian health control measures are applicable or replicable in other countries, for example installing a camera to monitor the public's compliance during home quarantine. the evidence suggests that draconian government measures, such as quarantine and travel restrictions, are less effective than voluntary measures (such as good personal hygiene and the voluntary wearing of respiratory masks), especially over the long term. however, reminding the public to maintain a high level of vigilance and advocating individual social responsibility can be a persuasion tactic by an authority to influence and pressure, but not force, individuals or groups into compliance with a policy. therefore, promoting social responsibility is crucial in terms of slowing the pace of infection through good personal hygiene and respiratory etiquette in all settings. to achieve this goal, public education and risk communication are two indispensable components of health crisis management (reddy et al. ; reynolds and seeger ). the community must be aware of the nature and scope of disasters. they have to be educated on the importance of emergency preparedness and involvement in exercises, training and physical preparations. at the community level, institutions and capacities are developed and strengthened, which in turn systematically contribute to vigilance against potential risks. this is best illustrated by the singapore government's communication strategy to manage public fear and panic during the outbreak of sars (menon and goh ). throughout the epidemic, the singapore government relentlessly raised the level of vigilance regarding personal hygiene and awareness of social responsibility. this, in large part, has to rely on public education and risk communication. to effectively disseminate the idea of vigilance across the public, political leaders were seen initiating and carrying out a series of countermeasures to reassure the public.
by showing the people that government leaders practiced what they preached, these examples served to naturalize and legitimize the public discourse of social responsibility for all singaporean citizens (lai ). the need to stay vigilant can never be overemphasized, but being vigilant does not equate to a panacea that ensures all government agencies work together. to be well prepared for the unexpected, we need a clear and swift national command structure that can respond flexibly, and even more promptly than the pace of disease transmission, to a changing situation. all local agencies responding to an emergency must work within a unified national command structure to coordinate multi-agency efforts in emergency response and the management of disasters. on top of facilitating close inter-agency coordination, the strength of this flexible structure is its ability to ensure a swift response to an epidemic outbreak by implementing risk mitigating measures more effectively and efficiently. structural flexibility involves the swift deployment of forces to mitigate the incident at the tactical level, and the provision of expert advice at the operational level, in order to minimize damage to lives and property. among other things, the flexibility inherent in this command structure facilitates the building of trust between the state and its people (lai ). this in turn ensures that government measures are quickly accepted by the general public. as shown in this chapter, the moh has been entrusted by the singapore government and pre-designated to be the incident manager for public health emergencies. when a sudden incident involves public health or the loss of lives on a large scale, the moh is responsible for planning, coordinating and implementing an assortment of disease control programs and activities. during the outbreak of sars, the singapore government established a national command and control structure that was able to adapt to the rapidly changing circumstances that stemmed from the outbreak. specifically, the moh set up a taskforce within that ministry even when the definition of sars remained unclear. as more sars cases were uncovered and better epidemiological information became available, the government quickly created the inter-ministerial committee (imc) and the core executive group (ceg), both of which were instrumental in the design and implementation of all risk mitigating measures, to coordinate the operation to combat the outbreak (pereira ). while this overarching governance structure is more or less standard worldwide ('t hart et al. ; laporte ), the case of singapore is unique in that the city-state was able to overcome bureaucratic inertia and adapt this governance structure. from singapore's experiences during the sars crisis, we have learnt that the strength of a national command structure lies in its flexibility to link relevant ministries on the same platform. these linkages ensure a timely, coordinated response and service delivery. having a flexible structure was not the only reason behind the successful defeat of sars. in singapore's case, we also note that success in containing an uncertain, high-impact disaster relies on surge capacity. in the context of this paper, surge capacity refers to the ability to mobilize resources (such as ppe, vaccines and hcws) to combat the outbreak of a pandemic. singapore's response to sars in illustrates the importance of being able to increase surge capacity swiftly to deal with an infectious disease outbreak.
in the asia pacific region, this problem continues to hamper many countries' ability to combat infectious diseases (putthasri et al. ). for many public health organizations in asia, the plain fact is that they are unable to deal with pandemics because the resources to do so are simply absent (balkhy ; hanvoravongchai et al. ; lai b; oshitani et al. ). meanwhile, there is evidence to suggest that surge capacity alone is not the full answer. for example, during the sars outbreak, abundant resources contributed an important but not all-encompassing element in the fight against the pandemic. as it turned out, when different stakeholders brought their unique skill sets and resources to the task at hand, they actually complicated the fight due to their lack of synergy. in fact, abundant resources without synergy might even undermine collaborative efforts. therefore, it is essential that the ability to link up various stakeholders be complemented by some type of synergy between them. such ability can be enhanced through close collaboration. this brings us to the final implication for disaster management: collaborative governance at the regional level. the trans-boundary nature of disasters calls for a planned and coordinated approach towards disaster response for efficient rescue and relief operations (lai a). combating epidemics requires multiple states and government agencies to work together in close collaboration (webby and webster ). therefore, it is clear that the collaborative capacity of various stakeholders is central to the fight against transboundary communicable diseases (lai ; lai b; leung and nicoll ; voo and capps ). while economically advanced member states typically lead such efforts, the inclusion of other developing countries, non-traditional agencies, and organizations (including non-governmental ones) is necessary and, ultimately, inevitable. indeed, major countermeasures such as border control and surveillance are often made possible with the aid of regional collaboration. take the association of southeast asian nations (asean) as an example. asean countries take regional, national and sub-national approaches to disaster risk management. the asean committee on disaster risk management (acdm) was established in and tasked with the coordination and implementation of regional activities on disaster management. the committee has cooperated with united nations bodies such as the united nations international strategy for disaster reduction (unisdr) and the united nations office for the coordination of humanitarian affairs (unocha). the asean agreement on disaster management and emergency response (aadmer) provides a comprehensive regional framework to strengthen preventive, monitoring and mitigation measures to reduce disaster losses in the region. in recent years, singapore has been active in providing training and education for disaster managers from neighboring countries. singapore has an ongoing exchange program with a number of asia pacific nations and europe. for example, to partner with apec to increase emergency preparedness in the asia-pacific region, singapore's scdf provides short-term courses on disaster management at the civil defense academy (asia pacific economic cooperation ). the world today is far more inter-connected than ever before. international travel, transnational trade, and cross-border migration have drastically increased as a consequence of globalization.
no country is spared from being influenced directly or indirectly by disasters. singapore is no exception. singapore is vulnerable to both natural and man-made disasters alongside its remarkable economic growth. in response, the singapore government adopts an approach of whole-of-government integrated risk management, a concerted, coordinated effort based on a total national response. we have illustrated in the case study singapore's all-hazard management framework, with specific reference to the sars epidemic. in fighting sars, singapore's health authority was responsive enough to swing into action when it realized that the existing bureaucratic structure was inadequate in terms of facilitating close cooperation between various key government agencies to tackle the health crisis at hand. therefore, a command structure was swiftly established. the presence of a flexible command structure, and both the way and the extent to which it was utilized, explain how well the epidemic was contained. flexibility actually enhanced organizational capacities by making organizations more efficient under certain conditions. epidemic control measures such as surveillance, social distancing, and quarantine require widespread support from the general public to be effective. singapore's experiences with sars strongly suggest that risk mitigating measures can be effective only when a range of partners and stakeholders (such as government ministries, non-profit organizations, and grass-roots communities) become adequately involved. this is also critical to disaster risk management. whether all of these aspects are transferrable elsewhere needs to be assessed in future research. nonetheless, this discipline has certainly helped singapore come out of public health crises on a regular basis. singapore's response to the outbreak of sars offers valuable insights into the kinds of approaches needed to combat future pandemics, especially in southeast asia.
singapore imposes quarantine to stop sars spreading. abc news
managing transboundary crises: identifying the building blocks of an effective response system
apec partners with singapore on disaster management
hfa implementation review for acdr
asian development outlook update. accessed
adrc country report
impact of sars on the economy, singapore government
avian influenza: the tip of the iceberg
severe acute respiratory syndrome - singapore
how singapore avoided who advisory, toronto star
impact to lung health of haze from forest fires: the singapore experience
sars: economic impacts and implications, erd policy brief no. . manila: asia development bank
advancing disaster risk financing and insurance in asean countries: framework and options for implementation, global facility for disaster reduction and recovery
a new world now after hotel collapse, the straits times
epidemiology and control of sars in singapore
pandemic influenza preparedness and health systems challenges in asia: results from rapid analyses in asian countries
crisis decision making: the centralization thesis revisited
managing a health-related crisis: sars in singapore, singapore government
agc
public health measures implemented during the sars outbreak in singapore
surveillance, detection, and response: managing emerging diseases at national and international levels
shaping the crisis perception of decision makers and its application of singapore's voluntary contribution to post-tsunami reconstruction efforts
organizational collaborative capacities in post disaster society
organizational collaborative capacity in fighting pandemic crises: a literature review from the public management perspective
toward a collaborative cross-border disaster management: a comparative analysis of voluntary organizations in taiwan and singapore
a proposed asean disaster response, training and logistic centre: enhancing regional governance in disaster management
combating sars and h n : insights and lessons from singapore's public health control measures
critical infrastructure in the face of a predatory future: preparing for untoward surprise
reflections on pandemic (h n ) and the international response
managerial strategies and behavior in networks: a model with evidence from u.s. public education
transparency and trust: risk communications and the singapore experience in managing sars
daily distribution of sars cases statistics
the explosion and fire on board s.t. spyros
manpower research and statistics, singapore government
the challenge of communicable diseases in the who south-east asia region
major issues and challenges of influenza pandemic preparedness in developing countries
crisis management in the homefront, presentation at network government and homeland security workshop
capacity of thailand to contain an emerging influenza pandemic
challenges to effective crisis management: using information and communication technologies to coordinate emergency medical services and emergency department teams
crisis and emergency risk communication as an integrative model
collaboration in the fight against infectious diseases
economic survey of singapore
singapore's efforts in transboundary haze prevention
singapore real estate and property price
annual report on tourism statistics
annual report on tourism statistics
annual report on tourism statistics
singapore floods
sars in singapore: key lessons from an epidemic
an architecture for network centric operations in unconventional crisis: lessons learnt from singapore's sars experience. california: thesis, naval postgraduate school
sars in singapore: surveillance strategies in a globalizing city
influenza pandemic and the duties of healthcare professionals
are we ready for pandemic influenza
who guidelines on the use of vaccines and antivirals during influenza pandemics. geneva: world health organization
acknowledgement: the authors would like to thank the economic research institute for asean and east asia (eria) for initiating this meaningful research project, and the four commentators, professor yasuyuki sawada (tokyo university), professor chan ngaiweng (university sains malaysia), dr. sothea oum (eria) and mr. zhou yansheng (scdf), as well as all participants in eria's two workshops, for their insightful comments on an earlier draft of this chapter.
key: cord- - w fkd authors: nan title: abstract date: - - journal: eur j epidemiol doi: . /s - - - sha: doc_id: cord_uid: w fkd nan
/s - - - sha: doc_id: cord_uid: w fkd nan the organisers of the european congress of epidemiology, the board of the netherlands epidemiological society, and the board of the european epidemiology federation of the international epidemiological association (iea-eef) welcome you to utrecht, the netherlands, for this iea-eef congress. epidemiology is a medical discipline that is focussed on the principles and methods of research on the causes, diagnosis and prognosis of disease, and on establishing the benefits and risks of treatment and prevention. epidemiological research has proven its importance by contributing to the understanding of the origins and consequences of diseases, and has made major contributions to the management of diseases and the improvement of health in patients and populations. this meeting provides a major opportunity to affirm the scientific and societal contributions of epidemiological research in health care practice, both in clinical medicine and in public health. during this meeting, major current health care problems are addressed alongside methodological issues, and the opportunities and challenges in approaching them are explored. the exchange of ideas will foster existing co-operation and stimulate new collaborations across countries and cultures. the goal of this meeting is to promote the highest scientific quality of the presentations and to display advanced achievements in epidemiological research. our aim is to offer a comprehensive and educational programme in the field of epidemiological research in health care and public health, and to create room for discussions on contemporary methods and innovations from the perspective of policy makers, professionals and patients. above all, we want to stimulate open interaction among the congress participants. your presence in utrecht is key to an outstanding scientific meeting. the european congress of epidemiology is organised by epidemiologists of utrecht university, under the auspices of the iea-eef, and in collaboration with the netherlands epidemiological society. utrecht university, founded in , is the largest university in the netherlands and harbours the largest academic teaching hospital in the netherlands. the epidemiologists from utrecht university work in the faculties of medicine, veterinary medicine, pharmacy and biology. the current meeting was announced through national societies, taking advantage of their newsletters and of the iea-eef newsletter. in addition, avoiding the costs and disadvantages of traditional journal advertisements and leaflets, information about the congress was disseminated via an internet mailing list of epidemiologists, which was compiled from, among others, the meeting in porto in , the european young epidemiologist network (http://higiene.med.up.pt/eye/) and several institutions and departments. many of the procedures followed this year were based on or directly borrowed from the stimulating iea-eef congress in porto in . publication of the congress programme and of the abstracts selected for oral and poster presentation in an international journal of large circulation signifies the commitment of the organisers towards all colleagues who decided to present their original work at our meeting, and is intended to promote our discipline and to further stimulate the quality of the scientific work of european epidemiologists. submitted abstracts were rated on the appropriateness of the design and methods to the objectives and the quality of their description; the presentation of results; the importance of the topic; and originality. a final rating was given on a - point scale.
two junior epidemiologists independently evaluated each abstract. based on the ratings of the juniors, the senior epidemiologist gave a final abstract rating. the senior reviewer decided when the juniors disagreed, and guarded against untoward extreme judgements by the juniors. based on the judgement of the seniors, abstracts with a final rating of or higher were accepted for presentation. next, in order to shape the scientific programme according to scientific and professional topics and issues of interest to epidemiologists, members of the scientific programme committee grouped the accepted abstracts into major thematic clusters. for this grouping, topics, keywords and title words were used. within each cluster, abstracts with a final rating of or higher, as well as abstracts featuring innovative epidemiological approaches, were prioritised for programming as oral presentations. the submitted abstracts had an average final rating of (sd = ). in total, abstracts ( %) with a final rating of or lower were rejected. because of the thematic programming, some abstracts with a final rating of or higher will be presented as posters, while a few with a final rating of are programmed as oral presentations. there were abstracts ( %) accepted and programmed for poster presentation; each poster will be displayed for a full day. in total, abstracts ( %) were accepted for oral presentation. these are programmed in parallel sessions. based on the topics of their abstracts, the oral sessions were arranged into themes, notably epidemiology of diseases, methods of clinical & population research, burden of disease, high-risk populations, growth and development, public health strategies, and translational epidemiology. sessions from one theme are never programmed in parallel. in table we present the submitted and accepted abstracts (oral or poster) according to the distribution of country of origin. in table submitted abstracts are displayed according to their topic, as classified by the authors using the topic long list of the submission form. the scientific programme committee convened in a telephone meeting by the end of the summer of and decided on the above programming process. the scientific programme committee was informed of the result of the programming process by the end of april . fifteen abstracts were submitted for the eye session 'work in progress'. of these, abstracts were selected for oral presentation and thereby nominated for the eye award. in total, abstracts were submitted in relation to the student award, of which were programmed for oral presentation and thus nominated. during the congress, authors of poster presentations may name themselves as candidates for the poster award. during the closing ceremony the winners of the student award and the poster award will be announced. these awards are an initiative of the netherlands epidemiological society, which will fund them in . according to the iea rules, the expenses of congress participation for applicants from low-income countries will be covered. the board of the iea-eef will select a maximum of candidates; their travel and registration expenses will be (partly) covered from the congress budget. in order to stimulate participation from as many junior researchers and young epidemiologists as possible, the congress budget covers a registration fee reduction for undergraduate (msc) participants and eye members. this also holds for the registration fee reduction for iea-eef members and nes members.
it is years ago ( ) that the iea regional european meeting was held in the hague, the netherlands. [programme listing omitted: names of presenting authors, followed by posters ordered by abstract number] proteomics and genomics are supposed to be related to epidemiology and clinical medicine, among other reasons because of the putative diagnostic usefulness of proteomics and genomics tests. hence, clinical and sometimes even public health applications are promised by the basic sciences. it is debated whether such promises and the expectations they raise are fulfilled. what are meaningful and consequential examples of current findings in proteomics, genomics and similar approaches in biomedical research? are they different from the ''classic'' tools and frameworks of clinical epidemiology? in the context of proteomics and genomics, etiologic studies, primary prevention, epidemiological surveillance and public health are concerned with the influence of environmental exposures on gene expression and on the accumulation of genetic alterations. proponents and advocates of proteomics and genomics have suggested that their products can yield clinically useful findings, e.g., for early diagnosis, for prognosis, or for therapeutic monitoring, without always needing to identify the proteins, peptides or other 'biomarkers' at stake. do we feel comfortable with this ''black-box'' reasoning, i.e. do we question the role of pathophysiological and mechanistic reasoning in clinical medicine? how much sense does it make for epidemiology to play with and scrutinize proteomics and genomics approaches in epidemiology and clinical medicine? what are at present (and in the near future) the main biological, clinical and public health implications of current findings in these research fields? in this plenary session these and other questions regarding the place and role of proteomics and genomics in clinical epidemiological research are discussed from different perspectives. infectious diseases: beneficial or disastrous for man? infectious diseases pose an increasing risk to human and animal health. they lead to increasing mortality, in contrast to the situation fifty years ago, when new control measures still provided hope of overcoming many problems in the future. improved hygiene, better socio-economic circumstances, vaccination and the use of antibiotics have led to a gradual decline of tuberculosis, rheumatic fever, measles and mumps in western societies over the last five decades. paradoxically, absence of exposure to infectious agents has a major impact as well.
the decline in infectious disease risk is accompanied by a gradual increase of allergic and autoimmune diseases, and this association is believed to be causal. exposure to infectious agents from early on in life can markedly boost an individual's natural resistance and hence influence the individual's reaction to future exposure to both biological and non-biological antigens. in this plenary session we want to emphasise both aspects of the effect of infectious agents on human and animal health. evidence based medicine in health care practice and epidemiological research p. glasziou & l. bonneux evidence-based medicine is defined as the conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. proponents of evidence-based medicine maintain that, coming from a tradition of pathophysiological rationale and rather unsystematic clinical experience, clinical disciplines should summarize and use evidence concerning their practices, by using principles of critical appraisal and quantitative clinical reasoning. for this they should convert clinical information needs into answerable questions, locate the available evidence, critically appraise that evidence for its validity and usefulness, and apply the results of the best available evidence in practice. applying the principles of evidence-based medicine implies improvement of the effectiveness and efficiency of health care. therefore, evidence-based medicine has commonalities with clinical medical and epidemiological research. for the integration of evidence-based medicine into health care practice, the challenge is to translate knowledge from clinical medical and epidemiological research, for example into up-to-date practice guidelines. the limitations of using evidence alone to make decisions are evident. the importance of the values and preference judgments that are implicit in every clinical management decision is also evident. critics of evidence-based medicine argue that applying the best available research evidence in practice in order to improve the effectiveness and efficiency of health care conflicts with the importance of values and preference judgments in clinical management decisions. in this plenary session we want to contrast these and other viewpoints on evidence-based medicine in health care practice. statistical topics i: missing data prof. dr. t. stijnen this workshop will be an educational lecture on missing data by professor stijnen of the department of epidemiology and biostatistics of the erasmus mc rotterdam, the netherlands. every clinical or epidemiological study suffers from the problem of missing data. in practice, mostly simple solutions are chosen, such as restricting the analysis to individuals without missing data or imputing missing values by the mean value of the individuals with observed data. it is not always realised that these simple approaches can lead to considerable bias in the estimates and standard errors of the parameters of interest. in the last to years much research has been done on better methods for dealing with missing values. in this workshop, first an overview will be given of the methods that are available, and their advantages and disadvantages will be discussed. most attention will be given to multiple imputation, to date generally considered the best method for dealing with missing values. the available software in important statistical packages such as spss and sas will also be discussed briefly.
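to make the contrast above concrete, here is a minimal sketch (python, simulated data; not part of the workshop materials, and simplified in that the uncertainty of the imputation model itself is not propagated, as proper multiple imputation would require). it shows how single mean imputation understates the variance of the incomplete variable and attenuates its correlation with the outcome, while repeated draws from a conditional model do not.

import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(0, 1, n)
y = 2.0 * x + rng.normal(0, 1, n)
missing = rng.random(n) < 0.3              # 30% of x missing completely at random
x_obs = np.where(missing, np.nan, x)

# single mean imputation: shrinks the variance of x and attenuates
# the x-y correlation (and any standard error computed afterwards)
x_mean = np.where(missing, np.nanmean(x_obs), x_obs)
print("sd(x), mean-imputed:", round(x_mean.std(), 2), "vs true:", round(x.std(), 2))
print("corr(x, y), mean-imputed:", round(np.corrcoef(x_mean, y)[0, 1], 2),
      "vs true:", round(np.corrcoef(x, y)[0, 1], 2))

# crude multiple imputation: draw each missing x from its estimated
# conditional distribution given y, m times, and pool the m analyses
# (proper multiple imputation would also draw the imputation-model
# parameters b and resid_sd from their posterior)
m = 20
cc = ~missing
b = np.polyfit(y[cc], x[cc], 1)            # imputation model: x given y
resid_sd = (x[cc] - np.polyval(b, y[cc])).std()
corrs = []
for _ in range(m):
    x_imp = x_obs.copy()
    x_imp[missing] = (np.polyval(b, y[missing])
                      + rng.normal(0, resid_sd, missing.sum()))
    corrs.append(np.corrcoef(x_imp, y)[0, 1])
print("corr(x, y), multiple imputation (pooled):", round(np.mean(corrs), 2))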
prof. dr. b. van hout suppose that one wants to know how often individuals have a certain characteristic, and suppose that one doesn't have any knowledge - any knowledge at all - about how often this is the case. now, suppose that one starts by checking individuals and only finds individual with this characteristic. then the probability that the 'th individual has the characteristic is / . the fact that this is not / (although it will be close to that as the number of observations increases) may be counter-intuitive. it will become less so when one realises how it is obtained from the formal integration of the new information with the complete uncertainty beforehand. this formal integration - with a prior indicating that the proportion is as likely to be / as it is to be / as it is / , and with the negative and positive observations - is by way of bayes' rule. the italian mathematician, actuary, and bayesian bruno de finetti ( - ) estimated that it would take until the year for the bayesian view of statistics to completely prevail. the purpose of this workshop is not only to convince the attendants that this is an appealing outlook, but also to aid the workshop participants in realising this prediction. after a first introduction to the work of reverend bayes, a number of practical examples are presented and the attendant is introduced to the use of winbugs. a first example - introducing the notion of noninformative priors - concerns a random effects logistic regression analysis. second, the use of informative priors is illustrated (in contrast with non-informative priors) using an analysis of differences in quality of life as observed in a randomised clinical trial. it will be shown how taking account of such prior information changes the results, as well as how such information may increase the power of the study. in a third example, it will be shown how winbugs offers a powerful and flexible tool to estimate rather complex multi-level models in a relatively easy way, and how to discriminate between various models. within this presentation some survival techniques (or stress control techniques) will be presented for when winbugs starts to spit out obscure error codes without giving the researcher any clue where to search for the reason behind these errors.
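the posterior predictive probability sketched in this example can be written out in closed form; the following is the standard derivation (laplace's rule of succession) under a uniform prior, with n and s as generic symbols for the number of individuals checked and the number found to have the characteristic (the specific numbers of the workshop example are not preserved in this text):

p(\theta) = 1 \quad (0 \le \theta \le 1), \qquad p(\theta \mid s, n) \propto \theta^{s} (1-\theta)^{n-s},

\Pr(X_{n+1} = 1 \mid s, n) = \int_0^1 \theta \, p(\theta \mid s, n) \, d\theta = \frac{s+1}{n+2}.

under a uniform (beta(1,1)) prior the posterior is beta(s+1, n-s+1) with mean (s+1)/(n+2); checking individuals without ever finding the characteristic (s = 0) therefore yields a predictive probability of 1/(n+2), which tends to, but never reaches, zero as n grows.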
communicating research to the public h. van maanen, prof. dr. ir. d. kromhout, a. aarts most researchers will at some point in their career face difficulties in communicating research results to the public. whereas most scientific publications will pass by the larger public in silence, now and then a publication provokes profound interest from the popular press. interest from the general public should be regarded as positive: after all, public money is put into research, and a researcher has a societal responsibility to spread new knowledge. often, however, general press interest is regarded as negative by the researcher; the messages get shortened, distorted or ridiculed. whose responsibility is this misunderstanding between press and researchers? should a researcher foresee press reactions, and what can be done to prevent negative consequences? is the press responsible? background and relevance: patients with a carotid artery stenosis, including those with an asymptomatic or moderate stenosis, have a considerable risk of ischemic stroke. identification of risk factors for cerebrovascular disease in these patients may improve risk profiling and guide new treatment strategies. objectives and question: we cross-sectionally investigated whether carotid stiffness is associated with previous ischemic stroke or tia in patients with a carotid artery stenosis of at least %. design and methods: patients were selected from the second manifestations of arterial disease (smart) study, a cohort study among patients with manifest vascular disease or vascular risk factors. arterial stiffness, measured as the change in lumen diameter of the common carotid arteries during the cardiac cycle, forms part of the vascular screening performed at baseline. the first participants with a stenosis of minimally % in at least one of the internal carotid arteries, measured by duplex scanning, were included in this study. logistic regression analysis was used to determine the relation between arterial stiffness and previous ischemic stroke or tia. results: the risk of ischemic stroke or tia in the highest quartile (stiffest arteries) relative to the lowest quartile was . ( % ci . - . ). these findings were adjusted for age, sex, systolic blood pressure, minimal diameter of the carotid artery and degree of carotid artery stenosis. conclusion and discussion: in patients with a >= % carotid artery stenosis, increased common carotid stiffness is associated with previous ischemic stroke and tia. measurement of carotid stiffness may improve the selection of high-risk patients eligible for carotid endarterectomy and may guide new treatment strategies. background and relevance: patients with advanced renal insufficiency are at increased risk of adverse cardiovascular disease (cvd) outcomes. objectives and question: the aim was to establish whether impaired renal function is an independent predictor of cvd and death in an unselected high-risk population with cvd. design and methods: the study was performed in patients with cvd. primary outcomes were all vascular events and all-cause death. during a median follow-up of months, patients had a vascular event ( %) and patients died ( . %). results: the adjusted hazard ratio (hr) of an estimated glomerular filtration rate < vs > ml/min per . m2 was . ( % ci . - . ) for vascular events and . ( % ci . - . ) for all-cause death. for stroke as a separate outcome it was . ( % ci . - . ). subgroup analyses according to vascular disease at presentation or the risk factors hypertension, diabetes and albuminuria had no influence on the hrs. conclusion and discussion: renal insufficiency is an independent risk factor for adverse cvd events in patients with a history of vascular disease. renal function was a particularly important factor in predicting stroke. the presence of the other risk factors hypertension, diabetes or albuminuria had no influence on the impact of renal function alone. background and relevance: patients with hypertension have an increased case-fatality during acute mi. coronary collateral (cc) circulation has been proposed to reduce the risk of death during acute ischemia. objectives and question: we determined whether, and to which degree, high blood pressure (bp) affects the presence and extent of cc-circulation. design and methods: cross-sectional study in patients ( % males) admitted for elective coronary angioplasty between january and july . collaterals were graded with rentrop's classification (grade - ). cc-presence was defined as rentrop grade >= . bp was measured twice with an inflatable cuff manometer in seated position. pulse pressure was calculated as systolic blood pressure (sbp) minus diastolic blood pressure (dbp), and mean arterial pressure as dbp + 1/3 x (sbp - dbp). systolic hypertension was defined by a reading >= mmhg.
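since the derived pressure measures above are simple arithmetic transformations of one blood pressure reading, a small sketch (python; the reading and the hypertension cut-off below are illustrative assumptions, as the abstract's actual threshold is not preserved in this text) makes the computation concrete:

# pulse pressure and mean arterial pressure from a single bp reading;
# the 1/3 weighting reflects that the heart spends roughly two thirds
# of the cardiac cycle in diastole. all values below are made up.
def pulse_pressure(sbp: float, dbp: float) -> float:
    return sbp - dbp

def mean_arterial_pressure(sbp: float, dbp: float) -> float:
    return dbp + (sbp - dbp) / 3.0

sbp, dbp = 140.0, 90.0                      # mmHg, illustrative reading
print(pulse_pressure(sbp, dbp))             # 50.0 mmHg
print(round(mean_arterial_pressure(sbp, dbp), 1))  # 106.7 mmHg
print(sbp >= 160)                           # systolic hypertension? (cut-off assumed)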
we used logistic regression with adjustment for putative confounders. results: sbp (odds ratio (or) . per mmhg; % confidence interval (ci) . - . ), dbp (or . per mmhg; % ci . - . ), mean arterial pressure (or . per mmhg; % ci . - . ), systolic hypertension (or . ; % ci . - . ), and antihypertensive treatment (or . ; % ci . - . ) were each inversely associated with the presence of ccs. also, among patients with ccs, there was a graded, significant inverse relation between levels of sbp, levels of pulse pressure, and collateral extent. conclusion and discussion: there is an inverse relationship between bp and the presence and extent of cc-circulation in patients with ischemic heart disease. background and relevance: silent brain infarcts are associated with decreased cognitive function in the general population. objectives and question: we examined whether this relation also exists in patients with symptomatic arterial disease. furthermore, we compared the cognitive function of patients with stroke or tia with the cognitive function of patients with symptomatic arterial disease at other sites in the arterial tree. design and methods: an extensive screening was done in consecutive patients participating in the second manifestations of arterial disease (smart) study, including a neuropsychological test. inclusion diagnoses were cerebrovascular disease, symptomatic coronary artery disease, peripheral arterial disease, or abdominal aortic aneurysm. mri examination was performed to assess the presence of silent infarcts in patients without symptomatic cerebrovascular disease. the patients were assigned to one of three categories according to their patient history and inclusion diagnosis: no stroke or tia, no silent infarcts (n = ; mean age years); no stroke or tia, but silent infarcts present (n = ; mean age years); stroke or tia at inclusion. background and relevance: patients with manifest vascular disease are at high risk of a new vascular event or death. modification of classical risk factors is often not successful. objectives and question: we determined whether the extra care of a nurse practitioner (np) could be beneficial to the cardiovascular risk profile of high-risk patients. design and methods: randomised controlled trial based on the zelen design. patients with manifestations of a vascular disease who had >= modifiable vascular risk factors were prerandomised to receive treatment by a np plus usual care, or usual care alone. after year, risk factors were re-measured. the primary endpoint was achievement of treatment goals for risk factors. results: of the pre-randomised patients, of ( %) in the intervention group and of ( %) in the control group participated in the study. after a mean follow-up of months, the patients in the intervention group achieved significantly more treatment goals than did the patients in the control group (systolic blood pressure % versus %, total cholesterol % vs %, ldl-cholesterol % vs %, and bmi % vs %). medication use increased in both groups and no differences were found in patients' quality of life (sf- ) at follow-up. conclusion and discussion: treatment delivered by nps, in addition to a vascular risk factor screening and prevention program, resulted in better management of vascular risk factors compared to usual care alone in vascular patients after year of follow-up.
were used as non-invasive markers of vascular damage, adjusted for age and sex where appropriate. results: the prevalence of the metabolic syndrome in the study population was %. in pad patients this was %; in chd patients %, in stroke patients % and in aaa patients %. patients with the metabolic syndrome had an increased mean imt ( . vs. . mm, p-value < . ), more often a decreased abpi ( % vs. %, p-value . ) and an increased prevalence of albuminuria ( % vs. %, p-value . ) compared to patients without this syndrome. an increase in the number of components of the metabolic syndrome was associated with an increase in mean imt (p-value for trend < . ), lower abpi (p-value for trend < . ) and a higher prevalence of albuminuria (p-value for trend < . ). conclusion and discussion: in patients with manifest vascular disease, the presence of the metabolic syndrome is associated with advanced vascular damage. background (and relevance): in patients with type diabetes the progression of atherosclerosis is accelerated, as reflected in the high incidence of cardiovascular events. objectives (and question): to estimate the influence of location and extent of vascular disease on new cardiovascular events in type diabetes patients. design and methods: diabetes patients (n = ), mean age years, with and without prior vascular disease were followed through - (mean follow-up years). patients with vascular disease (n = ) were classified according to symptomatic vascular location and number (extent) of locations. we analyzed the occurrence of new (non-)fatal cardiovascular events using cox proportional hazards models and kaplan-meier analysis. results: multivariate-adjusted hazard ratios (hrs) were comparable in diabetes patients with cerebrovascular disease (hr . ; % ci . - . ), coronary heart disease (hr . ; . - . ) and peripheral arterial disease (hr . ; . - . ), compared to those without vascular disease. the multivariate-adjusted hr was . ( . - . ) in patients with one vascular location and . ( . - . ) in those with >= locations. the -year risks were respectively . % ( . - . ) and . % ( . - . ). conclusion and discussion: diabetes patients with prior vascular disease have an increased risk of cardiovascular events, irrespective of symptomatic vascular location. cardiovascular risk increased with the number of locations. the data emphasize the necessity of early and aggressive treatment of cardiovascular risk factors in diabetes patients.
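the time-to-event analysis described in the preceding abstract follows a standard pattern; the sketch below (python, using the lifelines package; the data, column names and effect sizes are simulated, not the smart data) shows how a cox proportional hazards model with a number-of-locations covariate and kaplan-meier summaries per group might be set up.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(0)
n = 200
locations = rng.integers(0, 3, n)            # 0, 1 or >=2 vascular locations
age = rng.normal(65, 8, n)
# exponential event times whose rate rises with the number of locations
time = rng.exponential(1.0 / (0.05 * np.exp(0.5 * locations)), n)
observed = time < 5.0                        # administrative censoring at 5 years
df = pd.DataFrame({"time": np.minimum(time, 5.0).round(2),
                   "event": observed.astype(int),
                   "locations": locations,
                   "age": age.round(1)})

# cox proportional hazards model, adjusted for age; hazard ratios = exp(coef)
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()

# kaplan-meier estimate per extent-of-disease group
for k, grp in df.groupby("locations"):
    km = KaplanMeierFitter().fit(grp["time"], grp["event"], label=f"{k} locations")
    print(km.median_survival_time_)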
background (and relevance): despite recent advances in medical treatment, cardiovascular disease (cvd) is still health problem number one in western societies. a multifactorial approach with the aid of nurse practitioners (nps) is beneficial for achieving treatment goals and reducing events in patients with manifest cvd. objectives (and question): in the self-management of vascular patients activated by internet and nurses (spain) pilot study, we want to implement and test a secure personalized website with additional treatment and coaching by a np for hypertension, hyperlipidemia, diabetes mellitus, smoking and obesity in patients with clinical manifestations of cvd. design and methods: participating patients are going to use the secure patient-specific website. before use of the web application, risk factors are measured. realistic treatment goal(s) for elevated risk factors, based on current guidelines, are set, and appointments on how to achieve the treatment goal(s) are made between the patients and the np in a face-to-face contact. patients can enter their own weight or a new blood pressure measurement, for instance, besides the regular exchange of information with the responding np through e-mail messages. the np personally replies as quickly as possible and gives regular, protocol-driven feedback and support to the patient. the risk factors are remeasured after six months. conclusion and discussion: the spain study aims to implement and test a patient-specific website. the secondary outcome is the change in cardiovascular risk profile. the pre-post measurements of risk factors and the amount of corresponding e-mail messages between the patient and the np will indicate the feasibility of this innovative way of risk factor management. background (and relevance): modification of vascular risk factors has been shown to be effective in reducing mortality and morbidity in patients with symptomatic atherosclerosis. nevertheless, reduction of risk factors in clinical practice is difficult to achieve and maintain. objectives (and question): in the risk management in utrecht and leiden evaluation (rule) study, a prospective, comparative study, we assess the effects of a multidisciplinary vascular screening program on improvement of the cardiovascular risk profile, and compare this to a setting without such a program that provides current standard practice in patients referred for cardiovascular disorders. design and methods: patients with diabetes mellitus, coronary artery disease, cerebrovascular disease, or peripheral arterial disease ( per disease category in each hospital) referred by the general practitioner will be enrolled, starting january . at the umcu, patients need to be enrolled in the vascular screening program or will be identified through the hospital registration system. at the lumc, patients will be identified through the hospital registration system. risk factors will be measured in the two hospitals at baseline and one year after the initial visit. a risk function will be developed for this population based on data of the whole cohort. analyses will be performed on the two comparison groups as a whole, and on subgroups per disease category. changes in risk factors will be assessed with linear or logistic regression procedures, adjusting for differences in baseline characteristics between groups. conclusion and discussion: the rule study aims to evaluate the added value of a systematic hospital-based vascular screening program on risk factor management in patients at high risk for vascular diseases. background: signs of early cerebral damage are frequently seen on mri scans in elderly people. they are related to future manifest cerebrovascular disease and cognitive deterioration. cardiovascular risk factors can only partially explain their presence and progression. evidence that inflammation is involved in atherogenesis continues to accumulate. chronic infections can act as an inflammatory stimulus. it is possible that subclinical inflammation and chronic infections play a role in the pathogenesis of early cerebral damage. objectives (and question): to unravel the role of inflammation and chronic infection in the occurrence and progression of early cerebral damage in patients with manifest vascular disease. design and methods: participants of the smart study with manifest vascular disease underwent an mr investigation of the brain between may and december . starting in january , all patients are invited for a second mr of the brain after an average follow-up period of four years.
both at baseline and after follow-up, all cardiovascular risk factors are measured and blood samples are stored to assess levels of inflammatory biomarkers and antibodies against several pathogens. occurrence and progression of early cerebral damage are assessed by measuring the volume of white matter lesions, the number of silent brain infarctions, cerebral atrophy, aberrant metabolic ratios measured with mr spectroscopy, and cognitive function at baseline and after follow-up. the relation between inflammation, chronic infection and the occurrence and progression of early cerebral damage will be investigated using both cross-sectional and longitudinal analyses. abstract: monocyte chemoattractant protein (mcp- ) polymorphism and susceptibility for coronary collaterals j.j. regieli, j. koerselman, ng sunanto, m. entius, p.p. de jaegere, y. van der graaf, d.e. grobbee, p.a. doevendans heart lung institute, dept of cardiology, clinical epidemiology, julius center for health sciences and primary care, utrecht, netherlands background (and relevance): collateral formation is an important beneficial condition during an acute ischemic event. a marked interindividual variability in high-risk patients is seen, but at present the basis for this variability is unclear and cannot be explained solely by environmental factors. a genetic factor might be present that could influence coronary collateral formation. objectives (and question): we have analyzed the association between a single nucleotide polymorphism in mcp- and the formation of coronary collaterals in patients admitted for angioplasty. mcp- has been suggested to play an important role in collateral development. design and methods: this study involved caucasian patients who were admitted for coronary angioplasty. coronary collateral development was defined angiographically as rentrop grade >= . polymorphisms in the promoter region of mcp- were identified by pcr and allele-specific restriction digestion. this method allows identification of individuals with either aa, ag or gg at the mcp- promoter position. statistical analysis was performed using a chi-square test, unconditional logistic regression, a likelihood ratio test and a wald test. results: we could genotype of the patients. coronary collaterals (rentrop grade > ) were found in patients. the genotype frequency for aa, ag and gg was . %, . % and . %, respectively. the distribution of mcp- genotypes in subjects without collaterals was in hardy-weinberg equilibrium. we found that individuals with a g allele ( %) were more likely to have collaterals than those homozygous for aa (or . , % ci . to . ), adjusted for potential confounders. regression analysis shows that the g allele increased the likelihood of collateral presence by a factor of . . conclusion and discussion: this study provides evidence for a role of genetic variation of the mcp- gene in the occurrence of coronary collaterals in high-risk patients.
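the hardy-weinberg check reported in these results is itself a small computation; below is a sketch (python; the genotype counts are placeholders, since the actual counts are not preserved in this text) of the usual chi-square goodness-of-fit test:

from scipy.stats import chi2

# placeholder genotype counts for a biallelic snp (aa, ag, gg)
n_aa, n_ag, n_gg = 120, 85, 20
n = n_aa + n_ag + n_gg

# allele frequencies estimated from the genotype counts
p_a = (2 * n_aa + n_ag) / (2 * n)           # frequency of allele a
p_g = 1 - p_a

# expected genotype counts under hardy-weinberg equilibrium
expected = (n * p_a**2, n * 2 * p_a * p_g, n * p_g**2)
observed = (n_aa, n_ag, n_gg)

# chi-square goodness of fit with 1 degree of freedom
# (3 genotype classes - 1 - 1 estimated allele frequency)
stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
p_value = chi2.sf(stat, df=1)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")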
until september , patients with recently established clinically manifest atherosclerotic disease with > modifiable vascular risk factors were selected for the study. mean self-efficacy scores were calculated for vascular risk factors (age, sex, vascular disease, weight, diabetes mellitus, smoking behavior, hypercholesterolemia, hypertension, and hyperhomocysteinemia). results: diabetes, overweight, and smoking, but none of the other risk factors, were significantly associated with the level of self-efficacy in these patients. patients with diabetes had lower self-efficacy scores for exercise ( . ) and controlling weight ( . ) than patients without diabetes ( . , p = . and . , p = . , respectively). overweight patients scored lower on controlling weight ( . vs. . , p < . ) and choosing healthy food ( . vs. . , p = . ) than patients who were at a healthy weight. conclusion and discussion: patients with vascular diseases appear to have high levels of self-efficacy regarding medication use ( . ), exercise ( . ), and controlling weight ( . ). in patients with diabetes, overweight patients and smokers, self-efficacy levels were lower. practice implications: in nursing care and in research on developing self-efficacy-based interventions, these lower self-efficacy levels can be taken into account for specific vascular patient groups. background (and relevance): little is known about the role of serum uric acid in the metabolic syndrome and the increased risk of cardiovascular disease. we investigated the association between uric acid levels and the metabolic syndrome in a population of patients with manifest vascular diseases, and whether serum uric acid levels conveyed an increased risk for cardiovascular disease in patients with the metabolic syndrome. design and methods: this is a nested case-cohort study of patients originating from the second manifestations of arterial disease (smart) study. all patients had manifest vascular diseases, constituting peripheral artery disease, cerebral ischemia, coronary artery disease and abdominal aortic aneurysm. in analyzing the relationship of serum uric acid with the metabolic syndrome, age, sex, creatinine clearance, alcohol and diuretics were considered as confounders. in investigating the relationship of serum uric acid levels with the risk for cardiovascular disease, values were adjusted for age and sex. results: the metabolic syndrome was present in % of the patients. serum uric acid levels in patients with the metabolic syndrome were higher compared to patients without ( . ± . mmol/l vs. . ± . mmol/l). serum uric acid concentrations increased with the number of components of the metabolic syndrome ( . mmol/l to . mmol/l), adjusted for age, sex, creatinine clearance, alcohol and use of diuretics. increased serum uric acid concentrations were independently associated with the occurrence of cardiovascular events in patients without the metabolic syndrome (age- and sex-adjusted hr: . , % ci . - . ), contrary to patients with the metabolic syndrome (adjusted hr: . , % ci . - . ). conclusion: elevated serum uric acid levels are strongly associated with the metabolic syndrome, yet are not linked to an increased risk for cardiovascular disease in patients with the metabolic syndrome. however, in patients without the metabolic syndrome, elevated serum uric acid levels are associated with an increased risk for cardiovascular disease. the objective of this study is to investigate the overall and combined role of late-life depression, prolonged psychosocial stress exposure, and stress hormones in the etiology of hippocampal atrophy and cognitive decline. design and methods: as part of the smart study, participants with manifest vascular disease underwent an mri of the brain between may and december . in a subsample of subjects, cognitive function and depressed mood were assessed. starting in january , all patients are invited for a follow-up mri of the brain.
at this follow-up measurement, minor and major depression, hypothalamic-pituitary-adrenal (hpa) axis function indicated by salivary cortisol, psychosocial stress exposure indicated by stressful life events early and later in life, and cognitive functioning will also be assessed. the independent and combined effects of late-life depression, (change in) hpa-axis activity, and psychosocial stress exposure on the risk of hippocampal atrophy and cognitive decline will be estimated with regression analysis techniques, adjusting for potential confounders. introduction: the netherlands epidemiological society advocates, in accordance with good epidemiological practice, that research with sound research questions and good methodology should be adequately published, independently of the research outcomes. although reporting bias in clinical trials is fully acknowledged, failure to report outcomes or selective reporting of outcomes in non-clinical-trial epidemiological studies is less well known, but most likely occurs as well. in this mini-symposium the netherlands epidemiological society wants to give attention to this phenomenon of not publishing research outcomes, to encourage publication of all outcomes of adequate research. several angles on this subject will be addressed: the background, an example of its occurrence, initiatives to help avoid it, and an editor's point of view. selective reporting of outcomes in clinical studies (reporting bias) has been described to occur frequently. therefore a registry of clinical trials has been started, which will make it possible to address this problem in the future, since the non-publication of negative or adverse outcomes can be investigated with this registry. in non-clinical epidemiological studies, failure to report outcomes or selective reporting of outcomes most likely occurs as well, but is less studied and reported. again, studies with negative outcomes or no associations are the ones most likely not to be reported. the most important obstacles to publishing null or negative associations are the traditions and priorities of researchers and journals. reviewers might play a role in this as well. the netherlands epidemiological society advocates, in accordance with good epidemiological practice, that research with sound research questions and good methodology should be adequately published, independently of the research outcomes. however, reality does not always conform to this. therefore we would like to give attention to this phenomenon of not publishing research outcomes in non-trial-based epidemiological studies, to encourage publication of all outcomes of adequate research. in this mini-symposium, first the effects of failure to publish or of selective publishing of outcomes on subsequent meta-analysis in a non-clinical research setting will be demonstrated. afterwards, initiatives to promote and improve publication of observational epidemiological research will be addressed, the editor's point of view on this phenomenon will be given, and finally concluding remarks will be made. background: there are several reasons for suspecting reporting bias in time-series studies of air pollution. such bias could lead to false conclusions concerning causal associations or inflate estimates of health impact. objectives: to examine time-series results for evidence of publication and lag selection bias. design and methods: all published time-series studies were identified and relevant data extracted into a relational database. effect estimates were adjusted to an increment of µg/m³.
publication bias was investigated using funnel plots and two statistical methods (begg, egger). adjusted summary estimates were calculated using the 'trim and fill' method. the effect of lag selection was investigated using data on mortality from us cities and from a european multi-centre panel study of children. results: there was evidence of publication bias in a number of pollutant-outcome analyses. adjustment reduced the summary estimates by up to %. selection of the most significant lag increased estimates by over % compared with a fixed lag. conclusion and discussion: publication and lag selection bias occur in these studies, but significant associations remain. presentation and publication of time-series results should be standardised.
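for readers unfamiliar with the asymmetry tests named in this abstract, the sketch below (python; simulated effect estimates and standard errors, not the air pollution data) shows the core of egger's regression test: the standardized effect is regressed on precision, and an intercept far from zero signals funnel-plot asymmetry of the kind publication bias produces. note that intercept_stderr on the linregress result requires scipy 1.6 or later.

import numpy as np
from scipy.stats import linregress, t as tdist

rng = np.random.default_rng(1)

# simulate 40 studies around a small true effect, then induce publication
# bias by dropping 'non-significant' small studies (crude selection rule)
true_effect = 0.05
se = rng.uniform(0.02, 0.20, 40)
effect = rng.normal(true_effect, se)
published = (np.abs(effect / se) > 1.96) | (se < 0.08)
effect, se = effect[published], se[published]

# egger's test: regress (effect / se) on (1 / se); the slope estimates
# the pooled effect, the intercept captures small-study asymmetry
res = linregress(1 / se, effect / se)
t_stat = res.intercept / res.intercept_stderr
p_int = 2 * tdist.sf(abs(t_stat), df=len(se) - 2)
print(f"egger intercept = {res.intercept:.2f}, p = {p_int:.3f}")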
background: selective non-publication of study outcomes hampers the critical appraisal and appropriate interpretation of available evidence. its existence could be shown empirically in clinical trials. observational research often uses an exploratory approach rather than testing specific hypotheses. results of multiple data analyses may be selected based on their direction and significance. objectives: to improve the quality of reporting of observational studies, and to help avoid selective non-publication of study outcomes. methods: 'strengthening the reporting of observational studies in epidemiology (strobe)' is an international multidisciplinary initiative that is currently developing a checklist of items recommended for the reporting of observational studies (http://www.strobe-statement.org). results: strobe recommends avoiding selective reporting of 'positive' or 'significant' study results and basing the interpretation on main results rather than on results of secondary analyses. discussion: strobe cannot prevent data dredging, but it promotes transparency at the publication stage. for instance, if multiple statistical analyses were performed in a large dataset to identify new exposure-outcome associations, authors should give details and not only report significant associations. strobe could have a 'feedback effect' on study quality since, ideally, researchers think ahead when a study is planned and consider points that are essential for later publication. good publishing practice begins with researchers considering (1) whether an intended study can bring added value, irrespective of its result, and (2) whether its methodology is valid to pick up positive and negative outcomes equally well. when reporting, (3) they should adequately discuss the significance of a negative result and (4) be as eager to publish negative results as positive ones. as to editors, intentional bias in relation to study results is considered editorial malpractice, whatever its motivation. unintentional bias may be more frequent but will not easily be noticed, even by editors. editorial responsibility operates at several levels (accepting for review, choice of reviewers, assessment of their reviews, decision making, and a repeated process in case of resubmission). various designs for process evaluation can be considered. evaluation will be more difficult for journals with little professional support. collaboration between journals can help, and may also avoid 'self-evaluation bias'. in line with the registration of randomized trials, registers for observational study protocols could facilitate monitoring for bias and the searching of unpublished results. but the practicalities, methodological requirements, and bureaucratic burden should not be underestimated. in principle, in an era of electronic publishing, every study can be made widely accessible, even if not 'accepted', by editors or authors themselves. however, this would need huge changes in the culture of authoring and reading, editorial practice, the publishing business, and scientific openness. background: high circulating levels of insulin-like growth factor-i (igf-i), a mitogenic and anti-apoptotic peptide, have been associated with increased risk of several cancer types. objective: to study circulating levels of igf-i and igf binding protein- (igfbp- ) in relation to ovarian cancer risk. design and methods: within the european prospective investigation into cancer and nutrition (epic), we compared levels of igf-i and igfbp- measured in blood samples collected at baseline in women who subsequently developed ovarian cancer ( women diagnosed before age ) and controls. results: the risk of developing ovarian cancer before age ('premenopausal') was increased among women in the middle or top tertiles of igf-i, compared to the lowest tertile: or = . [ % ci: . - . ] and or = . [ % ci: . - . ], respectively (p trend = . ). results were adjusted for bmi, previous hormone use, fertility problems and parity. adjustment for igfbp- levels slightly attenuated the relative risks. in older women we observed no association between igf-i, igfbp- and ovarian cancer risk. discussion and conclusion: in agreement with the only other prospective study in this field (lukanova et al, int j cancer, ), our results indicate that high circulating igf-i levels may increase the risk of premenopausal ovarian cancer. background: the proportion of glandular and stromal tissue in the breast (percent breast density) is a strong breast cancer risk factor. insulin-like growth factor (igf- ) is hypothesized to influence breast cancer risk by increasing breast density. objectives: we studied the relation between premenopausal circulating igf- levels and changes in breast density over menopause. design and methods: mammograms and blood samples of premenopausal participants of the prospect-epic cohort were collected at baseline. a second mammogram was collected after these women became postmenopausal. we determined serum igf- levels. mammographic density was assessed using a computer-assisted method. changes in percent density over menopause were calculated for quartiles of igf- , using linear regression, adjusted for age and bmi. results: premenopausal percent density was not associated with igf- levels (mean percent density . in all quartiles). however, women in the highest igf- quartile showed less decrease in percent density over menopause ( st quartile: - . vs. th quartile: - . , p-trend = . ). this was mostly explained by a stronger decrease of total breast size in women with high igf- levels. conclusion and discussion: women with high igf- levels show a smaller decrease of percent density over menopause than those with low igf- levels. background: body mass index (bmi) has been found to be associated with risk of colon cancer in men, whereas weaker associations have been reported for women. reasons for this discrepancy are unclear but may be related to fat distribution or use of hormone replacement therapy (hrt) in women. objective: to examine the association between anthropometry and risk of colon cancer in men and women. design and methods: during . years of follow-up, we identified cases of colon cancer among , subjects free of cancer at baseline from european countries.
results: bmi was significantly related to colon cancer risk in men (rr per kg/m2, . ; %-ci . - . ) but not in women (rr . ; . - . ; p interaction = . ), whereas the waist-hip ratio (whr) was equally strongly related to risk in both genders (rr per . , men, . ; %-ci . - . ; women, . ; . - . ; p interaction = . ). the positive association for whr was not apparent among postmenopausal women who used hrt. conclusions: abdominal obesity is an equally strong risk factor for colon cancer in both sexes, and whr is a disease predictor superior to bmi in women. the association may vary depending on hrt use in postmenopausal women; however, these findings require confirmation in future studies. background: fruits and vegetables are thought to protect against colorectal cancer. recent cohort studies, however, have not been able to show a protective effect. patients & methods: the relationship between consumption of vegetables and fruit and the incidence of colorectal cancer within epic was examined among , subjects, of whom developed colorectal cancer. a multivariate cox proportional hazards model was used to determine adjusted cancer risk estimates. a calibration method based on standardized -hour dietary recalls was used to correct for measurement errors. results: after adjustment for potential confounding and exclusion of the first two years of follow-up, the results suggest that consumption of vegetables and fruit is weakly, inversely associated with risk of colorectal cancer (hr . , . , . , . , . for quintiles of intake, % ci upper quintile . - . , p-trend . ), with each gram daily increase in vegetables and fruit associated with a statistically borderline significant % reduction in colorectal cancer risk (hr . ; . - . ). linear calibration strengthened this effect. further subgroup analyses will be presented. conclusion: findings within epic support the hypothesis that increased consumption of fruits and vegetables may protect against colorectal cancer risk. a diverse consumption of vegetables and fruit may influence the risk of gastric and oesophageal cancer. diet diversity scores (dds) were calculated within the epic cohort, using data from > , subjects in european countries. four scores, counting the number of ffq-based food items usually eaten at least once in two weeks, were calculated to represent the diversity in overall vegetable and/or fruit consumption. after an average follow-up of . years, incident cases of gastric and oesophageal cancer were observed. cox proportional hazards models were used to compute tertile-specific risks, stratified by follow-up duration, gender and centre, and adjusted for total consumption of vegetables and fruit and potential confounders. preliminary findings suggest that, compared to individuals who eat from only or fewer vegetable sub-groups, individuals who usually eat from eight different sub-groups have a reduced gastric cancer risk (hr . ; % ci . - . ). in comparison to all others, individuals who usually eat only the same fruit may experience an elevated risk (hr . ; % ci . - . ). these findings from the epic study suggest that a diverse consumption of vegetables may reduce gastric and oesophageal cancer risk. subjects with a very low diversity in fruit consumption may experience a higher risk. g. steindorf, l. friedenreich, j. linseisen, p. vineis, e.
riboli for the epic group german cancer research center, heidelberg, germany; alberta cancer board, alberta, canada; imperial college london, great britain background: previous research on physical activity and lung cancer risk, conducted predominantly in males, has yielded inconsistent results. objectives: we examined this relationship among , men and women from the epic cohort. design and methods: during . years of follow-up we identified men and women with incident primary lung cancer. detailed information on recreational, household and occupational physical activity, smoking habits, and diet was assessed. relative risks (rr) were estimated using cox regression. results: we did not observe an inverse association between occupational, recreational or household physical activity and lung cancer risk, either in males or in females. we found a modest reduction in lung cancer risk associated with sports in males and cycling in females. for occupational physical activity, lung cancer risk was increased for unemployed men (rr = . ; % confidence interval . - . ) and men with standing occupations (rr = . ; . - . ) compared with sitting professions. conclusion: our study shows no convincing protective associations of physical activity with lung cancer risk. discussion: it may be speculated that the elevated risks for occupational physical activity reflect the higher probability that manual workers are exposed to industrial carcinogens compared to workers in sitting/office jobs. purposes: epidemiological research almost always means using data and, increasingly, human tissue as well. the use of these resources is not free but is subject to various regulations, which differ between european countries in several important respects. usually these regulations have been determined without the involvement of active epidemiological researchers or patient organisations. this workshop will address the issues involved in these regulations in the european context. it will serve the following purposes: to provide arguments and tools, and to exchange best practices, for a way out of the regulatory labyrinths, especially in cross-european research projects; and to provide a platform for epidemiologists and patient groups to discuss their concerns about impediments to epidemiological research with other parties, such as data protection authorities. targeted audience: the mini-symposium is primarily meant for epidemiologists, but provides an excellent opportunity to meet and discuss with other stakeholders, such as patient groups, data protection authorities, and the european commission. the programme therefore allows extra time for discussion. the other stakeholders will be explicitly invited, and a special 'day ticket' is available. satellite symposium: epidemiology and the seventh eu research framework. over the last few years the seventh eu research framework has been drafted. it is now rapidly moving towards the first calls for proposals. previous eu research programmes and frameworks have been criticised because they are considered to include too few possibilities for epidemiological research and public health research. this satellite symposium will provide an outline of the research framework and inform researchers about the current state of affairs of the seventh eu research framework. special focus will be on the possibilities for epidemiology and public health research. - . welcome by our host prof.
jan willem coebergh, rotterdam. introduction: international and national regulations on the use of data and tissue for research in europe, and different approaches to the 'identifiability' of data and to consent for using data and tissue for research; the tubafrost code of conduct to exchange data and tissue across europe. evert-ben van veen l.l.m. (medlawconsult, the netherlands). . - . data and tissue banking for research in denmark: a liberal approach. the danish approach to using patient data for epidemiological research, the cooperation of the danish data protection authority, the danish act of on using anonymous but coded tissue for research based on an opt-out system, and first experiences. hans storm ph.d. (copenhagen, denmark). . - . estonian data protection act: a disaster for epidemiology. the story of the birth of the act, implementing the european data protection directive, and of its consequences reveals political and administrative incapability resulting in the gradual vanishing of register-based epidemiological research. background: non-invasive assessment of atherosclerosis is important. most of the evidence on coronary calcium has been based on images obtained by electron beam ct (ebct). current data suggest that ebct and multi-slice ct (msct) give comparable results. since msct is more widely available than ebct, information on its reproducibility is relevant. objective: to assess the inter-scan reproducibility of msct and to evaluate whether reproducibility is affected by different measurement protocols, slice thickness, cardiovascular risk factors and technical variables. design: cross-sectional study. materials and methods: the study population comprised healthy postmenopausal women. coronary calcium was assessed in these women twice, at two separate visits, using msct (philips mx idt). images were made using . and . mm slice thickness. the agatston, volume and mass scores were assessed. reproducibility was determined by mean differences, absolute mean differences and intra-class correlation coefficients (iccc). results: the reproducibility of coronary calcium measurements between scans was excellent, with iccc of > . and small mean and absolute mean differences. reproducibility was similar for . and . mm slices, and equal for agatston, volume and mass measurements. conclusion: inter-scan reproducibility of msct is excellent, irrespective of slice thickness and type of calcium parameter. background: it has been suggested that the incidence of colorectal cancer is associated with socioeconomic status (ses). the major part of this association may be explained by known lifestyle risk factors such as dietary habits. objective: to explore the association between diet and ses measured at the area level. methods: the data for this analysis were taken from a multi-centre case-control study conducted to investigate the association between environmental and genetic factors and colorectal cancer incidence. townsend scores (as a deprivation index) were categorized into fifths. linear regression analysis was used to estimate the difference in the mean of each continuous dietary variable by deprivation fifth. results: the mean processed meat consumption in the most deprived areas was higher than that in the most affluent areas (mean difference = . , % ci: . , . ). by contrast, the mean consumption of vegetables and fruits in the most deprived areas was lower than that in the affluent areas. conclusion: our findings suggest that lifestyle factors are likely to be related to ses.
thus any relation between ses and colorectal cancer may direct us to seek the role of different lifestyle factors in the aetiology of this cancer. background: the reason for the apparent decline in semen quality during the past years is still unexplained. objective: to investigate the effect of exposure to cigarette smoke in utero on semen quality in the male offspring. design and methods: in this prospective follow-up study, adult sons of mothers who during pregnancy provided information about smoking and other lifestyle factors are sampled in six strata according to prenatal tobacco smoke exposure. each man provides a semen sample and a blood sample and answers a questionnaire, all of which are collected in a mobile laboratory. external quality assessment of semen analysis is performed twice a year. results: until now, a total of men have been included. the participation rate is %. the percentage of men with decreased sperm concentration (< mill/ml) is %. the unadjusted median ( - % percentile) sperm concentration in the non-exposed group (n = ) is ( - ) mill/ml compared to ( - ) mill/ml among men exposed to > cigarettes per day in fetal life (n = ). aim: to estimate the prevalence of overweight and obesity, and their effects on physical activity (pa) levels, in portuguese children and adolescents aged - years. methods: the sample comprises subjects ( females, males) attending basic/secondary schools. the prevalence of overweight and obesity was calculated using body mass index (bmi) and the cut-off points suggested by cole et al. ( ). pa was assessed with the baecke et al. ( ) questionnaire. proportions were compared using chi-square tests and means by anova. results and conclusions: overall, . % were overweight (females = . %; males = . %) and . % were obese (females = . %; males = . %). prevalence was similar across age and gender. bmi changed with age (p< . ), and a significant interaction between age and gender was found (p = . ): whereas bmi in males increased with age, in females it increased up to years and stabilized thereafter. males showed significantly higher values of pa (p< . ). both genders had a tendency to increase their pa until - years. a significant interaction between age and gender (p = . ) points to different gender patterns across age: pa increased with age in males but started to decline in females after years. no significant differences in pa were found between normal weight, overweight and obese subjects (p = . ). background: atherosclerosis is an inflammatory process. however, the relation between inflammatory markers and the extent and progression of atherosclerosis remains unclear. objectives: we studied the association between c-reactive protein (crp) and established measures of atherosclerosis. design and methods: within the rotterdam study, a population-based cohort of , persons over age , we measured crp, carotid plaque and intima-media thickness (imt), abdominal artery calcification, ankle-brachial index (abi) and coronary calcification. using ancova, we investigated the relation between crp and the extent of atherosclerosis. we studied the association between progression of extracoronary atherosclerosis (mean follow-up period: . years) and crp using multinomial regression analysis. results: crp levels were positively related to all measures of atherosclerosis, but the relation was weaker for measures based on detection of calcification only. crp levels were associated with severe progression of carotid plaque (multivariable adjusted odds ratio: . , % ci: . - .
), imt ( . , . - . ) and abi ( . , . - . ). no relation was observed with progression of abdominal artery calcification. conclusion and discussion: crp is related to the extent and progression of atherosclerosis. the relation seems weaker for measures based on detection of calcification only, indicating that calcification of plaques might attenuate the inflammatory process. background: maternal stress during pregnancy has been reported to have an adverse influence on fetal growth. the terrorist attacks of september , on the united states have provoked feelings of insecurity and stress worldwide. objective: our aim was to test the hypothesis that maternal exposure to these acts of terrorism via the media had an unfavourable influence on mean birth weight in the netherlands. design and methods: in a prospective cohort study, we compared birth weights of dutch neonates who were in utero during the attacks with those of neonates who were in utero exactly year later. results: in the exposed group, birth weight was lower than in the non-exposed group (difference, g, % ci . , . , p = . ). the difference in birth weight could not be explained by tobacco use, maternal age, parity or other potential confounders, nor by shorter pregnancy durations. conclusion: these results provide evidence supporting the hypothesis that exposure of dutch pregnant women to the september events via the media has had an adverse effect on the birth weight of their offspring. objective: asian studies suggested a potential reduction in the risk of pneumonia among patients with stroke on ace-inhibitor therapy. because of the high risk of pneumonia in patients with diabetes, we aimed to assess the effects of ace-inhibitors on the occurrence of pneumonia in a general, ambulatory population of diabetic patients. methods: a case-control study was performed nested in , patients with diabetes. cases were defined as patients with a first diagnosis of pneumonia. for each case, up to controls were matched by age, gender, practice, and index date. current ace-inhibitor use was defined within a time-window encompassing the index date. results: ace-inhibitors were used in . % of , cases and in . % of , matched controls (crude or: . , % ci . to . ). after adjusting for potential confounders, ace-inhibitor therapy was associated with a reduction in pneumonia risk (adjusted or: . , % ci . to . ). the association was consistent among different relevant subgroups (stroke, heart failure, and pulmonary diseases) and showed a strong dose-effect relationship (p< . ). conclusions: use of ace-inhibitors was significantly associated with reduced pneumonia risk and may, apart from its blood pressure-lowering properties, be useful in the prevention of respiratory infections in patients with diabetes. background: a progressive decline in serum levels of testosterone occurs with normal aging in both men and women. this is paralleled by a decrease in physical performance and muscle strength, which may lead to disability, institutionalization and mortality. objective: we examined whether low levels of testosterone were associated with three-year decline in physical performance and muscle strength in two population-based samples of older men and women. methods: data were available for men in the longitudinal aging study amsterdam (lasa) and men and women in the health, aging, and body composition (health abc) study. levels of total testosterone and free testosterone were determined at baseline.
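returning briefly to the rotterdam crp analysis above: progression coded as a categorical outcome can be handled with multinomial regression. a minimal, hypothetical sketch (the variable names and the three-level coding are assumptions, not the study's data):

import numpy as np
import pandas as pd
import statsmodels.api as sm

def progression_odds_ratios(df: pd.DataFrame):
    y = df["progression"]                      # 0 = none, 1 = moderate, 2 = severe
    X = sm.add_constant(df[["crp", "age", "sex"]])
    fit = sm.MNLogit(y, X).fit(disp=False)
    return np.exp(fit.params)                  # odds ratios vs. the reference level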
physical performance and grip strength were measured at baseline and after three years. results: total and free testosterone were not associated with change in physical performance or muscle strength in men. in women, low levels of total testosterone (<= ng/dl) increased the risk of decline in physical performance (p = . ), and low levels of free testosterone (< pg/ml) the risk of decline in muscle strength (p = . ). conclusion: low levels of total and free testosterone were associated with decline in physical performance and muscle strength in older women, but not in older men. background: obesity and physical inactivity are key determinants of insulin resistance, and chronic hyperinsulinemia may mediate their effects on endometrial cancer (ec) risk. aim: to examine the relationships between prediagnostic serum concentrations of c-peptide, igf binding protein (igfbp)- and igfbp- , and ec risk. methods: we conducted a case-control study nested within the epic prospective cohort study, including incident cases of ec in pre- and post-menopausal women, and matched control subjects. odds ratios (or) and % confidence intervals (ci) were calculated using conditional logistic regression models. results: in fasting women (> h since last meal), serum levels of c-peptide, igfbp- and igfbp- were not related to risk. however, in nonfasting women ( h or less since last meal), ec risk increased with increasing serum levels of c-peptide. background: tobacco is the single most preventable cause of death in the world today. tobacco use primarily begins in early adolescence. objective: to estimate the prevalence of and evaluate factors associated with smoking among high-school-going adolescents in karachi, pakistan. methods: a school-based cross-sectional survey was conducted in three towns of karachi from january through may . two-stage cluster sampling stratified on school types was employed to select schools and students. self-reported smoking status of school-going adolescents was our main outcome in the analysis. results: the prevalence of smoking ( days) among adolescents was . %. a multiple logistic regression model showed that, after adjustment for age, ethnicity and place of residence, being a student of a government school (or = . ; % ci: . - . ), parental smoking (or = . ; % ci: . - . ), uncle smoking (or = . ; % ci: . - . ), peer smoking (or = . ; % ci: . - . ) and spending leisure time outside the home (or = . ; % ci . - . ) were significantly associated with adolescent smoking. conclusion: the . % prevalence of smoking among school-going adolescents and the influence of parents and peers in initiating smoking in this age group warrant effective tobacco control in the country, especially among adolescents. background: individual patient data meta-analyses (ipd-ma) have been proposed to improve subgroup analyses that may provide clinically relevant information. nevertheless, comparisons of the effect estimates of ipd-ma and meta-analyses of published data (map) are lacking. objective: to compare main and subgroup effect estimates of ipd-ma and map. methods: an extended literature search was performed to identify all ipd-ma of randomized controlled trials, followed by a related-article search to identify maps with a similar domain, objective, and outcome. data were extracted regarding number of trials, number of subgroups, effect measure, effect estimate and their confidence intervals. results: in total ipd-ma and map could be included in the analysis.
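the conditional logistic regression named in the c-peptide abstract above is the standard tool for matched case-control sets; a minimal sketch, with assumed column names rather than the study's own, might look as follows:

import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

def matched_set_odds_ratios(df: pd.DataFrame):
    # 'matched_set' links each case to her matched controls; conditioning on
    # the set removes the set-specific nuisance intercepts from the likelihood
    model = ConditionalLogit(
        endog=df["case"],                          # 1 = case, 0 = control
        exog=df[["c_peptide", "igfbp1", "igfbp2"]],
        groups=df["matched_set"],
    )
    fit = model.fit()
    return np.exp(fit.params), np.exp(fit.conf_int())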
twenty-five main effect estimates could be compared, of which were in the same direction. although over subgroups were studied in both ipd-ma and map, only effect estimates could be compared; were in the same direction. subgroup analyses in map most often related to trial characteristics, whereas subgroup analyses in ipd-ma were related to patient characteristics. conclusion: comparable ipd-ma and map report similar main and subgroup effect estimates. however, ipd-ma more often study subgroups based on patient characteristics, and thus provide more clinically relevant information. patients with diabetes have an increased risk of a complicated course of community-acquired lower respiratory tract infections. although influenza vaccination is recommended for these persons, vaccination levels remain too low because of conflicting evidence regarding potential benefits. as part of the prisma nested case-control study among , persons recommended for vaccination, we studied the effectiveness of single and repeat influenza vaccination in the subgroup of the adult diabetic population ( , ) during the - influenza a epidemic. case patients were hospitalized for diabetes, acute respiratory or cardiovascular events, or died; controls were sampled from the baseline cohort. after control for age, gender, health insurance, prior health care, medication use and co-morbid conditions, logistic regression analysis showed that the occurrence of any complication ( hospitalizations, deaths) was reduced by % ( % confidence interval % to %). vaccine effectiveness was similar for those who received the vaccine for the first time and for those who had received an earlier influenza vaccination. although we did not perform virological analysis or distinguish type i from type ii diabetes, we conclude that patients with diabetes benefit substantially from influenza vaccination, independent of whether they received the vaccine for the first time or had received earlier influenza vaccinations. background: construction workers are at risk of developing silicosis. regular medical evaluations to detect silicosis, preferably in the pre-clinical phase, are needed. objectives: to identify the presence or absence of silicosis by developing an easy-to-use diagnostic model for pneumoconiosis from simple questionnaires and spirometry. design and methods: multiple logistic regression analysis was done in dutch construction workers, using chest x-ray indicative of pneumoconiosis (ilo profusion category > / ) as the reference standard (prevalence . %). model calibration was assessed graphically and with the hosmer-lemeshow goodness-of-fit test; discriminative ability using the area under the receiver operating characteristic curve (auc); and internal validity using a bootstrapping procedure. results: age > years, current smoking, a high-exposure job title, working > years in the construction industry, 'feeling unhealthy', and a standardized residual fev below - . were selected as predictors. the diagnostic model showed good calibration (p = . ) and discriminative ability (auc . ; % ci . to . ). internal validity was reasonable (correction factor of . and optimism-corrected auc of . ). conclusions and discussion: our diagnostic model for silicosis showed reasonable performance and internal validity. to apply the model with confidence, external validation before application in a new working population is recommended.
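the bootstrap internal validation reported for the silicosis model is usually an optimism-correction procedure; a minimal sketch under assumed predictor names (this is not the study's code):

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

PREDICTORS = ["age_gt_cutoff", "current_smoker", "high_exposure_job",
              "long_tenure", "feels_unhealthy", "low_fev_residual"]

def optimism_corrected_auc(df: pd.DataFrame, n_boot: int = 200, seed: int = 1):
    rng = np.random.default_rng(seed)
    X, y = df[PREDICTORS].to_numpy(), df["pneumoconiosis"].to_numpy()
    apparent = roc_auc_score(
        y, LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))      # bootstrap resample
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        boot_auc = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        test_auc = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(boot_auc - test_auc)       # harrell's optimism estimate
    return apparent - float(np.mean(optimism))

subtracting the average optimism from the apparent auc yields the optimism-corrected auc quoted in abstracts of this kind.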
background: artemisinin-based combination therapy (act) reduces microscopic gametocytaemia, the malaria parasite stage responsible for transmission from man to mosquito. as a result, act is expected to reduce the burden of disease in african populations. however, molecular techniques recently revealed high prevalences of gametocytaemia below the microscopic threshold. our objective was to determine the importance of sub-microscopic gametocytaemia after act treatment. methods: kenyan children (n = ) aged months - years were randomised to four treatment regimens. gametocytaemia was determined by microscopy and pfs real-time nucleic acid sequence-based amplification (qt-nasba). transmission was determined by membrane feedings. findings: gametocyte prevalence at enrolment was . % ( / ) as determined by pfs qt-nasba and decreased after treatment with act. membrane feedings in randomly selected children revealed that the proportion of infectious children was up to fourfold higher than expected when based on microscopy. act did not significantly reduce the proportion of infectious children, but merely the proportion of infected mosquitoes. interpretation: sub-microscopic gametocyte densities are common after treatment and contribute considerably to mosquito infection. our novel approach indicates that the effect of act on malaria transmission is much smaller than previously suggested. these findings are sobering for future interventions aiming to reduce malaria transmission. background: adequate folate intake may be important in the prevention of breast cancer. factors linked to folate metabolism may be relevant to its protective role. objectives: to investigate the association between folate intake and breast cancer risk among postmenopausal women and evaluate the interaction with alcohol and vitamin b intake. methods: a prospective cohort analysis of folate intake was conducted among , postmenopausal women from the e n french cohort who completed a validated food frequency questionnaire in . during years of follow-up, , cases of pathology-confirmed breast cancer were documented through follow-up questionnaires. nutrient intakes were categorized into quintiles and energy-adjusted using the regression-residual method. cox model-derived relative risks (rr) were adjusted for known risk factors for breast cancer. results: the multivariate rr comparing the extreme quintiles of folate intake was . ( % ci . - . ; p-trend = . ). after stratification, the association was observed only among women whose alcohol consumption was above the median (= . g/day) and among women who consumed ≥ . µg/day of vitamin b . however, tests for interaction were not significant. conclusions: in this population, high intakes of folate were associated with decreased breast cancer risk; alcohol and vitamin b intake may modify the observed inverse association. background: the simultaneous rise in the prevalence of obesity and atopy in children has prompted suggestions that obesity might be a causal factor in the inception of atopic diseases. objective: we investigated the possible role of the ponderal index (kg/m ) as a marker of fatness at birth in early childhood atopic dermatitis (ad) in a prospective birth cohort study. methods: between november and november , mothers and their newborns were recruited after delivery at the university of ulm, germany. active follow-up was performed at the age of months. results: for ( %) of the children included at baseline, information on physician-reported diagnosis of ad was obtained during follow-up.
incidence of ad was . % at the age of one year. mean ponderal index at birth was . kg/m . risk of ad was higher among children with a high ponderal index at birth (adjusted or for children within the third and fourth compared to children within the second quartile of ponderal index: . ; % ci . , respectively). background: the relationship between duration of breastfeeding and risk of childhood overweight remains inconclusive, possibly in part because never-breastfeeding mothers are used as the reference category. objectives: we assessed the association between duration of breastfeeding and childhood overweight among ever-breastfed children within a prospective birth cohort study. methods: between november and november all mothers and their newborns were recruited after delivery at the university of ulm, germany. active follow-up was performed at age months. results: among children ( % of children included at baseline) with available body mass index at age two, ( . %) were overweight. whereas children ( . %) were never breastfed, ( . %) were breastfed for at least six months, and ( . %) were exclusively breastfed for at least six months. compared to children who were exclusively breastfed for less than three months, the adjusted or for overweight was . ( % ci . ; . ) in children who were exclusively breastfed for at least three but less than six months and . ( % ci . ; . ) in children who were exclusively breastfed for at least six months. conclusion: these results highlight the importance of prolonged breastfeeding in the prevention of overweight in children. background: in africa, hiv and feeding practices influence child mortality. exclusive breastfeeding for months (bf ) and formula feeding (ff) when affordable are two who recommendations for safe feeding. objective: we estimated the proportion and the number of children saved with each recommendation at the population level. design and methods: data on sub-saharan countries were analysed. we considered a child saved if it remained hiv-free and alive after two years of life. a spreadsheet model based on a decision tree for risk assessment was used to calculate this number according to six scenarios that combine the two recommendations without and with promotion, then with promotion and group education. results: whatever the country, the number of children saved with bf would be higher than with ff. overall, without promotion, ( ). background: farming has been associated with respiratory symptoms as well as protection against atopy. the effects of different farming practices on respiratory health in adults have rarely been studied. objectives: we studied associations between farming practices and hay fever and current asthma in organic and conventional farmers. design and methods: this cross-sectional study evaluated questionnaire data of conventional and organic farmers. associations between health effects and farm exposures were assessed by logistic regression. results: organic farmers reported slightly more hay fever than conventional farmers ( . % versus . %, p = . ). however, organic farming was not an independent determinant of hay fever in multivariate models including farming practices and potential confounders. livestock farmers who grew up on a farm had a five-fold lower prevalence of hay fever than crop farmers without a farm childhood (or . , % ci . - . ). use of disinfectants containing quaternary ammonium compounds was positively related to hay fever (or . , % ci . - . ). no effects of farming practices were found for asthma.
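the multivariate models in the farming abstract above are ordinary logistic regressions reported as odds ratios; a minimal sketch with assumed variable names (the interaction term reflects the farm-childhood-by-livestock finding, as an illustration only):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def hay_fever_odds_ratios(df: pd.DataFrame):
    fit = smf.logit(
        "hay_fever ~ organic_farm + farm_childhood * livestock + quat_disinfectant",
        data=df,
    ).fit(disp=False)
    table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
    table.columns = ["OR", "2.5%", "97.5%"]   # exponentiated coefficients and ci
    return table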
conclusion and discussion: our study adds to the evidence that a farm childhood in combination with current livestock farming protects against allergic disorders. this effect was found for both organic and conventional farmers. background: although a body mass index (bmi) above kg/m is clearly associated with an increase in mortality in the general population, the meaning of high levels of bmi among physically heavily working men is less clear. methods: we assessed the association between bmi and mortality in a cohort of male construction workers, aged - years, who underwent an occupational health examination in württemberg (germany) during - and who were followed over a -year period. covariates considered in the proportional hazards regression analysis included age, nationality, smoking status, alcohol consumption, and comorbidity. results: during follow-up, deaths occurred. there was a strong u-shaped association between bmi and all-cause mortality, which was lowest for bmi levels between and kg/m . this pattern persisted after exclusion of the first years of follow-up and control for multiple covariates. compared with men with a bmi < . kg/m , the relative mortality was . ( % confidence interval: . - . ), . ( . - . ) and . ( . - . ) for bmi ranges - . , - . and ≥ . kg/m . conclusion and discussion: bmi levels commonly considered to reflect overweight or moderate obesity in the general population may be associated with reduced mortality in physically heavily working men. background: colonoscopy with removal of polyps may strongly reduce colorectal cancer (crc) incidence and mortality. empirical evidence for optimal schedules for surveillance is limited. objective: to assess the risk of proximal and distal crc after colonoscopy with polypectomy. design and methods: history and results of colonoscopies were obtained from cases and controls in a population-based case-control study in germany. risk of proximal and distal crc according to time since colonoscopy was compared to the risk of subjects without previous colonoscopy. results: subjects with previous detection and removal of polyps had a much lower risk of crc within four years after colonoscopy (adjusted odds ratio . , % confidence interval . - . ), and a risk similar to that of those without colonoscopy in the long run. within four years after colonoscopy, risk was particularly low if only single or small adenomas were detected. most cancers occurring after polypectomy were located in the proximal colon, even if polyps were found in the sigmoid colon or rectum only. conclusion and discussion: our results support suggestions that surveillance colonoscopy after removal of single and small adenomas may be deferred to five years, and that surveillance should include the entire colorectum even if only distal polyps are detected. background: a population-based early detection programme for breast cancer has been in progress in finland since . recently, detailed information about actual screening invitation schemes in - has become available in electronic form, which enables more specific modeling of breast cancer incidence. objectives: to present a methodology for taking into account historical municipality-specific schemes of mass screening when constructing predictions for breast cancer incidence, and to provide predictions for numbers of new cancer cases and incidence rates according to alternative future screening policies. methods: observed municipality-specific screening invitation schemes in finland during - were linked together with breast cancer data.
the incidence rate during the observation period was analyzed using poisson regression, separately for localized and non-localized cancers. for modeling, the screening programme was divided into seven different phases. alternative screening scenarios for future mass-screening practices in finland were created and an appropriate model for incidence prediction was defined. results and conclusion: expanding the screening programme would increase the incidence of localized breast cancers; the biggest increase would be obtained by expanding from women aged - to - . the impacts of changes in the screening practices on predictions for non-localized cancers would be minor. background: new screening technologies are being introduced into routine screening in increasing numbers, with limited evidence on their effectiveness. randomised evaluation of new technologies is encouraged but rarely done. objective: to evaluate in a randomised design whether the effectiveness of an organised cervical screening programme can be improved by means of new technologies. methods: since , - , women have been invited annually to a randomised multi-arm trial run within the finnish organised cervical screening programme. the invited women are randomly allocated to three study arms with different primary screening tests: conventional cytology, automation-assisted cytology and, since , human papillomavirus (hpv) testing. up to , we have gathered information on , screening visits in the automation-assisted arm and , in the hpv arm, and we have compared the results to conventional screening. results: automation assistance resulted in a slightly increased detection of precancers, but the efficacy based on interval cancers is not known. results on hpv screening suggest higher detection of precancers and cancers compared to conventional screening. conclusion: evidence of higher effectiveness of new screening technologies is needed, especially when changing existing screening programmes. the multi-arm trial shows how these technologies can be implemented into routine practice in a controlled manner. introduction: nodules and goitres are important risk factors for thyroid cancer. as the number of diagnosed cases of thyroid cancer is increasing, the incidence of such risk factors has been assessed in a french cohort of adults. methods: the su.vi.max (supplémentation en vitamines et minéraux antioxydants) cohort study included middle-aged adults followed up for eight years. incident cases of goitres and nodules were identified retrospectively from scheduled clinical examinations and spontaneous consultations by the participants. cox proportional hazards modeling was used to identify factors associated with thyroid diseases. results: finally, incident cases of nodules and goitres were identified among , subjects free of thyroid diseases at inclusion. after an average follow-up of years, the incidence of goitres and nodules was . % in - year-old men, . % in - year-old women and . % in - year-old women. identified associated factors were age, low urinary thiocyanate level and oral contraceptive use in women, and high urinary thiocyanate level and low urinary iodine level in men. conclusion: estimated incidences are consistent with those observed in other countries. the protective role of urinary thiocyanate in both men and women and, in women, of oral contraceptives deserves further investigation. background: various statistical methods for outbreak detection in hospital settings have been proposed in the literature.
usually validation of those methods is difficult, because the long time series of data needed for testing the methods are not available. modeling is a tool to overcome that difficulty. objectives: to use model-generated data for testing the sensitivity and specificity of different outbreak detection methods. methods: we developed a simple stochastic model for a process of importation and transmission of infection in small populations (hospital wards). we applied different statistical outbreak detection methods described in the literature to the generated time series of diagnosis data and calculated the sensitivity and specificity of the different methods. results: we present roc curves for the different methods and show how they depend on the underlying model parameters. we discuss how sensitivity and specificity measures depend on the degree of underdiagnosis, on the ratio of admitted colonised patients to colonisation resulting from transmission in the hospital, and on the frequency of testing patients for colonisation. conclusions: modeling can be a useful tool for evaluating statistical methods of outbreak detection, especially in situations where real data are scarce or their quality questionable. background: hrt use, which is associated with higher mammographic density and breast pain, has increased, which has a bearing on screening performance. objective: we compared the screening performance for women aged - years with dense and lucent breast patterns in two time periods and studied the possible interaction with use of hrt. methods: data were collected from a dutch regional screening programme for women referred in - (n = ) and - (n = ). in addition, we sampled controls for both periods that were not referred (n = and n = resp.) and women diagnosed with an interval cancer. mammograms were digitised and computer-assisted methods were used to measure mammographic density. among other parameters, sensitivity was calculated to describe screening performance. results: screening performance has improved slightly, but the difference between dense and lucent breast patterns still exists (e.g. sensitivity % vs. %). hrt use has increased; sensitivity was particularly low ( %) in the group of women with dense breast patterns on hrt. discussion: in conclusion, the detrimental effect of breast density and the interaction with hrt on screening performance warrant further research with enlargement of the catchment area, more referred women, interval cancers and controls. background: population-based association studies might lead to false-positive results if underlying population structure is not adequately accounted for. to assess the nature of the population structure, some kind of cluster analysis has to be carried out. we investigated the use of self-organizing maps (soms) for this purpose. objectives: the two main questions concern identification of either a discrete or an admixed population structure and identification of the number of subpopulations involved in forming the structured population under investigation. design and methods: we simulated data sets with different population models and included varying informative marker and map sizes. sample sizes ranged from to individuals. results: we found that a discrete structure can easily be assessed by soms. a near-perfect assignment of individuals to their population of origin can be obtained. for an admixed population structure, though, soms do not lead to reasonable results. here, even the correct number of subpopulations involved cannot be identified.
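a self-organizing map of the kind used above can be sketched with the third-party minisom package; the grid size, iteration count and numeric genotype coding below are assumptions for illustration, not the study's setup:

import numpy as np
from minisom import MiniSom

def som_cluster_assignments(genotypes: np.ndarray, grid: int = 5,
                            n_iter: int = 5000, seed: int = 0):
    # genotypes: numeric matrix, individuals x markers
    som = MiniSom(grid, grid, genotypes.shape[1],
                  sigma=1.0, learning_rate=0.5, random_seed=seed)
    som.train_random(genotypes, n_iter)
    # individuals mapping to the same winning node form one cluster; with a
    # discrete structure these clusters should align with populations of origin
    return [som.winner(row) for row in genotypes]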
conclusion: soms can be an alternative to a model-based cluster analysis if the researcher assumes a discrete structure, but should not be applied if an admixed structure is likely. background: little is known about the combined effect of duration of breastfeeding and sucking habits on malocclusion in the primary dentition. objectives: we studied the association of breastfeeding and non-nutritive sucking habits with malocclusion in the primary dentition. design and methods: a cross-sectional study nested in a birth cohort was carried out in pelotas, brazil. a random sample of children aged was examined and their mothers interviewed. the foster and hamilton criteria were used to define anterior open bite (aob) and posterior cross bite (pcb). information regarding breastfeeding and non-nutritive sucking habits was collected from birth to years of age. poisson regression analysis was used. results: non-nutritive sucking habits between months and years of age (pr . [ . ; . ]) and digital sucking at years of age (pr . [ . ; . ]) were risk factors for aob. breastfeeding for less than months (pr . [ . ; . ]) and the regular use of a pacifier between months and years of age (pr . [ . ; . ]) were risk factors for pcb. for pcb, an interaction was identified between lack of breastfeeding and the use of a pacifier. conclusion: lack of breastfeeding and longer non-nutritive sucking habits during early childhood were the main risk factors for malocclusion in the primary dentition. background: recent, dramatic coronary heart disease (chd) mortality increases in beijing can be mostly explained by adverse changes in risk factors, particularly total cholesterol and diabetes. it is important for policy making to predict the impact of future changes in risk factors on chd mortality trends. objective: to assess the potential impact of changes in risk factors on numbers of chd deaths in beijing from to , to provide evidence for future chd strategies. design: the previously validated impact model was used to estimate the chd deaths expected in a) if recent risk factor trends continue or b) if levels of risk factors are reduced. results: continuation of current risk factor trends will result in a % increase in chd deaths by (almost half being attributable to increases in total cholesterol levels). even optimistically assuming a % annual decrease in risk factors, chd deaths would still rise by % because of population ageing. conclusion: a substantial increase in chd deaths in beijing may be expected by . this will reflect worsening risk factors compounded by demographic trends. population ageing in china will play an important role in the future, irrespective of any improvements in risk factor profiles. background: since smoking cessation is more likely during pregnancy than at other times, interventions to maintain quitting postpartum may give the best opportunity for long-term abstinence. it is still not clear what kind of advice or counseling should be given to help prevent relapse postpartum. objectives: to identify the factors that predispose women to smoking relapse after delivery. design and methods: the cohort study was conducted in and in public maternity units in lodz, poland. the study population consisted of pregnant women between - weeks of pregnancy who had quit smoking no later than three months prior to participation in the study. smoking status was verified using saliva cotinine levels. women were interviewed twice: during pregnancy and three months after delivery.
results: within three months after delivery, about half of the women relapsed into smoking. the final model identified the following risk factors for smoking relapse: having a partner and friends who smoke, quitting smoking in late pregnancy, and negative experiences after quitting smoking such as dissatisfaction with weight, nervousness, irritation, and loss of pleasure. conclusion: this study advanced the knowledge of the factors that determine smoking relapse after delivery and provided preliminary data for future interventions. introduction: it remains difficult to predict the effect of a particular antihypertensive drug in an individual patient, and pharmacogenetics might optimise this. objective: to investigate whether the association between use of angiotensin converting enzyme (ace) inhibitors or ß-blockers and the risk of stroke or myocardial infarction (mi) is modified by the t-allele of the angiotensinogen m t polymorphism. methods: data were used from the rotterdam study, a population-based prospective cohort study. in total, subjects with hypertension were included from july st, onwards. follow-up ended at the diagnosis of mi or stroke, death, or the end of the study period (january st, ). the drug-gene interaction and the risk of mi/stroke were determined with a cox proportional hazards model (adjusted for each drug class as a time-dependent covariate). results: the interaction between current use of ace-inhibitors and the angiotensinogen m t polymorphism increased the risk of mi (synergy index (si) = . ; % ci: . - . ) and non-significantly increased the risk of stroke (si = . ; % ci: . - . ). no interaction was found between current use of ß-blockers and the agt m t polymorphism on the risk of mi or stroke. conclusion: subjects with at least one copy of the t allele of the agt gene might have less benefit from ace-inhibitor therapy. [ . - . ] to . [ . - . ] in those without ms-idf and . [ . - . ] with ms-idf. ms-ncep had no effect. conclusion and discussion: although cardiovascular disease was self-reported, we conclude that the higher prevalence of cardiovascular disease is partly accounted for by marked differences in the prevalence of metabolic syndrome. the ms-idf criteria seem better for defining metabolic syndrome in ethnic groups than the ms-ncep criteria. background: selenium is an essential trace mineral with antioxidant properties. objective: to perform meta-analyses of the association of selenium levels with coronary heart disease (chd) endpoints in observational studies and of the efficacy of selenium supplements in preventing chd in randomized controlled trials. methods: we searched medline and the cochrane library from through . relative risk (rr) estimates were pooled using an inverse-variance weighted random-effects model. for observational studies reporting three or more categories of exposure we conducted a dose-response meta-analysis. results: twenty-five observational studies and clinical trials met our inclusion criteria. the pooled rr comparing the highest to the lowest categories of selenium levels was . ( % confidence interval . - . ) in cohort studies and . ( . - . ) in case-control studies. in dose-response models, a % increase in selenium levels was associated with a % ( - %) reduced risk of coronary events. in randomized trials, the rr comparing participants taking selenium supplements to those taking placebo was . ( . - . ). conclusion: selenium levels were inversely associated with the risk of chd in observational studies.
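the inverse-variance weighted random-effects pooling used in this meta-analysis is typically the dersimonian-laird estimator; a minimal sketch on the log relative risk scale, taking per-study rrs and % ci bounds as input (placeholder data, not the study's):

import numpy as np

def pool_random_effects(rr, lo, hi):
    y = np.log(np.asarray(rr))                      # log relative risks
    se = (np.log(hi) - np.log(lo)) / (2 * 1.96)     # se recovered from ci width
    w = 1.0 / se**2                                 # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)              # cochran's q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (se**2 + tau2)                   # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    return float(np.exp(pooled)), ci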
the findings of the randomized trials are still inconclusive. these results require confirmation in randomised controlled trials. currently, selenium supplements should not be recommended for cardiovascular prevention. background: propensity score analysis (psa) can be used to reduce confounding bias in pharmacoepidemiologic studies of the effectiveness and safety of drugs. however, confidence intervals may be falsely precise because psa ignores uncertainty in the estimated propensity scores. objectives: we propose a new statistical analysis technique called bayesian propensity score analysis (bpsa). the method uses bayesian modelling with the propensity score as a latent variable. our question is: does bpsa yield improved interval estimation of treatment effects compared to psa? our objective is to implement bpsa using computer programs and to investigate the performance of bpsa compared to psa. design and methods: we investigated bpsa using monte carlo simulations. synthetic datasets, of sample size n = , , , were simulated by computer. the datasets were analyzed using bpsa and psa, and we estimated the coverage probability of % credible intervals. results: the estimated coverage probabilities ranged from % to % for bpsa, and from % to % for psa, with simulation standard errors less than %. background: several factors associated with low birth weight, such as smoking and body mass index (bmi), do not explain all ethnic differences. this study investigates the effects of working conditions on birth weight among different ethnic groups. methods: questionnaire data, filled in weeks after prenatal screening, were used from the amsterdam born children and their development (abcd) study (all pregnant women in amsterdam [ / / - / / ]; n = . , response %). ethnicity (country of birth) was dichotomised into dutch and non-dutch. working conditions were: weekly working hours, weekly hours standing/walking, physical load and job strain (karasek model). only singleton deliveries with a pregnancy duration of ≥ weeks were included. results: although only . % of the non-dutch women worked during the first trimester ( . % of the dutch women), they reported significantly more physical load ( . % vs . %), more hours standing/walking ( . % vs . %) and more high job strain ( . vs . ). linear regression revealed that only high job strain significantly lowered birth weight (non-dutch: gram and dutch: gram). after adjusting for confounders (gender, parity, smoking, maternal height, maternal bmi and education), this was only significant in the non-dutch group ( vs. gram). conclusion: job strain has more effect on birth weight in non-dutch than in dutch women. background: in panama, the population was estimated at . million inhabitants, of whom three million lived in malaria-endemic areas. until january , malaria control activities were carried out under a vertical structure. objective: to evaluate the evolution of malaria control in panama before and after the decentralization of the malaria program. design and methods: averages (standard deviations) of the program indexes are described for the last decades. the correlation between the positive smears index and the per capita cost of the program is analyzed. results: in the 's the average (standard deviation) positive smears index per inhabitants was . % ( . ); in the 's: . % ( . ); in the 's: . % ( . ); in the 's: . % ( . ); and in the first five years of : . % ( . ). after the decentralization of the program was accomplished in , the positive smears index increased . -fold.
the average per capita cost involved in malaria control activities per decade ranged between . and . us dollars and presented a coefficient of determination of . with the reduction of the positive smears index. discussion: the decentralization had significant detrimental implications for the capabilities of the control program. background: notification rates of new smear-positive tuberculosis in the central mountainous provinces ( / , population) are considerably lower than in vietnam in general ( / , population). this study assessed whether this is explained by low case detection. objective: to assess the prevalence and case detection of new smear-positive pulmonary tuberculosis among adults with a prolonged cough in central mountainous vietnam. design and methods: a house-to-house survey of adults aged years or older was carried out in randomly selected districts in three mountainous provinces in central vietnam in . three sputum specimens from persons reporting a cough of weeks or longer were examined microscopically. results: the survey included , persons, with a response of %. a cough of weeks or longer was reported by , ( . %; % ci . - . ) persons. of these, were sputum smear-positive, of whom had had anti-tuberculosis treatment. the prevalence of new smear-positive tuberculosis was / , population ( % ci - / , population). the patient diagnostic rate was . per person-year, suggesting that the case notification rate as defined by who was %. conclusion: low tuberculosis notification rates in mountainous vietnam are probably due to low tuberculosis incidence. explanations for low incidence at high altitude need to be studied. background: although patients with type diabetes (dm ) have an increased risk of urinary tract infections (utis), not much is known about predictors of a complicated course. objective: this study aims to develop a prediction rule for complicated utis in dm patients in primary care. design and methods: we conducted a -month prospective cohort study, including dm patients aged years or older from the second dutch national survey of general practice. the combined outcome measure was defined as the occurrence of recurrent cystitis, or an episode of acute pyelonephritis or prostatitis. results: of the , dm patients, % were male and the mean age was years (sd ). the incidence of the outcome was per patient-years (n = ). predictors were age, male sex, number of physician contacts, urinary incontinence, cerebrovascular disease or dementia, and renal disease. the area under the receiver-operating curve (auc) was . ( % ci . to . ). subgroup analyses by gender showed no differences. there is an increased early postoperative mortality (operation risk) after elective surgery. this mortality is normally associated with cardiovascular events, such as deep venous thrombosis, pulmonary embolism, and ischemic heart disease. our objective was to quantify the magnitude of the increased mortality and how long it persists after an operation. we focused on the early postoperative mortality after surgery for total knee and total hip replacements from the national registries in australia and norway, which cover more than % of all operations in the two nations. only osteoarthritis patients between and years of age were included. a total of . patients remained for analyses. smoothed intensity curves were calculated for the early postoperative period. effects of risk factors were studied using a nonparametric proportional hazards model. the mortality was highest immediately after the operation (approximately deaths per .
patients per day), and it decreased until the rd postoperative week. the mortality was virtually the same for both nations and both joints. mortality increased with age and was higher for males than for females. a reduction of early postoperative mortality is plausible for the immediate postoperative period, and no longer than the rd postoperative week. background/objectives: single, modifiable risk factors for stroke have been extensively studied before, but their combined effects have rarely been investigated. the aim of the present study was to assess single and joint effects of risk factors on stroke and transient ischemic attack (tia) incidence in the european prospective investigation into cancer and nutrition (epic)-potsdam study. methods: among participants aged - years at baseline, total stroke cases and tia cases occurred during . years of follow-up. relative risks (rr) for stroke and tia related to risk factors were estimated using cox proportional hazards models. results: after adjustment for potential confounders, the rr for ischemic stroke associated with hypertension was . ( % ci, . - . ) and for tia . ( % ci . - . ). the highest rr for ischemic stroke (rr . , % ci . - . , p trend < . ) and tia (rr . , % ci . - . , p trend = . ) were observed among participants with or modifiable risk factors. . % of ischemic strokes and . % of tia cases were attributable to hypertension, diabetes mellitus, high alcohol consumption, hyperlipidemia, and smoking. conclusion: almost % of ischemic stroke cases could be explained by classical modifiable risk factors. however, only one in four tia cases was attributable to those risk factors. background: the investigation of genetic factors is gaining importance in epidemiology. most relevant from a public health perspective are complex diseases that are characterised by complex pathways involving gene-gene and gene-environment interactions. the identification of such pathways requires sophisticated statistical methods that are still in their infancy. due to their ability to describe complex association structures, directed graphs may represent a suitable means for modelling complex causal pathways. objectives: we present a study plan to investigate the appropriateness of using directed graphs for modelling complex pathways in association studies. design and methods: graphical models and artificial neural networks will be investigated using simulation studies and real data, and the advantages and disadvantages of the respective approaches summed up. furthermore, it is planned to construct a hybrid model exploiting the strengths of either model type. results and conclusions: the part of the project that concerns graphical models is being funded and is ongoing. first results of a simulation study have been obtained and will be presented and discussed. a second project is currently being applied for. this shall cover the investigation of neural networks and the construction of the hybrid model. this study investigates variations in mortality from 'avoidable' causes among migrants in the netherlands in comparison with the native dutch population. data were obtained from population and mortality registries in the period - . we compared mortality rates for selected 'avoidable' conditions for turkish, moroccan, surinamese and antillean/aruban groups to native dutch. we found a slightly elevated risk of total 'avoidable' mortality for migrant populations (rr = . ).
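rate comparisons of the kind reported in this registry study are commonly estimated with poisson regression on death counts, using person-years as an offset; a minimal sketch with assumed variable names (one row per ethnicity/age/sex stratum, not the study's data):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def avoidable_mortality_rate_ratios(df: pd.DataFrame):
    fit = smf.glm(
        "deaths ~ C(ethnic_group) + C(age_band) + sex",
        data=df,
        family=sm.families.Poisson(),
        offset=np.log(df["person_years"]),   # person-time at risk as offset
    ).fit()
    return np.exp(fit.params)                # rate ratios vs. the reference group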
higher risks of death among migrants were observed for almost all infectious diseases (most rr > . ) and several chronic conditions including asthma, diabetes and cerebrovascular disorders (most rr > . ). migrant women experienced a higher risk of death from maternity-related conditions (rr = . ). the surinamese and antillean/aruban populations had a higher mortality risk (rr = . and . respectively), while turkish and moroccan populations experienced a lower risk of death (rr = . and . respectively) from all 'avoidable' conditions compared to native dutch. control for demographic and socioeconomic factors explained a substantial part of the ethnic differences in 'avoidable' mortality. conclusion: compared to native dutch, total 'avoidable' mortality was slightly elevated for all migrants combined. mortality risks varied greatly by cause of death and ethnicity. the substantial differences in mortality for a few 'avoidable' conditions suggest opportunities for improvement within specific areas of the healthcare system. warmblood horses scored by the jury as having uneven feet will never pass the yearly selection sales of the royal dutch warmblood studbook (kwpn). to evaluate whether the undesired trait 'uneven feet' influences performance, databases of the kwpn (n = horses) and knhs (n = show jumpers, n = dressage horses) were linked through the unique number of each registered horse. using a proc glm model in sas, it was investigated whether uneven feet had effects on age at first start and highest performance level. elite show jumpers with uneven feet start at . years and dressage horses at . years of age, which is a significant difference (p < . ) from elite horses with even feet ( . and . years, respectively). at their maximum level of performance, horses with even feet scored linearly in show jumping . at regular and . at elite level ( . and . , respectively, with uneven feet), while in dressage the scores were . at regular and . at elite level ( . and . , respectively, with uneven feet). the conformational trait 'uneven feet' appears to have a significant effect on age at first start, while horses with even feet demonstrate a higher maximal performance than horses with uneven feet. objectives: to identify children with acute otitis media (aom) who might benefit more from treatment with antibiotics. methods: an individual patient data meta-analysis (ipdma) was performed on six randomized trials (n = children). to preclude multiple testing, we first performed a prognostic study in which predictors of poor outcome were identified. subsequently, interactions between these predictors and treatment were studied by fixed-effect logistic regression analyses. stratified analyses to quantify the effect in each subgroup were performed only if a significant interaction term was found. results: interactions were found for age and bilateral aom, and for otorrhea. in children less than years of age with bilateral aom, a rate difference (rd) of - % ( % ci - ; - %) was found, whereas in children aged years or older with unilateral aom the rd was - % ( % ci - ; - %). in children with and without otorrhea the rds were - % ( % ci - ; - %) and - % ( % ci - %; - %). conclusion: although there are still many areas in which ipdma can be improved, using individual patient data appears to have many advantages, especially in identifying subgroups. in our example, antibiotics are beneficial in children aged less than years with bilateral aom, and in children with otorrhea. major injuries, such as fractures, are known to increase the risk of venous thrombosis (vt).
however, little is known of the risk caused by minor injuries, such as ankle sprains. we studied the risk of vt after minor injury in a population-based case-control study of risk factors for vt, the mega study. consecutive patients, enrolled via anticoagulation clinics, and control subjects, consisting of both partners of patients and randomly selected control subjects, were asked to participate and filled in a questionnaire. participants with cancer, recent plaster casts, surgery or bed rest were excluded from the current analyses. out of patients ( . %) and out of controls ( . %) had suffered a minor injury, resulting in a three-fold increased risk of vt (odds ratio adjusted for age and sex . ; % confidence interval . - . ) compared to those without injury. the risk was highest in the first month after injury and was no longer increased after months. injuries located in the leg increased the risk five-fold, while those located in other body parts did not increase the risk. these results show that minor injuries in the leg increase the risk of vt. this effect appears to be temporary and mainly local. introduction: in southeast asia, dengue was considered a childhood disease. in the americas, this disease occurs predominantly in older age groups, indicating the need for studies to investigate the immune status of the child population, since the presence of antibodies against a serotype of this virus is a risk factor for dengue hemorrhagic fever (dhf). objective: to evaluate the seroprevalence and seroincidence of dengue in children living in salvador, bahia, brazil. methods: a prospective study was carried out in a sample of children of - years by performing sequential serological surveys (igg/dengue). results: seroprevalence in children was . %. a second survey (n = seronegative children) detected an incidence of . %, and no difference was found between males and females or according to the socioeconomic factors analyzed. conclusion and discussion: these results show that, in brazil, the dengue virus circulates actively in the initial years of life, indicating that children are also at great risk of developing dhf. it is possible that dengue infections in this age group are mistaken for other febrile conditions, and that inapparent infections are more common in this age group. therefore, epidemiological surveillance and medical care services should be aware of the risk of dhf in children. since , in the comprehensive cancer centre limburg (cccl) region, all women aged - years have been invited to participate in the cervical cancer screening programme once every five years. we had the unique opportunity to link data from the cervical screening programme and the cancer registry. we studied individual pap smear testing and participation in the screening programme preceding the diagnosis of cervical cancer. all invasive cases of cervical cancer in women aged - years in the period - were selected. subgroups were based on the results of the pap smear and on invitation and participation in the screening programme. the time interval between screening and detection of tumours was calculated. in - , the non-response rate was %. in total, invasive cervical cancer cases were detected, of which were screening carcinomas and interval carcinomas. in the groups of women who were invited but did not participate and women who were not invited, respectively and tumours were detected. these tumours had a higher stage compared to screening carcinomas.
in the cccl region, more and higher-stage tumours were found in women who did not participate in the screening compared to women with screening tumours. background: pcr for mycobacterium tuberculosis (mtb) has already proved to be a useful tool for diagnosis and for the investigation of molecular epidemiology. objectives: evaluation of a pcr assay for the detection of mycobacterium tuberculosis dna as a diagnostic aid in cutaneous tuberculosis. design and methods: thirty paraffin-embedded samples belonging to patients were analyzed for acid-fast bacilli. dna was extracted from tissue sections and pcr was performed using specific primers based on the is repeated gene sequence of mtb. results: two of the tissue samples were positive for acid-fast bacilli (afb). pcr was positive in eight samples from six patients. among them, two were suspected of having lupus vulgaris, confirmed histopathologically, and all of their tests were positive. taking histopathology as the gold standard, the sensitivity of pcr in this study was determined to be %. conclusion: of the cases of skin tuberculosis diagnosed by histopathology, were positive by the pcr technique, which shows the superiority of the former method over the molecular technique. discussion: the pcr assay can be used for rapid detection of mtb in cutaneous tuberculosis cases, particularly when staining for afb is negative and there is a lack of growth on culture, or when fresh material has not been collected for culture. background: recent epidemiological studies have used automated oscillometric blood pressure (aod) devices that systematically measure higher blood pressure values than random zero sphygmomanometer devices (rzs), hampering the comparability of blood pressure values between these studies. we applied both a random zero and an automated oscillometric blood pressure device in a randomized order in an ongoing cohort study. objectives: the aim of this analysis was to compare the blood pressure values by device and to develop a conversion algorithm for the estimation of blood pressure values from one device to the other. methods: within a randomized subset of subjects aged - years, each subject was measured three times by each device (hawksley random zero and omron hem- cp) in a randomized order. results: the mean difference (aod-rzs) between the devices was . mmhg and . mmhg for systolic and diastolic blood pressure, respectively. linear regression models including age, sex, and blood pressure level can be used to convert rzs blood pressure values to aod blood pressure values and vice versa. conclusions: the results may help to better compare blood pressure values of epidemiological studies that used different blood pressure devices. a form was used to collect relevant perinatal clinical data, as part of a european (mosaic) and an italian (action) project. the main outcomes were mortality and a variable combining mortality and severe morbidity at discharge. the cox proportional hazards and logistic regression models were used, respectively, for the two outcomes. results: twenty-two percent of vpbs were among fbms. compared to the control group, the percentages of babies below weeks and of plurality were statistically significantly higher among babies of fbms: % vs. . % and . % vs. . %. adjusting for potential confounders, no association with mortality was found in the immigrant group, whereas a slight excess of morbidity-mortality was observed (odds ratio . ; % ci . - . ).
a form was used to collect relevant perinatal clinical data, as part of a european (mosaic) and italian (action) project. the main outcomes were mortality and a variable combining mortality and severe morbidity at discharge. the cox proportional hazards and logistic regression models were used, respectively, for the two outcomes. results: twenty-two percent of vpbs were among fbms. compared to the control group, the percentage of babies below weeks and plurality was statistically significantly higher among babies of fbms: % vs. . and . % vs. . %. adjusting for potential confounders, no association with mortality among the immigrant group was found, whereas a slight excess of morbidity-mortality was observed (odds ratio, . ; % cis . - . ). conclusions: the high proportion of vpbs among fbms and the slight excess observed in morbidity and mortality indicate the need to improve health care delivery for the immigrant population.

background: high-risk newborns have excess mortality, morbidity and use of health services. objectives: to describe re-hospitalizations after discharge in an italian region. design and methods: the population study consisted of all births with < weeks' gestation discharged alive from the twelve neonatal intensive care units in the lazio region during . the perinatal clinical data were collected as part of a european project (mosaic). we used the regional hospital discharge database to find hospital admissions within months, using the tax code for record linkage. data were analyzed through logistic regression for re-hospitalization. results: the study group included children; among these, ( . %) were re-hospitalized; overall, readmissions were observed. the median total length of stay for re-admissions was d. the two most common reasons for re-hospitalization were respiratory ( . %) and gastrointestinal ( . %) disorders. the presence of a severe morbidity at discharge (odds ratio . ; % cis . - . ) and male sex (odds ratio . ; % cis . - . ) predicted re-hospitalization in the multivariate model. conclusions: almost one out of three preterm infants was re-hospitalized in the first months. readmissions after initial hospitalization for a very preterm birth could be a sensitive indicator of the quality of follow-up strategies in high-risk newborns.

background: self-medication with antibiotics may lead to inappropriate use and increase the risk of selection of resistant bacteria. in europe the prevalence varies from / to / respondents. self-medication may be triggered by experience with prescribed antibiotics. we investigated whether in european countries prescribed use was associated with self-medication with antibiotics. methods: a population survey was conducted in european countries with respondents completing the questionnaire. multivariate logistic regression analysis was used to study the relationship between prescribed use and self-medication (both actual and intended) in general, for a specific symptom/disease or a specific antibiotic. results: prescribed use was associated with self-medication, with a stronger effect in northern/western europe (odds ratio . , % ci . - . ) than in southern ( . , . - . ) and eastern europe ( . , . - . ). prescribing of a specific antibiotic increased the probability of self-medication with the same antibiotic. prescribing for a specific symptom/disease increased the likelihood of self-medication for the same symptom/disease. the use of prescribed antibiotics and actual self-medication were both determinants of intended self-medication in general and for specific symptoms/diseases. conclusions: routine prescribing of antibiotics increases the risk of self-medication with antibiotics for similar ailments, both through the use of leftovers and buying antibiotics directly from pharmacies.
background: in , the american national kidney foundation published a guideline based on opinion and observational studies which recommends tight control of serum calcium, phosphorus and calcium-phosphorus product levels in dialysis patients. objectives: within the context of this guideline, we explored associations of these plasma concentrations with cardiovascular mortality risk in incident dialysis patients. design and methods: in necosad, a prospective multi-centre cohort study in the netherlands, we included consecutive patients new on haemodialysis or peritoneal dialysis between and . risks were estimated using adjusted time-dependent cox regression models. results: mean age was ± years, % were male, and % were treated with haemodialysis. cardiovascular mortality risk was significantly higher in haemodialysis patients (hr: . ; % ci: . to . ) and in peritoneal dialysis patients (hr: . ; . to . ) with elevated plasma phosphorus levels when compared to patients who met the target. in addition, having elevated plasma calcium-phosphorus product concentrations increased cardiovascular mortality risk in haemodialysis (hr: . ; . to . ) and in peritoneal dialysis patients (hr: . ; . to . ). conclusion: application of the current guideline in clinical practice is warranted since it reduces cardiovascular mortality risk in haemodialysis and peritoneal dialysis patients in the netherlands.

background: urologists are increasingly confronted with requests for early detection of prostate cancer in men from hereditary prostate cancer (hpc) families. however, little is known about the benefit of early detection among men at increased risk. objectives: we studied the effect of biennial screening with psa in unaffected men from hpc families, aged - years, on surrogate endpoints (test and tumour characteristics). methods: the netherlands foundation for the detection of hereditary tumours holds information on approximately hpc families. here, non-affected men from these families were included and invited for psa testing every years. we collected data on screening history and complications related to screening. results: in the first round, serum psa was elevated ( ng/ml or greater) in of men screened ( %). further diagnostic assessment revealed patients with prostate cancer ( . %). compared to population-based prostate cancer screening trials, the referral rate is equal but the detection rate is twice as high. discussion: in conclusion, the results of prostate cancer screening trials will not be directly applicable to screening in hpc families. the balance between costs, side-effects and potential benefits of screening when applied to a high-risk population will have to be assessed separately.

background: in industrialized countries occupational tuberculosis among health care workers (hcws) is re-emerging as a public health priority. to prevent and control tuberculosis transmission in nosocomial settings, public health agencies have issued specific guidelines. turin, the capital of the piedmont region in italy, is experiencing a worrying rise in tuberculosis incidence. here, hcws are increasingly exposed to the risk of nosocomial tuberculosis transmission. objectives: a) to estimate the sex- and age-adjusted annual rate of tuberculosis infection (arti) (per person-years [%py]) among the hcws, as indicated by tuberculin skin test (tst) conversion, b) to identify occupational factors associated with significant variations in the arti, c) to investigate the efficacy of the regional preventive guidelines. design and methods: multivariate survival analysis on tst conversion data from a dynamic cohort of hcws in turin, between and . results: the overall estimated arti was . ( % ci: . - . ) %py. the risk of tst conversion significantly differed among workplaces, occupations, and age of hcws. implementation of the guidelines was associated with an arti reduction of . ( % ci: . - . ) %py. conclusions: we identified occupational risk categories for targeting surveillance and prevention measures and assessed the efficacy of the local guidelines.
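an annual rate of tuberculin conversion such as the arti above is an event count divided by accumulated person-time. a minimal sketch, with hypothetical counts and an exact (garwood) poisson confidence interval:

```python
from scipy.stats import chi2

def rate_per_100py(events, person_years, alpha=0.05):
    """Incidence rate per 100 person-years with an exact Poisson CI."""
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = 100 / person_years
    return events * scale, lo * scale, hi * scale

# Hypothetical: 48 TST conversions over 2,230 person-years of follow-up.
print(rate_per_100py(48, 2230))
```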
background: a positive family history (fh) of breast cancer is an established risk factor for the disease. screening for breast cancer in israel is recommended annually for positive-fh women aged ≥ y and biennially for average-risk women aged - y. objective: to assess the effect of having a positive breast cancer fh on performing screening mammography in israeli women. methods: a cross-sectional survey based on a random sample of the israeli population. the study population consisted of , women aged - y; telephone interviews were used. logistic regression models identified variables associated with mammography performance. results: a positive fh for breast cancer was reported by ( . %) participants. performing a mammogram in the previous year was reported by . % and . % of the positive and negative fh subgroups, respectively (p< . ). rates increased with age. among positive-fh participants, being married was the only significant correlate of a mammogram in the previous year. conclusions: over % and around % of high-risk women aged - y and ≥ y, respectively, are inadequately screened for breast cancer. screening rates are suboptimal in average-risk women too. discussion: national efforts should concentrate on increasing awareness and breast cancer screening rates.

objective: to evaluate the association between infertility, infertility treatments and breast cancer risk. methods: a historical prospective cohort with , women who attended israeli infertility centers between and . their medical charts were abstracted. breast cancer incidence was determined through linkage with the national cancer registry. standardized incidence ratios (sirs) and % confidence intervals were computed by comparing observed cancer rates to those expected in the general population. additionally, in order to control for known risk factors, a case-control study nested within the cohort was carried out as well, based on telephone interviews with breast cancer cases and controls matched at a : ratio. results: compared to . expected breast cancer cases, were observed (sir = . ; non-significant). risk for breast cancer was higher for women treated with clomiphene citrate (sir = . ; % ci . - . ). similar results were noted when treated and untreated women were compared, and when multivariate models were applied. in the nested case-control study, a higher cycle index and treatment with clomiphene citrate were associated with significantly higher risk for breast cancer. conclusions: clomiphene citrate may be associated with higher breast cancer risk.
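the sirs in the infertility abstract above rest on indirect standardization: the expected number of cases is obtained by applying general-population rates to the cohort's person-years, stratum by stratum, and the sir is the ratio of observed to expected. a minimal sketch with hypothetical strata:

```python
import numpy as np

# Hypothetical person-years at risk in the cohort, by age band, and the
# matching general-population incidence rates (cases per person-year).
person_years = np.array([12000.0, 18500.0, 9300.0])   # ages 30-39, 40-49, 50-59
pop_rates = np.array([0.8e-3, 1.9e-3, 3.1e-3])
observed = 74                                          # cases seen in the cohort

expected = float(person_years @ pop_rates)             # indirectly standardized
print(f"expected = {expected:.1f}, SIR = {observed / expected:.2f}")
```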
smoking is a strong risk factor for arterial disease. some consider smoking also a risk factor for venous thrombosis, while the results of studies investigating the relationship are inconsistent. therefore, we evaluated smoking as a risk factor for venous thrombosis in the multiple environmental and genetic assessment of risk factors for venous thrombosis (mega) study, a large population-based case-control study. consecutive patients with a first venous thrombosis were included from six anticoagulation clinics. using a random-digit-dialing method, a control group was recruited in the same geographical area. all participants completed a questionnaire including questions on smoking habits. persons with known malignancies were excluded from the analyses, leading to a total of patients and controls. current and former smoking resulted in a small increased risk of venous thrombosis (ors adjusted for age, sex and bmi; or-current: . , ci: . - . ; or-former: . , ci: . - . ). an increasing amount and duration of smoking was associated with an increase in risk. the highest risk was found among young (lowest tertile: to yrs) current smokers; twenty or more pack-years resulted in a . -fold increased risk (ci: . - . ). in conclusion, smoking results in a small increased risk of venous thrombosis, with the greatest relative effect among young heavy smokers.

objective: to explore whether the observed association between silica exposure and lung cancer was confounded by exposure to other occupational carcinogens, we conducted a nested case-control study among a cohort of male workers in chinese mines and potteries. methods: lung cancer cases and matched controls were selected. exposure to respirable silica as well as relevant occupational confounders were evaluated quantitatively based on historical industrial hygiene data. the relationship between silica exposure and lung cancer mortality was analyzed by conditional logistic regression analysis adjusted for exposure to arsenic, polycyclic aromatic hydrocarbons (pahs), radon, and smoking habit. results: in a crude analysis adjusted for smoking only, a significant trend of increasing risk of lung cancer with exposure to silica was found for tin and copper/iron miners and pottery workers. however, after the relevant occupational confounders were adjusted for, no association could be observed between silica exposure and lung cancer mortality (per mg/m³-year increase of silica exposure: or = . , % ci: . - . ). conclusion: our results suggest that the observed excess risk of lung cancer among silica-exposed chinese workers is more likely due to exposure to other occupational carcinogens, such as arsenic and pahs, than to exposure to respirable silica.

background: modelling studies have shown that lifestyle interventions for adults with a high risk of developing diabetes are cost-effective. objective: to explore the cost-effectiveness of lifestyle interventions for adults with a low or moderate risk of developing diabetes. design and methods: the short-term effects of both a community-based lifestyle program for the general population and a lifestyle intervention for obese adults on diabetes risk factors were estimated from the international literature. intervention costs were based on dutch projects. the rivm chronic diseases model was used to estimate long-term health effects and disease costs. cost-effectiveness was evaluated from a health care perspective with a time horizon of years. results: intervention costs needed to prevent one case of diabetes in years range from , to , euro for the community program and from , to , euro for the intervention for obese adults. cost-effectiveness was , to , euro per quality-adjusted life-year for the community program and , to , for the lifestyle intervention. conclusion: a lifestyle intervention for obese adults produces larger individual health benefits than a community program but, on a population level, health gains are more expensively achieved. both lifestyle interventions are cost-effective.

background: in barcelona, the proportion of foreign-born patients with tuberculosis (tb) rose from . % in to . % in . objective: to determine differences in infection by country of origin among contacts investigated by the tb programme in barcelona from - .
design and methods: data were collected on cases and their contacts. generalized estimating equations were used to obtain the risk of infection (or and % ci) to account for potential correlation among contacts. results: contacts of foreign-born cases were more often infected than contacts of native patients ( % vs %, p< . ). factors related to infection among contacts of foreign-born cases were inner-city residency (or: . , % ci: . - . ), sputum smear positivity of the case (or: . , % ci: . - . ) and male sex of the contact (or: . , % ci: . - . ), but not daily contact (or: . , % ci: . - . ). among native cases, inner-city residency (or: . , % ci: . - . ), sputum smear positivity (or: . , % ci: . - . ) and daily exposure (or: . , % ci: . - . ) increased the risk of infection. conclusion: contacts of immigrant tb cases have a higher risk of infection than contacts of native cases; however, daily exposure to an immigrant case was not associated with a greater risk of infection. this could be explained by the higher prevalence of tb infection in their country of origin.

background: an inverse association between birthweight and subsequent coronary heart disease (chd) has been widely reported but has not been formally quantified. we therefore conducted a systematic review of the association between birthweight and chd. design and methods: seventeen studies, including a total of , singletons, that had reported quantitative or qualitative estimates of the association between birthweight and chd by october were identified. additional data from two unpublished studies of individuals were also included. in total, the analyses included data on non-fatal and fatal coronary heart disease events in , individuals. results: the mean weighted estimate for the association between birthweight and chd incidence was . ( % ci . - . ) per kg of birthweight. overall, there was no significant heterogeneity between studies (p = . ) or evidence of publication bias (begg test p = . ). fifteen studies were able to adjust for some measure of socioeconomic position, but such adjustment did not materially influence the association: . ( % ci . - . ). discussion: these findings are consistent with one kilogram higher birthweight being associated with a - % lower risk of subsequent chd, but the causal nature of this association remains uncertain and its direct relevance to public health is likely to be small.

objective: diabetes has been reported to be associated with a greater coronary hazard among women than among men. we quantified the coronary risk associated with diabetes by sex by conducting a meta-analysis of prospective cohort studies. methods: studies reporting estimates of the relative risk for fatal coronary heart disease (chd) comparing those with and without diabetes, for both men and women, were included. results: studies of type- diabetes and chd among , individuals were identified. the summary relative risk for fatal chd, diabetes versus not, was significantly greater among women than men: . ( % ci . to . ) versus . ( % ci . to . ), p< . . after excluding eight studies that had only adjusted for age, the sex risk difference was substantially reduced, but still highly significant (p = . ). the pooled ratio of the relative risks (female:male) from the multiple-adjusted studies was . ( % ci . to . ). conclusions: the relative risk for fatal chd associated with diabetes is % higher in women than in men. more adverse cardiovascular risk profiles among women with diabetes, combined with possible treatment disparities that favour men, may explain the greater excess coronary risk associated with diabetes in women.
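the pooled estimates and the female:male ratio of relative risks in the two meta-analysis abstracts above can be illustrated with fixed-effect inverse-variance pooling on the log scale; the study-level inputs below are hypothetical.

```python
import numpy as np

def pool_log_rr(rrs, ses):
    """Fixed-effect inverse-variance pooling of log relative risks;
    ses are the standard errors of the log RRs."""
    log_rr = np.log(rrs)
    w = 1 / np.asarray(ses) ** 2
    return np.sum(w * log_rr) / np.sum(w), np.sqrt(1 / np.sum(w))

# Hypothetical sex-specific study estimates (RR, SE of log RR).
lw, se_w = pool_log_rr([3.2, 2.4, 2.9], [0.21, 0.18, 0.25])  # women
lm, se_m = pool_log_rr([2.0, 1.7, 2.1], [0.15, 0.12, 0.20])  # men

# Ratio of relative risks (female:male); SEs combine on the log scale.
diff, se_diff = lw - lm, np.hypot(se_w, se_m)
print(np.exp(diff), np.exp([diff - 1.96 * se_diff, diff + 1.96 * se_diff]))
```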
background: malaria in sri lanka is strongly seasonal and often of an epidemic nature. the incidence has declined in recent years, which has increased the relevance of epidemic forecasting for better targeting of control resources. objectives: to establish the spatio-temporal correlation of precipitation and malaria incidence for use in forecasting. design and methods: de-trended long-term ( - ) monthly time series of malaria incidence at district level were regressed in a poisson regression against rainfall and temperature at several lags. results: in the north and east of sri lanka, malaria seasonality is strongly positively correlated with rainfall seasonality (malaria lagging one or two months behind rainfall). however, in the south west, no significant (negative) correlation was found. also in the hill country, no significant correlation was observed. conclusion and discussion: despite high correlations, it still remains to be explored to what extent rainfall can be used as a predictor (in time) of malaria. the observed correlation could simply be due to two cyclical seasonal patterns running in parallel, without a causal relationship. similarly, strong correlations were found between temperature and malaria seasonality at a months' time lag in northern districts, but causality is biologically implausible.

background: few studies have assessed the excess burden of acute respiratory tract infections (rti) among preschool children in primary care during viral seasons. objective: to determine the excess of rti in preschool children in primary care attributable to influenza and respiratory syncytial virus (rsv). methods: we performed a retrospective cohort study including all children aged - years registered in the database of the utrecht general practitioner (gp) network. during - , gps recorded episodes of acute rti. surveillance data on influenza and rsv were obtained from the weekly sentinel system of the dutch working group on clinical virology. the viral seasons and the baseline period were defined as the weeks with, respectively, more than % and less than % of the yearly number of isolates of influenza or rsv. results: on average, episodes of rti were recorded per , child-years ( % ci: - ). notably more consultations for rti occurred during the influenza season (rr . , % ci: . - . ) and the rsv season (rr . , % ci: . - . ) compared to the baseline period, especially in children younger than two years of age. conclusion: substantial excess rates of rti were demonstrated among preschool children in primary care during the influenza season and particularly during the rsv season, notably in the younger age group.
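the lagged poisson regression of malaria counts on rainfall described above can be sketched as follows; the monthly series and the two-month lag are hypothetical, and the published analysis additionally de-trended the series and included temperature.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical monthly district series: malaria case counts and rainfall (mm).
cases = np.array([12, 9, 30, 85, 160, 90, 40, 22, 15, 11, 28, 95])
rain = np.array([5, 40, 210, 350, 180, 60, 20, 10, 30, 190, 340, 150])

lag = 2                       # regress cases on rainfall two months earlier
y = cases[lag:]
X = sm.add_constant(rain[:-lag].astype(float))

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.params)             # rainfall coefficient on the log-rate scale
```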
background: many cancer patients who have already survived some time want to know about their prognosis, given the precondition that they are still alive. objective: we described and interpreted population-based conditional -year relative survival rates (the probability of surviving another five years, given survival to a certain year after diagnosis, relative to the general population) for cancer patients. methods: the longstanding eindhoven cancer registry collects data on all patients with newly diagnosed cancer in the southeastern part of the netherlands ( . million inhabitants). patients aged - years, diagnosed between and and followed up until january , were included. conditional -year relative survival was computed for every additional year survived. results: for many tumours conditional -year relative survival approached - % after having survived - years. however, for stomach cancer and hodgkin's lymphoma conditional -year relative survival increased to only - %, and for lung cancer and non-hodgkin's lymphoma it did not exceed - %. initial differences in survival at diagnosis between age and stage groups disappeared after having survived for - years. conclusion: the prognosis for patients with cancer changes with each year survived, and for most tumours patients can be considered cured after a certain period of time. however, for stomach cancer, lymphomas and lung cancer the odds of death remain elevated compared to the general population.

background: systematic review with meta-analysis, now regarded as 'best evidence', depends on the availability of primary trials and on the completeness of the review. whilst reviewers have attempted to assess publication bias, relatively little attention has been given to selection bias by reviewers. method: systematic reviews of three cardiology treatments that used common search terms were compared for inclusion/exclusion of primary trials, pooled measures of principal outcomes, and conclusions. results: in one treatment, reviews included , , , , and trials. there was little overlap: of trials in the last review only , , , and were included by others. reported summary effects ranged from most effective to least significant: mortality relative risk . ( . , . ) in trials to . ( . , . ) in , and, for one morbidity measure, standardised mean difference from . ( . , . ) in trials ( patients) to . ( . , . ) in ( patients). reviewers' conclusions ranged from 'highly effective' to 'no evidence of effect'. conclusions: these examples illustrate strong selection bias in published meta-analyses. post hoc review contravenes one important principle of science: 'first the hypothesis, then the test'. selection bias by reviewers may affect 'evidence' more than does publication bias.

in the context of a large population-based german case-control study examining the effects of hormone therapy (ht) on breast cancer risk, we conducted a validation study comparing ht prescription data with participants' self-reports for data quality assurance. included were cases and controls aged - years, stratified by age and hormone use. study participants provided detailed information on ht use to trained interviewers, while gynecologists provided prescription data via telephone or fax. data were compared using proportion of agreement, kappa, the intraclass correlation coefficient (icc), and descriptive statistics. overall agreement for ever/never use was . %, while agreement for ever/never use by type of ht was . %, . %, and . % for mono-estrogen, cyclical, and continuous combined therapy, respectively. the icc for duration was high ( . ( % ci: . - . )), as were the iccs for age at first and last use ( . ( % ci: . - . ) and . ( % ci: . - . ), respectively). comparison of exact brand name resulted in perfect agreement for . % of participants, partial agreement for . %, and no agreement for . %. higher education and shorter length of recall were associated with better agreement. agreement was not differential by disease status. in conclusion, these self-reported ht data corresponded well with gynecologists' reports.
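the agreement statistics in the validation abstract above, proportion of agreement and kappa, come straight from a square contingency table of self-reports against prescription records. a minimal sketch with hypothetical counts:

```python
import numpy as np

def agreement_stats(table):
    """Observed proportion of agreement and Cohen's kappa
    for a square rater-by-rater contingency table."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_obs = np.trace(t) / n
    p_exp = float((t.sum(axis=0) * t.sum(axis=1)).sum()) / n ** 2
    return p_obs, (p_obs - p_exp) / (1 - p_exp)

# Hypothetical ever/never HT use: rows = self-report, columns = gynecologist.
print(agreement_stats([[412, 23],
                       [31, 198]]))
```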
background: legionnaires' disease (ld) is a pneumonia of low incidence. however, the impact of an outbreak can be substantial. objective: to stop a possible outbreak at an early stage, an outbreak detection programme was installed in the netherlands and evaluated after two years. design: the programme was installed nationally and consisted of sampling and control of potential sources to which ld patients had been exposed during their incubation period. potential sources were considered to be true sources of infection if two or more ld patients (a cluster) had visited them, or if available patients' and environmental strains were indistinguishable by amplified fragment length polymorphism genotyping. all municipal health services of the netherlands participated in the study. the regional public health laboratory kennemerland sampled potential sources and cultured samples for legionella spp. results: rapid sampling and genotyping as well as cluster recognition helped to target control measures. despite these measures, two small outbreaks were only stopped after renewal of the water system. the combination of genotyping and cluster recognition led to of ( %) patient-source associations. conclusion and discussion: systematic sampling and cluster recognition can contribute to ld outbreak detection and control. this programme can cost-effectively lead to secondary prevention.

during follow-up ( - ), primary invasive breast cancers occurred. results: compared with hrt never-use, use of estrogen alone was associated with a significant . -fold increased risk. the association of estrogen-progestagen combinations with breast cancer risk varied significantly according to the type of progestagen: while there was no increase in risk with estrogen-progesterone (rr . [ . - . ]), estrogen-dydrogesterone was associated with a significant . -fold increase, and estrogen combined with other synthetic progestins with a significant . -fold increase. although the latter type of hrt involves a variety of different progestins, their associations with breast cancer risk did not differ significantly from one another. rrs did not vary significantly according to the route of estrogen administration (oral or transdermal/percutaneous). conclusion and discussion: progesterone rather than synthetic progestins may be preferred when an opposed estrogen therapy is to be prescribed. additional results on estrogen-progesterone are needed.

background: although survival of hodgkin's lymphoma (hl) is high (> %), treatment may cause long-term side-effects like premature menopause. objectives: to assess therapy-related risk factors for premature menopause (age < ) following hl. design and methods: we conducted a cohort study among female -year hl survivors, aged < at diagnosis and treated between and . patients were followed from first treatment until june , menopause, death, or age . the cumulative dose of various chemotherapeutic agents as well as radiation fields were studied as risk factors for premature menopause. cox regression was used to adjust for age, year of treatment, smoking, bmi, and oral contraceptive use. results: after a median follow-up of . years, ( %) women reached premature menopause. overall, women ( %) were treated with chemotherapy only, ( %) with radiotherapy only and ( %) with both radio- and chemotherapy. exposure to procarbazine, cyclophosphamide (hr . [ . - . ]) and irradiation of the ovaries were associated with significantly increased risks of premature menopause. for procarbazine a dose-response relation was observed. procarbazine use has decreased over time.
conclusion: to decrease the risk of premature menopause after hl, procarbazine and cyclophosphamide exposure should be minimized and ovarian irradiation should be avoided.

background: casale is an italian town where a large asbestos cement plant was active for decades. previous studies found an increased risk of mesothelioma in residents, suggesting a decreasing spatial trend with distance from the plant. objective: to analyse the spatial variation of risk in casale and the surrounding area (approximately , inhabitants), focussing on non-occupationally exposed individuals. design/methods: population-based case-control study including pleural mesotheliomas diagnosed between and . information on the cases and controls comprised lifelong residential and occupational history of subjects and their relatives. nonparametric tests of clustering were used to evaluate spatial aggregation. parametric spatial models based on the distance between the longest-lasting subject residence (excluding the last years before diagnosis) and the source enabled estimation of the risk gradient. results: mesothelioma risk appeared higher in an area of roughly - km radius from the source. spatial clustering was statistically significant (p = . ) and several clusters of cases were identified within casale. risk was highly related to the distance from the source; the best-fitting model was exponential decay with a threshold. conclusion/discussion: asbestos pollution has dramatically increased the risk of mesothelioma in the area around casale. risk decreases slowly with the square of the distance from the plant.

malaria control programmes targeting malaria transmission from man to mosquito can have a large impact on malaria morbidity and mortality. to successfully interrupt transmission, a thorough understanding of disease and transmission parameters is essential. our objective was to map malaria transmission and analyse microenvironmental factors influencing this transmission in order to select high-risk areas where transmission-reducing interventions can be introduced. each house in the village msitu-wa-tembo was mapped and censused. transmission intensity was estimated from weekly mosquito catches. malaria cases identified through passive case detection were mapped by residence using gis software, and the incidence of cases by season and distance to the river was calculated. the distribution of malaria cases showed a clear seasonal pattern, with the majority of cases during the rainy season (chi-square = . , p< . ). living further away from the river (p = . ) was the most notable independent protective factor for malaria infection. transmission intensity was estimated at . ( % ci . - . ) infectious bites per person per year. we show that malaria in the study area is restricted to a short transmission season. spatial clustering of cases indicates that interventions should be planned in the area closest to the river, prior to and during the rainy season.
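the 'exponential decay with threshold' model reported in the casale abstract above can be illustrated as a nonlinear least-squares fit; the functional form below is one plausible reading of that description, not the authors' specification, and the distance-risk points are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def decay_with_threshold(d, rr0, lam, d0):
    """Relative risk at distance d (km): flat at rr0 up to the threshold d0,
    then exponential decay towards the background level of 1."""
    return 1 + (rr0 - 1) * np.exp(-lam * np.clip(d - d0, 0, None))

# Hypothetical distance-specific relative risks of mesothelioma.
d = np.array([0.5, 1, 2, 3, 5, 8, 12, 20])
rr = np.array([22, 21, 15, 9, 4.5, 2.2, 1.4, 1.1])

params, _ = curve_fit(decay_with_threshold, d, rr, p0=[20.0, 0.5, 1.0])
print(dict(zip(["rr0", "lam", "d0"], params)))
```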
background: the effectiveness of influenza vaccination of elders has been the subject of some dispute. its impact on health inequalities also demands epidemiological assessment, as health interventions may reach better-off social strata earlier and most intensely. objectives: to compare pneumonia and influenza (p&i) mortality of elders (aged or more years) before and after the onset of a large-scale scheme of vaccination in sao paulo, brazil. methods: official information on deaths and population allowed the study of p&i mortality at the inner-city area level. rates for the period to , during which vaccination coverage was higher than % of elders, were compared with figures for the preceding period. the appraisal of the mortality decrease used a geo-referenced model for regression analysis. results: overall p&i mortality fell by . % after vaccination. the number of outbreaks, the excess deaths during epidemic weeks, and the proportional p&i mortality ratio also decreased significantly after vaccination. besides having higher prior levels of p&i deaths, deprived areas of the city presented a higher proportional decrease of mortality. conclusion: influenza vaccination contributed to an overall reduction of p&i mortality, while reducing the gap in the experience of disease among social strata.

background: alcohol's first metabolite, acetaldehyde, may trigger aberrations in dna which predispose to developing colorectal cancer (crc) through several distinct pathways. our objective was to study associations between alcohol consumption and the risk of crc, according to two pathways characterized by mutations in the apc and k-ras genes, and absence of hmlh expression. methods: in the netherlands cohort study, , men and women, aged - years, completed a questionnaire on risk factors for cancer in . case-cohort analyses were conducted using crc cases with complete data after . years of follow-up, excluding the first . years. gender-specific adjusted incidence rate ratios (rr) and % confidence intervals (ci) were estimated. results: neither total alcohol, nor beer, wine or liquor consumption was clearly associated with the risk of colorectal tumors lacking hmlh expression or harboring a truncating apc mutation and/or an activating k-ras mutation. in men and women, total alcohol consumption above g/day was associated with an increased risk of crc harboring a truncating apc and/or activating k-ras mutation, though not statistically significantly (rr: . ( % ci: . - . ) in men; rr: . ( % ci: . - . ) in women). in conclusion, alcohol consumption is not involved in the studied pathways leading to crc.

background: educational level is commonly used to identify social groups with an increased prevalence of smoking. other indicators of socioeconomic status (ses) might, however, be more discriminatory. objective: this study examines to what extent smoking behaviour is related to other ses indicators, such as labour market position and financial situation. methods: data derived from the european household panel, which includes data on smoking for european countries. we selected data for , respondents aged - years. the association between ses indicators and smoking prevalence was examined through logistic regression analyses. results: preliminary results show that, in univariate analysis, all selected ses indicators were associated with smoking. higher rates of smoking in lower social groups were observed in all countries, except for women in some mediterranean countries. in multivariate analyses, education retained an independent effect on smoking. no strong effect was observed for labour market position (occupational class, employment status) or for income. however, smoking prevalence was strongly related to economic deprivation and housing tenure. conclusion: these results suggest that different aspects of people's ses affect their smoking behaviour. interventions that aim to tackle smoking among high-risk groups should identify risk groups in terms of both education and material deprivation.
objective: we investigated time trends in overweight and leisure-time physical activity (ltpa) in the netherlands since . intra-national differences were examined stratified by sex, age and urbanisation degree. design and methods: we used a random sample from the health interview survey of about respondents, aged - years. self-reported data on weight, height and demographic characteristics were gathered through interviews (yearly), and data on ltpa were collected by self-administered questionnaires. linear regression was performed for trend analyses. results: during - , mean body mass index (bmi) increased by . kg/m² (p = . ). trends were similar across sexes and urbanisation degrees. in - -year-old women, mean bmi increased more ( . kg/m²; p = . ) than in older women. concerning ltpa, no clear trend was observed during - and - . however, younger women spent approximately min/wk less on ltpa than older women, while this difference was smaller during - . conclusions: mean bmi increased more in younger women, which is consistent with the observation that this group spent less time on ltpa during recent years. although the overall increase in overweight could not be explained by trends in ltpa, physical activity interventions should target the younger women.

background: prediction rules combine patient characteristics and test results to predict the presence of an outcome (diagnosis) or the occurrence of an outcome (prognosis) for individual patients. when prediction rules show poor performance in new patients, investigators often develop a new rule, ignoring the prior information available in the original rule. recently, several updating methods have been proposed that consider both prior information and information from the new patients. objectives: to compare five updating methods (that vary in extensiveness) for an existing prediction rule that preoperatively predicts the risk of severe postoperative pain (spp). design and methods: the rule was tested and updated on a validation set of new surgical patients ( ( %) with spp). we estimated the discrimination (the ability to discriminate between patients with and without spp) and calibration (the agreement between the predicted risks and observed frequencies of spp) of the five updated rules in other patients ( ( %) with spp). results: simple updating methods showed similar effects on calibration and discrimination as the more complex methods. discussion and conclusion: when the performance of a prediction rule in new patients is poor, a simple updating method can be applied to improve the predictive accuracy.
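the simplest of the updating methods compared above amount to recalibrating the original rule's linear predictor on the new patients rather than refitting all covariates; a minimal sketch, with a hypothetical linear predictor and outcome vector:

```python
import numpy as np
import statsmodels.api as sm

# Log-odds from the original rule, evaluated for the new patients
# (hypothetical), and the observed outcome (severe postoperative pain, 1/0).
lp = np.array([-2.3, -1.6, -1.1, -0.4, 0.1, 0.3, 0.8, 1.2, 1.6, -0.7])
y = np.array([0, 0, 1, 0, 0, 1, 1, 0, 1, 1])

# The very simplest method would re-estimate the intercept only (slope fixed
# at 1); here we refit an intercept plus one overall calibration slope.
X = sm.add_constant(lp)
update = sm.Logit(y, X).fit(disp=0)
print(update.params)   # updated intercept and calibration slope
```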
about two million ethnic germans (aussiedler) have resettled in germany since . analyses with a yet incomplete follow-up of a cohort of aussiedler showed a different mortality compared to russia and germany. objectives: we investigated whether the mortality pattern changed after a complete follow-up and whether residential mobility after resettlement has an effect on mortality. we established a cohort of aussiedler who moved to germany between and . we calculated smrs for all causes, external causes, cardiovascular deaths and cancer in comparison to german rates. results: with a complete follow-up, the cohort accumulated person-years. overall, deaths were observed (smr . , % ci: . - . ). smrs for all external causes, all cancer and cardiovascular deaths were . , . and . , respectively. an increased number of moves within germany was associated with increased mortality. conclusion and discussion: the mortality in the cohort is surprisingly low, in particular for cardiovascular deaths. there is a mortality disadvantage from external causes and for some selected cancers. this disadvantage is, however, not as large as would be expected if aussiedler were representative of the general population in fsu countries. mobility as an indicator of lesser integration will be discussed.

background: breast cancer screening (bcs) provides an opportunity to analyze the relationship between lymph node involvement (ln), the most important prognostic factor, and biological and time-dependent characteristics. objective: our aim was to assess those characteristics that are associated with ln in a cohort of screen-detected breast cancers. methods: observational population study of all invasive cancers within stage i-iiia detected from to through a bcs program in catalonia (spain). age, tumor size, histological grade, ln status and screening episode (prevalent or incident) were analyzed. pearson chi-square or fisher's exact test, the mann-whitney test and stratified analyses were applied, as well as multiple logistic regression techniques. results: twenty-nine percent ( % ci . - . %) of invasive cancers had ln and . % were prevalent cancers. in the bivariate analysis, tumor size and age were strongly associated with ln (p< . ), while grade was related to ln only in incident cancers (p = . ). grade was associated with tumor size (p = . ) and with screening episode (p = . ). adjusting for screening episode and grade, age and tumor size were independent predictors of ln. conclusion: age and tumor size are independent predictors of ln. grade emerges as an important biological factor in incident cancers.

background: the evidence regarding the association between smoking and cognitive function in the elderly is inconsistent. objectives: to examine the association between smoking and cognitive function in the elderly. design and methods: in , all participants of a population-based cohort study aged years or older were eligible for a telephone interview on cognitive function using validated instruments, such as the telephone interview of cognitive status (tics). information on smoking history was available from questionnaires administered in . we estimated the odds ratios (or) of cognitive impairment (below the th percentile) and the corresponding % confidence intervals (ci) by means of logistic regression, adjusting for age, sex, alcohol consumption, body mass index, physical exercise, educational level, depressive symptoms and co-morbidity. results: in total, participants were interviewed and had complete information on smoking history. former smokers had a lower prevalence of cognitive impairment (adjusted or = . ; % ci: . - . ) compared with never smokers, but current smokers did not (adjusted or = . ; % ci: . - . ). conclusion: there is no association between current smoking and cognitive impairment in the elderly. discussion: the lack of association between current smoking and cognitive impairment is in line with previous non-prospective studies. the inverse association with former smoking might be due to smoking cessation associated with co-morbidities.

background: many studies have reported late effects of treatment in childhood cancer survivors. most studies, however, focused on only one late effect or suffered from incomplete follow-up.
objectives: we assessed the total burden of adverse events (aes) and determined treatment-related risk factors for the development of various aes. methods: the study cohort included -year survivors treated in the emma children's hospital amc in the netherlands between and . aes were graded for severity by one reviewer according to the common terminology criteria for adverse events, version . . results: medical follow-up data were complete for . % of -year survivors. the median follow-up time was years. almost % of survivors had one or more aes, and . % even had or more aes. of patients treated with radiotherapy (rt) alone, % had a high or severe burden of aes, while this was only % in patients treated with chemotherapy (ct) alone. rt was associated with the highest risk of developing an ae of at least grade , and was also associated with a greater risk of developing a medium to severe ae burden. conclusions: survivors are at increased risk for many health problems that may adversely affect their quality of life and long-term survival.

background: studies in the past demonstrated that multifaceted interventions could enhance the quality of diabetes care. however, many of these studies had methodological flaws, as no corrections were made for patient case-mix and clustering, or a non-randomised design was used. objective: to assess the efficacy of a multifaceted intervention to implement diabetes shared care guidelines. methods: a cluster randomised controlled trial of patients with type diabetes was conducted at general practices (n = ) and one outpatient clinic (n = ). in primary care, facilitators analysed barriers to change, introduced structured care, gave feedback and trained practice staff. at the outpatient clinic, an abstract of the guidelines was distributed. case-mix differences such as duration of diabetes, education, co-morbidity and quality of life were taken into account. results: in primary care, more patients in the intervention group were seen on a three-monthly basis ( . % versus . %, p< . ) and their hba1c was statistically significantly lower ( . ± . versus . ± . , p< . ). however, significance was lost after correction for case-mix (p = . ). changes in blood pressure and total cholesterol were not significant. we were unable to demonstrate any change in secondary care. conclusion: multifaceted implementation did improve the process of care but left cardiovascular risk unchanged.
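a cluster randomised trial like the one above needs an analysis that respects the clustering of patients within practices; one standard option (not necessarily the authors' model) is a gee with an exchangeable working correlation, sketched here on hypothetical data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical patients clustered within practices: outcome is whether the
# patient was reviewed three-monthly (1/0), by trial arm.
df = pd.DataFrame({
    "practice": [1, 1, 1, 2, 2, 3, 3, 3, 4, 4],
    "arm":      [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "reviewed": [1, 1, 0, 1, 1, 0, 1, 0, 0, 0],
})
X = sm.add_constant(df["arm"])
gee = sm.GEE(df["reviewed"], X, groups=df["practice"],
             family=sm.families.Binomial(),
             cov_struct=sm.cov_struct.Exchangeable()).fit()
print(np.exp(gee.params["arm"]))   # odds ratio with cluster-robust inference
```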
background: we have performed a meta-analysis including studies on the diagnostic accuracy of mr-mammography in patients referred for further characterization of suspicious breast lesions. using the bivariate diagnostic meta-analysis approach, we found an overall sensitivity and specificity of . and . , respectively. the aim of the present analysis was to detect heterogeneity between studies. materials and methods: seventeen study and population characteristics were separately included in the bivariate model to compare sensitivity and specificity between strata of the characteristics. results: both sensitivity and specificity were higher in studies performed in the united states compared to non-united states studies. both estimates were also higher if two criteria for malignancy were used instead of one or three. only specificity was affected by the prevalence of cancer: specificity was highest in studies with the lowest prevalence of cancer in the study population. furthermore, specificity was affected by whether the radiologist was blinded to clinical information: specificity was higher if there was no blinding. conclusions: variation was notably present across studies in country of publication, the number of criteria for malignancy, the prevalence of cancer and whether the observers were blinded to clinical information.

objective: the aim of this project is to explore variation in three candidate genes involved in cholesterol metabolism in relation to the risk of acute coronary syndrome (acs), and to investigate whether dietary fat intake modifies inherent genetic risks. study population: a case-cohort study is designed within the danish 'diet, cancer and health' study population. a total of cases of acs have been identified among , men and women who participated in a baseline examination between and , when they were aged - years. a random sample of participants will serve as the 'control' population. exposures: all participants have filled out a detailed -item food frequency questionnaire and a questionnaire concerning lifestyle factors. participants were asked to provide a blood sample. candidate genes for acs have been selected among those involved in cholesterol transport (atp-binding cassette transporter a , cholesterol-ester transfer protein, and acyl-coa:cholesterol acyltransferase ). five single nucleotide polymorphisms (snps) will be genotyped within each gene. snps will be selected among those with demonstrated functional importance, as assessed in public databases. methods: statistical analyses of the association between genetic variation in the three chosen genes and risk of acs. exploration of methods to evaluate biological interaction will be a particular focus.

background: c-reactive protein (crp) has been shown to be associated with type diabetes mellitus. it is unclear whether the association is completely explained by obesity. objective: to examine whether crp is associated with diabetes independent of obesity. design and methods: we measured baseline characteristics and serum crp in non-diabetic participants of the rotterdam study and followed them for a mean of . years. cox regression was used to estimate the hazard ratio. in addition, we performed a meta-analysis of published studies. results: during follow-up, participants developed diabetes. serum crp was significantly and positively associated with the risk of developing diabetes. the risk estimates attenuated but remained statistically significant after adjustment for obesity indexes. age-, sex- and body mass index (bmi)-adjusted hazard ratios ( % ci) were . ( . - . ) for the fourth quartile, . ( . - . ) for the third quartile, and . ( . - . ) for the second quartile of serum crp compared to the first quartile. in the meta-analysis, the weighted age-, sex-, and bmi-adjusted risk ratio was . ( . - . ) for the highest crp category (> . mg/l) compared to the reference category (< . mg/l). conclusion: our findings show that the association of serum crp with diabetes is independent of obesity.
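cox regression for an incident outcome, as used in the crp abstract above, can be sketched with the lifelines library; the toy data below are hypothetical and far too small for a real analysis, so treat the call pattern, not the output, as the point.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical follow-up data: years to diabetes or censoring, event flag,
# an indicator for the top CRP quartile, and an adjustment covariate.
df = pd.DataFrame({
    "time":   [9.2, 4.1, 8.7, 2.3, 9.2, 6.5, 7.8, 3.9],
    "event":  [0, 1, 0, 1, 0, 1, 0, 1],
    "crp_q4": [0, 1, 0, 1, 1, 0, 0, 1],
    "age":    [61, 68, 72, 59, 64, 70, 66, 63],
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(np.exp(cph.params_))   # adjusted hazard ratios
```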
background: the effectiveness of screening can be predicted by episode sensitivity, which is estimated from interval cancers following a screen. full-field digital or cr-plate mammography is increasingly being introduced into mammography screening. objectives: to develop a design to compare performance and validity between screen-film and digital mammography in a breast cancer screening program. methods: interval cancer incidence was estimated by linking screening visits from - at an individual level to the files of the cancer registry in finland. these data were used to estimate the study size requirements for analyzing differences in episode sensitivity between screen-film and digital mammography in a randomized setting. results: the two-year cumulative incidence of interval cancers per screening visits was estimated to be . to allow the maximum acceptable difference in episode sensitivity between the screen-film and digital arms to be % ( % power, . significance level, : randomization ratio, % attendance rate), approximately women need to be invited. conclusion: only fairly large differences in episode sensitivity can be explored within a single randomized study. in order to reduce the degree of non-inferiority between screen-film and digital mammography, meta-analyses or pooled analyses with other randomized data are needed.

background: tackling urban/rural inequalities in health has been identified as a substantial challenge in reforming the health system in lithuania. objectives: to assess mortality trends for major causes of death in the lithuanian urban and rural populations throughout the period - . methods: information on major causes of death (cardiovascular diseases, cancers, external causes, and respiratory diseases) in the lithuanian urban and rural populations from to was obtained from the lithuanian department of statistics. mortality rates were age-standardized using the european standard. mortality trends were explored using logarithmic regression analysis. results: overall mortality of the lithuanian urban and rural populations decreased statistically significantly during - . a more considerable decrease was observed in urban areas, where mortality declined by . % per year in males and . % in females, compared to declines of . % in males and . % in females in rural areas. the most notable urban/rural differences in mortality trends, with an unfavourable situation in rural areas, were found for mortality from stroke, breast cancer in females, and external causes of death (traffic accidents and suicides).

background: recent studies indicate that depression plays an important role in the occurrence of cardiovascular diseases (cvd). the underlying mechanisms are not well understood. objectives: we investigated whether low intake of omega- fatty acids (fas) is a common cause of depression and cvd. methods: the zutphen elderly study is a prospective cohort study conducted in the netherlands. depressive symptoms were measured with the zung scale in men, aged - years, and free from cvd and diabetes in . dietary factors were assessed with a cross-check dietary history method. results: compared to high intake (≥ . mg/d), low intake (< . mg/d) of omega- fas, adjusted for demographics and cvd risk factors, was associated with an increased risk of depressive symptoms (or . ; % ci . - . ) at baseline, and non-significantly with -year cvd mortality (hr . ; % ci . - . ). the adjusted hr for a -point increase in depressive symptoms for cvd mortality was . ( % ci . - . ), and did not change after additional adjustment for omega- fas. conclusion: low intake of omega- fas may increase the risk of depression. our results, however, do not support the hypothesis that low intake of omega- fas explains the relation between depression and increased risk of cvd.
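the 'logarithmic regression' used for the lithuanian mortality trends above is commonly implemented as a log-linear fit of rates on calendar year, from which the estimated annual percent change follows; the rates below are hypothetical.

```python
import numpy as np

# Hypothetical age-standardized mortality rates per 100,000, by year.
years = np.arange(1993, 2005)
rates = np.array([980, 955, 940, 910, 905, 880, 860, 845, 830, 815, 800, 790.0])

# log(rate) = a + b * year; the annual percent change is exp(b) - 1.
b, a = np.polyfit(years, np.log(rates), 1)
print(f"annual change: {100 * (np.exp(b) - 1):.2f}% per year")
```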
background: during the last decades the incidence of the metabolic syndrome has risen dramatically. several studies have shown beneficial effects of nut and seed intake on components of this syndrome. the relationship with the prevalence of the metabolic syndrome has not yet been examined. objectives: we studied the relation between nut and seed intake and the metabolic syndrome in coronary patients. design and methods: the presence of the metabolic syndrome (according to the international diabetes federation definition) was assessed in stable myocardial infarction patients ( % men) aged - years, as part of the alpha omega trial. dietary data were collected by food-frequency questionnaire. results: the prevalence of the metabolic syndrome was %. median nut and seed intake was . g/day (interquartile range, . - . g/day). intake of nuts and seeds was inversely associated with the metabolic syndrome (prevalence ratio: . ; % confidence interval: . - . , for > g/day versus < g/day), after adjustment for age, gender, dietary and lifestyle factors. the prevalence of the metabolic syndrome was % lower (p = . ) in men with a high nut and seed intake compared to men with a low intake, after adjustment for confounders. conclusion and discussion: intake of nuts and seeds may reduce the risk of the metabolic syndrome in stable coronary patients.

background: in epidemiology, interaction is often assessed by adding a product term to the regression model. in linear regression the regression coefficient of the product term refers to additive interaction. however, in logistic regression it refers to multiplicative interaction. rothman has argued that interaction as departure from additivity better reflects biological interaction. hosmer and lemeshow presented a method to quantify additive interaction and its confidence interval (ci) between two dichotomous determinants using logistic regression. objective: our objective was to provide an estimation method for additive interaction between continuous determinants. methods and results: from the abovementioned literature we derived the formulas to quantify additive interaction and its ci between one continuous and one dichotomous determinant, and between two continuous determinants, using logistic regression. to illustrate the theory, data from the utrecht health project were used, with age and body mass index as risk factors for diastolic blood pressure. conclusions: this paper will help epidemiologists to estimate interaction as departure from additivity. to facilitate its use, we developed a spreadsheet, which will become freely available at our website.
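the additive-interaction measure behind the abstract above is typically the relative excess risk due to interaction (reri), computed from the coefficients of a logistic model with a product term; the sketch below gives the point estimate only (the paper's contribution, confidence intervals for continuous determinants, adds a delta-method step on top), and the coefficients are hypothetical.

```python
from math import exp

def reri(b1, b2, b12):
    """Relative excess risk due to interaction from logistic-regression
    coefficients b1, b2 (main effects) and b12 (product term):
    RERI = RR11 - RR10 - RR01 + 1; zero means exactly additive effects."""
    return exp(b1 + b2 + b12) - exp(b1) - exp(b2) + 1

# Hypothetical coefficients; for continuous determinants, first scale each
# coefficient by the contrast of interest (e.g. +5 kg/m2 BMI, +10 years age).
print(reri(0.41, 0.62, 0.33))
```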
background: the incidences of acute myocardial infarction (ami) and ischemic stroke (is) in finland have been among the highest in the world. accurate geo-referenced epidemiological data in finland provide unique possibilities for ecological studies using bayesian spatial models. objectives: to examine sex-specific geographical patterns and temporal variation of ami and is. design and methods: ami (n = , ) and is (n = , ) cases in - in finland were localized at the time of diagnosis according to the place-of-residence address using map coordinates. cases and population were aggregated to km x km grids. fully bayesian conditional autoregressive (car) models were used for studying the geographic incidence patterns. results: the incidence patterns of ami and is showed on average % ( % ci - %) common geographic variation, while the rest of the variation was disease-specific. there was no significant difference between the sexes. the patterns of high-risk areas have persisted over the years, and the pattern of is showed more annual random fluctuations. conclusions: although ami and is showed rather similar and temporally stable patterns, a significant part of the spatial variation was disease-specific. further studies are needed to find the reasons for the disease-specific geographical variation.

most studies addressing socio-economic inequalities in health services use fail to take into account the disease the patient is suffering from. the objective of this study is to compare possible socioeconomic differences in the use of ambulatory care between distinct patient groups: diabetics and patients with migraine. data were obtained from the belgian health interview surveys of , and . in total, patients with self-reported diabetes or migraine were identified. in a multilevel analysis, the probability of a contact and the volume of contacts with the general practitioner and/or the specialist were studied for both groups in relation to educational attainment and income. adjustment was made for age, sex, subjective health and comorbidity at the individual level, and doctors' density and region at the district level. no socio-economic differences were observed among diabetic patients. among patients with migraine we observed a higher probability of specialist contacts in higher income groups (or . ; % ci . - . ) and higher educated persons (or . ; % ci . - . ), while lower educated persons tended to report more visits to the general practitioner. to correctly interpret socio-economic differences in the use of health services, there is a need to take into account information on the patient's type of disease.

background: the suitability of non-randomised studies to assess effects of interventions has been debated for a long time, mainly because of the risk of confounding by indication. choices in the design and analytical phase of non-randomised studies determine the ability to control for such confounding, but have not been systematically compared yet. objective: the aim of the study will be to quantify the role of design and analytical factors on confounding in non-randomised studies. design and methods: a meta-regression analysis will be conducted, based on cohort and case-control studies analysed in a recent cochrane review on influenza vaccine effectiveness against hospitalisation or death in the elderly. the primary outcome will be the degree of confounding, as measured by the difference between the reported effect estimate (odds ratios or relative risks) and the best available estimate (nichol, unpublished data). design factors that will be considered include study design, matching, restriction and availability of confounders. statistical techniques that will be evaluated include multivariate regression analysis with adjustment for confounders, stratification and, if available, propensity scores. results: the results will be used to develop a generic guideline with recommendations on how to prevent confounding by indication in non-randomised effect studies.

the wreckage of the oil tanker prestige in november produced heavy contamination of the coast of galicia (spain). we studied relationships between participation in clean-up work and respiratory symptoms in local fishermen. questionnaires including details of clean-up activities and respiratory symptoms were distributed among associates of fishermen's cooperatives, with postal and telephone follow-up. statistical associations were evaluated using multiple logistic regression analyses, adjusted for sex, age, and smoking status.
between january and february , information was obtained from , fishermen (response rate %). sixty-three percent had participated in clean-up operations. lower respiratory tract symptoms were more prevalent in clean-up workers (odds ratio (or) . ; % confidence interval . - . ). the risk increased when the number of exposed days, number of hours per day, or number of activities increased (p for linear trend < . ). the excess risk decreased when more time had elapsed since last exposure (or . ( . - . ) and . ( . - . ) for more and less than months, respectively; p for interaction < . ). in conclusion, fishermen who participated in the clean-up work of the prestige oil spill show a prolonged dose-dependent increased prevalence of respiratory symptoms one to two years after the beginning of the spill.

background: hpv testing has been proposed for cervical cancer screening. objectives: to evaluate the protection provided by hpv testing at long intervals vs. cytology every third year. methods: randomised controlled trial. conventional arm: conventional cytology. experimental arm: in phase , hpv and liquid-based cytology; hpv-positive cytology-negatives were referred for colposcopy if aged - , and for repeat testing after one year if aged - . in phase , hpv alone; positives were referred for colposcopy independently of age. endpoint: histologically confirmed cervical intraepithelial neoplasia (cin) grade or more. results: overall , women were randomised. preliminary data at recruitment are presented. overall, among women aged - years the relative sensitivity of hpv versus conventional cytology was . ( % c.i. . - . ) and the relative positive predictive value (ppv) was . ( % c.i. . - . ). among women aged - the relative sensitivity of hpv vs. conventional cytology was . ( % c.i. . - . ) during phase but . ( % c.i. . - . ) during phase . conclusions: hpv testing increased cross-sectional sensitivity, but reduced ppv. in younger women the data suggest that direct referral of hpv-positives to colposcopy results in relevant overdiagnosis of regressive lesions. measuring the detection rate of cin at the following screening round will allow study of overdiagnosis and of the possibility of longer screening intervals.

background: plant lignans are present in foods such as whole grains, seeds, fruits and vegetables, and beverages. they are converted by intestinal bacteria into the enterolignans, enterodiol and enterolactone. enterolignans possess several biological activities whereby they may influence carcinogenesis. objective: to study the association between plasma enterolignans and the risk of colorectal adenomas. colorectal adenomas are considered to be precursors of colorectal cancer. design and methods: the case-control study included cases with at least one histologically confirmed colorectal adenoma and controls with no history of any type of adenoma. associations between plasma enterolignans and colorectal adenomas were analyzed by logistic regression. results: associations were stronger for incident than for prevalent cases. when only incident cases (n = ) were included, high compared to low enterodiol plasma concentrations were associated with a reduction in colorectal adenoma risk after adjustment for confounding variables. enterodiol odds ratios ( % ci) were . , . ( . - . ), . ( . - . ), . ( . - . ), with a significant trend (p = . ) through the quartiles. although enterolactone plasma concentrations were -fold higher, enterolactone's reduction in risk was not statistically significant (p for trend = . ).
conclusion: we observed a substantial reduction in colorectal adenoma risk among subjects with high plasma concentrations of enterolignans, in particular enterodiol.

background: smoking is a risk factor for tuberculosis disease. recently the question was raised whether smoking also increases the risk of tuberculosis infection. objective: to assess the influence of environmental tobacco smoke (ets) exposure in the household on tuberculosis infection in children. design and methods: a cross-sectional community survey was done and information on children was obtained. tuberculosis infection was determined with a tuberculin skin test (tst) (cut-off mm) and information on smoking habits was obtained from all adult household members. univariate and multivariate analyses were performed, and the odds ratio (or) was adjusted for the presence of a tb contact in the household, crowding and age of the child. results: ets was a risk factor for tuberculosis infection (or: . , % ci: . - . ) when all children with a tst read between two and five days were included. the adjusted or was . ( % ci: . - . ). in dwellings where a tuberculosis case had lived, the association was strongest (adjusted or . , % ci: . - . ). conclusion and discussion: ets exposure seems to be a risk factor for tuberculosis infection in children. this is of great concern considering the high prevalence of smoking and tuberculosis in developing countries.

background and objective: to implement a simulation model to analyze demand and waiting time (wt) for knee arthroplasty and to compare a waiting-list prioritization system (ps) with the usual first-in, first-out (fifo) system. methods: parameters for the conceptual model were estimated using administrative data and specific studies. a discrete-event simulation model was implemented to perform -year simulations. the benefit of applying the ps was calculated as the difference in wt weighted by priority score between disciplines, for all cases who entered the waiting list. results: wt for patients operated under the fifo discipline was homogeneous (standard deviation (sd) between . - . months) with a mean of . . wt under the ps had higher variability (sd between . - . ) and was positively skewed, with a mean of . months and % of cases over months. when wt was weighted by priority score, the ps saved . months ( % ci . - . ) on average. the ps was favorable for patients with priority scores over , but penalized those with lower priority scores. conclusions: management of the waiting list for knee arthroplasty through a ps was globally more effective than through fifo, although patients with low priority scores were penalized with higher waiting times.
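the comparison described rests on a simple mechanism: one arrival stream served under two queue disciplines. the python below is a deliberately simplified sketch with invented arrival rate, surgical capacity and priority-score distribution; the authors' model is considerably richer (estimated parameters, priority updating, withdrawals).

```python
# minimal discrete-event sketch: same arrivals, fifo vs priority-score service.
import heapq, random

random.seed(1)
MONTHS, ARRIVALS_PER_MONTH, OPS_PER_MONTH = 120, 20, 18   # invented parameters

def simulate(use_priority):
    heap, waits = [], []          # heap entries: (sort key, arrival month, score)
    for month in range(MONTHS):
        for _ in range(ARRIVALS_PER_MONTH):
            score = random.randint(1, 100)           # higher score = more urgent
            key = -score if use_priority else month  # ps orders by score, fifo by arrival
            heapq.heappush(heap, (key, month, score))
        for _ in range(min(OPS_PER_MONTH, len(heap))):
            _, arrived, score = heapq.heappop(heap)
            waits.append((month - arrived, score))
    return waits

for label, flag in [("fifo", False), ("ps", True)]:
    w = simulate(flag)
    mean_wt = sum(t for t, _ in w) / len(w)
    weighted = sum(t * s for t, s in w) / sum(s for _, s in w)
    print(f"{label}: mean wait {mean_wt:.1f} months, "
          f"priority-weighted wait {weighted:.1f} months")
```

running this reproduces the qualitative pattern reported above: the priority discipline lowers the score-weighted waiting time at the price of higher variability and long waits for low-score patients.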
background: we developed a probabilistic linkage procedure for the linking of readmissions of newborns from the dutch paediatrician registry (lnr). % of all newborns ( . ) are admitted to a neonatal ward. the main problems were the unknown number of readmissions per child and the identification of admissions of twins. objective: to validate our linking procedure in a double-blinded study. design and methods: a random sample of admissions from children from the linked file was validated by the caregivers, using the original medical records. results: the response was %. for admissions of singletons the linkage contained no errors except for the small uncertain area of the linkage. for admissions of multiple births a high error rate was found. conclusion and discussion: we successfully linked the admissions of singleton newborns with the developed probabilistic linking algorithm. for multiple births we did not succeed in constructing valid admission histories, due to the low data quality of the twin membership variables. validation showed alternative solutions for linking twin admissions. we strongly recommend that linkage results should always be externally validated.

background: salmonella typhimurium definitive phage type (dt) has emerged as an important pathogen in the last two decades. a -fold increase in cases in the netherlands during september-november prompted an outbreak investigation. objective: the objective was to identify the source of infection to enable preventive measures. methods: a subset of outbreak isolates was typed by molecular means. in a case-control study, cases (n = ) and matched population controls (n = ) were invited to complete self-administered questionnaires. results: the molecular typing corroborated the clonality of the isolates. the molecular type was similar to that of a recent s. typhimurium dt outbreak in denmark associated with imported beef. the incriminated shipment was traced after having been distributed sequentially through several eu member states. sampling of the beef identified s. typhimurium dt of the same molecular type as the outbreak isolates. cases were more likely than controls to have eaten a particular raw beef product. conclusions: our preliminary results are consistent with this s. typhimurium dt outbreak being caused by contaminated beef. our findings underline the importance of european collaboration, traceability of consumer products and a need for timely intervention in distribution chains.

background: heavy metals may affect newborns. some of them are present in tobacco smoke. objectives: to estimate cord-blood levels of mercury, arsenic, lead and cadmium in newborns in areas in madrid, and to assess the relationship with maternal tobacco exposure. design and methods: the bio-madrid study obtained cord-blood samples from recruited trios (mother/father/newborn). cold-vapor atomic absorption spectrophotometry (aas) was used to measure mercury, and graphite-furnace aas for the other metals. mothers answered a questionnaire including tobacco exposure. medians, means and standard errors were calculated, and logistic regression was used to estimate ors. results: median levels for mercury and lead were . mg/l and . mg/l. arsenic and cadmium were undetectable in % and % of samples. preliminary analysis showed a significant association of maternal tobacco exposure with levels of arsenic (or: . ; % ci: . - . ), cadmium (or: . ; % ci: . - . ), and lead (or: . ; % ci: . - . ). smoking in pregnancy was associated with arsenic (or: . ; % ci: . - . ), while passive exposure was more related to lead (or: . ; % ci: . - . ) and cadmium (or: . ; % ci: . - . ). conclusion: madrid newborns have high cord-blood levels of mercury. tobacco exposure in pregnancy might increase levels of arsenic, cadmium and lead.

background: road traffic accidents (rta) are the leading cause of death for the young. rta police reports provide no health information other than the number of deaths and injured, while health databases have no information on the accident dynamics. the integration of the two databases would allow a better description of the phenomenon. nevertheless, the absence of common identification variables across the lists makes deterministic record linkage (rl) impossible. objective: to test the feasibility of a probabilistic rl between rta and health information when personal identifiers are lacking. methods: health information came from the rta integrated surveillance for the year . it integrates data from ed visits, hospital discharges and death certificates. a deterministic rl was performed with police reports, where the name and age of the deceased were present. the results of the deterministic rl were then used as a gold standard to evaluate the performance of the probabilistic one. results: the deterministic rl resulted in ( . %) linked records. the probabilistic rl, where the name was omitted, was able to correctly identify ( . %). conclusions: the performance of the probabilistic rl was good. further work is needed to develop strategies for the use of this approach in the complete datasets.
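both linkage studies above rely on the same probabilistic machinery: field-by-field agreement weights summed into a match score and compared against thresholds. the python below is a minimal fellegi-sunter style sketch; the fields, m- and u-probabilities and thresholds are invented and are not those of the lnr or rta studies.

```python
# minimal fellegi-sunter match-weight sketch with invented parameters.
import math

M = {"birth_date": 0.95, "sex": 0.98, "municipality": 0.90}   # p(agree | true match)
U = {"birth_date": 0.01, "sex": 0.50, "municipality": 0.05}   # p(agree | non-match)

def match_weight(rec_a, rec_b):
    """sum of log-likelihood ratios over the comparison fields."""
    w = 0.0
    for field in M:
        if rec_a[field] == rec_b[field]:
            w += math.log2(M[field] / U[field])
        else:
            w += math.log2((1 - M[field]) / (1 - U[field]))
    return w

a = {"birth_date": "2003-05-12", "sex": "m", "municipality": "utrecht"}
b = {"birth_date": "2003-05-12", "sex": "m", "municipality": "utrecht"}
c = {"birth_date": "2003-07-02", "sex": "m", "municipality": "utrecht"}

for pair in [(a, b), (a, c)]:
    w = match_weight(*pair)
    verdict = "link" if w > 8 else ("possible" if w > 0 else "non-link")
    print(f"weight {w:5.2f} -> {verdict}")
```

in practice the m- and u-probabilities are estimated (e.g. by em) and the thresholds are calibrated against a validated subset, much as both studies above did against their gold standards.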
background: overweight constitutes a major public health problem. the prevalence of overweight is unequally distributed between socioeconomic groups. risk group identification, therefore, may enhance the efficiency of interventions. objectives: to identify which socioeconomic variable best predicts overweight in european populations: education, occupation or income. design: european community household panel data were obtained for countries (n = , ). overweight was defined as a body mass index >= kg/m. uni- and multivariate logistic regression analyses were employed to predict overweight in relation to socioeconomic indicators. results: major socioeconomic differences in overweight were observed, especially for women. for both sexes, low educational attainment was the strongest predictor of overweight. after control for confounders and the other socioeconomic predictors, the income gradient was either absent, positive (men) or negative (women) in most countries. similar patterns were found for occupational level. for women, inequalities in overweight were generally greater in southern european countries. conversely, for men, differences were generally greater in other european countries. conclusion: across europe, educational attainment most strongly predicts overweight. therefore, obesity interventions should target adults and children with lower levels of education.

background: because incidence and prevalence of most chronic diseases rise with age, their burden will increase in ageing populations. we report the increase in prevalence of myocardial infarction (mi), stroke (cva), diabetes type ii (dm) and copd based on the demographic changes in the netherlands. in addition, for mi and dm the effect of a rise in overweight was calculated. methods: calculations were made for the period - with a dynamic multi-state transition model and demographic projections of the cbs. results: based on ageing alone, between and the prevalence of dm will rise from . to . (+ %), prevalence of mi from . to . (+ %), stroke prevalence from . to . (+ %) and copd prevalence from . to . (+ %). a continuation of the dutch (rising) trend in overweight prevalence would in lead to about . diabetics (+ %). a trend resulting in american levels would lead to over million diabetics (+ %), while the impact on mi was much smaller: about . (+ %) in . conclusion: the burden of chronic disease will substantially increase in the near future. a rising prevalence of overweight has an impact especially on the future prevalence of diabetes.
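the projection logic of such a dynamic multi-state model reduces, in its simplest form, to marching a healthy/diseased compartment pair forward one year at a time. the python below is a minimal sketch under invented rates, without the age structure and cbs demographic projections the abstract describes.

```python
# minimal two-state prevalence projection; all rates are invented.
def project_prevalence(prev0, incidence, excess_mortality, background_mortality, years):
    """march a healthy/diseased two-state model forward one year at a time."""
    healthy, diseased = 1.0 - prev0, prev0
    for _ in range(years):
        new_cases = incidence * healthy
        healthy = (healthy - new_cases) * (1 - background_mortality)
        diseased = (diseased + new_cases) * (1 - background_mortality - excess_mortality)
        total = healthy + diseased          # renormalise to the surviving cohort
        healthy, diseased = healthy / total, diseased / total
    return diseased

base = project_prevalence(0.04, 0.005, 0.01, 0.02, 20)
high = project_prevalence(0.04, 0.007, 0.01, 0.02, 20)   # incidence up, e.g. rising overweight
print(f"projected prevalence: baseline {base:.3f}, high-incidence scenario {high:.3f}")
```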
background: there has been increasing concern about the effects of environmental endocrine disruptors (eeds) on human reproductive health. eeds include various industrial chemicals, as well as dietary phyto-estrogens. intra-uterine exposures to eeds are hypothesized to disturb normal foetal development of the male reproductive organs and, specifically, to increase the risk of cryptorchidism, hypospadias, testicular cancer, and reduced sperm quality in offspring. objective: to study the associations between maternal and paternal exposures to eeds and the risks of cryptorchidism, hypospadias, testicular cancer and reduced sperm quality. design and methods: these associations are studied using a case-referent design. in the first phase of the study, we collected questionnaire data from the parents of cases with cryptorchidism, cases with hypospadias and referent children. in the second phase, we will focus on the health effects at adult age: testicular cancer and reduced sperm quality. in both phases, we will attempt to estimate the total eed exposure of parents of cases and referents at the time of pregnancy through an exposure-assessment model in which various sources of exposure, e.g. environment, occupation, leisure time activities and nutrition, are combined. in addition, we will measure hormone receptor activity in blood.

background: eleven percent of the pharmacotherapeutic budget is spent on acid-suppressive drugs (asd); % of patients are chronic users. most of these indications do not conform to dyspepsia guidelines. objectives: we evaluated the implementation of an asd rationalisation protocol among chronic users, and analysed effects on volume and costs. method: in a cohort study, patients from gps with the protocol were compared to a control group of patients from gps without. prescription data for - were extracted from the agis health database. a log-linear regression model compared standardised outcomes: the number of patients who stopped or reduced asd (> %), and prescription volume and costs. results: gps and patients in both groups were comparable. % in the intervention group had stopped, versus % in the control group (p< . ). the volume had decreased in another % of patients, versus % in the control group (p< . ). compared to the baseline data in the control group ( %), the adjusted or of volume in the intervention group was . %. the adjusted or of total costs was . %. the implementation significantly reduced the number of chronic users, and substantially reduced volume and costs. active intervention by insurance companies can stimulate rationalisation of prescribing.

background/objective: today, % of lung cancers are resectable (stage i/ii). -year survival is therefore low ( %). spiral computed tomography (ct) screening detects more lung cancers than chest x-ray. it is unknown if this will translate into a lung cancer mortality reduction. the nelson trial investigates whether detector multi-slice ct reduces lung cancer mortality by at least % compared to no screening. we present baseline screening results. methods: a questionnaire was sent to , men and women. of the , respondents, , high-risk current and former smokers were invited. until december , , , of them gave informed consent and were randomised ( : ) to a screen arm (ct in year , and ) and a control arm (no screening). data will be linked with the cancer registry and statistics netherlands to determine cancer mortality and incidence. results: of the first , baseline ct examinations, % were negative (ct after one year), % indeterminate (ct after months) and % positive (referral to a pulmonologist). seventy percent of detected tumours were resectable.
conclusion/discussion: ct screening detects a high percentage of early-stage lung cancers. it is estimated that the nelson trial is sufficiently large to demonstrate a % lung cancer mortality reduction or more.

background: due to diagnostic dilemmas in childhood asthma, drug treatment of young children with asthmatic complaints often serves as a trial treatment. objective: to obtain more insight into patterns and continuation of asthma medication in children during the first years of life. design: prospective birth cohort study. methods: within the prevention and incidence of asthma and mite allergy (piama) study (n = , children) we identified children using asthma medication in their first year of life. results: about % of children receiving asthma medication before the age of one discontinued use during follow-up. continuation of therapy was associated with male gender (adjusted odds ratio [aor] . , % confidence interval [ci]: . - . ), a diagnosis of asthma (aor . , % ci: . - . ) and receiving combination or controller therapy (aor . , % ci: . - . ). conclusion: patterns of medication use in preschool children support the notion that both beta-agonists and inhaled corticosteroids are often used as trial medication, since % discontinue. the observed association between continuation of therapy and both an early diagnosis of asthma and a prescription for controller therapy suggests that, despite diagnostic dilemmas, children in apparent need of prolonged asthma therapy are identified at this very early age.

background: this study explored the differences in birthweight between infants of first- and second-generation immigrants and infants of dutch women, and to what extent maternal, fetal and environmental characteristics could explain these differences. method: during months all pregnant women in amsterdam attending their first prenatal screening were asked to fill out a questionnaire (sociodemographic status, lifestyle) as part of the amsterdam born children and their development (abcd) study; women ( %) responded. only singleton deliveries with pregnancy duration = weeks were included (n = ). results: infants of all first- and second-generation immigrant groups (surinam, the antilles, turkey, morocco, ghana, other countries) had lower birthweights (range: - gram) than dutch infants ( gram). linear regression revealed that, adjusted for maternal height, weight, age, parity, smoking, marital status, gestational age and gender, infants of surinamese women ( st and nd generation), and of antillean and ghanaian women (both st generation), were still lighter than dutch infants ( . , . , . , and . grams respectively; p< . ). conclusion: adjusted for maternal, fetal and environmental characteristics, infants of some immigrant groups had substantially lower birthweights than infants of dutch women. other factors (like genetics and culture) can possibly explain these differences.

introduction: missing data are frequently seen in cost-effectiveness analyses (ceas). we applied multiple imputation (mi) combined with bootstrapping in a cea. objective: to examine the effect of two new methodological procedures for combining mi and bootstrapping in a cea with missing data. methods: from a trial we used direct health and non-health care costs and indirect costs, kinesiophobia and work absence data assessed over months. mi was applied by multivariate imputation by chained equations (mice) and non-parametric bootstrapping was used. observed case analyses (oca), where analyses were conducted on the data without missing values, were compared with complete case analyses (cca) and with analyses in which mi and bootstrapping were combined after to % of cost and effect data were omitted. results: with the cca, effect and cost estimates shifted from negative to positive, and the cost-effectiveness planes and acceptability curves were biased compared to the oca. the methods combining mi and bootstrapping generated good cost and effect estimates, and the cost-effectiveness planes and acceptability curves were almost identical to the oca. conclusion: on the basis of our study results we recommend the combined application of mi and bootstrapping in data sets with missingness in costs and effects.
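one common recipe for combining the two techniques, and a plausible reading of what is described above, is to nest the imputations inside each bootstrap resample and pool them before collecting the bootstrap distribution. the python below is a minimal sketch in which simple normal-model imputation stands in for mice and all data are simulated; it is an illustration of the general recipe, not the authors' procedure.

```python
# minimal "mi within bootstrap" sketch for a cea; all data simulated.
import numpy as np

rng = np.random.default_rng(2)
n = 200
group = rng.integers(0, 2, n)                      # 0 = control, 1 = intervention
cost = 1000 + 300 * group + rng.normal(0, 250, n)
effect = 0.6 + 0.05 * group + rng.normal(0, 0.2, n)
cost[rng.random(n) < 0.3] = np.nan                 # ~30% of costs missing

def impute(x, r):
    """draw missing values from a normal fitted to the observed part."""
    obs, out = x[~np.isnan(x)], x.copy()
    out[np.isnan(x)] = r.normal(obs.mean(), obs.std(), np.isnan(x).sum())
    return out

B, M, deltas = 500, 5, []
for _ in range(B):
    idx = rng.integers(0, n, n)                    # bootstrap resample of patients
    c, e, g = cost[idx], effect[idx], group[idx]
    # pool m imputations within this resample (mean of the m estimates)
    dc = np.mean([(ci := impute(c, rng))[g == 1].mean() - ci[g == 0].mean()
                  for _ in range(M)])
    de = e[g == 1].mean() - e[g == 0].mean()
    deltas.append((dc, de))

dc, de = np.array(deltas).T                        # points of the ce plane
print(f"incremental cost {dc.mean():.0f} "
      f"(95% ci {np.percentile(dc, 2.5):.0f} to {np.percentile(dc, 97.5):.0f}); "
      f"incremental effect {de.mean():.3f}")
```

the (dc, de) pairs are exactly the points plotted on a cost-effectiveness plane, from which an acceptability curve can be derived.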
background: since the s, coronary heart disease (chd) mortality rates have halved, with approximately % of this decrease being attributable to medical and surgical treatments. objective: this study examined the cost-effectiveness of specific chd treatments. design and methods: the impact chd model was used to calculate the number of life-years gained (lyg) by specific cardiology interventions given in , and followed over ten years. this previously validated model integrates data on patient numbers, median survival in specific groups, treatment uptake, effectiveness and costs of specific interventions. cost-effectiveness ratios were generated as costs per lyg for each specific intervention. results: in , medical and surgical treatments together prevented or postponed approximately , chd deaths in patients aged - years; this generated approximately , extra life years. aspirin and beta-blockers for secondary prevention of myocardial infarction and heart failure, and spironolactone for heart failure, all appeared highly cost-effective.

( % (positive predictive value was %). conclusion: omron fails the validation criteria for ankle sbp measurement. however, the ease of use of the device could outweigh the inaccuracy if used as a screening tool for aai < . in epidemiologic studies.

background: associations exist between ) parental birth weight and child birth weight; ) birth weight and adult psychopathology; and ) maternal psychopathology during pregnancy and birth weight of the child. this study is the first to combine those associations. objective: to investigate the different pathways from parental birth weight and parental psychopathology to child birth weight in one model. design and methods: depression and anxiety scores of , mothers and , fathers at weeks of pregnancy, and birth weights of , children, were available. path analyses with standardized regression coefficients were used to evaluate the different effects. results: in the unadjusted path analyses, significant effects existed between: maternal (r = . ) and paternal birth weight (r = . ) and child birth weight; maternal birth weight and maternal depression (r = -. ) and anxiety (r = -. ); and maternal depression (r = . ) and anxiety (r = . ) and child birth weight. after adjustment for confounders, only maternal (r = . ) and paternal (r = . ) birth weight and maternal depression (r = -. ) remained significantly related to child birth weight. conclusion: after adjustment, maternal depression, and not anxiety, remained significantly related to child birth weight. discussion: future research should focus on the different mechanisms by which maternal anxiety and depression affect child birth weight.

background: most patients with peripheral arterial disease (pad) die from coronary artery disease (cad).
non-invasive cardiac imaging can assess the presence of coronary atherosclerosis and/or cardiac ischemia. screening, in combination with more aggressive treatment, may improve prognosis. objective: to evaluate whether a non-invasive cardiac imaging algorithm, followed by treatment, will reduce the -year risk of cardiovascular events in pad patients free from cardiac symptoms. design and methods: this is a multicenter randomized controlled clinical trial. patients with intermittent claudication and no history of cad are eligible. one group will undergo computed tomography (ct) calcium scoring. the other group will undergo ct calcium scoring and ct angiography (cta) of the coronary arteries. patients in the latter group will be scheduled for a dobutamine stress magnetic resonance imaging (dsmr) test to assess cardiac ischemia, unless a stenosis of the left main (lm) coronary artery (or its equivalent) is found on cta. patients with cardiac ischemia or an lm/lm-equivalent stenosis will be referred to a cardiologist, who will decide on further (interventional) treatment. patients are followed for years. conclusion: this study assesses the value of non-invasive cardiac imaging to reduce the risk of cardiovascular events in patients with pad free from cardiac symptoms.

background: hpv is the main risk factor for cervical cancer and indeed a necessary cause of it. participation rates in cervical cancer screening are low in some countries, and soon hpv vaccination will be available. objectives: the aim of this systematic review was to collect and analyze published data on knowledge about hpv. design and methods: a medline search was performed for publications on knowledge about hpv as a risk factor for cervical cancer and other issues of hpv infection. results of individual studies were stratified by age of the study population, country of origin, study size, publication year and response proportion; heterogeneity was described. results: knowledge varied substantially between included studies. thirteen to % (closed question) and . to . % (open question) of the participants knew about hpv as a risk factor for cervical cancer. women had consistently better knowledge of hpv than men. there was confusion of hpv with other sexually transmitted diseases. conclusion and discussion: studies were very heterogeneous, thus making comparison difficult. knowledge about hpv infections depended on the type of question used, the gender of the participants and their professional background. education of the general public on hpv infections needs improvement; men especially should also be addressed.

background: influenza outbreaks in hospitals and nursing homes are characterized by high attack rates and severe complications. knowledge of the virus' specific transmission dynamics in healthcare institutions is scarce but essential to develop cost-effective strategies. objective: to follow and model the spread of influenza in two hospital departments and to quantify the contributions of the several possible transmission pathways. methods: an observational prospective cohort study is performed on the departments of internal medicine & infectious diseases and pulmonary diseases of the umc utrecht during the influenza season. all patients and regular medical staff are checked daily for the presence of fever and cough, the most accurate symptoms of influenza infection. nose-throat swabs are taken for pcr analysis from both symptomatic individuals and a sample of asymptomatic individuals.
to determine transmission, contact patterns are observed between patients and visitors and between patients and medical staff. results/discussion: spatial and temporal data on influenza cases will be combined with contact data in a mathematical model to quantify the main transmission pathways. among other things, the model can be used to predict the effect of vaccinating the medical staff, which is not yet common practice in the studied hospital.

background: the long-term maternal sequelae of stillbirth are unknown. objectives: to assess whether women who experienced stillbirths have an excess risk of long-term mortality. study design: cohort study. methods: using unique identity numbers, we traced jewish women from the jerusalem perinatal study, a population-based database of all births to west jerusalem residents ( - ), who gave birth at least twice during the study period. we compared the survival to - - of women who had at least one stillbirth (n = ) to that of women who had only live births (n = ) using cox proportional hazards models. results: during a median follow-up of . years, ( . %) mothers with stillbirths died compared to , ( . %) unexposed women; crude hazard ratio (hr) . ( % ci: . - . ). the mortality risk remained significantly increased after adjustment for sociodemographic variables, maternal diseases, placental abruption and preeclampsia (hr: . , % ci: . - . ). stillbirth was associated with an increased risk of death from cardiovascular (adjusted hr: . , . - . ), circulatory ( . , . - . ) and genitourinary ( . , . - . ) causes. conclusions: the finding of increased mortality among mothers of stillbirths joins the growing body of evidence demonstrating long-term sequelae of obstetric events. future studies should elucidate the mechanisms underlying these associations.

resilience is one of the essential characteristics of successful ageing. however, very little is known about the determinants of resilience in old age. our objectives were to identify resilience in the english longitudinal study of ageing (elsa) and to investigate social and psychological factors determining it. the study design was a cross-sectional analysis of wave of elsa. using structural equation modelling, we identified resilience as a latent variable indicated by high quality of life in the face of six adversities: ageing, limiting long-standing illness, disability, depression, perceived poverty and social isolation, and we regressed it on social and psychological factors. the latent variable model for resilience showed a highly significant degree of fit (weighted root mean square residual = . ). determinants of resilience included good quality of relationships with spouse (p = . ), family (p = . ) and friends (p< . ), a good neighbourhood (p< . ), a high level of social participation (p< . ), involvement in leisure activities (p = . ), perception of not being old (p< . ), optimism (p = . ), and a high subjective probability of survival to an older age (p< . ). we concluded that resilience in old age was as much a matter of social engagement, networks and context as of individual disposition. implications for health policy are discussed.

background: there is extensive literature concluding that ses is inversely related to obesity in developed countries. several studies in developing populations, however, reported curvilinear or positive associations between ses and obesity.
objectives: to assess the social distribution of obesity in men and women in middle-income countries of eastern and central europe with different levels of economic development. methods: random population samples aged - years from poland, russia and the czech republic were examined in - as the baseline for a prospective cohort study. we used body mass index (bmi) and waist/hip ratio (whr) as obesity measures. we compared age-adjusted bmi and whr for men and women by educational level in all countries. results: the data collection was concluded in summer . we collected data from about , subjects. lower ses increased obesity risk in women in all countries (the strongest gradient in the czech republic and the weakest in russia), and in czech men. there was no ses gradient in bmi in polish men, and a positive association between education and bmi in russian men. conclusions: our findings strongly agree with previous literature showing that the association between ses and obesity is strongly influenced by the overall level of a country's economic development.

background: inaccurate measurements of body weight, height and waist circumference will lead to an inaccurate assessment of body composition, and thus of the general health of a population. objectives: to assess the accuracy of self-reported body weight, height and waist circumference in a dutch overweight working population. design and methods: bland and altman methods were used to examine the individual agreement between self-reported and measured body weight and height in overweight workers ( % male; mean age . ± . years; mean body mass index [bmi] . ± . kg/m ). the accuracy of self-reported waist circumference was assessed in a subgroup of persons ( % male; mean age . ± . years; mean bmi . ± . kg/m ), for whom both measured and self-reported waist circumference were available. results: body weight was under-reported by a mean (standard deviation) of . ( . ) kg; body height was on average over-reported by . ( . ) cm. bmi was on average under-reported by . ( . ) kg/m . waist circumference was over-reported by . ( . ) cm. the overall degree of error from self-reporting was between . and . %. conclusion and discussion: self-reported anthropometrics seem satisfactorily accurate for the assessment of general health in a middle-aged overweight working population.
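the agreement statistics reported here come from a short calculation: the mean self-reported minus measured difference (the bias) and its 95% limits of agreement. the python below is a minimal sketch with simulated weights, not the study data.

```python
# minimal bland-altman computation on simulated paired measurements.
import numpy as np

rng = np.random.default_rng(3)
measured = rng.normal(90, 12, 150)                       # kg, overweight workers
self_report = measured - 1.4 + rng.normal(0, 1.7, 150)   # slight under-reporting

diff = self_report - measured
bias, sd = diff.mean(), diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)               # 95% limits of agreement
print(f"mean difference {bias:.2f} kg, "
      f"limits of agreement {loa[0]:.2f} to {loa[1]:.2f} kg")

# for the usual plot, the differences are shown against the pair means:
pair_means = (self_report + measured) / 2
```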
the incidence of breast cancer and the prevalence of metabolic syndrome are increasing rapidly in chile, but their relationship is still debated. the goal of this study is to assess the association between metabolic syndrome and breast cancer before and after menopause. a hospital-based case-control study was conducted in chile during . cases with histologically confirmed breast cancer and age-matched controls with normal mammography were identified. metabolic syndrome was defined by atpiii, and serum lipids, glucose, blood pressure and waist circumference were measured by trained nurses. data on potential confounders, such as obesity, socioeconomic status, exercise and diet, were obtained by anthropometric measures and a questionnaire. odds ratios (ors) and % confidence intervals (cis) were estimated by conditional logistic regression stratified by menopause. in postmenopausal women, a significantly increased risk of breast cancer was observed in women with metabolic syndrome (or = . , % ci = . - . ). the elements of metabolic syndrome most strongly associated were high levels of glucose and hypertension. in conclusion, postmenopausal women with metabolic syndrome had a % excess risk of breast cancer. these findings support the theory that the risk profile of breast cancer differs before and after menopause.

background: physical exercise during pregnancy has numerous beneficial effects on maternal and foetal health. it may, however, affect early foetal survival negatively. objectives: to examine the association between physical exercise and spontaneous abortion in a cohort study. design and methods: in total, , women recruited to the danish national birth cohort in early pregnancy provided information on the amount and type of exercise during pregnancy and on possible confounding factors. , women experienced foetal death before gestational weeks. hazard ratios for spontaneous abortion in four periods of pregnancy ( - , - , - , and - weeks), according to amount (min/week) and type of exercise, respectively, were estimated using cox regression. various sensitivity analyses to reveal distortion of the results by selection forces and information bias were made. results: the hazard ratios of spontaneous abortion increased stepwise with the amount of physical exercise and were largest in the earlier periods of pregnancy (hr week - = . (ci . - . ) for min/week, compared to no exercise). weight-bearing types of exercise were most strongly associated with abortion, while swimming showed no association. these results remained stable, although attenuated, in the sensitivity analyses. discussion: the handling of unexpected findings that furthermore challenge official public health messages will be discussed.

hemodialysis (hd) patients with a low body mass index (bmi) have an increased mortality risk, but bmi changes over time on dialysis treatment. we studied the association between changes in bmi and all-cause mortality in a cohort of incident hd patients. patients were followed until death or censoring for a maximum follow-up of years. bmi was measured every months and changes in bmi were calculated over each -mo period. with a time-dependent cox regression analysis, hazard ratios (hr) were calculated for the effect of these -mo changes on subsequent mortality from all causes, adjusted for the mean bmi of each -mo period, age, sex and comorbidity. men and women were included (age: ± years, bmi: . ± . kg/m , -y mortality: %). a loss of bmi > % was independently associated with an increased mortality risk (hr: . , %-ci: . - . ), while a loss of - % showed no difference (hr: . , . - . ) compared to no change in bmi (- % to + % change). a gain in bmi of - % appeared beneficial (hr: . , . - . ), while a gain of bmi > % was not associated with a survival advantage (hr: . , . - . ). in conclusion, hd patients with a decreasing bmi have an increased risk of all-cause mortality.

background: within the tripss- project, the impact of clinical guidelines (gl) on venous thromboembolism (vte) prophylaxis was evaluated in a large italian hospital. gl were effective in increasing the appropriateness of prophylaxis and in reducing vte. objectives: we performed a cost-effectiveness analysis using a decision-tree model to estimate the impact of the adopted gl on costs and benefits. design and methods: a decision-tree model compared prophylaxis costs and effects before and after gl implementation. four risk profiles were identified (low, medium, high, very high). possible outcomes were: no event, major bleeding, asymptomatic vte, symptomatic vte and fatal pulmonary embolism. patients' vte risk and probability of receiving prophylaxis were defined using data from the previous survey. outcome probabilities were taken from the literature. tariffs and hospital figures were used for costing the events. results: gl introduction reduced the average cost per patient from € to € (- %), with an increase in event-free patients (+ %). results are particularly relevant in the very-high-risk group. conclusion: the implementation of locally adapted gl may lead to a gain in terms of costs and effects, in particular for patients at highest vte risk.
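the arithmetic of such a decision tree is a probability-weighted sum of costs over the outcome branches, repeated per risk profile and scenario. the python below is a minimal sketch with invented probabilities and tariffs, not the tripss figures; comparing before and after gl implementation amounts to re-running the sum with each scenario's probabilities.

```python
# minimal expected-cost calculation for one risk profile of a decision tree;
# probabilities, tariffs and prophylaxis cost are invented.
branches = {  # outcome: (probability, event cost in euro, event-free?)
    "no event":                 (0.930, 0.0,    True),
    "asymptomatic vte":         (0.030, 800.0,  False),
    "symptomatic vte":          (0.025, 3500.0, False),
    "major bleeding":           (0.010, 2500.0, False),
    "fatal pulmonary embolism": (0.005, 5000.0, False),
}
prophylaxis_cost = 120.0

expected_cost = prophylaxis_cost + sum(p * c for p, c, _ in branches.values())
event_free = sum(p for p, _, ok in branches.values() if ok)
print(f"expected cost per patient: {expected_cost:.0f} euro, "
      f"event-free patients: {event_free:.1%}")
```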
background: assisted reproductive techniques are used to overcome infertility. one reason for success is the use of ovarian stimulation. objectives: to compare two ovarian stimulation protocols, gonadotropin-releasing hormone agonists/antagonists, assessing laboratory and clinical outcomes, to inform the choice of therapy, and to identify significant predictors of clinical pregnancy and ovarian response. design and methods: retrospective study (agonist cycles, ; antagonist cycles, ) including ivf/intracytoplasmic sperm injection cycles. multiple logistic and regression models, with the fractional polynomial method, were used. results: the antagonist group exhibited a shorter length of stimulation and a lower dose of recombinant follicle-stimulating hormone (rfsh), and higher numbers of retrieved and fertilized oocytes and of embryos. the agonist group presented a thicker endometrium and better fertilization, implantation and clinical pregnancy rates. clinical pregnancy showed a positive correlation with endometrial thickness and use of agonist, and a negative correlation with age and number of previous attempts. retrieved oocytes showed a positive correlation with estradiol on the day of human chorionic gonadotrophin (hcg) administration and use of antagonist, and a negative correlation with rfsh dose. conclusion: patients in the antagonist group are more likely to obtain more oocytes and quality embryos, although those in the agonist group are more likely to become pregnant.

background: prevalence studies of the metabolic syndrome require fasting blood samples and are therefore lacking in many countries, including germany. objectives: to narrow the uncertainty resulting from extrapolation of international prevalence estimates, with a sensitivity analysis of the prevalence of the metabolic syndrome in germany using a nationally representative but partially non-fasting sample. methods: stepwise analysis of the german health examination survey , using the national cholesterol education program (ncep) criteria, hemoglobin a c (hba c), non-fasting triglycerides and fasting time. results: among participants aged - years, the metabolic syndrome prevalence was (i) . % with . % inconclusive cases using the unmodified ncep criteria, (ii) . % with . % inconclusive cases using hba c > . % if fasting glucose was missing, (iii) . % with . % inconclusive cases when additionally using non-fasting triglycerides = th percentile stratified by fasting time, and (iv) . % to . % with < % inconclusive cases using different cutoffs (hba c . %, non-fasting triglycerides and mg/dl). discussion: despite a lower prevalence of obesity in germany compared to the us, the prevalence of the metabolic syndrome is likely to be of the same order of magnitude. this analysis may help promote healthy lifestyles by stressing the high prevalence of interrelated cardiovascular risk factors.

background: epidemiologic studies that directly examine fruit and vegetable (f&v) consumption and other lifestyle factors in relation to weight gain are sparse.
objective: we examined the associations between f&v intake and -y weight gain among spanish adults. design/methods: the study was conducted in a sub-sample of healthy people aged y and over at baseline in , who participated in a population-based nutrition survey in valencia, spain. data on diet, lifestyle factors and body weight (direct measurement) were obtained in and . information on weight gain was available for participants in . results: during the -y period, participants tended to gain on average . kg (median = . kg). in multivariate analyses, participants in the highest tertile of f&v intake at baseline had a % lower risk of gaining = . kg compared with those in the lowest intake tertile, after adjustment for sex, age, education, smoking, tv viewing, bmi, and energy intake (or = . ; % ci: . - . ; p for trend = . ). for every g/d increase in f&v intake, the or was reduced by % (or = . ; . - . ; p for trend = . ). tv viewing at baseline was positively associated with weight gain (or per -h increase = . ( . - . ; p for trend = . )). conclusions: our findings suggest that a high f&v intake and low tv viewing may reduce weight gain among adults.

background: diabetic patients develop atherosclerosis more readily, and thus show a greatly increased risk of cardiovascular disease. objective: the heinz nixdorf recall study is a prospective cohort study designed to assess the prognostic value of new risk stratification methods. here we examined the association between diabetes, previously unknown hyperglycemia and the degree of coronary calcification (cac). methods: a population-based sample of , men and women aged - years was recruited from three german cities in - . the baseline examination included, among other things, a detailed medical history, blood analyses and electron-beam tomography. we calculated prevalence ratios (pr) adjusted for age, smoking and bmi, with % confidence intervals ( % ci), using log-linear binomial regression. results: the prevalence of diabetes is . % (men: . %, women: . %), and of hyperglycemia . % (men: . %, women: . %). the prevalence ratio for cac in male diabetics without overt coronary heart disease is . ( % ci: . - . ), and for those with hyperglycemia . ( . - . ). in women the association is even stronger: . ( . - . ) with diabetes, . ( . - . ) with hyperglycemia. conclusion: the data support the concept of regarding diabetic patients as being in a high-risk category, meaning > % hard events in years. furthermore, even persons with elevated blood glucose levels already show higher levels of coronary calcification.

background: birth weight is associated with health in infancy and later in life. socioeconomic inequality in birth weight is an important marker of current and future population health inequality. objective: to examine the effect of maternal education on birth weight, low birth weight (lbw, birth weight < , g), and small for gestational age (sga)

background: in clinical practice patient data are obtained gradually, and health care practitioners tune prognostic information according to the available data. prognostic research does not always reproduce this sequential acquisition of data: instead, 'worst', discharge or aggregate data are often used. objective: to estimate the prognostic performance of sequentially updated models. methods: cohort study of all very-low-birth-weight babies (< g) admitted to the study neonatal icu < days after birth ( eligible from to ) and followed up until years old ( . % lost to follow-up). main outcomes: disabling cerebral palsy at years ( , . %) or death ( , . %; % in the first weeks). main prognostic determinants: neonatal cerebral lesions identified with cranial ultrasound (us) exams performed per protocol on days , and , and at discharge. logistic regression models were updated with the data available at these different moments in time during admission. results: at days , and respectively, the adjusted odds ratio of the main predictor (severe parenchymal lesion) was , and ; the us model c-statistic was . , . and . . discussion: the performance of prognostic models in neonatal patients improved from inception to discharge, particularly for identification of the high-risk category. the time of data acquisition should be considered when comparing prognostic instruments.

in epidemiological longitudinal studies one is often interested in the analysis of time patterns of censored history data, for example, how regularly a medication is used or how often someone is depressed. our goal is to develop a method to analyse time patterns of censored data. one of the tools in longitudinal studies is a non-homogeneous markov chain model with discrete time moments and a categorical state space (for example, use of various medications). suppose we are interested only in the time pattern of appearance of a particular state, which is in fact a certain epidemiological event under study. for this purpose we construct a new homogeneous markov chain associated with this event. the states of this markov chain are the time moments of the original non-homogeneous markov chain. using the new transition matrix and standard tools from markov chain theory we can derive the probabilities of occurrence of that epidemiological event during various time periods (including ones with gaps), for example, probabilities of cumulative use of medication during any time period. in conclusion, the proposed approach based on a markov chain model provides a new way of data representation and analysis which is easy to interpret in practical applications.
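the construction just described can be made concrete: from the time-varying transition matrices of a two-state chain (no use / use), build the matrix q whose entry q[i][j] is the probability that, given use at moment i, the next use occurs at moment j. the python below is a minimal sketch with invented transition probabilities.

```python
# minimal sketch of the associated chain over time moments; p is invented.
import numpy as np

T = 6                                   # discrete time moments 0..5
# p[t][a][b] = p(state b at t+1 | state a at t); non-homogeneous on purpose
p = [np.array([[0.9 - 0.02 * t, 0.1 + 0.02 * t],
               [0.3, 0.7]]) for t in range(T - 1)]

# q[i][j] = p(no use at i+1..j-1, use at j | use at i): the transition
# matrix of the new chain whose states are the time moments themselves
q = np.zeros((T, T))
for i in range(T - 1):
    q[i][i + 1] = p[i][1][1]            # used again immediately
    gap = p[i][1][0]                    # p(no use at i+1 | use at i)
    for j in range(i + 2, T):
        q[i][j] = gap * p[j - 1][0][1]  # first return to 'use' at j
        gap *= p[j - 1][0][0]           # still 'no use' through j

print(np.round(q[2], 3))                # next-use distribution after use at t=2
# products of q entries give whole patterns, e.g. use at 1 and 4 with a gap:
print(round(q[1][4], 3))
```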
background: tuberculosis (tb) cases that belong to a cluster with the same mycobacterium tuberculosis dna fingerprint are assumed to be a consequence of recent transmission. targeting interventions at fast-growing clusters may be an efficient way of interrupting transmission in outbreaks. objective: to assess predictors for clusters that grow large, compared to clusters that remain small, within a -year period. design and methods: out of the culture-confirmed tb patients diagnosed between and , ( %) had unique fingerprints while were part of a cluster. of the clustered cases, were in a small cluster ( to cases within the first years) and in a large cluster (more than cases within the first years). results: independent risk factors for being a case within the first years of a large cluster were non-dutch nationality (or = . , % ci [ . - .

background: passive smoking causes adverse health outcomes such as lung cancer or coronary heart disease (chd). the burden of passive smoking at a population level is currently unclear and depends on several assumptions. we investigated the public health impact of passive smoking in germany. methods: we computed attributable mortality risks due to environmental tobacco smoke (ets). we considered lung cancer, chd, stroke, copd and sudden infant death. the frequency of passive smoking was derived from the national german health survey. sensitivity analyses were performed using different definitions of exposure to passive smoking. results: in total, deaths every year in germany are estimated to be caused by exposure to passive smoking at home (women , men ). most of these deaths are due to chd ( ) and stroke ( ). additional consideration of passive smoking at the workplace increased the number of deaths to . considering any exposure to passive smoking, and also active smokers who report exposure to passive smoking, increased the number of deaths further. conclusions: passive smoking has an important impact on mortality in germany. even the most conservative estimate, using exposure to ets at home, led to a substantial number of deaths related to passive smoking.
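the core arithmetic behind such attributable-death estimates is levin's population attributable fraction, paf = p(rr - 1) / (p(rr - 1) + 1), applied to cause-specific death counts. the python below is a minimal sketch with invented prevalence, relative risk and death counts; the study's actual inputs are not reproduced here.

```python
# minimal attributable-mortality calculation; all inputs are invented.
def attributable_deaths(prevalence, rr, total_deaths):
    """levin's population attributable fraction times the death count."""
    paf = prevalence * (rr - 1) / (prevalence * (rr - 1) + 1)
    return paf * total_deaths, paf

for scenario, p in [("ets at home", 0.08), ("home or workplace", 0.15)]:
    deaths, paf = attributable_deaths(p, rr=1.25, total_deaths=340000)  # e.g. chd deaths
    print(f"{scenario}: paf = {paf:.2%}, attributable deaths ~ {deaths:,.0f}")
```

the sensitivity analyses mentioned above correspond to re-running this calculation under different exposure definitions (different p) and different assumed relative risks.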
des daughters have a strongly increased risk of clear-cell adenocarcinoma of the vagina and cervix (ccac) at a young age. long-term health problems, however, are still unknown. we studied the incidence of cancer, other than ccac, in a prospective cohort of des daughters (desnet project). in , questionnaires were sent to des daughters registered at the des center in utrecht. informed consent was also asked for linkage with disease registries. for this analysis, data of , responders and nonresponders were linked to palga, the dutch nationwide network and registry of histo- and cytopathology. mean age at the end of follow-up was . years. a total of incident cancers occurred. increased standardized incidence rates (sir) were found for vaginal/vulvar cancers (sir = . , % confidence interval ( % ci) . - . ), melanoma (sir = . , % ci . - . ) and breast cancer (sir = . , % ci . - . ) as compared to the general population. no increased risk was found for invasive cervical cancer, possibly due to effective screening. the results for breast and cervical cancer are consistent with the sparse literature. the risk of melanoma might be due to surveillance bias. future analyses will include non-invasive cervical cancer, stage-specific sirs for melanoma and adjustment for confounding (sister control group) for breast cancer.

background: contact tracing plays an important role in the control of emerging infectious diseases in both human and farm animal populations, but little is known yet about its effectiveness. here we investigate, in a generic setting for well-mixed populations, the dependence of tracing effectiveness on the probability that a contact is traced, the possibility of iteratively tracing yet-asymptomatic infectives, and delays in the tracing process. methods and findings: we investigate contact tracing in a mathematical model of an emerging epidemic, incorporating a flexible infectivity function and incubation period distribution. we consider isolation of symptomatic infecteds as the basic scenario, and determine the critical tracing probability (needed for effective control) in relation to two infectious disease parameters: the reproduction ratio under isolation and the duration of the latent period relative to the incubation period. the effect of tracing delays is considered, as is the possibility of single-step tracing vs. iterative tracing of quarantined infectives. finally, the model is used to assess the likely success of tracing for influenza, smallpox, sars, and foot-and-mouth disease epidemics. conclusions: we conclude that single-step contact tracing can be effective for infections with a relatively long latent period or a large variation in incubation period, thus enabling backwards tracing of super-spreading individuals. the sensitivity to changes in the tracing delay varies greatly, but small increases may have major consequences for effectiveness. if single-step tracing is on the brink of being effective, iterative tracing can help, but otherwise it will not improve much. we conclude that contact tracing will not be effective for influenza pandemics, only partially effective for fmd epidemics, and very effective for smallpox and sars epidemics.
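in the crudest branching-process approximation the critical tracing probability has a closed form: if isolation alone leaves a reproduction ratio r_iso, and every traced contact is neutralised before transmitting, control requires r_iso(1 - p) < 1, i.e. p > 1 - 1/r_iso. the python below computes this much-simplified version with invented r_iso values; the model in the abstract refines it with infectivity functions, latent periods and tracing delays.

```python
# much-simplified critical tracing probability; r_iso values are invented.
def critical_tracing_probability(r_iso):
    """smallest fraction of contacts that must be traced for control,
    assuming traced contacts cause no further infections."""
    return max(0.0, 1.0 - 1.0 / r_iso)

for disease, r_iso in [("influenza", 2.5), ("smallpox", 1.6), ("sars", 1.2)]:
    print(f"{disease}: r under isolation {r_iso} -> "
          f"critical p = {critical_tracing_probability(r_iso):.2f}")
```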
abstract: infections of humans with highly pathogenic h n avian influenza underline the need to track the ability of these viruses to spread among humans. here we propose a method of analysing outbreak data that allows determination of whether, and to what extent, transmission in a household has occurred after an introduction from the animal reservoir. in particular, it distinguishes between onward transmission from humans who were infected from the animal reservoir (primary human-to-human transmission) and onward transmission from humans who were themselves infected by humans (secondary human-to-human transmission). the method is applied to data from an epidemiological study of an outbreak of highly pathogenic avian influenza (h n ) in the netherlands in . we contrast a number of models that differ with respect to the assumptions on primary versus secondary human-to-human transmission.

session: mathematical modelling of infectious diseases. presentation: oral.

usually, models for the spread of an infection in a population are based on the assumption of a randomly mixing population, where every individual may contact every other individual. however, the assumption of random mixing seems to be unrealistic; therefore one may also want to consider epidemics on (social) networks. connections in the network are possible contacts; e.g., if we consider sexually transmitted diseases and ignore all spread by routes other than sexual contact, the connections are only between people that may have intercourse with each other. in this talk i will compare the basic reproduction ratio, r , and the probability of a major outbreak for network models and for randomly mixing populations. furthermore, i will discuss which properties of the network are important and how they can be incorporated in the model.

in this talk a reproductive power model is proposed that incorporates the following points met when an epidemic disease outbreak is modeled statistically: ) the dependence of the data is handled with a non-homogeneous birth process. ) the first stage of the outbreak is described with an epidemic sir model. soon, control measures will start to influence the process; these measures are in addition to the natural epidemic removal process. the prevalence is related to the censored infection times in such a way that the distribution function, and therefore the survival function, satisfies approximately the first equation of the sir model. this leads in a natural way to the burr family of distributions. ) the non-homogeneous birth process handles the fact that in practice, with some delay, it is the infected that are registered and not the susceptibles. ) finally, the ending of the epidemic caused by the measures taken is incorporated by modifying the survival function with a final-size parameter, in the same way as is done in long-term survival models. this method is applied to the dutch classical swine fever epidemic.

individual and area (municipality) measures of income, marital and employment status were obtained. there were , suicides and , controls.
after controlling for compositional effects, the ecological associations of increased suicide risk with declining area levels of employment and income and increasing levels of people living alone were much attenuated. individual-level associations with these risk factors were little changed when controlling for contextual effects. we found no consistent evidence that associations with individual-level risk factors differed depending on the area's characteristics (cross-level interactions). this analysis suggests that the ecological associations reported in previous studies are likely to be due in greater part to the characteristics of the residents in those areas than to area influences on risk, rather than to contextual effects.

were found to be at higher risk. risk was significantly greater in women whose first full-term pregnancy was at age or more (or . , ). in addition, more than full-term pregnancies would be expected to correlate with an increase in the risk (χ² . , p< . ). in multivariate analysis, history of breast feeding was a significant factor in decreasing risk (or . , % ci . - . ).

the euregion meuse-rhine (emr) is an area with different regions as regards language, culture and law. organisations and institutions frequently received signals about an increasing and region-related consumption of addictive drugs and risky behaviour among adolescents. in reaction, institutions from regions of the emr started a cross-border cooperation project, 'risky behaviour adolescents in the emr'. the partners intend to improve the efficiency of prevention programmes by investigating the prevalence and pre-conditional aspects related to risky behaviour, and by creating conditions for best-practice public health. the project included two phases. study: two cross-border (epidemiological) studies were realized: a quantitative study of the prevalence of risky behaviour ( pupils) and a qualitative study that mapped pre-conditional aspects of risky behaviour and possibilities for preventive programmes. implementation: this phase served to bring about recommendations at the policy level as well as at the prevention level; during this phase the planning and realisation of cross-border prevention programmes and activities started. there is region-related variance in the prevalence of risky behaviour of adolescents in the emr. there are also essential differences in legislation and regulation, (tolerated) policy, prevention structures, political and organizational priorities and social acceptance of stimulants. cross-border studies and cooperation between institutions have resulted in best-practice projects in (border) areas of the emr.

abstract background: beta-blockers increase bone strength in mice and may reduce fracture risk in humans. we therefore hypothesized that inhaled beta- agonists may increase the risk of hip fracture. objective: to determine the association between the daily dose of beta- agonist use and the risk of hip fracture. methods: a case-control study was conducted among adults enrolled in the dutch pharmo database (n = , ). cases (n = , ) were patients with a first hip fracture; the date of the fracture was the index date. four controls were matched by age, gender and region. we adjusted our analyses for indicators of asthma/copd severity, and for disease and drug history. results: low daily doses (dds) (< µg albuterol eq.) of beta- agonists did not increase the risk of hip fracture (crude or . , % ci . - . ), in contrast to high dds (> µg albuterol eq., crude or . , % ci . - . ).
after extensive adjustment for indicators of the severity of the underlying disease (including corticosteroid intake), fracture risk in the high-dd group decreased to . ( % ci . - . ). conclusions: high dds of beta- agonists are linked to an increased risk of hip fracture. extensive adjustment for the severity of the underlying disease is important when evaluating this association.

abstract salivary nitrate arises from ingested nitrate and is the main source of gastric nitrite, a precursor of carcinogenic n-nitroso compounds. we examined the nitrate and nitrite levels in saliva of children who used private wells for their drinking water supply. saliva was collected in the morning from children aged - years. the control group (n = ) drank water containing . - . mg/l (milligrams/litre) nitrate. exposure groups consisted of subjects (n = ) who used private wells with nitrate levels in drinking water below mg/l (mean ± standard deviation . ± . mg/l) and above mg/l (n = ) ( . ± . mg/l), respectively. the nitrate and nitrite content of saliva samples was determined by a high-performance liquid chromatography method. the values of nitrate in saliva samples from the exposed groups ranged from . to . mg/l ( . ± . mg/l). for the control group, levels of . to . mg/l ( . ± . mg/l) were registered. no differences between levels of salivary nitrite from control and exposed groups were found. regression analysis of water nitrate concentrations on salivary nitrate showed significant correlations; a regression sketch follows below. in conclusion, we estimate that salivary nitrate may be used as a biomarker of human exposure to nitrate.

abstract disinfection of public drinking water supplies produces trihalomethanes. epidemiological studies have associated chlorinated disinfection by-products with cancer, reproductive and developmental effects. we studied the levels of trihalomethanes (chloroform, dibromochloromethane, bromodichloromethane, bromoform) in drinking water delivered to the population living in some urban areas (n= ). the water samples (n= ) were analysed using a gas chromatographic method. assessment of exposure to trihalomethanes in tap water was based on monitoring data collected over - month periods and averaged over the entire water system. analytical data revealed that total trihalomethane levels were higher in the summer: mean ± standard deviation . ± . µg/l (micrograms/litre). these organic compounds were present at the end of distribution networks ( . ± . µg/l). it is noted that, sometimes, we found high concentrations of chloroform exceeding the sanitary norm ( µg/l) in tap water (maximum value . µg/l). results of sampling programs showed strong correlations between chlorine and trihalomethane values (correlation coefficient r = . to . , % credible interval). in conclusion, the population drank water with low concentrations of trihalomethanes, especially chloroform.

abstract objective: to determine the validity of a performance-based assessment of knee function, the dynaport® kneetest (dpkt), in first-time consulters with non-traumatic knee complaints in general practice. methods: patients aged years and older consulting for non-traumatic knee pain in general practice were enrolled in the study. at baseline and at -month follow-up, knee function was assessed by questionnaires and the dpkt; a physical examination was also performed at baseline. hypothesis testing assessed cross-sectional and longitudinal validity of the dpkt. results: a total of patients were included for dpkt, of which were available for analysis.
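the salivary nitrate abstract above regresses salivary nitrate on drinking-water nitrate; a minimal sketch of such a biomarker calibration, with invented concentrations (mg/l) in place of the lost data:

    import numpy as np
    from scipy import stats

    water = np.array([0.5, 12.0, 25.0, 48.0, 60.0, 75.0])   # well-water nitrate, mg/l (hypothetical)
    saliva = np.array([3.1, 6.0, 9.5, 14.2, 16.8, 20.5])    # salivary nitrate, mg/l (hypothetical)

    res = stats.linregress(water, saliva)                   # ordinary least squares fit
    print(f"slope {res.slope:.3f}, r {res.rvalue:.3f}, p {res.pvalue:.4f}")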
the study population included women ( . %); median age was (range - ) years. at follow-up, patients ( . %) were available for dpkt. only out of ( %) predetermined hypotheses concerning cross-sectional and longitudinal validity were confirmed. comparison of the general practice and secondary care populations showed a major difference in baseline characteristics, dynaport knee score, internal consistency and confirmation of hypotheses concerning construct validity. conclusion: the validity of the dpkt could not be demonstrated for first-time consulters with non-traumatic knee complaints in general practice. measurement instruments developed and validated in secondary care are not automatically also valid in the primary care setting.

abstract although animal studies have described the protective effects of dietary factors supplemented before radiation exposure, little is known about the effects of lifestyle after radiation exposure on radiation damage and cancer risks in humans. the purpose of this study is to clarify whether lifestyle can modify the effects of radiation exposure on cancer risk. a cohort of , japanese atomic-bomb survivors, for whom radiation dose estimates were available, had their lifestyle assessed in . they were followed for years for cancer incidence. the combined effect of smoking, drinking, diet and radiation exposure on cancer risk was examined in additive and multiplicative models (both forms are sketched below). combined effects of a diet rich in fruit and vegetables and ionizing radiation exposure resulted in a lower cancer risk as compared to a diet poor in fruit and vegetables combined with exposure to radiation. similarly, those exposed to radiation who never drank alcohol or never smoked tobacco presented a lower oesophageal cancer risk than those exposed to radiation who currently drank alcohol or smoked tobacco. there was no evidence to reject either the additive or the multiplicative model. a healthy lifestyle seems beneficial to persons exposed to radiation in reducing their cancer risks.

abstract background: clinical trials have shown a significant reduction in major adverse cardiac events (mace) following implantation of sirolimus-eluting (ses) vs. bare-metal stents (bms) for coronary artery disease (cad). objective: to evaluate long-term clinical outcomes and economic implications of ses vs. bms in usual care. methods: in this prospective intervention study, cad patients were treated with bms or ses (sequential control design). standardized patient and physician questionnaires at , , and months following implantation documented mace, disease-related costs and patient quality of life (qol). results: patients treated with ses (mean age ± , % male), with bms (mean age ± , % male). there were no significant baseline differences in cardiovascular risk factors and severity of cad. after months, % of ses vs. % of bms patients had suffered mace (p< . ). initial hospital costs were higher with ses than with bms, but the respective -month follow-up direct and indirect costs were lower ( , ± vs. , ± euro and ± vs. , ± euro, p = ns). overall, disease-related costs were similar in both groups (ses , ± , bms , ± , p = ns). differences in qol were not significant. conclusions: as in clinical trials, ses patients experienced significantly fewer mace than bms patients during -month follow-up, with similar overall costs and qol.

abstract background: meta-analyses that use individual patient data (ipd-ma) rather than published data have been proposed as an improvement for subgroup analyses.
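the atomic-bomb survivor abstract above contrasts additive and multiplicative models for the joint effect of lifestyle and radiation; in standard epidemiological notation (our rendering, with RR_{10} and RR_{01} the single-exposure relative risks and RR_{11} the joint one), absence of interaction means

    \text{additive:}\qquad RR_{11} = RR_{10} + RR_{01} - 1,

    \text{multiplicative:}\qquad RR_{11} = RR_{10} \times RR_{01},

and the abstract reports that neither form could be rejected by the data.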
objective: to study ) whether and how often ipd-ma are used to perform subgroup analyses, and ) whether the methodology used for subgroup analyses differs between ipd-ma and meta-analyses of published data (map). methods: ipd-ma were identified in pubmed. a related-article search was used to identify map on the same objective. meta-analyses not performing subgroup analysis were excluded from further analyses. differences between ipd-ma and map were analysed, and reasons for discrepancies were described.

we recently developed a simple diagnostic rule (including history and physical findings plus d-dimer assay results) to safely exclude the presence of deep vein thrombosis (dvt) without the need for referral in primary care patients suspected of dvt. when applied to new patients, the performance of any (diagnostic or prognostic) prediction rule tends to be lower than expected based on the original study results. therefore, rules need to be tested for their generalizability. the aim was to determine the generalizability of the rule. in this cross-sectional study, primary care patients with suspicion of dvt were prospectively identified. the rule items were obtained for each patient, plus ultrasonography as the reference standard. the accuracy of the rule was quantified by its discriminative performance, sensitivity, specificity, negative predictive value, and negative likelihood ratio, with accompanying % confidence intervals. dvt could be safely excluded in % ( % in the original study) of the patients, without referral. none of these patients had dvt ( . % in the derivation population). in conclusion, the rule appears to be a safe diagnostic tool for excluding dvt in patients suspected of dvt in primary care; the accuracy metrics are sketched below.

abstract background: the relation between long-term exposure to very low environmental concentrations of asbestos and the incidence of mesothelioma contributes to insight into the dose-response relationship and informs public health policy. aim: to describe regional differences in the occurrence of mesothelioma in the netherlands in relation to the occurrence in the asbestos-polluted area around goor, and to determine whether the increased incidence of pleural mesothelioma among women in this area could be attributed to environmental exposure to asbestos. methods: mesothelioma cases were selected in the period - from the netherlands cancer register (n = ). for the women in the goor region (n = ), exposure to asbestos due to occupation, household or environment was verified from the medical files, the general practitioner and next-of-kin for cases. results: in goor the incidence of pleural mesothelioma among women was increased -fold compared with the netherlands, and among men -fold. of the additional cases among women, cases were attributed to the environmental asbestos pollution, and in cases this was the most likely cause. the average cumulative asbestos exposure was estimated at . fiber-years.

temporal trends and gender differences were investigated by random slope analysis. variance was expressed using the median odds ratio (mor). results: ohcs appeared to be more relevant than administrative areas for understanding physicians' propensity to follow prescription guidelines (mor_ohc = . and mor_aa = . ). conclusion and discussion: as expected, the intervention increased prevalence and decreased variance, but at the end of the observation period practice variation remained high.
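the dvt abstract above validates its diagnostic rule by sensitivity, specificity, negative predictive value and negative likelihood ratio against ultrasonography; a minimal sketch of these metrics from a 2×2 confusion matrix (counts are illustrative, not the study's):

    # rule result vs. ultrasonography reference (hypothetical counts)
    tp, fn = 45, 1     # dvt present: rule positive / rule negative
    fp, tn = 150, 220  # dvt absent:  rule positive / rule negative

    sens = tp / (tp + fn)          # sensitivity
    spec = tn / (tn + fp)          # specificity
    npv = tn / (tn + fn)           # negative predictive value
    lr_neg = (1 - sens) / spec     # negative likelihood ratio
    print(f"sens {sens:.2f}, spec {spec:.2f}, NPV {npv:.3f}, LR- {lr_neg:.2f}")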
these results may reflect inefficient therapeutic traditions, and suggest that more intensive interventions may be necessary to promote rational statin prescription; the mor, and the icc used in the following abstract, are sketched below.

abstract background: mortality rates in skåne, sweden, have decreased in recent years. whether this decline has been similar across different geographical areas has not been examined closely. objectives: we wanted to illustrate trends and geographical inequities in all-cause mortality between the municipalities of skåne, sweden, from to . we also aimed to explore the application of multilevel regression analysis (mlra) in our study, since it is a relatively new methodology for describing mortality rates. design and methods: we used linear mlra, with years at the first level and municipalities at the second, to model direct age-standardized rates. temporal trends were examined by random slope analysis. variance across time was expressed using the intra-class correlation (icc). results: the municipality level was very relevant for understanding temporal differences in mortality rates (icc = %). on average, mortality decreased along the study period, but this trend varied considerably between municipalities; geographical inequalities along the years were u-shaped, with the lowest variance in the s (var = ). conclusion: mortality has decreased in skåne, but municipality differences are increasing again. mlra is a useful technique for modelling mortality trends and variation among geographical areas.

abstract background: ozone has adverse health effects, but it is not clear who is most susceptible. objective: identification of individuals with increased ozone susceptibility. methods: daily visits for lower respiratory symptoms (lrs) in general practitioner (gp) offices in the north of the netherlands ( - , patients) were related to daily ozone levels in summer. ozone effects were estimated for patients with asthma, copd, atopic dermatitis, and cardiovascular diseases (cvd) and compared to effects in patients without these diseases. generalized additive models adjusting for trend, weekday, temperature, and pollen counts were used. results: the mean daily number of lrs visits in summer in the gp offices varied from . to . . the mean (sd) -hour maximum ozone level was . ( . ) µg/m³. rrs ( % ci) for a µg/m³ increase (from the th to the th percentile) in the mean of lag to of ozone for patients with/without disease are:

abstract asthma is a costly health condition; its economic effect is greater than that estimated for aids and tuberculosis together. following global initiative for asthma recommendations that require more data about the burden of asthma, we have determined the cost of this illness from - . an epidemiological approach based on population studies was used to estimate global as well as direct and indirect costs. data were obtained mainly from the national health ministry database, the national statistics institute of spain and the national health survey. the costs were averaged and adjusted to euros (€). we have found a global burden (including private medicine) of million €. indirect and direct costs account for , and , %. the largest components within direct costs were pharmaceutical ( . %), primary health care systems ( , %), hospital admissions ( . %) and hospital non-emergency ambulatory visits ( . %). within indirect costs, total cessation of work days ( . %), permanent labour incapacity ( . %) and early mortality ( . %) costs were the main components.
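two multilevel quantities recur in the abstracts above: the intra-class correlation (icc) of the skåne mortality study and the median odds ratio (mor) of the statin prescription study. in the usual formulation (our notation, with \sigma_u^2 the area-level random-effect variance):

    ICC = \frac{\sigma_u^2}{\sigma_u^2 + \sigma_e^2} \quad \text{(linear case; for logistic models } \sigma_e^2 = \pi^2/3 \text{ by convention)},

    MOR = \exp\!\left(\sqrt{2\sigma_u^2}\,\Phi^{-1}(0.75)\right) \approx \exp(0.954\,\sigma_u),

where \Phi^{-1} is the standard normal quantile function; mor = 1 indicates no between-area variation.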
pharmaceutical cost is the largest component, as in most studies from developed countries, followed by primary health care systems, unlike some reports that place hospital admissions in second place. finally, direct costs represent . % of the total health care expenditure.

abstract background: it is well known that fair phenotypical characteristics are a risk factor for cutaneous melanoma. the aim of our study was to investigate the analogous associations between phenotypical characteristics and uveal melanoma. design/methods: in our case-control study we compared incident uveal melanoma patients with population controls to evaluate the role of phenotypical characteristics like iris, hair and skin color and other risk factors in the pathogenesis of this tumor. a total of patients and controls matched on sex, age and region were interviewed. conditional logistic regression was used to calculate odds ratios (or) and % confidence intervals ( % ci). results: risk of uveal melanoma was increased among people with light iris color (or = . , % ci . - . ), and light skin color was slightly associated with an increased risk of uveal melanoma (or . , % ci . - . ). hair color, tanning ability, burning tendency and freckles as a child showed no increased risk. results of the combined analysis of eye and hair color, burning tendency and freckles showed that only light iris color was clearly associated with uveal melanoma risk. conclusion: among potential phenotypical risk factors, only light iris and skin color were identified as risk factors for uveal melanoma.

abstract background: between-study variation in estimates of the risk of hcv mother-to-child transmission (mtct) and associated risk factors may be due to methodological differences or changes in population characteristics over time. objective: to investigate the effect of sample size and time on risk factors for mtct of hcv. design and methods: heterogeneity was assessed before pooling data (a sketch of the q statistic follows below). logistic regression estimated odds ratios for risk factors. results: the three studies included mother-child pairs born between and , born between and , and between and . there was no evidence of heterogeneity of the estimates for maternal hcv/hiv co-infection and mode of delivery (q = . , p = . and q = . , p = . , respectively). in pooled analysis the proportion of hcv/hiv co-infected mothers significantly decreased from % before to % since (p< . ). the pooled adjusted odds ratios for maternal hcv/hiv co-infection and elective caesarean section delivery were . ( %ci . - . ), p< . and . ( %ci . - . ), p = . , respectively. there was no evidence that the effect of risk factors for mtct changed over time. conclusion: although certain risk factors have become less common, their effect on mtct of hcv has not changed substantially over time.

abstract background: the need to gain insight into prevailing eating patterns and their health effects is evident. objective: to identify dietary patterns and their relationship with total mortality in dutch older women. methods: principal components analysis on food groups was used to identify dietary patterns among , women ( - y) included in the dutch epic-elderly cohort (follow-up ≈ . y). mortality ratios for three major principal components were assessed using cox proportional hazards analysis.
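the hcv mother-to-child transmission abstract above tests between-study heterogeneity with a q statistic before pooling; a minimal sketch of cochran's q on the log-odds-ratio scale (the per-study estimates here are invented, since the abstract's values were lost):

    import numpy as np
    from scipy.stats import chi2

    log_or = np.log(np.array([1.8, 2.4, 2.1]))   # hypothetical per-study ORs
    se = np.array([0.30, 0.25, 0.40])            # hypothetical standard errors
    w = 1 / se**2                                # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)      # fixed-effect pooled log-OR
    q = np.sum(w * (log_or - pooled)**2)         # cochran's Q statistic
    p = chi2.sf(q, df=len(log_or) - 1)
    print(f"Q {q:.2f}, p {p:.3f}, pooled OR {np.exp(pooled):.2f}")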
results: the most relevant principal components were a 'mediterranean-like' pattern (high in vegetable oils, pasta/rice, sauces, fish, and wine), a 'traditional dutch dinner' pattern (high in meat, potatoes, vegetables, and alcoholic beverages) and a 'healthy traditional' pattern (high in vegetables, fruit, non-alcoholic drinks, dairy products, and potatoes). in person-years of follow-up, deaths occurred. independent of age, education, and other lifestyle factors, only the 'healthy traditional' pattern score was associated with a lower mortality rate; women in the highest tertile experienced a percent lower mortality risk. conclusion: from this study a healthy traditional dutch diet, rather than a mediterranean diet, appears beneficial for longevity and feasible for health promotion. this diet is comparable to other reported 'healthy' or 'prudent' diets that have been shown to be protective.

parents of (aged - ) and (aged - ) children were sent a questionnaire, as were adolescents (aged - ). to assess validity, generic outcome instruments were included (the infant toddler quality of life questionnaire (itqol) or the child health questionnaire (chq), and the euroqol- d). the response rate was - %. internal consistency of the hobq and boq scales was good (cronbach's alphas > . in all but two scales). test-retest results showed no differences in - % of scales. high correlations between hobq and boq scales and conceptually equivalent generic outcome instruments were found. the majority of hobq ( / ) and boq ( / ) scales showed significant differences between children with a long versus short length of stay. the dutch hobq and boq can be used to evaluate functional outcome after burns in children.

the study estimated caesarean section rates and odds ratios for caesarean section in association with maternal characteristics in both public and private sectors, and maternal mortality associated with mode of delivery in the public sector, adjusted for hypertension, other disorders, problems and complications, as well as maternal age. results: the caesarean section rate was . % in the public sector and . % in the private sector. the odds ratio for caesarean section was . ( %ci: . - . ) for women with or more years of education. the odds ratio for maternal mortality associated with caesarean section in the public sector was . ( %ci: . - . ). conclusion and discussion: são paulo presented high caesarean section rates. caesarean section compared to vaginal delivery in the public sector presented a higher risk of mortality, even when adjusted for hypertension, other disorders, problems and complications, as well as maternal age.

we show that serious bias in questionnaires can be revealed by bland-altman methods but may remain undetected by correlation coefficients. we refute the argument that correlation coefficients properly investigate whether questionnaires rank subjects sufficiently well. conclusions: the commonly used correlation approach can yield misleading conclusions in validation studies. a more frequent and proper use of the bland-altman methods would be desirable to improve epidemiological data quality; a numerical sketch follows below.

abstract screening performance relies on the quality and efficiency of protocols and guidelines for screening and follow-up. evidence of low attendance rates, over-screening of young women and low smear specificity gathered by the early s in the dutch cervical cancer screening program called for an improvement. several protocols and guidelines were redefined in , with emphasis on assuring that these would be adhered to.
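the validation abstract above argues that bland-altman methods reveal bias that correlation coefficients hide; a numerical sketch with simulated paired measurements carrying a deliberate proportional bias (all values invented):

    import numpy as np

    rng = np.random.default_rng(0)
    ref = rng.normal(50, 10, 200)                      # reference instrument
    quest = 1.3 * ref - 5 + rng.normal(0, 3, 200)      # questionnaire with proportional bias

    diff = quest - ref
    bias, sd = diff.mean(), diff.std(ddof=1)
    print(f"r = {np.corrcoef(quest, ref)[0, 1]:.3f}")  # stays high despite the bias
    print(f"bias {bias:.1f}, limits of agreement "
          f"{bias - 1.96 * sd:.1f} to {bias + 1.96 * sd:.1f}")

here the correlation coefficient remains close to 1 while the bland-altman bias and limits of agreement expose the systematic disagreement, which is the abstract's point.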
we assessed improvement since by changes in various indicators: coverage rates, follow-up compliance and number of smears. information on all cervix uteri tests in the netherlands registered until st march was retrieved from the nationwide registry of histo- and cytopathology (palga). the five-year coverage rate in the age group - years rose to %. the percentage of screened women in follow-up decreased from % to %. fourteen percent more women with abnormal smears were followed up, and the time spent in follow-up decreased. a % decrease in the annual number of smears made was observed, especially among young women. in conclusion, the changes in protocols and guidelines, and their implementation, have increased the coverage and efficiency of screening and decreased the screening-induced negative side effects. similar measures can be used to improve other mass screening programmes.

abstract background: it is common knowledge that in low-endemic countries the main transmission route of hepatitis b infection is sexual contact, while in high-endemic regions it is perinatal transmission and horizontal household transmission in early childhood. objectives: to gain insight into what determines the main transmission route in different regions. design and methods: we used a formula for the basic reproduction number r for hepatitis b in a population stratified by age and sexual activity to investigate under which conditions r > . using data extracted from the literature, we investigated how r depends on fertility rates, rates of horizontal childhood transmission and sexual partner change rates. results: we identified conditions on the mean offspring number and the transmission probabilities for which perinatal and horizontal childhood transmission alone ensure that r > . those transmission routes are then dominant, because of the high probability for children to become chronic carriers. sexual transmission dominates if fertility is too low to be the driving force of transmission. conclusion: in regions with high fertility rates hepatitis b can establish itself at a high level of prevalence, driven by perinatal and horizontal childhood transmission. therefore, demographic changes can influence hepatitis b transmission routes.

abstract background: the artificial oestrogen diethylstilboestrol is known to be fetotoxic. thus, intrauterine exposure to other artificial sex hormones may increase the risk of fetal death. objective: to study whether use of oral contraceptives months prior to or during pregnancy is associated with an increased risk of fetal death. design and methods: a cohort study of pregnant women who were recruited into the danish national birth cohort during the years - and interviewed about exposures during pregnancy, either during the first part of their pregnancy (n = ) or following a fetal loss (n = ). cox regression analyses with delayed entry were used to estimate the risk of fetal death (a sketch of this approach follows below). results: in total, ( . %) women took oral contraceptives during pregnancy. use of combined oestrogen and progesterone oral contraceptives (coc) or progesterone-only oral contraceptives (poc) during pregnancy was not associated with increased hazard ratios of fetal death compared to non-users, hr . ( % ci . - . ) and hr . ( % ci . - . ) respectively. neither use of coc nor of poc prior to pregnancy was associated with fetal death. conclusion: use of oral contraceptives months prior to conception or during pregnancy is not related to an increased risk of fetal death.
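the oral contraceptive abstract above uses cox regression with delayed entry: women enter the risk set only at the gestational week of recruitment. a minimal sketch with simulated data, assuming the python lifelines package (whose CoxPHFitter accepts an entry_col argument for left truncation); none of the values below come from the study:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(1)
    n = 300
    entry = rng.uniform(5, 15, n)              # gestational week at recruitment
    coc = rng.integers(0, 2, n)                # hypothetical exposure indicator
    stop = entry + rng.exponential(30, n)      # week of fetal death or term
    event = (stop < 41).astype(int)            # fetal death before week 41
    stop = np.minimum(stop, 41.0)              # censor at term

    df = pd.DataFrame({"entry": entry, "stop": stop, "event": event, "coc": coc})
    cph = CoxPHFitter()
    cph.fit(df, duration_col="stop", event_col="event", entry_col="entry")
    cph.print_summary()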
abstract background: few studies have been performed to assess whether water fluoridation reduces social inequalities among groups of different socioeconomic status, and none of them was conducted in developing countries. objectives: to assess socioeconomic differences between brazilian towns with and without water fluoridation, and to compare dental caries indices among socioeconomic strata in fluoridated and non-fluoridated areas. design and methods: a countrywide survey of oral health performed in - and comprising , children aged years provided information about dental caries indices in brazilian towns. socioeconomic indices, the coverage and the fluoride status of the water supply network of participating towns were also appraised. multivariate regression models were fitted. inequalities in dental outcomes were compared in towns with and without fluoridated tap water. results: better-off towns tended to present higher coverage by the water supply network and were more inclined to add fluoride. fluoridated tap water was associated with an overall improved profile of caries, concurrent with a markedly larger inequality in the distribution of dental disease. conclusion: suppressing inequalities in the distribution of dental caries requires expanded access to fluoridated tap water, a strategy that can be effective in fostering further reductions in caries indices.

objective: to investigate the role of family socioeconomic trajectories from childhood to adolescence on dental caries and associated behavioural factors. design and methods: a population-based birth cohort study was carried out in pelotas, brazil. a sample (n= ) of the population of subjects born in were dentally examined and interviewed at age . the dental caries index, care index, toothbrushing, flossing, and pattern of utilization of dental services were the outcomes. these measures were compared among four different family income trajectories. results: adolescents who were always poor showed, in general, a worse dental caries profile, whilst adolescents who were never poor had a better dental caries profile. adolescents who had moved from poverty in childhood to non-poverty in adolescence, and those who had moved from non-poverty in childhood to poverty in adolescence, had dental profiles similar to those who were always poor, except for the pattern of utilization of dental services, which was higher in the first group. conclusion: poverty in at least one stage of the lifespan has a harmful effect on dental caries, oral behaviours and utilization of dental services.

we assessed contextual and individual determinants of dental caries in the brazilian context. a country-wide survey of oral health informed the dental status of , twelve-year-old schoolchildren living in towns in . a multilevel model fitted the adjustment of untreated caries prevalence to individual (socio-demographic characteristics of examined children) and contextual (geographic characteristics of participating towns) covariates. being black (or = . ; % ci: . - . ), living in rural areas (or = . ; . - . ) and studying in public schools (or = . ; . - . ) increased the odds of having untreated decayed teeth. the multilevel model identified the fluoride status of water supplies (β = − . ), the proportion of households linked to the water network (β = − . ) and the human development index (β = − . ) as town-level covariates of caries experience.
better-off brazilian regions presented an improved profile of dental health, besides having a less unequal distribution of dental treatment needs between blacks and whites, rural and urban areas, and public and private schools. dental caries experience is prone to socio-demographic and geographic inequalities. monitoring contrasts in dental health outcomes is relevant for programming socially appropriate interventions aimed both at overall improvements and at targeting resources to population groups presenting higher levels of need.

abstract background: ultraviolet radiation (uvr) is the main cause of nonmelanoma skin cancer but has been hypothesised to protect against the development of prostate cancer (pc). if this is true, skin cancer patients should have a lower pc incidence than the general population. objectives: to study the incidence of pc after a diagnosis of skin cancer. design/methods: using the eindhoven cancer registry, a cohort of male skin cancer patients diagnosed since ( squamous cell carcinoma (scc), basal cell carcinoma (bcc) and melanoma (cm)) was followed up for incidence of invasive pc. observed incidence rates of pc amongst skin cancer patients were compared to those in the reference population, resulting in standardised incidence ratios (sir). results: scc (sir . ( %ci: . ; . )) and bcc (sir . ( %ci: . ; . )) showed a decreased incidence of pc; cm did not. patients with bccs occurring in the chronically sun-exposed head and neck area (sir . ( %ci: . ; . )) had significantly lower pc incidence rates. conclusion/discussion: although the numbers of scc and cm were too small to obtain unequivocal results, this study partly supports the hypothesis that uvr protects against pc, and also illustrates that cm patients differ from nmsc patients in several aspects.

abstract introduction: hypo- and hyperthyroidism have been associated with various symptoms and metabolic dysfunctions in men and women. the incidences of these diseases have been estimated in a cohort of middle-aged adults in france. methods: the su.vi.max (supplémentation en vitamines et minéraux antioxydants) cohort study included volunteers followed up for eight years since - . the incidence of hypo- and hyperthyroidism was estimated retrospectively from scheduled questionnaires and the data transmitted by the subjects during their follow-up. factors associated with incident cases were identified by cox proportional hazards models. results: among the subjects free of thyroid dysfunction at inclusion, incident cases were identified. after an average follow-up of . years, the incidence of hyper- and hypothyroidism was . % in men, . % in - year-old women, and . % in - year-old women. no associated factor was identified in men. in women, age and alcohol consumption (> grams/day) increased the risk of hypo- or hyperthyroidism, while a high urinary thiocyanate level in - appeared to be a protective factor. conclusion: the incidences of hypo- and hyperthyroidism observed in our study, as well as the associated risk factors found, are in agreement with the data of studies performed in other countries.

abstract background: lung cancer is the most frequent malignant neoplasm worldwide. in , the number of new lung cancer cases was estimated at . million, which represents over % of all new cases of neoplasm registered around the globe. it is also the leading cause of cancer deaths. objective: the objective of this paper is to provide a systematic review of lifestyle-related factors for lung cancer risk.
methods: data sources were medline from january to december , with search terms in the title field. search terms included lung cancer, tobacco smoke, education, diet, alcohol consumption and physical activity. book chapters, monographs, relevant news reports, and web material were also reviewed to find articles. results: the results of the literature review suggest that smoking is a major, unquestionable risk factor for lung cancer. exposure to environmental tobacco smoke (ets) and education could also play a role in the occurrence of the disease. diet, alcohol consumption and physical activity level are other important but less well-established determinants of lung cancer. conclusions: effective prevention programs against some of the lifestyle-related factors for lung cancer, especially smoking, must be developed to minimize potential health risks and prevent future health costs.

stedendriehoek twente and south (n = ), additional data (co-morbidity, complications after surgery and follow-up) were gathered. cox regression analyses were used. results: the proportion of resections declined from % in patients aged < years to % in patients aged >= years, whereas primary radiotherapy increased from % to %. in the two regions, patients ( %) underwent resection. co-morbid conditions did not influence the choice of therapy. % had complications. postoperative mortality was %. in multivariate analysis, only treatment had an independent effect. two-year survival was % for patients undergoing surgical resection and % for those receiving radiotherapy (p< . ). conclusion: the number of co-morbid conditions did not influence choice of treatment, postoperative complications, or survival in patients aged >= years with nsclc.

the epidemiology of oesophageal cancer has changed in recent decades. the incidence has increased sharply, mainly comprising men, adenocarcinoma and tumours of the lower third of the oesophagus. the eurocare study suggested large variation in survival between european countries, primarily related to early mortality. to study potential explanations, we compared data from the rotterdam and thames cancer registries. computer records from , patients diagnosed with oesophageal cancer in the period - were analysed by age, gender, histological type, tumour subsite, period and region. there was a large variation in resection rates between the two regions: % for rotterdam versus % for thames (p< . ). resection rates were higher for men, younger patients, adenocarcinoma and distal tumours. postoperative mortality (pom) was defined as death within days of surgery and was . % on average. pom increased with age, from . % for patients younger than years to . % for patients older than years. pom was significantly lower in high-volume hospitals (> operations per year): . % versus . % (p< . ). this study shows a large variation in treatment practice between the netherlands and the united kingdom. potential explanations will need to be studied in detail.

abstract russia has experienced a tremendous decline in life expectancy after the break-up of the ussr. surprisingly, infant mortality (im) has also been decreasing. less is known about the structure of im in different regions of russia. the official im data may be underestimated, partly due to misreporting of early neonatal deaths (end) as stillbirths (sb); an end/sb ratio considerably exceeding : indicates misreporting. we present the trends and structure of im in arkhangelsk oblast (ao), north-west russia, from to , as obtained from the regional statistical committee. im decreased from . to . per live births.
cause-specific death rates (per , ) decreased from to . for infectious diseases, from to for respiratory causes, from to for traumas, and from to for inborn abnormalities, but did not change for conditions of the perinatal period ( in both and ). the end/sb ratio increased from to . . in , im from infections and respiratory causes in the ao was much lower than in russia in general. the degree of misreporting of end as sb in the ao is lower than in russia in general. other potential sources of underestimation of im in russia will be discussed.

abstract background: epidemiological studies that investigated malocclusion and different physical aspects in adolescents are rare in the literature. objective: we studied the impact of malocclusion on adolescents' self-image, independent of other physical aspects. design and methods: a cross-sectional study nested in a cohort study was carried out in pelotas, brazil. a random sample of -year-old adolescents was selected. the world health organization ( ) criteria were used to define malocclusion. interviews about self-reported skin colour and appearance satisfaction were administered. the body mass index was calculated. gender, birth weight and socioeconomic characteristics were obtained from the birth phase of the cohort study. poisson regression models were fitted. results: the prevalence of moderate or severe malocclusion was . % [ %ci . ; . ] in the whole sample, with no significant difference between boys and girls. appearance dissatisfaction was significantly more frequent in girls ( . %) than in boys ( . %). a positive association between malocclusion and appearance dissatisfaction was observed only in girls, after adjusting for other physical and socioeconomic characteristics. conclusions: malocclusion influenced appearance dissatisfaction only in young women.

abstract background: factors for healthy aging with good functional capacity, and those which increase the risk of death and disability, need to be identified. objectives: we studied the prevalence of low functional capacity and its associations in a small city in southern brazil. design and methods: a population-based cross-sectional study was carried out with a random sample of elderly people. a home-administered questionnaire covering socioeconomic, demographic, housing and socioeconomic self-perception characteristics was applied. low functional capacity was defined as difficulty in performing or more activities, or inability to carry out of those activities, according to the scale proposed by rikli and jones. descriptive statistics, tests of association using the chi-square test, and multiple logistic regression analysis were performed.

abstract introduction: assessment of the spatial distribution of trichuriasis is important to evaluate sanitation conditions. our objective was to identify risk areas for trichuris trichiura infection. methods: a cross-sectional study was carried out in census tracts of duque de caxias county, rio de janeiro, brazil. collection and analysis of fecal specimens and a standardized questionnaire were carried out in order to evaluate socio-economic and sanitation conditions in a sample of , children between and years old. geostatistical techniques were used to identify risk areas for trichuriasis. results: the mean age of the studied population was . years; % were female and % male. the prevalence of trichuris trichiura in the sample was %. children whose mothers studied for years or less had an odds ratio (or) of .
compared with children whose mothers studied for more than years. children living in houses without a water supply had or = . compared to children living in houses with a water supply. the spatial analysis identified risk areas for infection. conclusion: the results show an association between socio-economic conditions and the proliferation of trichuris trichiura infection. the identification of risk areas can guide efficient actions to combat the disease.

abstract background: refugee life and diabetes mellitus can both affect health-related quality of life (hrqol). objective: to assess how both aspects influence the hrqol of diabetic refugees in the gaza strip. methods: overall, subjects filled in a self-administered questionnaire including the world health organization quality of life questionnaire (whoqol-bref) and some socio-demographic information. the sample consisted of three groups, frequency-matched for gender and age, with subjects each. the first group were refugees with diabetes mellitus, the second refugees without diabetes, and the third diabetes patients with no refugee history. the response rate was % on average. the global score, consisting of all four domains of the whoqol-bref, was dichotomised at the value of , and logistic regression was used for the analysis. results: crude odds ratios (or) for lower quality of life were . ( % ci . - . ) for diabetic refugees compared to diabetic non-refugees and . ( . - . ) compared to non-diabetic refugees. after adjusting for age, gender, education, employment, income status and number of persons depending on the respondents, the or was . ( . - . ) and . ( . - . ), respectively. additionally, adjusting for duration of diabetes and complications reduced the or to . ( . - . ) for diabetic refugees compared to diabetic non-refugees. conclusion: quality of life is highly reduced in refugees with diabetes.

abstract background: pesticides have a significant public health benefit by increasing the productivity of food production and decreasing disease. on the other hand, public concern has been raised about the potential health effects of exposure to pesticides on the developing fetus and child. objectives: to review the available literature to find epidemiological studies dealing with exposure to pesticides and children's health. design and methods: epidemiological studies were identified during a search of the literature databases. the following health effects were taken into account: adverse reproductive and developmental disorders, childhood cancer, neurodevelopmental effects and the role of pesticides as endocrine disrupters. results: pesticides were associated with a wide range of reproductive disorders. an association between exposure to pesticides and the risk of childhood cancer and neurodevelopmental effects was found in several studies. epidemiological studies have been limited by lack of specific pesticide exposure data, exposure assessment based on job title, and the small size of examined groups. conclusions: in the light of existing, although still limited, evidence of adverse effects of pesticide exposure, it is necessary to reduce exposure. the literature review suggests a great need to increase awareness among people who are occupationally or environmentally exposed to pesticides of the potential negative influence on their children.

in order to match local health policy more closely with the needs of citizens, the municipal health service utrecht started the project 'demand-orientated prevention policy'. one of the aims was to explore the needs of the utrecht citizens.
the local health survey from contained questions about needs for information and support with regard to disorders and lifestyle. do these questions about needs give different results compared to questions about the prevalence of health problems? in total, utrecht citizens aged to years returned the health questionnaire (response rate %). most needs were observed on subjects concerning overweight and mental problems, and needs were higher among women, moroccans, turks, people with low education and citizens of deprived areas. the prevalence of disorders and unhealthy lifestyles did not correlate well with the needs (majority of correlation coefficients < , ). most strikingly, % of the utrecht population were smokers and % excessive alcohol drinkers, while needs related to these topics were low. furthermore, higher needs among specific groups did not always correspond to higher prevalences of related health problems in these groups. these results show the importance of including questions about needs in a health survey, because they add information beyond questions about prevalences.

abstract background: recent studies have associated statin therapy with better outcomes in patients with pneumonia. because of an increased risk of pneumonia in patients with diabetes, we aimed to assess the effects of statin use on pneumonia occurrence in diabetic patients managed in primary care. methods: we performed a case-control study nested in , patients with diabetes. cases were defined as patients with a diagnosis of pneumonia. for each case, up to controls were matched by age, gender, practice, and index date. patients were classified as current statin users when the index date was between the start and end date of statin therapy. results: statins were currently used in . % of , cases and in . % of , controls (crude or: . , % ci . - . ). after adjusting for potential confounders, statin therapy was associated with a % reduction in pneumonia risk (adjusted or: . , % ci . - . ). the association was consistent among relevant subgroups (stroke, heart failure, and pulmonary diseases) and independent of age or use of other prescription drugs. conclusions: use of statins was significantly associated with reduced pneumonia risk in diabetic patients and may, apart from its lipid-lowering properties, be useful in the prevention of respiratory infections. a sketch of a matched-set analysis follows below.

abstract introduction: cigarette smoking is the most important risk factor for copd development; therefore, smoking cessation is the best preventive measure. aim: to determine the beneficial effect of smoking cessation on copd development. methods: the incidence of copd (gold stage >= ) was studied in smokers without copd who quit or continued smoking during years of follow-up. we performed logistic regression analyses on pairs of observations. correlations within a subject over time, and the time between successive surveys, were taken into account.

abstract objectives: to describe the prevalence and severity of dental caries in adolescents of the city of porto, portugal, and to assess socioeconomic and behavioral covariates of dental caries experience. methods: a sample of thirteen-year-old schoolchildren underwent dental examination. results from the dental examination were linked to anthropometric information and to data supplied by two structured questionnaires assessing nutritional factors, sociodemographic characteristics and behaviors related to health promotion.
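the nested case-control abstract above matches up to controls per case; matched sets of this kind, like those in the melanoma study earlier, are typically analysed with conditional logistic regression. a minimal sketch, assuming statsmodels' ConditionalLogit (present in recent statsmodels releases); the matched sets and exposure probabilities below are simulated, not the study's data:

    import numpy as np
    import pandas as pd
    from statsmodels.discrete.conditional_models import ConditionalLogit

    rng = np.random.default_rng(2)
    sets = np.repeat(np.arange(100), 5)        # 100 matched sets: 1 case + 4 controls
    case = np.tile([1, 0, 0, 0, 0], 100)       # outcome within each set
    # hypothetical exposure: statin use less common among cases
    statin = rng.binomial(1, np.where(case == 1, 0.20, 0.30))

    df = pd.DataFrame({"set_id": sets, "case": case, "statin": statin})
    model = ConditionalLogit(df["case"], df[["statin"]], groups=df["set_id"])
    res = model.fit()
    print(np.exp(res.params))                  # matched odds ratio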
dental caries was appraised in terms of the dmft index and two dichotomous outcomes: one assessing the prevalence of dental caries (dmft = ), the other assessing the prevalence of a high level of dental caries (dmft = ). results: consuming cola-derived soft drinks two or more times per week, attending a public school, being a girl and having parents with low educational attainment were identified as risk factors both for having dental caries and for having a high level of dental caries. conclusion: the improvement of oral health status in the portuguese context demands the implementation of policies to reduce the frequency of sugar intake, and could benefit from an overall and longstanding expansion of education in society.

abstract background: migrant mortality does not conform to a single pattern of convergence towards rates in the host population. to better understand how migrant mortality will develop, there is a need to further investigate how the underlying behavioural determinants change following migration. objective: we studied whether behavioural risk factors among two generations of migrants converge towards the behaviour of the host population. design and methods: cross-sectional interview data were used, including moroccan and turkish migrants aged - . questions were asked about smoking, alcohol consumption, physical inactivity and weight/height. age-adjusted prevalence rates among first- and second-generation migrants were compared with prevalence rates in the host population. results: converging trends were found for smoking, physical inactivity and overweight. for example, we found a higher prevalence of physical inactivity in first-generation turkish women as compared to ethnic dutch women (or = . ( . - . )), whereas among the second generation no differences were found (or = . ( . - . )). however, this trend was not found in all subgroups. additionally, alcohol consumption remained low in all subgroups and did not converge. conclusion and discussion: behavioural risk factors in two generations of migrants seem to converge towards the prevalence rates in the host population, although some groups and risk factors showed a deviating pattern.

abstract background/relevance: arm-neck-shoulder complaints are common in general practice. referral guidelines exist only for shoulder complaints and epicondylitis; besides diagnosis, other factors can be important. objective: which factors are associated with referral to physiotherapy or to a specialist in non-traumatic arm-neck-shoulder complaints in general practice during the first consultation? design/methods: general practitioners (gps) recruited consulters with new arm, neck or shoulder complaints. data on complaint, patient and gp characteristics and on management were collected. the diagnosis was categorised into: shoulder-specific, epicondylitis, other specific, or non-specific. multilevel analyses (with adjustment for the treating gp) were executed in proc genmod to assess associated variables (p< . ). results: during the first consultation, % were referred for physiotherapy and % for specialist care. indicators of referral to physiotherapy were: long duration of the complaint, a recurrent complaint, and a gp located in a little or non-urbanised area, while having a shoulder-specific or other specific diagnosis was negatively associated. indicators of referral to specialist care were: having another specific diagnosis, long duration of the complaint, musculoskeletal co-morbidity, functional limitations, and consulting a less experienced gp.
conclusion/discussion: most referrals were to physiotherapy and only a minority to specialist care. mainly diagnosis and other complaint variables indicate 'who goes where'; besides these, gp characteristics can play a role.

abstract background: the ruhr area has for years been a synonym for a megalopolis of heavy industry with a high population density. presently, % of the population of the state of north rhine-westphalia (nrw) live there, i.e. more than five million people. objectives: for the first time, social and health indicators of nrw's health indicator set were brought together for this megalopolis area. design and methods: new standard tables were constructed for the central area of 'ruhr-city', including seven cities with more than inhabitants/km², and the peripheral zone with eight districts and cities. for the pilot phase, four socio-demographic and four health indicators were recalculated. comparability of the figures was achieved by age standardization. the results obtained were submitted to a significance test by identifying % confidence intervals. results: the centre of 'ruhr-city' is characterised by elderly, unemployed, foreign, low-income citizens living closely together. infant mortality lies above nrw's average; male life expectancy is . years lower and female life expectancy . years lower than life expectancy in nrw (without 'ruhr-city'). several avoidable-death rates in the ruhr area are significantly higher than the nrw average. specific intervention strategies are required to improve the health status in 'ruhr-city'.

abstract background: general practitioners (gps) have a fundamental role to play in tobacco control, since they reach a high percentage of the target population. objectives: to evaluate specific strategies to enhance the promotion of smoking cessation in general practice. design and methods: in a cluster-randomized trial, medical practices were randomized following a 2 × 2 factorial design. patients aged - years who smoked at least cigarettes per day (irrespective of their intention to stop smoking) were recruited. the intervention included (ti) the provision of a two-hour physician group training in smoking cessation methods plus direct physician payments for every participant not smoking months after recruitment; and (tm) provision of the same training plus direct participant reimbursements for pharmacy costs associated with nicotine replacement therapy or bupropion treatment. results: in the mixed logistic regression model, no effect was identified for intervention ti (odds ratio (or) = . , % confidence interval (ci) . - . ), but intervention tm strongly increased the odds of cessation (or = . , % ci . - . ). conclusion and discussion: the cost-free provision of effective medication, along with improved training opportunities for gps, may be an effective measure to enhance smoking cessation promotion in general practice.

in europe, little research on international comparison of health surveys has been accomplished, despite a growing interest in this field. smoking prevalence was chosen to explore data comparability. we aim to illustrate methodological problems encountered when comparing data from health surveys and to investigate international variations in smoking behaviour. we examined a sample of . individuals aged and over, from six european health surveys performed in - . problems met during the comparisons are described. we took the example of current smoking as an indicator allowing a valid comparison of the prevalences.
the differences in age and sex distribution between countries were adjusted for through direct standardisation (a sketch follows below). additionally, multivariate analysis will assess variations in current smoking between countries, controlling for sex, age, and educational level. methodological problems concern the comparability of socioeconomic variables. the percentage of current smokers varies from % to %. smoking patterns observed by age group, sex and educational level are similar, although rates per country differ. further results will determine whether the variations in smoking related to socioeconomic status are alike. this international comparison of health surveys highlights methodological problems encountered when comparing data from several countries. furthermore, variations in smoking may call for adaptations in public health programs.

from research it appears that adolescent alcohol use in the achterhoek is much higher than in the rest of the netherlands and rapidly increasing. excessive alcohol use has consequences for health and society. parents play an important role in preventing excessive adolescent alcohol use, but are not aware of the problem and its consequences. for this reason the municipalities in the achterhoek launched an alcohol moderation programme, starting with a regional media campaign to increase problem awareness among parents. the objective of this study is to assess the impact of this media campaign in the achterhoek. three successive independent cross-sectional telephone surveys, interviewing approximately respondents each, will be conducted before, during and after the campaign. respondents will be questioned on knowledge and awareness of excessive adolescent alcohol use, its consequences and the role child raising can play. the reach and appreciation of the different activities of the campaign will also be investigated. results of the surveys before and during the implementation will be known by may . with these first findings, the unawareness of the problem among parents and, partly, the reach and appreciation of the campaign can be assessed.

abstract background: obesity is a growing problem, increasingly so in children and adolescents. overweight is partly 'programmed' during pregnancy, but few comprehensive studies have looked prospectively into the changes in body composition and metabolic factors from birth. objectives: the aim of the population-based birth-cohort study within gecko is to study the etiology and prognosis of overweight and the metabolic syndrome during childhood. design and methods: gecko drenthe will be a population-based observational birth-cohort study, which includes all children born from april to april in drenthe, one of the northern provinces of the netherlands. during the first year of life, the study includes repeated questionnaires, extensive anthropometric measurements and blood measurements at birth (cord blood) and at the age of eleven months. results: the number of babies born in the drenthe province is about . per year. the results from a feasibility study conducted in february will be presented. conclusion: gecko drenthe is a unique project that will contribute to the understanding of the development of obesity in childhood and its tracking into adulthood. this will enable early identification of children at risk and opens the way for timely and tailored preventive interventions.

abstract background: tunisia is facing an epidemiologic transition with the extension of chronic diseases that share common risk factors.
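the survey-comparison abstract above adjusts smoking prevalences for age and sex by direct standardisation; a minimal sketch with invented stratum prevalences and an arbitrary shared standard population:

    import numpy as np

    # hypothetical age-band prevalences of current smoking in one country
    prev = np.array([0.34, 0.31, 0.24, 0.12])
    # shared standard population weights for the same age bands (hypothetical)
    std_pop = np.array([220, 260, 300, 220], dtype=float)

    adj = np.sum(prev * std_pop) / std_pop.sum()   # directly standardised prevalence
    print(f"age-standardised smoking prevalence: {adj:.1%}")

applying the same standard population to each country removes differences that are due only to age and sex structure, which is what makes the prevalences comparable.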
obesity is a leading risk factor and frequently occurs in early life. objective: to study the prevalence and the risk factors of obesity and overweight among urban schoolchildren in sousse, tunisia. methods: a cross-sectional study of a tunisian sample of schoolchildren aged between and years living in the urban area of sousse, tunisia; a representative sample of schoolchildren was selected by a multistage cluster sampling procedure. measurements included weight and height, blood pressure measured by an electronic device, and fasting blood lipids. questionnaire assessment was used for family history of cardiovascular disease, smoking habits, physical activity and diet.

abstract background: quality of life (qol) measurements are acknowledged as very important in the evaluation of health care. objectives: we studied the validity and the reliability of the hungarian version of the whoqol-bref among people living in small settlements. method: a questionnaire-based cross-sectional study was conducted in a representative sample (n = ) of persons aged years and over in south-east hungary, in . data were analysed with spss . . internal consistency was evaluated using cronbach's alpha; two-tailed t-tests were used for comparison of the qol scores amongst the various groups; convergent validity was assessed by spearman coefficients. results: the male:female ratio was . to . %, and the average age . (sd: . ) years. the domain scores were . (sd: . ) for the physical, . (sd: . ) for the psychological, . (sd: . ) for the social, and . (sd: . ) for the environmental domain. the cronbach's alpha values ranged from . to . across domains. the whoqol-bref seemed suitable for distinguishing between healthy and unhealthy people. the scores for all domains correlated with self-evaluated health and overall quality of life (p< . ). conclusion: our study indicated that the whoqol-bref provides a valid, reasonable and useful determination of the qol of people living in hungarian villages.

abstract background: beyond being a cardiovascular disease itself, arterial hypertension (aht) is the main cardiovascular risk factor. in spain, aht prevalence reaches %, the third highest after germany and finland in the percentage affected. despite its high associated morbidity and mortality, aht is a modifiable factor. the objective of treatment (pharmacological and lifestyle modifications) in hypertensive patients is not only to reduce blood pressure to optimum levels but also to treat all modifiable vascular risk factors. objective: evaluation of the economic impact of direct costs due to aht (cie -mc - ) in spain in , according to autonomous region. design and methods: descriptive cross-sectional study of cost estimation for the period january to december in spain, according to autonomous region. the study is based on data available from the national health ministry database and the national statistics institute of spain. results: the national health service assigned million € to aht treatment. , % of the total cost is due to pharmaceutical service expenses, , % to primary health care and , % to hospital admissions. conclusion and discussion: the costs generated by aht are mainly due to the pharmaceutical service. the cost distribution varies according to geographical region.

abstract background: over the last decades, national guidelines have recommended less extensive surgical treatment for low-stage cervical cancer and chemoradiotherapy for high-stage cervical cancer.
objectives: to describe changes and variation in treatment and survival in cervical cancer in the regions of the comprehensive cancer centre stedendriehoek twente (cccst) and south (cccs) in the netherlands. design and methods: newly diagnosed cervical cancer cases were selected from both cancer registries in the period - . patient characteristics, tumour characteristics, treatment and follow-up data were collected from the medical records. results: in figo stages ia-ib the percentage of hysterectomies decreased from % in - to % in - (p<. ), and survival improved comparing - with - . figo stages iii-ivb mostly received radiotherapy only ( %). no differences in survival between years of diagnosis were found. in the cccs region more chemoradiotherapy was given in these stages ( % versus % in the cccst region over the whole period). conclusion and discussion:. abstract background: the reason for the increased prevalence of depression in type diabetes (dm ) is unknown. objective: we investigated whether depression is associated with metabolic dysregulation or whether depression is rather a consequence of having dm . methods: baseline data of the utrecht health project were used. subjects with cardiovascular disease were excluded. , subjects (age: . ± ) were classified into four mutually exclusive categories: normal fasting plasma glucose (fpg < . mmol/l), impaired fpg (>= . and < . mmol/l), undiagnosed dm (fpg >= . mmol/l), and diagnosed dm . depression was defined as either a score of or more on the depression subscale of the symptom checklist or use of antidepressants. results: subjects with impaired fasting glucose and undiagnosed dm had no increased prevalence of depression. diagnosed dm patients had an increased prevalence of depression (or = . ( . - . )) after adjustment for gender, age, body mass index, smoking, alcohol consumption, physical activity, education level and number of chronic diseases. conclusions: our findings suggest that depression is not related to disturbed glucose homeostasis. the increased risk of depression in diagnosed dm only suggests that depression is rather a consequence of the psychosocial burden of diabetes. abstract background: breast-conserving surgery (bcs) followed by radiotherapy (bcs-rt) is a safe treatment option for the large majority of patients with tumours less than cm. aim: the use of bcs and bcs-rt in pt (≤ cm) and pt tumours ( - cm) was investigated in the netherlands in the period and . methods: from the netherlands cancer registry, patients were selected with invasive pt (≤ . cm) or pt ( . - . cm) tumours, without metastasis at time of diagnosis. trends in the use of bcs and rt after bcs were determined for different age groups and regions. results: in the period - , pt tumours and , pt tumours were diagnosed. the %bcs in pt tumours increased in all age groups; it remained lowest in patients years and older ( % in ). in pt tumours a decrease was observed in patients years and older (from % to % in ). in both pt and pt tumours the %bcs-rt increased in patients years and older, to % and % respectively. between regions and hospitals large differences were seen in %bcs and %bcs-rt. conclusion: multidisciplinary treatment planning, based on specific guidelines, and patient education could increase the use of bcs combined with rt in all age groups. abstract this is a follow-up study on the adverse health effects associated with pesticide exposure among cut-flower farmers.
survey questionnaires and detailed physical and laboratory examinations were administered to and respondents, respectively, to determine pesticide exposure, work and safety practices, and cholinesterase levels. results showed that pesticide application was the activity most frequently associated with pesticide exposure, and entry was mostly ocular and dermal. the majority of those exposed were symptomatic. on physical examination, . % of those examined were found to have an abnormal peak expiratory flow rate (pefr). eighty-two percent had abnormal temperature, followed by abnormal general survey findings (e.g. cardiorespiratory distress). % had cholinesterase levels below the mean value of . δph/hour, and . % exhibited a more than % depression in the level of rbc cholinesterase. certain hematological parameters were also abnormal, namely hemoglobin, hematocrit, and eosinophil count. using pearson's r, factors strongly associated with illness due to pesticides included using a contaminated piece of fabric to wipe off sweat (p = . ) and reusing pesticide containers to store water (p = . ). the most important adverse effect among those exposed was an abnormal cholinesterase level, which confirms earlier studies on the effect of pesticides on the body. objectives: this paired study was performed to determine the rate of spontaneous abortions in female workers exposed to organic solvents in the wood-processing industry. methods: the level of organic solvents in the workplaces was assessed over a -year period. exposed female workers from the wood-processing industry were examined. occupational and non-occupational data associated with their fertility were obtained using a standard computerized epidemiological questionnaire. the reference group consisted of female workers not exposed to hypo-fertilizing agents, residing in the same locality. the rate of spontaneous abortions was evaluated in both groups as an epidemiological fertility indicator. results: within the studied period, organic solvent levels exceeded the maximum admissible concentrations several times over in all workplaces. long-term exposure to organic solvents caused a significant increase in the rate of spontaneous abortions compared to the reference group (p< . ). the majority of abortions ( %) occurred in the first trimester of pregnancy. conclusions: long-term exposure to organic solvents may lower fertility in female workers through spontaneous abortion. it is advisable to reduce organic solvent levels in the air of all workplaces, and to remove pregnant women from work involving exposure to organic solvents. abstract introduction: rio de janeiro city (rj) is experiencing rapid population ageing with changes in morbidity and mortality. cardiovascular diseases are the first cause of death in the elderly population. more than half of ischemic heart disease (ihd) cases occur in aged people (> years old). objective: to describe the spatial distribution of ihd mortality in the elderly population in rj and its associations with socio-demographic variables. methods: data were gathered from the mortality information system of the ministry of health and the demographic census of the brazilian institute for geography and statistics. the geographic distributions of the standardized coefficient of mortality due to ihd and of socio-demographic variables, by district, were analyzed in arcgis . . spatial autocorrelation of ihd was assessed by the moran and geary indices.
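before the methods continue, a minimal sketch of moran's i, the spatial autocorrelation index just named; the district values and binary contiguity matrix below are toy inputs, not the study's data:

# minimal sketch of moran's i for spatial autocorrelation
# (toy district values and a hypothetical contiguity matrix)
import numpy as np

x = np.array([10.0, 12.0, 9.0, 30.0, 28.0])   # e.g. district mortality rates
w = np.array([                                 # w[i, j] = 1 if districts i, j are neighbours
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

n = len(x)
z = x - x.mean()
# moran's i = (n / sum of weights) * (weighted cross-products / total variance)
i_moran = (n / w.sum()) * (z @ w @ z) / (z @ z)
print(round(i_moran, 3))   # > 0 suggests spatial clustering, < 0 dispersion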
a conditional autoregressive model was used to evaluate the association between ihd and socio-demographic variables. results: associations were found between ihd mortality and income, educational level, family type, and ownership of a computer, videocassette recorder and microwave oven. conclusion: spatial analysis of ihd mortality and of the influence of socio-demographic factors is fundamental to support more efficient public policies for the prevention and control of this important health problem. abstract purpose: to evaluate the prognostic impact of isolated loco-regional recurrences on metastatic progression among women treated for invasive stage i or ii breast cancer (within phase iii trials concerning the optimal management of breast cancer). patients and methods: the study population consisted of , women who underwent primary surgery for early stage breast cancer, enrolled in eortc trials , , or , treated by breast conservation ( %) or mastectomy ( %), with long follow-up (median: . , range: . - . ). data were analysed in a multi-state model using multivariate cox regression models, including a state-dependent covariate. results: after the occurrence of a loco-regional recurrence, a positive nodal status at baseline is a significant prognostic risk factor for distant metastases. the effects of young age at diagnosis and larger tumour size become less significant after the occurrence of a loco-regional recurrence. the presence of a loco-regional recurrence is in itself a significant prognostic risk factor for subsequent distant metastases. the time to the loco-regional recurrence is not a significant prognostic factor. conclusion: the presence of local recurrence is an important risk factor for outcome in patients with early breast cancer. abstract background: the relationship between antral follicles and ovarian reserve tests (ort) to determine ovarian response in ivf has been extensively studied. we studied the role of follicle size distribution in the response on the various orts in a large group of subfertile patients. methods: in a prospective cohort study, female patients were included if they had regular ovulatory cycles, subfertility for > months, ≥ ovary and ≥ patent ovarian tube. antral follicles were counted by ultrasound and blood was collected for fsh, including a clomiphene challenge test (ccct), inhibin b, and estradiol before and after administration of puregon® (efort test). statistical analysis was performed using spss . for windows. results: of eligible patients, participated. mean age was . years and mean duration of subfertility was . months. age, baseline fsh, ccct and efort correlated with the number of small follicles ( - mm) but not with large follicles ( - mm). regression analysis confirmed that the number of small follicles and average follicle size contributed to ovarian response after correction for age, while large follicles did not. conclusion: small antral follicles are responsible for the hormonal response in ort and may be suitable to predict ovarian response in ivf. abstract background: dengue epidemics account annually for several million cases and deaths worldwide. the high endemic level of dengue fever and its hemorrhagic form (dhf) correlates with extensive household infestation by aedes aegypti and human infection by multiple viral serotypes. objective: to describe dengue incidence evolutionary patterns and spatial distribution in brazil.
methods: this is a review study that analyzed serial case reports registered from until . results: defined epidemic waves followed the introduction of each serotype (den to ), and the reduction in susceptible people possibly accounted for the downward case frequency. conclusions and discussion: an incremental expansion of affected areas and increasing occurrence of dhf with high lethality were noted in recent years. meanwhile, efforts based solely on chemical vector control have been insufficient. moreover, some evidence demonstrated that educational actions do not permanently modify population habits. in this regard it was stated that while a vaccine is not available, further dengue control will depend on results gathered from basic interdisciplinary research and intervention evaluation studies, integrating environmental changes, community participation and education, epidemiological and virological surveillance, and strategic technological innovations aimed at stopping transmission. abstract background: patient participation in treatment decisions can have positive effects on patient satisfaction, compliance and health outcomes. objectives: study objectives were to examine attitudes regarding participation in decision-making among psoriasis patients and to evaluate the effect of a decision aid for discussing treatment options. methods: a 'quasi-experiment' was conducted in a large dermatological hospital in italy: a questionnaire evaluating the decision-making process and knowledge of treatments was self-completed by a consecutive sample of psoriasis patients after routine clinical practice and by a second sample of patients exposed to a decision board. results: in routine clinical practice . % of patients wanted to be involved in treatment decisions, . % wanted to leave decisions entirely to the doctor and . % preferred making decisions alone. . % and . % of the control and decision-board groups had a good knowledge level. in multivariate analysis, good knowledge of treatments increased the likelihood of preferring an active role (or = . ; %ci . - . ; p = . ). the decision board only marginally improved patient knowledge and doctor-patient communication. conclusion and discussion: in conclusion, a large proportion of patients want to participate in decision-making, but insufficient knowledge can represent a barrier. further research is needed to develop effective instruments for improving patient knowledge and participation. abstract background: the only available means of controlling infections caused by the dengue virus is the elimination of its principal urban vector (aedes aegypti). brazil has been implementing programs to fight the mosquito; however, since the s the geographic range of infestation has been expanding steadily, resulting in increased circulation of the virus. objective: to evaluate the effectiveness of the dengue control actions that have been implemented in the city of salvador. methods: prospective design; serologic surveys were made in a sample population of residents of urban 'sentinel areas'. the seroprevalence and one-year seroincidence of dengue were estimated, and the relationship between intensity of viral circulation, standard of living and vector density was analysed. results: there were high overall seroprevalence ( . %) and seroincidence ( . %) for the circulating serotypes (denv- and denv- ). the effectiveness of control measures appears to be low, and although a preventable fraction of .
% was found, the incidence of infections in these areas was still very high ( . %). conclusions and discussion: it is necessary to revise the technical and operational strategies of the infection control program in order to attain infestation levels that are low enough to interrupt the circulation of the dengue virus. this study investigates the difference in cancer mortality risks between migrant groups and the native dutch population, and determines the extent of convergence of cancer mortality risks according to migrants' generation, age at migration and duration of residence. data were obtained from the national population & mortality registries in the period - ( person-years, cancer deaths). we used poisson regression to compare the cancer mortality rates of migrants originating from turkey, morocco, surinam, and the netherlands antilles/aruba to the rates for the native dutch. results: all-cancer mortality among all migrant groups combined was significantly lower compared to the native dutch population (rr = . , ci: . - . ). mortality rates for all cancers combined were higher among second-generation migrants, among those with younger age at migration, and among those with longer duration of residence. this effect was particularly pronounced in lung cancer and colorectal cancer. for most cancers, mortality among second-generation migrants remained lower compared to the native dutch population. surinamese migrants showed the most consistent pattern of convergence of cancer mortality. conclusions: the generally low risk of cancer mortality for migrants showed some degree of convergence, but the cancer mortality rates have not yet reached the levels of the native dutch population. abstract background: legionnaires' disease (ld) is a notifiable disease in the netherlands. ld cases are reported to the authorities for national surveillance. in addition, a national ld outbreak detection program (odp) is in place in the netherlands. these two registration systems have their own information exchange processes and databases. objectives: surveillance systems are known to suffer from incompleteness of reported data. the co-existence of two databases creates the opportunity to investigate accuracy and reliability in a national surveillance system. design and methods: a comparison was made between the outcome 'diagnosis by culture' in both databases and the physical presence of legionella strains in laboratories for patients. accuracy is described using the parameters sensitivity and correctness. for reliability, cohen's kappa coefficient (κ) was applied. results: accuracy and reliability were significantly higher in the odp database, but not optimal in either database when compared to data in laboratories. the odp database was moderately reliable (κ = . ; %ci . - . ), the surveillance database slightly reliable (κ = . ; %ci . - . ). conclusion: our findings suggest that diagnostic notification data concerning ld patients are most accurate and reliable when derived directly from diagnostic laboratories. discussion: involvement of data-entry persons in outbreak detection results in higher reliability. unreliable data may have considerable consequences during outbreaks of ld. the aim of the study was to investigate medical students' plans to emigrate, quantify the scale of migration in the near future and build a profile of the possible emigrants. data were collected with an anonymous questionnaire delivered to a random group of medical students (katowice).
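as a brief aside on the legionella database comparison above: cohen's kappa is straightforward to compute from a cross-classification of the two systems' verdicts. a minimal sketch with hypothetical counts:

# sketch of cohen's kappa, the agreement statistic used above to compare
# the two legionella registration databases (hypothetical yes/no counts)
import numpy as np

# rows: database 1 ('culture confirmed' yes/no); columns: database 2
table = np.array([[40, 10],
                  [5, 45]], dtype=float)

n = table.sum()
po = np.trace(table) / n                    # observed agreement
pe = (table.sum(1) @ table.sum(0)) / n**2   # agreement expected by chance
kappa = (po - pe) / (1 - pe)
print(round(kappa, 2))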
we used binary logistic regression and multivariate analysis to identify differences between the groups preferring to go abroad or to stay in poland. % of respondents confirmed that they are considering emigration; . % of them declared they are very likely to move and a further . % are certain. % of those considering emigration confirmed having taken practical steps towards moving. binary logistic regression showed no difference between people who were certain or almost certain to go and those who were not considering going for most baseline characteristics: hometown size, socio-economic background and a family tradition of the medical profession (p = . ). only the mean of marks differentiated the two groups: . for those who will definitely stay vs . for students who will definitely move (p = . ). the multivariate analysis gave similar results. conclusions: most of the students consider emigration, and declared willingness to leave is more frequent among those with worse marks. abstract background: the incidence of falls among elderly people living at home varies from % to %. falls induce loss of self-sufficiency and increase mortality and morbidity. objectives: to evaluate falls incidence and risk factors in a group of general practice elderly patients. design: prospective cohort study with -year follow-up. methods: eight hundred elderly people (> years) were visited by practitioners for a baseline assessment. information on current pathologies and previous falls in the last six months was collected. functional status was evaluated using the short portable mental status questionnaire, the geriatric depression scale, activities of daily living (adl), instrumental activities of daily living, and the total mobility tinetti score. falls were monitored through phone interviews at and months. data were analyzed through logistic regression. results: twenty-eight percent of the elderly fell during the whole period. sixty percent of falls were not reported to the practitioner. independent predictors of falls were the adl score (adl < : or = . ; % ci . - . ) and previous falls (or = . ; % ci . - . ). the tinetti score was significantly associated with falls only in univariate analysis. conclusions: practitioners can play a key role in identifying at-risk subjects and managing prevention interventions. falls monitoring and a continuous practice of comprehensive geriatric assessment should be encouraged. abstract background: oral health represents an important indicator of health status. socio-economic barriers to oral care among the elderly are considerable. in the lazio region, a public health program for oral rehabilitation was implemented to offer dentures to elderly people with social security. objectives: to compare hospitalisation between elderly people enrolled in the program and a control group. design and methods: for each elderly person enrolled in the program living in rome, three controls matched for sex and age were selected from the rome municipality register. hospital admissions in the two-year period before enrollment were traced by record linkage with the hospital discharge register. results: in total, , admissions occurred. the annual admission rate was per elderly among controls and in the program group (incidence rate ratio: irr = . ; % ci . - . ). when comparing diagnosis-specific rates, significant excesses were observed in the program group for respiratory diseases ( abstract background: herpes simplex virus (hsv) types and are important viral sexually transmitted infections (sti) and can cause significant morbidity.
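as an illustrative aside to the falls cohort above, the logistic regression step (odds ratios with confidence intervals for adl impairment and previous falls) might look like the sketch below; the data are synthetic and the statsmodels usage is an assumption, not the study's code:

# sketch of a logistic regression for fall risk (synthetic data):
# fall (yes/no) modelled on an adl indicator and previous falls
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 800
adl_low = rng.binomial(1, 0.3, n)       # 1 = impaired activities of daily living
prev_falls = rng.binomial(1, 0.25, n)   # 1 = fall in the previous six months
lp = -1.5 + 0.9 * adl_low + 1.1 * prev_falls
fell = rng.binomial(1, 1 / (1 + np.exp(-lp)))

X = sm.add_constant(np.column_stack([adl_low, prev_falls]))
res = sm.Logit(fell, X).fit(disp=0)
print(np.round(np.exp(res.params[1:]), 2))       # odds ratios
print(np.round(np.exp(res.conf_int()[1:]), 2))   # 95% confidence intervals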
in the netherlands, data about prevalence in the general population are limited. objective: to describe the seroprevalence of hsv- and hsv- and associated factors in the netherlands. design and methods: a population-based serum bank survey in the netherlands with an age-stratified sample was used. antibodies against hsv- and hsv- were determined using elisa. a questionnaire was used to obtain information on demographics and risk factors. logistic regression adjusting for age, and full multiple regression, were used to establish risk factors. results: questionnaires and sera were available for persons. both hsv- and hsv- seroprevalence increased with age. the seroprevalence of hsv- was . % and was associated with, among other factors, female sex and being divorced. the seroprevalence of hsv- was . % and was associated with, among other factors, being divorced and a history of sti. conclusion: seroprevalence is higher in certain groups such as teenagers, women, divorced people and those with a history of sti. prevention should be focused on those groups. more research is needed on prevention methods that can be used in the netherlands, such as screening or vaccination. abstract background: frequently, statistically significant prognostic factors are reported with suggestions that patient management should be modified. however, the clinical relevance of such factors is rarely quantified. objectives: we evaluated the accuracy of predicting the need for invasive treatment among bph patients managed conservatively with alpha-blockers. methods: information on eight prognostic factors was collected from patients treated with alpha-blockers. using proportional hazards model regression coefficients, a risk score for retreatment was calculated for each patient. the analyses were repeated on groups of patients sampled from the original case series, and these bootstrap results were compared to the original results. results: three significant predictors of retreatment were identified. the % of patients with the highest risk score had an -month risk of retreatment of only %. fewer than half of the bootstrap samples yielded the same three significant prognostic factors. the % of patients with the highest risk score in each of the samples experienced a highly variable risk of retreatment of % to %. conclusions: four of the five high-risk patients would be overtreated with a modified policy. internal validation procedures may warn against the invalid translation of statistical significance into clinical relevance. background: e-cadherin expression is frequently lost in human epithelium-derived cancers, including bladder cancer. for two genetic polymorphisms in the region of the e-cadherin gene (cdh ) promoter, reduced transcription has been reported: a c/a single nucleotide polymorphism (snp) and a g/ga snp at bp and bp, respectively, upstream of the transcriptional start site. objective: we studied the association between both polymorphisms and the risk of bladder cancer. methods: patients with bladder cancer and population controls were genotyped for the c/a and the g/ga promoter polymorphisms using pcr-rflp. results: a significantly increased risk of bladder cancer was found for a allele carriers compared to the homozygous c allele carriers (or . ; % ci: . - . ). the risk for the heterozygous and homozygous a allele carriers was increased approximately . - and -fold, respectively. the association was stronger for more aggressive tumors. we did not find any association between the g/ga snp and bladder cancer.
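the bootstrap internal validation described in the bph abstract above (refitting the model on resampled datasets and counting how often each predictor stays significant) can be sketched as follows; the data are synthetic, and a logistic model stands in here for the proportional hazards model used in the study:

# sketch of bootstrap checking of predictor stability (synthetic data)
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))   # three candidate predictors; the third is noise
logit_p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
y = rng.binomial(1, logit_p)

hits = np.zeros(3)
n_boot = 200
for _ in range(n_boot):
    idx = rng.integers(0, n, n)   # resample rows with replacement
    res = sm.Logit(y[idx], sm.add_constant(X[idx])).fit(disp=0)
    hits += (res.pvalues[1:] < 0.05)   # skip the intercept

print("fraction of bootstrap samples in which each predictor stays significant:",
      np.round(hits / n_boot, 2))

predictors that are 'significant' in the original fit but survive in only a minority of resamples are exactly the kind of unstable finding the abstract warns against translating into clinical policy.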
conclusion: this study indicates that the c/a snp in the e-cadherin gene promoter is a low-penetrance susceptibility factor for bladder cancer. background: health problems, whether somatic, psychiatric or accident-related, cluster within persons. the study of allostatic load as a unifying theme (salut) aims to identify risk factors that are shared by different pathologies and that could explain this clustering. studying patients with repetitive injuries might help to identify risk factors that are shared by accident-related and other health problems. objectives: to study injury characteristics in repetitive-injury (ri) patients as compared to single-injury (si) patients. methods: the presented study included ri patients and si patients. medical records provided information about injury characteristics, and patients were asked about possible causes and context. results: ri patients suffered significantly more from contusions than si patients ( % vs %). regarding the context, si patients were significantly more often injured in traffic ( % vs %). in both groups most injuries were attributed to 'mere bad luck' (ri %, si %), closely followed by 'clumsiness or inattention' (ri %, si %). ri patients pointed out aggression or substance misuse significantly more often than si patients ( % vs %). conclusion: ri patients seem to show more 'at risk' behavior (i.e. aggression, impulsivity), which will increase their risk of psychiatric health problems. abstract background: breastfeeding may have a protective effect on infant eczema. bias as a result of methodological problems may explain the controversial scientific evidence. objective: we studied the association between duration of breastfeeding and eczema while taking into account the possible influence of reverse causation. design and methods: information on breastfeeding, determinants and outcomes at age one year was collected by repeated questionnaires in mother-infant pairs participating in the koala study ( cases of eczema). to avoid reverse causation, a period-specific analysis was performed in which only 'at risk' infants were considered. results: no statistically significant association between the duration of breastfeeding (> weeks versus formula feeding) and the risk of eczema in the first year was found (or . , %ci . - . ). after excluding from the analysis all breastfed infants with symptoms of eczema reported in the same period as breastfeeding, again no statistically significant association was found for the duration of breastfeeding and eczema between and months (or . , %ci . - . ). conclusion and discussion: in conclusion, no evidence was found for a protective effect of breastfeeding duration on eczema. this conclusion was strengthened by the period-specific analysis, which made the influence of reverse causation unlikely. abstract background: the internet can be used to meet health information needs, provide social support, and deliver health services. the anonymity of the internet offers benefits for people with mental health problems, who often feel stigmatized when seeking help from traditional sources. objectives: to identify the prevalence of internet use for physical and mental health information among the uk population; to investigate the relationship of internet use with current psychological status; and to identify the relative importance of the internet as a source of mental health information. design and methods: self-completion questionnaire survey of a random sample of the uk population (n = ).
questions included demographic characteristics, health status (general health questionnaire), and use of the internet and other information sources. results: % of internet users had sought health information online, and % had sought mental health information. use was higher among those with current psychological problems. only % of respondents identified the internet as one of the most accurate sources of mental health information, compared with % who identified it as one of the sources they would use. conclusions: health service providers must recognise the increasing use of the internet in healthcare, even though it is not always regarded as accurate. abstract old age is a significant risk factor for falls. approximately % of people older than fall at least once a year, mostly in their own homes. resulting hip fractures cause at least partial immobility in - % of the affected persons, and almost % are sent to nursing homes afterwards. in mecklenburg-west pomerania, ageing of the population proceeds particularly fast. to prevent falls and the loss of independent living, a falls prevention module was integrated in a community-based study conducted in cooperation with a general practitioner (gp). in the patients' homes, a trained nurse performed a test to estimate each patient's falls risk and provided consultation on how to reduce it, e.g. eyesight checks and gymnastic exercise. in the feasibility study, ( %) out of patients (average age years) agreed to a visit of each room of their homes in search of tripping hazards. the evaluation was assisted by standardized, computer-based documentation. the prevention module received considerable acceptance despite the extensive home visiting. within one month the patients started to put advice into practice; during the nurse's first follow-up visits, three patients reported, for example, having started gymnastics and/or wearing stable shoes. abstract background: the emergence of drug-resistant m. tuberculosis (mtb) is an increasing problem in both developed and developing countries. objectives: investigation of isoniazid (inh) and rifampin (rif) susceptibility patterns among mtb isolates from patients. design and methods: in total, sputum samples were collected. smears were prepared for acid-fast staining, and all the isolates were identified as m. tuberculosis by preliminary cultural and biochemical tests. the isolates were examined for inh and rifampin resistance using the conventional mic method and a pcr technique with primers specific for inh (katg) and rifampin (rpob) resistance. results: seven isolates were resistant to both inh and rifampin by the mic method. by pcr, and of the above-mentioned strains showed resistance to inh and rifampin, respectively. conclusion: the prevalence of drug resistance is . % in the region of study, which is significant. discussion: the conventional mic method, despite being time-consuming, is more sensitive for evaluating drug resistance; however, pcr, as a rapid and sensitive technique, is recommended in addition to the conventional method to obtain quicker results for starting treatment and disease control management. abstract background and objectives: we studied in the literature which design characteristics of food frequency questionnaires (ffqs) influence their validity for assessing both absolute and relative levels of energy intake in adults with western food habits, and for ranking subjects according to these intakes. this information is required for harmonizing ffqs for multi-centre studies.
design and methods: we performed a review of studies investigating the validity or reproducibility of ffqs, published since . the included studies validated ffqs against doubly labeled water (for energy expenditure) as a gold standard, or against food records or -hour recalls for assessing relative validity (for energy intake). the design characteristics we studied were the number of food items, the reference period, the administration mode, and the inclusion of portion-size questions. results and conclusion: for this review we included articles representing the validation of questionnaires. three questionnaires were validated against dlw, ten against urinary n and against -hour recalls or food records. in conclusion, a positive linear relationship (r = . , p< . ) was observed between the number of items on the ffq and the reported mean energy intake. details about the influence of other design characteristics on validity will be discussed at the conference. abstract background: high ethanol intake may increase the risk of lung cancer. objectives: to examine the association of ethanol intake with lung cancer in epic. design and methods: information on baseline and past alcohol consumption, lifetime tobacco smoking, diet, and anthropometrics of , participants was collected between and . cox proportional hazards regression was used to examine the association of ethanol intake at recruitment ( cases) and mean lifelong ethanol intake ( cases) with lung cancer. results: non-consumers at recruitment had a higher lung cancer risk than low consumers ( . - . g/day) [hr = . , % ci . - . ]. lung cancer risk was lower for moderate ethanol intake at recruitment ( . - . g/day) compared with low intake (hr = . , % ci . - . ); no association was seen for higher intake. compared with lifelong low consumers, lifelong non-consumers did not have a higher lung cancer risk (hr = . , % ci . - . ), but lifelong moderate consumers had a lower risk (hr = . , % ci: . - . ). lung cancer risk tended to increase with increasing lifelong ethanol intake (≥ vs. . - . g/day: hr = . , % ci: . - . ). conclusion: while lung cancer risk was lower for moderate compared with low ethanol intake in this study, high lifelong ethanol intake might increase the risk. abstract background: one of the hypotheses to explain the increasing prevalence of atopic diseases (eczema, allergy and asthma) is an imbalance between dietary intake of omega- and omega- fatty acids. objectives: we evaluated the role of perinatal fatty acid (fa) supply from mother to child in the early development of atopy. design and methods: the fa composition of breast milk was used as a marker of maternal fa intake and of placental and lactational fa supply. breast milk was sampled months postpartum from mother-infant pairs in the koala birth cohort study, the netherlands. the infants were followed for atopic symptoms (repeated questionnaires on eczema and wheezing) and sensitisation at age (specific serum ige against major allergens). multivariate logistic regression analysis was used to adjust for confounding factors. results: high levels of omega- long-chain polyunsaturated fas were associated with a lower incidence of eczema in the first year (odds ratio for the highest vs lowest quintile . , % confidence interval . - . ; trend over quintiles p = . ). wheeze and sensitisation were not associated with breast milk fa composition. conclusion and discussion: the results support the omega- / hypothesis.
we suggest that anti-inflammatory activity of omega- eicosanoid mediators is involved, but not allergic sensitisation. abstract background: acute myocardial infarction (ami) is among the main causes of death in italy and is characterized by high fatality associated with a fast course of the disease. consequently, timeliness and appropriateness of the first treatment are paramount for a positive recovery. objectives: to investigate differences among italian regions in first treatment of ami and in-hospital deaths. design and methods: following the theoretical care pathway (from onset of ami to hospitalization and recovery or death), regional in-hospital deaths are decomposed into the contributions of attack rate, hospitalization and in-hospital fatality. hospital discharge, death and population data are provided by official statistics. results: generally, in northern and central regions there is an excess of observed in-hospital deaths, while the opposite occurs in southern regions. conclusion: in northern and central regions the decomposition method suggests a more frequent and severe illness, generally accompanied by a higher availability of hospitals; exceptions are lombardia and lazio, where some inefficiencies in the hospital system are highlighted. in most southern regions the decomposition confirms a less frequent and less severe illness; exceptions are campania and sicilia, where only the less severe patients reach the hospital and then recover, while the others die before reaching the hospital. abstract background: atherosclerotic lesions have typical histological and histochemical compositions at different stages of their natural history. the more advanced atherosclerotic lesions contain calcification. objective: we examined the prevalence of and associations between calcification in the coronary arteries, aortic arch and carotid arteries assessed by multislice computed tomography (msct). methods: this study was part of the rotterdam study, a population-based study of subjects aged years and over. calcification was measured and quantified in subjects. correlations were computed using spearman's correlation coefficient. results: the prevalence of calcification increased with age throughout the vascular bed. in subjects aged and over, up to % of men had calcification in the coronary arteries and up to % of women had calcification in the aortic arch. in men, the strongest correlation was found between calcification in the aortic arch and the carotid arteries (r = . , p< . ). in women, the correlations were somewhat weaker; the strongest was found between calcification in the coronary arteries and the carotid arteries (r = . , p< . ). conclusion and discussion: in conclusion, the prevalence of calcification was generally high and increased with age. the study confirms the presence of strong correlations between atherosclerosis in different vessel beds. abstract background: health status deteriorates with age and can be affected by the transition from active work to retirement. objective: to assess the effect of retirement on age-related deterioration of health. methods: secondary analysis of the german health survey (bundesgesundheitssurvey) and the california health interview survey (chis). subjective health was assessed by a single question regarding the respondent's health status, from = excellent to = poor. locally weighted regression was used for exploratory analysis and b-splines for effect estimation in regression models.
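a minimal sketch of the locally weighted regression (lowess) step just mentioned, run on synthetic age versus subjective-health data; the statsmodels smoother is an assumption, not the survey's own tooling:

# sketch of exploratory lowess smoothing on synthetic data with a
# 'two-segment' pattern: slow decline, then faster decline after ~60
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
age = np.sort(rng.uniform(30, 80, 400))
health = (2.0 + 0.01 * age + 0.04 * np.clip(age - 60, 0, None)
          + rng.normal(0, 0.3, 400))   # higher = worse subjective health

smoothed = lowess(health, age, frac=0.3)   # returns columns: sorted x, fitted y
print(smoothed[:5])

an exploratory smoother like this is what suggests candidate change points, which can then be modelled explicitly, e.g. with the b-splines named above.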
results: subjective health decreased in an obviously non-linear manner with age. in both cases the decrease could be reasonably approximated by two linear segments; however, the pattern differed between the german and california samples. in germany, the change point of the slope describing deterioration of health was located at . abstract objective: to assess the effectiveness of physiotherapy compared to general practitioners' care alone in patients with acute sciatica. design, setting and patients: a randomised clinical trial in primary care with a -month follow-up period. patients with acute sciatica (recruited - ) were randomised into two groups: ) the intervention group received physiotherapy (active exercises), and ) the control group received general practitioners' care only. main outcome measures: the primary outcome was patients' global perceived effect. secondary outcomes were severity of leg and back pain, severity of disability, general health and absence from work. the outcomes were measured at , , and weeks after randomisation. results: at months follow-up, % of the intervention group and % of the control group reported improvement (rr . ; % ci . to . ). at months follow-up, % of the intervention group and % of the control group reported improvement (rr . ; % ci . to . ). no significant differences in secondary outcomes were found at short-term or long-term follow-up. conclusion: at months follow-up, evidence was found that physiotherapy added to general practitioners' care is more effective in the treatment of patients with acute sciatica than general practitioners' care alone. abstract background: only little is known about the epidemiology of skin melanoma in the baltic states. objectives: the aim of this study was to provide insights into the epidemiology of skin melanoma in lithuania by analyzing population-based incidence and mortality time trends and relative survival. methods: we calculated age-standardized incidence and mortality rates (cases per , ) using the european standard population and calculated period estimates of relative survival. for the period - , % of all registered cases were checked by reviews of the medical charts. results: about % of the cases of the period - were reported to the cancer registry, indicating a high quality of cancer registration of skin melanoma in lithuania. the incidence rates increased from (men: . , women: . ) to (men: . , women: . ). mortality rates increased from (men: . , women: . ) to (men: . , women: . ). -year relative survival rates among men were % lower than among women. the overall difference in survival is mainly due to a more favorable survival among women aged - years. conclusions: overall prognosis is less favorable among men, most likely due to diagnoses at later stages. abstract background: multifactorial diseases share many risk factors, genetic as well as environmental. to investigate the unresolved issues on the etiology of and individual susceptibility to multifactorial diseases, the research focus must move from single determinant-outcome relations to modification of universal risk factors. objectives: the aim of the lifelines project is to study universal risk factors and their modifiers for multifactorial diseases. modifiers can be categorized into factors that determine the effect of the studied risk factor (e.g. gene expression), those that determine the expression of the studied outcome (e.g. previous disease), and generic factors that determine the baseline risk for multifactorial diseases (e.g. age). design and methods: lifelines is carried out in a representative sample of . participants from the northern provinces of the netherlands. apart from questionnaires and clinical measurements, a biobank is constructed (blood, urine, dna). lifelines will employ a three-generation family design (proband design with relatives), which has statistical advantages, enables unique possibilities to study social characteristics, and offers practical benefits. conclusion: lifelines will contribute to the understanding of how disease-overriding risk factors are modified to influence the individual susceptibility to multifactorial diseases, not only at one stage of life but cumulatively over time: the lifeline. abstract background: obesity-related mortality is a major public health problem, but few studies have been conducted on severely obese individuals. objectives: we assessed long-term mortality in treatment-seeking, severely obese persons. design and methods: we enrolled persons in six centres for obesity treatment in four italian regions, with body mass index (bmi) at first visit ≥ kg/m² and age ≥ . after exclusion of duplicate registrations and persons with missing personal or clinical data, persons were followed up; as ( . %) could not be traced, persons ( men, women) were retained for analysis. results: there were ( men, women) deaths; the standardized mortality ratios (smrs) and % confidence intervals were ( - ) among men and ( - ) among women. mortality increased with increasing bmi, but the trend was not monotonic in men. lower smrs were observed among persons recruited more recently. excess mortality was inversely related to age attained at follow-up. conclusions and discussion: the harmful long-term potential of severe obesity that we documented confirms observations from studies carried out in different nutritional contexts. the decrease in mortality among the most recently recruited persons may reflect better treatment of obesity and of its complications. abstract background: in finland, every cancer patient should have equal access to high-quality care provided by the public sector. therefore no regional differences in survival should be observed.
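as an aside on the melanoma abstract above, direct age standardisation against the european standard population reduces to a weighted sum of age-specific rates. the sketch below uses hypothetical counts and coarse illustrative weights rather than the full standard-population age bands:

# sketch of a directly age-standardized incidence rate per 100,000
# (hypothetical case counts and person-years; illustrative weights only -
# a real analysis would use the full european standard population bands)
counts = {"0-29": 2, "30-49": 10, "50-69": 35, "70+": 25}          # cases
pyears = {"0-29": 40000, "30-49": 30000, "50-69": 20000, "70+": 10000}
esp_w  = {"0-29": 0.43, "30-49": 0.28, "50-69": 0.20, "70+": 0.09}  # sums to 1

asr = sum((counts[a] / pyears[a]) * esp_w[a] for a in counts) * 100000
print(round(asr, 1), "cases per 100,000 (age-standardized)")

because both populations are weighted to the same standard, rates from different years or countries become directly comparable, which is what makes the reported time trends meaningful.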
objectives: the aim of the study was to find any regional differences in survival, and to investigate whether possible differences could be explained, e.g., by differences in the distributions of prognostic factors. design and methods: the study material consisted of , patients diagnosed in to with cancer at one of the major primary sites. the common closing date was dec. . finland was divided into five university hospital regions. stage, age at diagnosis and sex were used as prognostic factors. the relative survival rates for the calendar period window - were tabulated using the period method and modelled. results: survival differences between the regions were not significant for most primary sites. for some sites, the differences disappeared in the modelling phase after adjusting for the prognostic factors. for a few of the primary sites (e.g., carcinoma of the ovary), regional differences remained after modelling. conclusion: we were able to highlight certain regional survival differences. ways to improve the equity of cancer care will be considered in collaboration with the oncological community. abstract background: the prevalence of cardiovascular disease (cvd) is extremely high in dialysis patients. disordered mineral metabolism, including hyperphosphatemia and hypercalcaemia, contributes to the development of cvd in these patients. objectives: to assess associations between plasma calcium, phosphorus and calcium-phosphorus product levels and the risk of cvd-related hospitalization in incident dialysis patients. design and methods: in necosad, a prospective multi-centre cohort study in the netherlands, we included consecutive patients new on haemodialysis or peritoneal dialysis between and . risks were estimated using adjusted time-dependent cox regression modeling. results: mean age was ± years, % were male, and % were treated with haemodialysis. cvd was the cause of hospitalization in haemodialysis patients ( % of hospitalizations) and in peritoneal dialysis patients ( %). the most common cardiovascular morbidities were peripheral vascular disease and coronary artery disease in both patient groups. in haemodialysis patients the risk of cvd-related hospitalization increased with elevated plasma calcium (hazard ratio: . ; % ci: . to . ) and calcium-phosphorus product levels ( . ; % ci: . to . ). in peritoneal dialysis patients, we observed similar effects that were not statistically significant. conclusion: tight control of plasma calcium and calcium-phosphorus product levels might prevent cvd-related hospitalizations in dialysis patients. abstract background: nurses are at health risk due to the nature of their work. an analysis of morbidity among nurses was conducted to provide insight concerning the relationship between their occupational exposure and health response. methods: self-reported medical history was collected from an israeli female nurses cohort (n = , aged + years) and their siblings (n = , age-matched ± years) using a structured questionnaire. to compare disease occurrence between the two groups we used chi-square tests, and hazard ratios (hr) were calculated by cox regression to account for age of onset. results: cardiovascular diseases were more frequent among the nurses compared to the controls: heart diseases . % vs. . %, p = . (hr = . , p = . ); hypertension . % vs. . %, p<. (hr = . , p = . ). the frequency of hyperlipidemia was . % among the nurses and only . % among the controls (hr = . , p = . ).
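the adjusted time-dependent cox regression mentioned in the dialysis abstract above requires the data in a long "counting process" format, with one row per interval during which covariates are constant. a minimal sketch, assuming the lifelines package and entirely hypothetical column names and values:

# sketch of a time-dependent cox model (hypothetical data; lifelines assumed)
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# each row covers an interval [start, stop) with constant covariate values
df = pd.DataFrame({
    "id":      [1, 1, 2, 2, 3],
    "start":   [0, 6, 0, 6, 0],
    "stop":    [6, 14, 6, 9, 12],
    "calcium": [2.3, 2.6, 2.4, 2.7, 2.2],   # plasma calcium, updated per visit
    "event":   [0, 1, 0, 1, 0],             # cvd-related hospitalization
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()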
for the following chronic diseases, occurrence was significantly higher among the nurses and the hrs were significantly greater than : thyroid, hr = . ; liver, hr = . . total cancer and diabetes rates were similar in the two groups (hr ≈ ). conclusions: the results suggest an association between working as a nurse and the existence of risk factors for cardiovascular diseases. the specific work-related determinants should be further evaluated. abstract background: early referral (er) to a nephrologist and arteriovenous fistulae as first vascular access (va) reduce negative outcomes in chronic dialysis patients (cdp). objectives: to evaluate the effect of nephrologist referral timing and type of first va on mortality. design and methods: prospective cohort study of incident cdp notified to the lazio dialysis registry (italy) in - . late referral (lr) was defined as a patient not referred to a nephrologist within months before starting dialysis. we dichotomized va as fistulae versus catheters. to estimate mortality hazard ratios (hr), a multivariate cox model was fitted. results: we observed . % lr subjects and . % catheters as first va; the proportion of catheters was . % vs. . % (p< . ) for lr and er, respectively. we found a higher mortality hr for patients with a catheter as first va both for er (hr = . ; %c.i. = . - . ) and lr (hr = . ; %c.i. = . - . ); the interaction between referral and va was slightly significant (p = . ). conclusions: the originality of our study was to investigate the influence of nephrologist referral timing and va on cdp mortality using an area-based population registry: we found that a catheter as first va has an independent effect on mortality and modifies the effect of referral timing on this outcome. abstract patients with idiopathic venous thrombosis (vt) without known genetic risk factors but with a positive family history might carry yet unknown genetic defects. to determine the role of unknown hereditary factors in unexplained vt, we calculated the risk associated with family history. in the multiple environmental and genetic assessment of risk factors for vt (mega) study, a large population-based case-control study, we collected blood samples and questionnaires on acquired risk factors (surgery, immobilisation, malignancy, pregnancy and hormone use) and family history from patients and control subjects. overall, a positive family history was associated with an increased risk of vt (or ( % ci): . ( . - . )), especially in the absence of acquired risk factors (or ( % ci): . ( . - . )). among participants without acquired factors but with a positive family history, prothrombotic defects (factor v leiden, prothrombin a, protein c or protein s deficiency) were identified in out of ( %) patients compared to out of ( %) control subjects. after excluding participants with acquired or prothrombotic defects, family history persisted as a risk factor (or ( % ci): . ( . - . )). in conclusion, a substantial fraction of thrombotic events is unexplained. family history remains an important predictor of vt. abstract background: alcohol may have a beneficial effect on coronary heart disease (chd) through elevation of high-density lipoprotein cholesterol (hdlc) or other alterations in blood lipids. data on alcohol consumption and blood lipids in coronary patients are scarce. objectives: to assess whether alcohol consumption and intake of specific types of beverages are associated with blood lipids in older subjects with chd.
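as a worked aside, the odds ratios with confidence intervals reported in case-control abstracts such as the mega study above can be reproduced from a 2x2 table with the woolf (log) method; the counts below are hypothetical:

# worked sketch of an odds ratio with a 95% confidence interval from a
# 2x2 table (hypothetical counts)
import math

a, b = 120, 380   # cases: exposed (e.g. positive family history), unexposed
c, d = 60, 440    # controls: exposed, unexposed

or_ = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(or)
lo = math.exp(math.log(or_) - 1.96 * se_log_or)
hi = math.exp(math.log(or_) + 1.96 * se_log_or)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")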
design and methods: blood lipids (total cholesterol, hdlc, ldl cholesterol, triglycerides) were measured in myocardial infarction patients aged - years ( % male), as part of the alpha omega trial. intake of alcoholic beverages, total ethanol and macro- and micronutrients was assessed by food-frequency questionnaire. results: seventy percent of the subjects used lipid-lowering medication. mean total cholesterol was . mmol/l and hdlc was . mmol/l. in men but not in women, ethanol intake was positively associated with hdlc (difference of . mmol/l for ≥ g/d vs. g/d, p = . ) after adjustment for diet, lifestyle, and chd risk factors. liquor consumption was also weakly positively associated with hdlc in men (p = . ). conclusion and discussion: moderate alcohol consumption may elevate hdlc in (treated) myocardial infarction patients. this is probably due to ethanol and not to other beneficial substances in alcoholic beverages. abstract objective: early detection and diagnosis of silicosis among dust-exposed workers is based mainly on the presence of rounded opacities on radiographs. it is thus important to examine how reliable the radiographic findings are in comparison to pathological findings. methods: a systematic literature search via medline was conducted. the validity of silicosis detection and its influence on risk estimation in epidemiology was evaluated in a sensitivity analysis. results: studies comparing radiographic and pathological findings of silicosis were identified. the sensitivity of radiographic diagnosis of silicosis (ilo / ) varied between % and %, and specificity between % and %. under the realistic assumption of silicosis prevalence between % and % in dust-exposed workers, % to % of silicosis identified may be falsely diagnosed. the sensitivity analysis indicates that invalid diagnostics alone may lead to the finding of an increased risk of lung cancer among patients with silicosis. it may also lead to findings of % to % of radiographic silicosis even when there is no case of silicosis. however, the risk of silicosis could also be underestimated if the prevalence of silicosis exceeds %. conclusion: epidemiological studies based on patients with silicosis should be interpreted with caution. abstract introduction: epidemics of dengue occurring in various countries have stimulated investigators to seek innovative ways of improving current knowledge on the issue. objective: to identify the characteristics of spatial-temporal diffusion of the first dengue epidemic in a major brazilian city (salvador, bahia). methods: notified cases of dengue in salvador in were georeferenced according to census sector (cs) and by epidemiological week. kernel density estimation was used to identify the spatial diffusion pattern. results: of the cs in the city, cases of dengue were registered in ( %). the spatial distribution showed that practically the entire city had been affected by the virus, with a greater concentration of cases in the western region, comprising cs of high population density and predominantly horizontal dwellings. conclusion and discussion: the pattern found showed characteristics of a contagious diffusion process. it was possible to identify the epicenter of the epidemic from which propagation initiated. the speed of progression suggested that even if a rapid intervention was initiated to reduce the vector population, it would probably have little effect in reducing the incidence of the disease.
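a minimal sketch of the kernel density estimation step used in the dengue diffusion analysis above, run on synthetic case coordinates; scipy's gaussian kde is assumed and the points are made up, not the salvador data:

# sketch of kernel density estimation over georeferenced case points
# (synthetic coordinates: a dense western cluster plus background noise)
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
west = rng.normal(loc=[-38.52, -12.98], scale=0.01, size=(300, 2))
noise = rng.uniform([-38.60, -13.05], [-38.40, -12.90], size=(100, 2))
cases = np.vstack([west, noise])

kde = gaussian_kde(cases.T)   # expects shape (n_dims, n_points)
grid = np.array([[-38.52, -12.98], [-38.45, -12.95]]).T
print(kde(grid))              # higher density near the cluster 'epicentre'

evaluating the fitted density over a regular grid and mapping it week by week is what lets a diffusion pattern, and an epicentre, be read off visually.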
this finding confirms the need for new studies to develop novel technology for prevention of this disease. abstract background: knowing the size of hidden populations of drug users in a community is important for planning and evaluating public health interventions. objectives: the aim of this study was to estimate the prevalence of opiate and cocaine users in the liguria region by using the covariate capture-recapture method applied to four data sources. methods: we performed a cross-sectional study in the resident population aged - years ( . people at census). during , individual cases identified as primary opiate or cocaine users were flagged by four sources (drug dependence services, social services at prefectures, therapeutic communities, hospital discharges). poisson regression models were fitted, adjusting for dependence among sources and for heterogeneity in catchability among categories of the two examined covariates: age ( - and - years) and gender. results: the prevalence of opiate and cocaine users was . % ( % c.i. . - . %) and . % ( % c.i. . - . %), respectively. conclusions: the estimated prevalence of opiate and cocaine users is consistent with that found in inner london: . % and . % respectively (hickman m., ; hope v.d., ). the covariate capture-recapture method applied to four data sources allowed identification of a large cocaine-using population and proved appropriate for determining hidden populations of drug users. abstract background: in - we performed a population-based diabetes screening programme. objectives: to investigate whether the yield of screening is influenced by gp and practice characteristics. methods: a questionnaire containing items on the gp (age, gender, employment, specialty in diabetes, applying insulin therapy) and the practice (setting, location, number of patients from ethnic minority groups, specific diabetes clinic, involvement of practice assistant and practice nurse in diabetes care, cooperation with a diabetes nurse) was sent to general practitioners (gps) in practices in the southwestern region of the netherlands. multiple linear regression analysis was performed. the outcome measure was the ratio of screen-detected to known diabetic patients per practice (sdm/kdm). results: sdm/kdm was independently associated with higher age of the gp (regression coefficient . ; % confidence interval . to . ), urban location (- . ; - . to - . ) and involvement of the practice assistant in diabetes care ( . ; . to . ). conclusion: a lower yield of screening, presumably reflecting a lower prevalence of undiagnosed diabetes, was found in practices of younger gps and in urban practices. a lower yield was associated neither with an appropriate practice organization nor with a gp specialty in diabetes. background: in recent years, increased incidence rates for childhood cancer have been reported from industrialized countries. these findings were discussed controversially, because increases could be caused by changes in potential risk factors. objectives: the question is: are observed increasing rates due to actual changes in incidence rates, or are they mainly caused by changes in registration practice or artefacts? methods: for europe, data from the accis project (pooled data from european population-based cancer registries, performed at iarc, lyon; responsible: e.
abstract background: in - we performed a population-based diabetes screening programme. objectives: to investigate whether the yield of screening is influenced by gp and practice characteristics. methods: a questionnaire containing items on the gp (age, gender, employment, specialty in diabetes, applying insulin therapy) and the practice (setting, location, number of patients from ethnic minority groups, specific diabetes clinic, involvement of practice assistant and practice nurse in diabetes care, cooperation with a diabetes nurse) was sent to general practitioners (gps) in practices in the southwestern region of the netherlands. multiple linear regression analysis was performed. the outcome measure was the ratio of screen-detected diabetic patients to known diabetic patients per practice (sdm/kdm). results: sdm/kdm was independently associated with higher age of the gp (regression coefficient . ; % confidence interval . to . ), urban location (−. ; −. to −. ) and involvement of the practice assistant in diabetes care ( . ; . to . ). conclusion: a lower yield of screening, presumably reflecting a lower prevalence of undiagnosed diabetes, was found in practices of younger gps and in urban practices. a lower yield was not associated with practice organization nor with a specialty of the gp in diabetes. session: posters session : july presentation: poster. background: in recent years, increased incidence rates for childhood cancer have been reported from industrialized countries. these findings were discussed controversially, because the increases could be caused by changes in potential risk factors. objectives: the question is: are the observed increasing rates due to actual changes in incidence, or mainly caused by changes in registration practice or artefacts? methods: for europe, data from the accis project (pooled data from european population-based cancer registries, performed at iarc, lyon; responsible: e. steliarova-foucher), and for germany, data of the german childhood cancer registry, available from onwards, were used. results: accis data (based on , cases) show significantly increased rates, with an overall average annual percentage change of about %, seen for nearly all diagnostic subgroups. for germany, increases are seen for neuroblastoma (due to screening programmes) and brain tumours (due to improved registration). for acute leukaemia the observed increase is explained by changes in classification. conclusion and discussion: the increased incidence for europe can only partly be explained by registration artefacts or improved diagnostic methods. the observed patterns suggest that an actual change exists. in germany, the increased rates observed to date could be explained by artefacts. abstract suicide is the fourth most common cause of death among working-age finns. among men, socioeconomic status is strongly and inversely associated with suicide mortality, but little is known about socioeconomic differences in female suicide. we studied the direct and indirect effects of different socioeconomic indicators -education, occupation-based social class and income -on suicide among finnish women aged - . the effect of main economic activity was also studied. we used individual-level data from the census linked to the death register for the years - . altogether, over million person-years were included and suicides were committed. the age-adjusted rii (relative index of inequality), estimated using a poisson regression model, was . ( % ci . - . ) for education, . ( . - . ) for social class and . ( . - . ) for income. however, almost all of the effect of education was mediated by social class. fifteen per cent of the social class effect was explained by education and per cent was mediated by income. the effect of income on suicide was mainly explained by economic activity. in conclusion, net of the other indicators, occupation-based social class is a strong determinant of socioeconomic differences in female suicide mortality, and actions aimed at preventing female suicide should target this group.
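a minimal sketch of how such an rii is commonly estimated, using a poisson model on grouped person-years with ridit (cumulative-rank) scores, is shown below; the counts are hypothetical and the grouping is simplified relative to the study's census-linked data.

import numpy as np
import statsmodels.api as sm

# hypothetical person-years and suicide counts across ordered social-class
# groups (high to low); ridit scores are cumulative population midpoints
pyears = np.array([2.1e6, 3.4e6, 1.8e6])
deaths = np.array([210, 460, 320])
shares = pyears / pyears.sum()
ridit = np.cumsum(shares) - shares / 2   # midpoint rank of each group in (0, 1)

X = sm.add_constant(ridit)
fit = sm.GLM(deaths, X, family=sm.families.Poisson(), exposure=pyears).fit()
rii = np.exp(fit.params[1])              # rate ratio: bottom vs top of hierarchy
lo, hi = np.exp(fit.conf_int()[1])
print(f"RII = {rii:.2f} (95% CI {lo:.2f}-{hi:.2f})")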
abstract c-reactive protein (crp) levels in the range between and mg/l independently predict the risk of future cardiovascular events. besides being a marker of atherosclerotic processes, high-normal crp levels may also be a sign of a more pronounced response to everyday inflammatory stimuli. the aim of our study is to assess the association between the response to everyday stimuli and the risk of myocardial infarction. we will perform a population-based case-control study including a total of persons. cases (n = ) are first myocardial infarction (mi) patients. controls (n = ) are partners of the patients. offspring of the mi patients (n = ) are included because disease activity and the use of medication by the mi patients may influence the inflammatory response. to assess the inflammatory response in mi patients, the mean genetically determined inflammatory response in the offspring will be assessed and used as a measure of the inflammatory response in the mi patients. the offspring are free of disease and medication use. partners of the offspring (n = ) are the controls for the offspring. influvac vaccine will be given to assess crp concentration, i.e. the inflammatory response, before and after vaccination. abstract background: ischemic heart disease risk may be influenced by long-term exposure to electromagnetic fields (emf) in vulnerable subjects, but epidemiological data are inconsistent. objectives: we studied whether long-term occupational exposure to emf is related to an increased myocardial infarction (mi) risk. design and methods: we conducted a prospective case-control study, which involved mi cases and controls. emf exposure in cases and controls was assessed subjectively. the effect of emf exposure on mi risk was estimated using multivariate logistic regression. results: after adjustment for age, smoking, blood pressure, body mass index and psychological stress, the odds ratio for emf exposure < years was . ( % ci . - . ), for emf exposure of - years . ( % ci . - . ), and for emf exposure > years . ( % ci . - . ). conclusion: long-term occupational exposure to emf may increase the risk of mi. our crude estimates of emf exposure might affect the excess risk because of non-differential misclassification in assigning exposure. abstract background: it has been suggested that noise exposure is associated with ischemic heart disease risk, but epidemiological evidence is still limited. objectives: we studied whether road traffic noise exposure increases the risk of myocardial infarction (mi). design and methods: we conducted a population-based prospective case-control study, which involved mi cases and controls. we measured traffic-related noise levels at the electoral districts and linked these levels to residential addresses. we used multiple logistic regression to assess the effect of noise exposure on mi risk. results: after adjustment for age, smoking, blood pressure, body mass index, and psychological stress, the risk of mi was higher for men exposed to - dba. abstract background: some studies have suggested that patients who are depressed following acute myocardial infarction (mi) experience poorer survival. however, (i) other studies show no significant association when adjusted for recognized prognostic indicators, and (ii) some 'natural' responses to mi may be recorded in questionnaires as indicators of depression. method: depression was assessed in mi patients, by interview on two measures (gwb and sf ), - weeks after discharge; clinical data were abstracted from patients' medical records and vital status was assessed at - years. survival of depressed, marginally depressed and normal patients was calculated by the kaplan-meier method, and comparisons were made by log-rank tests and cox proportional hazards modelling. results: crude survival at years in patients was higher for the depressed and marginally depressed ( %) than for normals ( %), although not significantly. in multivariate analysis, four patient characteristics contributed significantly to survival: age (p< . ), previous mi (< . ), diabetes (< . ) and sex (< . ); other potential explanatory variables, including hypertension, infarct severity and depression, were excluded by the model.
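the survival workflow described here (kaplan-meier curves by depression group, then a multivariable cox model) can be sketched as follows. the data are synthetic and the third-party lifelines package is assumed to be available, so this is an illustration of the general approach rather than the study's analysis.

import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 300
# synthetic follow-up data: time in years, death indicator, covariates
df = pd.DataFrame({
    "time": rng.exponential(8, n).clip(max=10),
    "died": rng.integers(0, 2, n),
    "age": rng.normal(65, 10, n),
    "depressed": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
})

# kaplan-meier survival by depression status
km = KaplanMeierFitter()
for label, grp in df.groupby("depressed"):
    km.fit(grp["time"], grp["died"], label=f"depressed={label}")
    print(f"depressed={label}: S(t_max) =", km.survival_function_.iloc[-1].values)

# multivariable cox proportional hazards model
cox = CoxPHFitter()
cox.fit(df, duration_col="time", event_col="died")
cox.print_summary()   # hazard ratios for age, depression, diabetes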
abstract background: the low coronary heart disease (chd) incidence in southern europe could result in lower low-density lipoprotein cholesterol oxidation (oxldl). objective: the aim of this study was to compare oxldl levels in chd patients from several european countries. methods: a cross-sectional multicenter study included stable chd male subjects aged to years from northern (finland and sweden), central (germany), and southern europe (greece and spain). lipid peroxidation was determined by plasma oxldl. results: the score of adherence to a mediterranean diet, antioxidant intake, alcohol intake, and lipid profile were significantly associated with oxldl. oxldl levels were higher in the northern ( . u/l) than in the central ( . u/l) and southern populations ( . u/l), p = . , in the adjusted models. the probability that northern europe had the highest oxldl levels was . % and . % in the logarithm-of-triglyceride-adjusted and fully adjusted models, respectively. the probability of this order holding after adjustment for country was . %. conclusion: a gradient in lipoperoxidation from northern to central and southern europe is very likely to exist, and it parallels that observed in chd mortality and incidence rates. southern populations may have more favourable environmental factors against oxidation than northern europe. abstract background: whereas socioeconomic status (ses) has been established as a risk factor for a range of adverse health outcomes, little literature exists examining socioeconomic inequalities in the prevalence of congenital anomalies. objectives: to investigate the relationship between ses and the risk of specific congenital anomalies, such as neural tube defects (ntd), oral clefts (oc) and down's syndrome (ds). design and methods: a total of cases of congenital anomaly and non-malformed control births were collected between and from the italian archive of the certificates of delivery care. as a measure of ses, cases and controls were assigned the value of a deprivation index. data were analysed using a logistic regression model. results: we found cases of ntd, cases of oc and cases of ds. the risk of having a baby with ntd was significantly higher for women of low ses (or = . ; c.i.: . - . ), as was the risk of oc (or = . ; c.i.: . - . ). no significant evidence of ses variation was found for ds. conclusion and discussion: our data suggest that risk factors linked to ses, such as nutritional factors, lifestyle, and access to health services, may play a role in the occurrence of some malformations. abstract background: general practitioners (gps) are well regarded by their patients and have the opportunity to play an active role in providing cessation advice. objectives: this study was run to examine whether a public health programme based on a carefully adapted programme of continuing education can increase gps' use of cessation advice and increase the success rates of such advice. methods: because randomization was performed at the level of the gp, a cluster randomized trial design was used. marginal models estimated by gee and mixed generalized linear models were used for this type of design. results: the cessation rate was relatively high for all smokers enrolled in the trial (n = ): a total of smokers were ex-smokers at one year ( . %). patients who were seen by trained gps were more likely to successfully stop smoking than those seen by the control gps ( . % vs . %). motivation, age over , lower anxiety scores, and confidence in one's ability to stop smoking were predictive of successful cessation at one-year follow-up. conclusions: cluster analysis indicated that the factors important to successful cessation in this population of smokers are factors commonly found to influence cessation. abstract background and purpose: conventional meta-analysis showed no difference in primary outcome for coronary bypass surgery without (offpump) or with (onpump) the heart-lung machine.
secondary outcome parameters such as transfusion requirements or hospitalization days favored offpump surgery. combined individual-data analysis improves the precision of effect estimates and allows accurate subgroup analyses. objective: our objective was to obtain accurate effect estimates for stroke, myocardial infarction, or death after offpump versus onpump surgery, by meta-analysis of pooled individual patient data. method and results: a bibliographic database search identified eleven large trials (> patients). data were obtained for trials, including patients. the primary endpoint was composite (n = ); secondary endpoints were death (n = ), stroke (n = ) and myocardial infarction (n = ). hazard ratios for event-free survival after offpump vs onpump ( % ci) were: composite endpoint . ( . ; . ), death . ( . ; . ), myocardial infarction . ( . ; . ), stroke . ( . ; . ). after stratification for diabetes, gender and age, the results slightly favored offpump for high-risk groups. the hazard ratios remained statistically non-significant. conclusion: no clinically or statistically significant differences were found for any endpoint or subgroup. offpump coronary bypass surgery is at least equal to conventional coronary bypass surgery. offpump surgery therefore is a justifiable option for cardiac surgeons performing cardiac bypass surgery. in - an outbreak of pertussis occurred, mostly among vaccinated children. since then the incidence has remained high. therefore, a fifth dose of acellular booster vaccine for -year-olds was introduced in october . the impact of this vaccination on the age-specific pertussis incidence was assessed. mandatory notifications and hospitalisations were analysed for - and compared with previous years. surveillance data show 'epidemic' increases of pertussis in , , and . the total incidence per , in ( . ) was higher than in the previous epidemic year ( . ). nevertheless, the incidence of notifications and hospitalisations in the age groups targeted for the booster vaccination had decreased by % and %, respectively, compared to . in contrast, the incidence in adolescents and adults almost doubled. unlike in other countries that introduced a pre-school booster, the incidence among hospitalised infants < months also decreased ( % compared with ). as expected, the booster vaccination for -year-olds has decreased the incidence in the target population itself. more importantly, the decreased incidence among infants < months suggests that transmission from siblings to infants has also decreased. in further exploration of the impact of additional vaccination strategies (such as boostering of adolescents and/or adults) this effect should not be ignored. abstract acute respiratory infections (ari) are responsible for considerable morbidity in the community, but little is known about the presence of respiratory pathogens in asymptomatic individuals. we hypothesized that asymptomatic persons could have a subclinical infection and so act as a source of transmission. between and , all patients with ari who visited their sentinel general practitioner were reported, to estimate the incidence of ari in dutch general practices. a random selection of them (cases) and an equal number of asymptomatic persons visiting for other complaints (controls) were included in a case-control study. nose/throat swabs of participants were tested for a broad range of pathogens.
the overall incidence of ari was per , person-years, suggesting that in the dutch population an estimated , persons annually consult their general practitioner for respiratory complaints. viruses were detected in % of the cases, β-haemolytic streptococci group a in % and mixed infections in %. pathogens were also detected in approximately % of controls, particularly in the youngest age groups. this study confirms that most ari are viral and supports the reserved policy of prescribing antibiotics. furthermore, we demonstrated that asymptomatic persons might be a neglected source of transmission. abstract background: the baking and flour-producing industries in the netherlands agreed on developing a health surveillance system to reduce the burden of, and improve the prognosis of, occupational allergic diseases. objectives: to develop and validate a diagnostic model for sensitization to wheat and fungal amylase allergens, as a triage instrument to detect occupational allergic diseases. design and methods: a diagnostic regression model was developed in bakers from a cross-sectional study, with ige serology for wheat and/or amylase allergens as the reference standard. model calibration was assessed with the hosmer-lemeshow goodness-of-fit test; discriminative ability using the area under the receiver operating characteristic curve (auc); and internal validity using a bootstrapping procedure. external validation was conducted in other bakers. results: the diagnostic model, consisting of four questionnaire items (history of asthma, rhinitis, conjunctivitis, and work-related allergic symptoms), showed good calibration (p = . ) and discriminative ability (auc . ; % ci . to . ). internal validity was reasonable (correction factor of . and optimism-corrected auc of . ). external validation showed good calibration (p = . ) and discriminative ability (auc . ; % ci . to . ). conclusions and discussion: this easily applicable diagnostic model for sensitization to flour and enzymes shows reasonable diagnostic accuracy and external validity. abstract background: in the netherlands the baking and flour-producing industries ( , small bakeries, industrial bakeries, and flour manufacturers) agreed to reduce the high rate (up to %) of occupationally related allergic diseases. objectives: to conduct health surveillance for the early detection of occupational allergic diseases by implementing a diagnostic model as a triage instrument. design and methods: in the preparation phase, a validated diagnostic regression model for sensitization to wheat and/or α-amylase allergens was converted into a score chart for use in occupational health practice. two cut-off points for the sum scores were selected based on diagnostic accuracy properties. in the first phase, a questionnaire including the diagnostic predictors from the model was applied in . bakers. a surveillance simulation was done in bakers recently enrolled in the surveillance. workers with high questionnaire scores were referred for advanced medical examination. results: implementing the diagnostic questionnaire model yielded %, %, and % of bakers in the low, intermediate, and high score groups, respectively. workers with high scores showed the highest percentage of occupational allergic diseases. conclusions and discussion: with proper cut-off points for referral, the diagnostic model could serve as a triage instrument in health surveillance for the early detection of occupational allergic diseases.
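a simplified sketch of this validation workflow, fitting a four-item logistic model and applying a bootstrap optimism correction to the auc, is given below (python/scikit-learn). the items, coefficients and data are hypothetical, and the hosmer-lemeshow calibration step is omitted for brevity.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 800
# four hypothetical binary questionnaire items
# (asthma, rhinitis, conjunctivitis, work-related symptoms)
X = rng.integers(0, 2, size=(n, 4))
logit = -2.0 + X @ np.array([0.9, 0.7, 0.5, 1.1])
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # "sensitization" reference standard

model = LogisticRegression().fit(X, y)
auc_apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])

# bootstrap optimism correction for internal validation
optimism = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    m = LogisticRegression().fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    auc_test = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_test)
print(f"apparent AUC {auc_apparent:.3f}, "
      f"optimism-corrected {auc_apparent - np.mean(optimism):.3f}")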
abstract background: the prevalence of cardiovascular risk factors in spain is high, but myocardial infarction incidence is lower than in other countries. objective: to determine the role of the basic lipid profile in coronary heart disease (chd) incidence in spain. methods: a cohort of , healthy spanish individuals aged to years was followed for years. the end-points were fatal and non-fatal myocardial infarction, and angina. results: the participants who developed a coronary end-point were significantly older ( vs ), more often diabetic ( % vs %), smokers ( % vs %) and hypertensive ( % vs %) than the rest, and their average total and hdl cholesterol levels (mg/dl) were vs (ns) and vs (p< . ), respectively. chd incidence among individuals with low hdl levels (< in men/< in women) was higher than in the rest: . ‰/year vs . ‰/year (p< . ) in men, and . ‰/year vs . ‰/year (p< . ) in women. hdl cholesterol was the only lipid-related variable significantly associated with chd: the hazard ratio for a mg/dl increase was . ( % ci: . - . ) in men, and . ( % ci: . - . ) in women, after adjusting for classical risk factors. conclusion: hdl cholesterol is the only classical lipid variable associated with chd incidence in spain. abstract background: it is widely recognized that health service interventions may reduce the infant mortality rate (imr), which usually occurs alongside economic growth. however, there are reports showing that imr decreases under adverse economic and social conditions, indicating the presence of other, unknown determinants. objective: this study aims to analyze the temporal tendency of infant mortality in brazil during a recent period ( to ) of economic crisis. methods: a temporal series study using data from the mortality information system, censuses (ibge) and epidemiological information (funasa). applying arima (autoregressive integrated moving average) models, the series parameters were described, and spearman correlation coefficients were used to evaluate the association between the infant mortality coefficient and selected determinants. results: infant mortality showed a declining tendency (−. %) and a strong correlation with the majority of the indicators analyzed. however, only the correlations between the infant mortality coefficient and total fecundity and birth rates differed significantly between decades. conclusions/discussion: fecundity variation was responsible for the persistence of the mortality decline during the eighties. in the following period, indicators of life conditions, mostly health care, could be more important.
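a minimal sketch of the arima-plus-spearman approach described here, on a synthetic declining series, might look as follows (python/statsmodels and scipy; the model order and all values are illustrative, not the study's).

import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from scipy.stats import spearmanr

# synthetic yearly infant-mortality-rate series with a declining trend
rng = np.random.default_rng(3)
years = pd.date_range("1980", periods=25, freq="YS")
imr = pd.Series(60 * 0.96 ** np.arange(25) + rng.normal(0, 1, 25), index=years)

fit = ARIMA(imr, order=(1, 1, 0)).fit()   # order chosen for illustration only
print(fit.summary().tables[1])

# spearman correlation between imr and a hypothetical determinant (fertility)
fertility = pd.Series(3.5 * 0.98 ** np.arange(25) + rng.normal(0, 0.05, 25),
                      index=years)
rho, p = spearmanr(imr, fertility)
print(f"spearman rho = {rho:.2f} (p = {p:.3f})")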
abstract background: across european union (eu) member states, considerable variation exists in the structure and performance of surveillance systems for communicable disease prevention and control. objectives: the study aims to support the improvement of surveillance systems for communicable diseases in europe by using benchmarking for the comparison of national surveillance systems. design and methods: surveillance systems from england & wales, finland, france, germany, hungary and the netherlands were described and analysed. benchmarking processes were performed with selected criteria (e.g. case definitions, early warning systems). after the description of benchmarks, best practices were identified and described. results: the six countries have, in general, well-functioning communicable disease control and prevention systems. nevertheless, different strengths and weaknesses could be identified. practical examples of best practice from various surveillance systems demonstrated fields for improvement. conclusion and discussion: benchmarking national surveillance systems is applicable as a new tool for the comparison of communicable disease control in europe. a gold standard of surveillance systems across countries is very difficult to achieve because of heterogeneity (e.g. in disease burden, personnel and financial resources). however, to improve the quality of surveillance systems across europe, it will be useful to benchmark the surveillance systems of all eu member states. abstract background: therapeutic decisions in osteoarthritis (oa) often involve trade-offs between accepting risks of side effects and gaining pain relief. data about the risk levels patients are willing to accept are limited. objectives: to determine patients' maximum acceptable risk levels (marls) for different adverse effects of typical oa medications and to identify the predictors of these risk attitudes. design and methods: marls were measured with a probabilistic threshold technique for different levels of pain relief. baseline pain and risk levels were controlled for in a x factorial design. clinical and sociodemographic characteristics were assessed using a self-administered questionnaire. results: for subjects, the marls distributions were skewed, and varied by level of pain relief, type of adverse effect, and baseline risk level. given a % baseline risk, for a -point ( - scale) pain benefit the mean (median) marls were . % ( %) for heart attack/stroke; . % ( %) for stomach bleed; . % ( . %) for hypertension; and . % ( . %) for dyspepsia. most clinical and sociodemographic factors were not associated with marls. conclusion: subjects were willing to trade substantial risks of side effects for pain benefits. this study provides new data on risk acceptability in oa patients that could be incorporated into practice guidelines for physicians. background: several independent studies have shown that single genetic determinants of platelet aggregation are associated with increased ihd risk. objectives: to study the effects of clustering prothrombotic (genetic) determinants on the prediction of ihd risk. design and methods: the study is based on a cohort of , women, aged to years, who were followed from to . during this period, there were women with registered ihd (icd- - ). a nested case-cohort analysis was performed to study the relation of plasma levels of vwf and fibrinogen, blood group genotype and prothrombotic mutations in the genes of a b , gpvi, gpib and aiibb to ihd. results: blood group ab, high vwf concentrations and high fibrinogen concentrations were associated with an increased incidence of acute ihd. when the effects of blood group ab/o genotype and plasma levels of fibrinogen and vwf were clustered in a score, there was a convincing relationship between a high prothrombotic score and an increased incidence of acute ihd: the fully adjusted hr ( % confidence interval) was . ( . - . ) for women with the highest score when the lowest score was taken as reference. conclusions: clustering of prothrombotic markers is a major determinant of an increased incidence of acute ihd. abstract background: studies have revealed that heart rate variability (hrv) is a predictor of hypertension; however, its -hour recording has not been analysed together with -hour ambulatory blood pressure. objective: we studied the relationship between hrv and blood pressure.
methods: hrv and blood pressure were measured by -hour ambulatory recordings in a randomly selected population without evidence of heart disease. cross-sectional analyses were conducted in women and men (mean age: . ± . ). hrv values, measured by the standard deviation of rr intervals (sdnn), were compared after logarithmic transformation between blood pressure levels ( / mmhg), using analysis of variance. stepwise multiple regression was performed to assess the cumulative effects on sdnn of systolic and diastolic blood pressure, clinical obesity, fasting glycaemia, c-reactive protein, treatments, smoking and alcohol consumption. results: sdnn was lower in hypertensive men and women (p< . ), independently of drug treatments. after adjustment for factors associated with hypertension, sdnn was no longer associated with hypertension, but with obesity, glycaemia and c-reactive protein in both genders. sdnn was negatively associated with diastolic blood pressure in men (p = . ) in the multivariate approach. conclusion: whereas blood pressure levels were not related to sdnn in the multivariate analysis, diastolic blood pressure contributed to sdnn in men. it has been proposed that n- fatty acids may protect against the development of allergic disease, while n- fatty acids may promote its development. in the piama (prevention and incidence of asthma and mite allergy) study we investigated associations between the breast milk fatty acid composition of allergic and non-allergic mothers and allergic disease (doctor-diagnosed asthma, eczema or hay fever) in their children at the age of year and at the age of years. in children of allergic mothers, the prevalence of allergic disease at age and at age was relatively high if the breast milk they consumed had a low content (wt%) of n- fatty acids and particularly of n- long-chain polyunsaturates (lcps), a low content of trans fatty acids, or a low ratio of n- lcps/n- lcps. the strongest predictor of allergic disease was a low breast milk n- lcps/n- lcps ratio (odds ratios ( % ci) of lowest vs highest tertile, adjusted for maternal age, parity and education: . ( . to . ) for allergic disease at age and . ( . to . ) for allergic disease at age ). in children of non-allergic mothers no statistically significant associations were observed. abstract background/relevance: to find out about the appropriateness of using two vision-related quality of life instruments to measure outcomes of visually impaired elderly in a mono- and a multidisciplinary rehabilitation centre. objective/design: to evaluate the sensitivity of the vision quality of life core measure (vcm ) and the low vision quality of life questionnaire (lvqol) to measure changes in vision-related quality of life in a non-randomised follow-up study. methods: visually impaired patients (n = ) recruited from ophthalmology departments were administered questionnaires at baseline, months and year after rehabilitation. person measures were analysed using rasch analysis for polytomous rating scales. results: paired-sample t-tests for the vcm showed improvement at months (p = . ; effect size = . and p = . ; effect size = . ) for the monodisciplinary and the multidisciplinary groups respectively. at year, only the multidisciplinary group showed improvement on the vcm (p = . ; effect size = . ). on the lvqol, no significant improvement or deterioration was found for either group. discussion: although the vcm showed improvement in vision-related quality of life over time, the effect sizes appeared to be quite small.
we conclude that both instruments lack sensitivity to measure changes. another explanation is that rehabilitation did not contribute to quality of life improvements. abstract background: the natural history of asthma severity is poorly known. objective: to investigate prognostic factors of asthma severity. methods: all current asthmatics identified in / in the european community respiratory health survey were followed up and their severity was assessed in by using the global initiative for asthma categorization (n = ). asthma severity was related to baseline/follow-up potential determinants by a multinomial logistic model, using intermittent asthmatics as the reference category for relative risk ratios (rrr). results: patients in the lowest/highest levels of severity at baseline had an % likelihood of remaining in a similar level. severe persistent asthmatics had a poorer fev % predicted at baseline, higher ige levels (rrr = . ; % ci: . - . ) and a higher prevalence of chronic cough/phlegm ( . ; . - . ) than intermittent asthmatics. moderate persistent asthmatics showed similar associations. mild persistent asthmatics were similar to intermittent asthmatics, although the former showed poorer control of symptoms than the latter. subjects in remission had a lower probability of an increase in bmi than current asthmatics ( . ; . - . ). allergic rhinitis, smoking, and respiratory infections in childhood were not associated with severity. conclusion: asthma severity is a relatively stable condition, at least for patients at the two extremes of the severity spectrum. high ige levels and persistent cough/phlegm are strong markers of moderate/severe asthma.
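the modelling step described here, a multinomial logit with intermittent asthma as the reference category and exponentiated coefficients read as rrrs, can be sketched as follows (python/statsmodels; the variables and effect sizes are invented for illustration).

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 600
# synthetic predictors; severity 0 = intermittent (reference), 1-3 = persistent
df = pd.DataFrame({
    "log_ige": rng.normal(4.5, 1.0, n),
    "chronic_cough": rng.integers(0, 2, n),
    "fev1_pct": rng.normal(90, 15, n),
})
score = 0.5 * df["log_ige"] + 0.8 * df["chronic_cough"] - 0.02 * df["fev1_pct"]
df["severity"] = pd.cut(score + rng.normal(0, 1, n), 4, labels=False)

X = sm.add_constant(df[["log_ige", "chronic_cough", "fev1_pct"]])
fit = sm.MNLogit(df["severity"], X).fit(disp=False)
rrr = np.exp(fit.params)   # one column per severity level vs the reference
print(rrr)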
abstract background: thyroid cancer (tc) has a low, yet growing, incidence in spain. ionizing radiation is the only well-established risk factor. objectives: this study sought to depict the municipal distribution of tc mortality in spain and to discuss possible risk factors. design and methods: the posterior distribution of the relative risk for tc was computed using a single bayesian spatial model covering all municipal areas of spain ( , ). maps were plotted depicting standardised mortality ratios, smoothed municipal relative risks (rr) using the besag, york and mollié model, and the distribution of the posterior probability that rr > . results: from to a total of , tc deaths were registered in , municipalities. there was a higher risk of death in some areas of the canary islands, galicia and asturias. abstract igf-i is an important growth factor associated with increased breast cancer risk in epidemiological and experimental studies. lycopene intake has been associated with decreased cancer risk. although some data indicate that lycopene can influence the igf-system, this has never been extensively tested in humans. the purpose of this study is to evaluate the effects of a lycopene intervention on the circulating igf-system in women with an increased risk of breast cancer. we conducted a randomized, placebo-controlled cross-over intervention study on the effects of lycopene supplementation ( mg/day, months) in pre-menopausal women with ( ) a history of breast cancer (n = ) and ( ) a high familial breast cancer risk (n = ). the drop-out rate was %. mean igf-i and igfbp- concentrations after placebo were . ± . ng/ml and . ± . mg/ml, respectively. lycopene supplementation did not significantly alter serum total igf-i (mean lycopene effect: −. ng/ml; % ci: −. to . ) or igfbp- (−. mg/ml; −. to . ) concentrations. dietary energy and macronutrient intake, physical activity, body weight, and serum lycopene concentrations were assessed, and are currently under evaluation. in conclusion, this study shows that months of lycopene supplementation has no effect on serum igf-system components in a population at high risk of breast cancer. abstract introduction: patients who experience burden during diagnostic tests may disrupt these tests. the aim was to describe the perception by melanoma patients with lymph node metastases of the diagnostic tests. methods: patients were requested to complete a self-administered questionnaire. experienced levels of embarrassment, discomfort and anxiety were calculated, as well as (total) scores for each burden. the non-parametric friedman test for related samples was used to see if there was a difference in burden. results: the questionnaire was completed by patients; the response rate was %. overall satisfaction was high. in total, % felt embarrassment, % discomfort and % anxiety. overall, % felt some kind of burden. there was no difference in anxiety between the three tests. however, patients experienced more embarrassment and discomfort during the pet (positron emission tomography) scan (p = . and p = . ). conclusion: overall levels of burden were low. however, patients experienced more embarrassment and discomfort during the pet scan, possibly as a result of lying immobile for a long time. the accuracy, costs and number of patients upstaged will probably be most important in determining the additional value of fdg-pet and ct, but it is reassuring to know that only few patients experience severe or extreme burden. abstract gastric cancer (gc) is the second most frequent cause of cancer death in lithuania. intercultural aspects of diet related to the outcome could be risk factors for the disease. the objective of the study was to assess associations between the risk of gc and dietary factors. a case-control study included cases with a diagnosis of gc and controls who were free of cancer and gastric diseases. a questionnaire was used to collect information on possible risk factors. the odds ratios (or) and % confidence intervals (ci) were estimated by a conditional logistic regression model. after controlling for possible confounders associated with gc, use of salted meat (or = . ; % ci = . - . ; > - times/week vs. almost never), smoked meat (or = . ; % ci = . - . ; > - times/week vs. less) and smoked fish (or = . ; % ci = . - . ; > - times/week vs. less) was associated with an increased risk of gc. a higher risk of gc was also associated with frequent use of butter, eggs and noodles, while frequent consumption of carrots, cabbages, broccoli, tomatoes, garlic and beans decreased the risk significantly. the data support a role of salt-processed food and some animal foods in increasing the risk of gc, and of plant foods in reducing the risk of the disease.
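matched case-control data of this kind are typically analysed with conditional logistic regression, stratifying on the matched set. a minimal sketch is shown below (python/statsmodels; synthetic one-to-one matched pairs and a single hypothetical exposure, 'salted_meat').

import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(4)
rows = []
# one case and one matched control per stratum; exposure more common in cases
for s in range(200):
    for case in (1, 0):
        exposed = rng.random() < (0.45 if case else 0.30)
        rows.append({"stratum": s, "case": case, "salted_meat": int(exposed)})
df = pd.DataFrame(rows)

# conditional logit: the stratum (matched-set) effects are conditioned out
fit = ConditionalLogit(df["case"], df[["salted_meat"]],
                       groups=df["stratum"]).fit()
or_ = np.exp(fit.params["salted_meat"])
lo, hi = np.exp(fit.conf_int().loc["salted_meat"])
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")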
abstract background: standards for the evaluation of the measurement properties of health status measurement instruments (hsmi), including explicit criteria for what constitutes good measurement properties, are lacking. nevertheless, many systematic reviews have been performed to compare and select hsmi, using different criteria to judge the measurement properties. objectives: ( ) to determine which measurement properties are reported in systematic reviews of hsmi and how these properties are defined, ( ) which standards are used to define how measurement properties should be assessed, and ( ) which criteria are defined for good measurement properties. methods: a systematic literature search was performed in pubmed, embase and psychlit. articles were included if they met the following inclusion criteria: ( ) systematic review, ( ) hsmi were reviewed, and ( ) the purpose of the review was to identify all measurement instruments assessing (an aspect of) health status and to report on the clinimetric properties of these hsmi. two independent reviewers selected the articles. a standardised data-extraction form was used. preliminary results: systematic reviews were included. conclusions: large variability in the standards and criteria used for evaluating measurement properties was found. this review can serve as a basis for reaching consensus on standards and criteria for evaluating the measurement properties of hsmi. abstract residential exposure to nitrogen dioxide is an air quality indicator and could be very useful for assessing the effects of air pollution on respiratory diseases. the present study aims at developing a model to predict residential exposure to no , combining data from questionnaires and from local monitoring stations (ms). in the italian centres of verona, torino and pavia, participating in ecrhs-ii, no concentrations were measured using passive samplers (ps-no ) placed outside the kitchens of subjects for days. simultaneously, average no concentrations were collected from all the mss of the three centres (ms-no ). a multiple regression model was set up with ps-no concentrations as the response variable and questionnaire information and ms-no concentrations as predictors. the model minimizing the root mean square error (rmse), obtained from a ten-fold cross-validation, was selected. the model with the best predictive ability (rmse = . ) had as predictors: ms-no concentrations, season of the survey, centre, type of building, and self-reported intensity of heavy vehicle traffic. the correlation coefficient between predicted and observed values was . ( % ci: . - . ). in conclusion, this preliminary analysis suggests that the combination of questionnaire information and routine data from the mss could be useful for predicting residential exposure to no .
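the model-selection criterion described here, the ten-fold cross-validated rmse of a multiple regression, can be sketched as follows (python/scikit-learn; the predictors and coefficients are invented stand-ins for the questionnaire and monitoring-station variables).

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(5)
n = 150
# synthetic predictors: monitoring-station no2, season, traffic intensity
ms_no2 = rng.normal(40, 10, n)
season = rng.integers(0, 4, n)
traffic = rng.integers(0, 3, n)
X = np.column_stack([ms_no2, season, traffic])
y = 0.8 * ms_no2 + 2.0 * traffic + rng.normal(0, 5, n)  # passive-sampler no2

cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y,
                         scoring="neg_root_mean_squared_error", cv=cv)
print(f"10-fold cv RMSE = {-scores.mean():.2f}")

candidate models with different predictor sets would each be scored this way, keeping the one with the smallest cross-validated rmse.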
abstract background: currently only % of dutch mothers comply with the who recommendation to give exclusive breastfeeding for at least six months. therefore, the dutch authorities are considering policies on breastfeeding. objectives: quantification of the health effects of several breastfeeding policies. methods: a systematic literature search of published epidemiological studies conducted in the general 'western' population. based on this overview, a model was developed. the model simulates the incidences of diseases of mother and child depending on the duration for which mothers breastfeed. each policy corresponds to a distribution of the duration of breastfeeding. the health effects of each policy are compared to the present situation. results: breastfeeding has beneficial health effects, on both the short and the long term, for mother and child. the longer the duration of breastfeeding, the larger the effect. most public health gain is achieved by introducing breastfeeding to all newborns rather than through a policy focusing only on extending the lactation period of women already breastfeeding. conclusion: breastfeeding has positive health effects. policy should focus preferentially on encouraging all mothers to start breastfeeding. abstract background: the constant increase in international trade and travel has raised the significance of pandemic infectious diseases worldwide. the / sars outbreak rapidly spread from china to countries, of which were located in europe. in order to control and prevent pandemic infections in europe, systematic and effective public health preparation by every member state is essential. method: supported by the european commission, surveys focusing on national sars (september ) and influenza (october ) preparedness were carried out. a descriptive analysis was undertaken to identify differences in european infectious disease policies. results: guidelines and guidance for disease management were well established in most european countries. however, the application of control measures, such as measures for mass gatherings or public information policies, varied among member states. discussion: the results show that european countries are aware of the need to prepare for pandemic infections. yet the effectiveness of certain control measures has been insufficiently analysed. further research and detailed knowledge about factors influencing the international spread of diseases are required. 'hazard analysis of critical control points' (haccp) will be applied to evaluate national health responses in order to provide comprehensive data for recommendations on european pandemic preparedness. abstract background: influenza is still an important problem for public health. knowing its space-time evolution is of special interest in order to carry out prevention plans. objectives: to analyze the geographical diffusion of the epidemic wave in extremadura. methods: absolute influenza incidence rates in every town were calculated, according to the cases per week registered in the compulsory disease notification system. continuous maps were produced using a geographical interpolation method (inverse distance weighting (idw), applied with weighting exponents of ). results: the / season began in the th week of , with a small influenza incidence. isolated cases occurred in those towns until the th week. diffusion in discrete areas of the north and southwest of the region occurred between the th and st weeks. the highest incidence appeared between the nd week of and the rd of . influenza cases started to decrease in the northwest and north of the region from the rd week of until the th week, when most of the cases were found in the southwest. conclusion: there is a space-time diffusion of influenza, associated with higher population density. we propose to analyze these data in combination with temperature information.
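idw interpolation itself is compact enough to show directly. in the sketch below (plain numpy, with made-up town coordinates and rates), predicted values are distance-weighted averages of the known rates, and the weighting exponent controls how local the interpolation is.

import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    # weights fall off with distance**(-power); eps avoids division by zero
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w @ z_known) / w.sum(axis=1)

# hypothetical incidence rates at town coordinates
towns = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
rates = np.array([12.0, 30.0, 18.0])
grid = np.array([[2.0, 1.0], [7.0, 4.0]])
print(idw(towns, rates, grid, power=2.0))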
abstract background: acute lower respiratory tract infection (lrti) can cause various complications leading to morbidity and mortality, notably among elderly patients. antibiotics are often given to prevent complications. to minimise costs and bacterial resistance, antibiotics are recommended only in cases of pneumonia or in patients at risk of serious complications. objective: we assessed the course of illness of lrti among dutch elderly primary care patients and assessed whether gps were inclined to prescribe antibiotics more readily to patients at risk of complications. methods: we retrospectively analysed medical data from , episodes of lrti among patients ≥ years of age presenting in primary care, to describe the course of illness. the relation between prescription of antibiotics and patients with risk factors for a complicated course was assessed by means of multivariate logistic regression. results: in episodes of acute bronchitis, antibiotics were more readily prescribed to patients aged years or older. in exacerbations of copd or asthma, gps favoured antibiotics in male patients and when diabetes, neurological disease or dementia was present. conclusion: gps do not take all high-risk conditions into account when prescribing antibiotics to patients with lrti, despite the recommendations of national guidelines. abstract background: the putative association between antidepressant treatment and increased suicidal behaviour has been under debate. objectives: to estimate the risk of suicide, attempted suicide, and overall mortality during antidepressant treatment. design and methods: the study cohort consisted of all subjects without non-affective psychosis who were hospitalized due to a suicide attempt during the years - , followed up by using nationwide registers. the main outcomes were completed suicides, attempted suicides, and mortality. the main explanatory variable was antidepressant usage. results: suicides, suicide attempts and deaths were observed. when the effect of background variables was taken into account, the risk of suicide attempts was markedly increased during antidepressant treatment (rr for selective serotonin reuptake inhibitors, or ssri, . , . - . ) compared with no antidepressants. however, the risk of completed suicide was not increased. lower mortality was observed during ssri use (rr . , . - . ), mainly attributable to a decrease in cardiovascular deaths. conclusion and discussion: in this high-risk suicidal cohort, the use of any antidepressant is associated with an increased risk of suicide attempts, but not with an increased risk of completed suicide. antidepressant and, especially, ssri use is associated with a substantial decrease in cardiovascular deaths and overall mortality. abstract background: the quattro study is an rct on the effectiveness of intensified preventive care in primary health care centres in deprived neighbourhoods. additional qualitative research on the execution of interventions in primary care was considered necessary to explain differences in effectiveness. objectives: our question was: can we understand rct outcomes better with qualitative research? design and methods: an ethnographic design was used. in their daily work, we observed researchers for months, days a week, and practice nurses for days each. two other practice nurses were interviewed. all transcribed observations were analysed thematically. results: the rct showed differences in effectiveness among the centres and indicated that intensified preventive care provided no additional effect compared to structural physical measurements. the ethnographic results show that these differences are due to variations in the execution of the intervention among the centres. conclusion: the ethnographic analysis showed that differences in the execution of the intervention led to differences in rct outcomes. the rct conclusion of 'no additional effect' is therefore problematic. discussion: as variations in primary care influence an rct's execution, they create methodological problems for research. to what extent can additional qualitative research improve rct research? abstract background: acute myocardial infarction is the most important cause of morbidity from ischemic heart disease (ihd) and is the leading cause of death in the western world.
objectives: to assess the benefits and harms of 'dan shen' compound for acute myocardial infarction. methods: we searched the cochrane controlled trials register on the cochrane library, medline, embase, the chinese biomedical database and the chinese cochrane centre controlled trials register. we included randomized controlled studies lasting at least days. main results: eleven studies with participants in total were included. seven studies compared mortality between routine treatment plus 'dan shen' compound and routine treatment alone. one trial compared arrhythmia between routine treatment plus 'dan shen' compound injection and routine treatment alone. two trials compared revascularization between routine treatment plus 'dan shen' compound injection and routine treatment alone. conclusions: the evidence is insufficient to recommend the routine use of 'dan shen' compound because of the small number of included studies and their low quality. no well-designed randomized controlled trials with adequate power to provide a more definitive answer have been conducted. in addition, the safety of 'dan shen' compound is unproven, though adverse events were rarely reported. abstract antimicrobial resistance is emerging. to identify the scope of this threat, and to be able to take proper actions and evaluate them, monitoring is essential. the remit of earss is to maintain a comprehensive surveillance system that provides comparable and validated data on the prevalence and spread of major invasive bacteria with clinically and epidemiologically relevant antimicrobial resistance in europe. since , earss has collected routine antimicrobial susceptibility test (ast) results for invasive isolates of five indicator bacteria, tested according to standard protocols. in , ast results for , isolates were provided by laboratories, serving hospitals and covering million inhabitants in countries. denominator information was collected through a biannual questionnaire. the quality of the ast results of laboratories was evaluated by a yearly external quality assessment. currently, earss includes all member and candidate states ( ) of the european union, plus israel, bosnia, bulgaria and turkey. participating hospitals treat a wide range of patients, and laboratory results are of sufficient validity. earss identified time trends in antimicrobial resistance and found a steady increase for most pathogen-compound combinations. in conclusion, earss is a comprehensive system of sufficient quality to show that antimicrobial resistance is increasing in europe and threatens health-care outcomes. abstract introduction: since chloroform was first detected in drinking water, a growing number of studies have sought to identify the presence of trihalomethanes (thms) in drinking water and to establish the possible effects they may have on population health. objectives: to determine thm levels in the water distribution network of the city of valencia. design and methods: over a one-year period, points of the drinking water distribution network underwent sampling at -week intervals. the concentration of these pollutants was determined by gas chromatography. results: our results showed greater concentrations of the species substituted by chlorine and bromine atoms (dichlorobromomethane and dibromochloromethane), in the range of - µg/l for both, - µg/l for trichloromethane and - µg/l for tribromomethane. an increase in thm concentration was observed at points near the sea, although concentrations did not exceed the legal limit of µg/l.
conclusion: we established two areas of concentration of these species in the water, high and average, according to their proximity to the sea. abstract background: childhood cancer survivors are known to be at increased risk of second malignancies. objectives: we studied the long-term risk of second malignancy in -year survivors, according to therapy and follow-up interval. methods: the risk of second malignancies was assessed in -year survivors of childhood cancer treated in the emma children's hospital amc in amsterdam, and compared with the incidence in the general population of the netherlands. complete follow-up until at least january was obtained for . % of the patients. the median follow-up time was . . results: sixty second malignancies were observed against . expected, yielding a standardized incidence ratio (sir) of . the absolute excess risk (aer) was . per , persons per year. the sir appeared to stabilize after years of follow-up, but the absolute excess risk increased with longer follow-up (aer for follow-up ≥ years: . ). patients who were treated with radiotherapy experienced the greatest increase in risk. conclusions: in view of the quickly increasing background rate of cancer with the ageing of the cohort, it is concerning that even after more than years of follow-up the sir is still increased, as is the absolute excess risk.
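the two summary measures used here are simple to reproduce: sir = observed/expected (with an exact poisson interval) and aer = (observed − expected) per unit of person-time. the sketch below uses the abstract's sixty observed cases but invents the expected count and person-years, which are not preserved in this text.

import numpy as np
from scipy.stats import chi2

def sir_with_ci(observed, expected, alpha=0.05):
    # standardized incidence ratio with an exact poisson confidence interval
    lo = chi2.ppf(alpha / 2, 2 * observed) / 2 / expected if observed > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2 / expected
    return observed / expected, lo, hi

observed = 60                  # stated in the abstract
expected, person_years = 10.2, 25_000   # hypothetical values
sir, lo, hi = sir_with_ci(observed, expected)
aer = (observed - expected) / person_years * 10_000   # excess per 10,000 py
print(f"SIR = {sir:.1f} (95% CI {lo:.1f}-{hi:.1f}); "
      f"AER = {aer:.1f} per 10,000 person-years")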
the chek * delc germline variant has been shown to increase susceptibility to breast cancer and could have an impact on breast cancer survival. this study aimed to determine the proportion of chek * delc germline mutation carriers, and breast cancer survival and tumor characteristics compared to non-carriers, in a breast cancer cohort unselected for family history. women with invasive mammary carcinoma, aged < years and diagnosed in several dutch hospitals between and , were included. for all patients, paraffin-embedded tissue blocks were collected for dna isolation (normal tissue), subsequent mutation analyses, and tumor revision. among breast cancer patients, ( . %) chek * delc carriers were detected. the tumor characteristics, treatment and stage of chek * delc carriers did not differ from those of non-carriers. chek * delc carriers had a times increased risk of developing a second breast cancer compared to non-carriers. with a mean follow-up of years, chek * delc carriers had worse recurrence-free and breast cancer-specific survival than non-carriers. in conclusion, this study indicates a worse breast cancer outcome in chek * delc carriers compared to non-carriers. the prevalence of the chek * delc germline mutation warrants research into therapy interaction and possibly into screening of premenopausal breast cancer patients. abstract background: for primary or secondary prevention (e.g. of myocardial infarction), hormone therapy (ht) is no longer recommended in postmenopausal women. however, physicians commonly prescribe ht to climacteric women as a treatment for hot flashes/night sweats. objective: to assess the efficacy and adverse reactions of ht in climacteric women with hot flashes (including night sweats). methods: for our systematic review (sr), we searched databases (medline, embase, cochrane) for randomized controlled trials, other srs and meta-analyses published to . the quality of the studies was assessed using checklists corresponding to the study type. results: we identified studies of good/excellent quality. they included predominantly caucasian women and lasted - months. in all studies, ht showed a reduction of - % in the number of hot flashes, which was significantly better than placebo. the most common adverse events of ht were uterine bleeding and breast pain/tenderness. cardiovascular diseases and neoplasms were reported only sporadically. conclusions: ht is highly effective in treating hot flashes in climacteric women. however, to assess serious adverse events, longer studies (also including non-caucasian women) are needed, as only sparse data are available. abstract igf-i is an important growth factor, and has been associated with increased colorectal cancer risk in both prospective epidemiological and experimental studies. however, it is largely unknown which lifestyle factors are related to circulating levels of the igf-system. studies investigating the effect of isoflavones on the igf-system have thus far been conflicting. the purpose of this study was to evaluate the effects of isoflavones on the circulating igf-system in men at high risk of colorectal cancer. we conducted a randomized, placebo-controlled, cross-over study on the effect of a -month isoflavone supplementation ( mg/day) on igf levels in men with a family history of colorectal cancer or a personal history of colorectal adenomas. the drop-out rate was %, and all but men were more than % compliant. isoflavone supplementation did not significantly alter serum total igf-i (−. %; % ci: −. to . ) or igf binding protein (+. %; % ci: −. to . ) concentrations. other covariables, e.g. dietary energy and macronutrient intake, physical activity, and body weight, are currently under evaluation. in conclusion, this study shows that a -month isoflavone supplementation has no effect on serum igf-system components in men at high risk of colorectal cancer. abstract background/objective: the eurociss (european cardiovascular indicators surveillance set) project, funded under the health monitoring programme of the european commission, aims to develop health indicators and recommendations for monitoring cardiovascular diseases (cvd). methods: prioritise cvd according to their importance in public health; identify morbidity and mortality indicators; develop recommendations for data collection and harmonization; describe data collection and validation procedures and discuss their comparability. population (geographical area, age, gender), methods (case definition, icd codes), procedures (record linkage, validation) and morbidity indicators (attack rate, incidence, case fatality) were collected by questionnaire. results: the main outcome was an inventory of acute myocardial infarction (ami) population-based registers in the european partner countries: countries have no register, regional, of which also national. registers differ in: icd codes (only ami, or also acute and subacute ischemic forms), ages ( - , - , all), record linkage (probabilistic, personal identification number), calendar years, and validation (monica, esc/acc diagnostic criteria). these differences make morbidity indicators difficult to compare. conclusion: new diagnostic criteria have led to a more exhaustive definition of myocardial necrosis as acute coronary syndrome (acs). given the high burden of ami/acs, efforts are needed to implement population-based registers in all countries. application of the recommended indicators, validated through standardized methodology, will provide reliable, valid and comparable data. abstract objective: the objective of this paper was to compare and discuss the use of odds ratios and prevalence ratios using real data with a complex sampling design. method: we carried out a cross-sectional study using data obtained from a two-stage stratified cluster sample from a study conducted in - (n = , ). odds ratios and prevalence ratios were obtained by unconditional logistic regression and poisson regression, respectively, for later comparison using the stata statistical package (v. . ). confidence intervals and design effects were considered in the evaluation of the precision of the estimates. two outcomes with different prevalences were evaluated: vaccination against influenza ( . %) and self-reported lung disease ( . %). results: in the high-prevalence scenario, the prevalence ratio estimates were more conservative, with narrower confidence intervals. in the low-prevalence scenario, we found no important differences between the estimates and standard errors obtained using the two techniques. discussion: it remains the researcher's task to choose which technique and measure to use for each set of data, since this choice must remain within the scope of epidemiology.
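the contrast discussed in this abstract can be reproduced directly: on the same binary outcome, a logistic model yields an odds ratio while a poisson model with robust standard errors yields a prevalence ratio, and the two diverge as the outcome becomes common. a sketch with simulated data (python/statsmodels; prevalences are invented):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 2000
df = pd.DataFrame({"exposed": rng.integers(0, 2, n)})
# common outcome (~40% prevalence), where or and pr diverge noticeably
p = np.where(df["exposed"] == 1, 0.50, 0.35)
df["outcome"] = (rng.random(n) < p).astype(int)

or_fit = smf.logit("outcome ~ exposed", df).fit(disp=False)
pr_fit = smf.glm("outcome ~ exposed", df,
                 family=sm.families.Poisson()).fit(cov_type="HC0")  # robust se
print(f"odds ratio       = {np.exp(or_fit.params['exposed']):.2f}")
print(f"prevalence ratio = {np.exp(pr_fit.params['exposed']):.2f}")

with these illustrative prevalences the odds ratio (about 1.9) overstates the prevalence ratio (about 1.4), which is the "more conservative" behaviour the abstract reports for the high-prevalence scenario.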
abstract background: in italy, coronary heart disease (chd) mortality has been falling since the s. objective: to examine how much of the fall between and could be attributed to trends in risk factors and in medical and surgical treatments. methods: a validated model was used to combine and analyse data on the uptake and effectiveness of cardiological treatments and on risk factor trends. published trials, meta-analyses, official statistics, longitudinal studies and surveys were the main data sources. results: chd mortality fell by % in men and % in women aged - , giving , fewer deaths in . approximately half of the mortality fall was attributed to treatments in patients and half to population changes in risk factors: in men, mainly improvements in cholesterol ( %) and smoking ( %) rather than blood pressure ( %). in women, / of the mortality fall was attributable to improvements in cholesterol ( %) and blood pressure ( %), with adverse trends in smoking (− %). adverse trends were also seen in bmi (− % in both genders) and diabetes (− % in men; −. % in women). conclusion: half of the chd mortality fall was attributable to reductions in risk factors, principally cholesterol in men and women and smoking in men; in women, rising smoking rates generated substantial additional deaths. a comprehensive strategy promoting primary prevention is needed. abstract objective: to investigate the efficacy of neuraminidase inhibitors (ni) in post-exposure prophylaxis (pep), i.e. in persons who had contact with an influenza case. design and methods: we conducted a systematic electronic database review for the period between and . studies were selected and graded by two independent reviewers. the proportion of influenza-positive patients was chosen as the primary outcome. fixed-effect models were used for all analyses. weighted relative risks (rr) and % confidence intervals (ci) were calculated on an intention-to-treat basis. results: randomized controlled trials (n = , ) were included in the analysis. zanamivir and oseltamivir were effective against infection with influenza (rr = . , % ci . - . and rr = . , % ci . - . , respectively). prophylactic efficacy was comparable in the subgroup of persons who had contact with an index case with lab-confirmed influenza ( studies, all ni, rr = . , % ci . - . ). conclusions: the available evidence suggests that ni are effective in the pep of influenza. discussion: the results have to be interpreted with caution when transferred into general medical practice, because the study populations mainly included young and healthy adults without chronic diseases.
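a fixed-effect, inverse-variance pooling of log relative risks, as used in this review, can be sketched in a few lines; the per-trial counts below are invented for illustration.

import numpy as np

# illustrative per-trial counts: (events_treat, n_treat, events_ctrl, n_ctrl)
trials = [(6, 400, 18, 400), (3, 250, 10, 245), (9, 520, 21, 515)]

log_rr, weights = [], []
for a, n1, c, n0 in trials:
    rr = (a / n1) / (c / n0)
    var = 1 / a - 1 / n1 + 1 / c - 1 / n0   # variance of log rr
    log_rr.append(np.log(rr))
    weights.append(1 / var)

pooled = np.average(log_rr, weights=weights)
se = 1 / np.sqrt(np.sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled RR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")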
discussion: results have to be interpreted with caution when transferred into general medical practice, because the study populations mainly included young and healthy adults without chronic diseases (the pooling step is sketched below).

abstract an important risk factor for breast cancer, mammographic breast density (mbd), is inversely associated with reproductive factors (age at first childbirth and lactation). as pregnancy and lactation are highly correlated, whether this decline is induced by pregnancy or by lactation is still unclear. we hypothesize that lactation reduces mbd independent of age at first pregnancy and parity. a study was done on women in the third sub-cohort of the dom project who had complete data regarding lactation and dy, had a child, but varied by duration of lactation. multiple logistic regression analysis was done using dy (yes/no) as the outcome variable. explanatory variables added into the model were age, bmi, parity and age at first childbirth. a significant univariate relation was seen between lactation of the first child and dy: or . ( % ci . ; . ). adjusted for the explanatory variables, the or changed to . ( % ci . ; . ). lactation seems to contribute independently to the reduction of mbd over and above pregnancy itself. given the limitations of the dichotomous dy ratio scores, additional studies will address which component, glandular mass or fat tissue, is responsible for the observed relation; this will be measured from mammograms to be digitized.

abstract background: alcohol consumption is common, but little is known about whether drinking patterns vary across geographic regions. objectives: to examine potential disparities in alcohol consumption across census regions and urban, suburban, and rural areas of the united states. design and methods: the data source was the national epidemiologic survey on alcohol and related conditions, an in-person interview of approximately , adults. the prevalence of abstinence and, among drinkers, the prevalences of heavy and daily drinking were calculated by census region and metropolitan status. multivariate logistic regression analyses were conducted to test for differences in abstinence and per-drinker consumption after controlling for confounders. results: the odds of abstinence, heavy drinking, and daily drinking varied widely across geographic areas. additional analyses stratified by census region revealed that rural residents in the south and northeast, as well as urban residents in the northeast, had higher odds of abstinence. rural residents in the midwest had higher odds of heavy drinking. conclusion and discussion: heavy alcohol consumption is of particular concern among drinkers living in the rural areas of the united states, particularly the rural midwest. other nations should consider testing for similar differences as they implement policies to promote safe alcohol consumption.

abstract background: long-term exposure to particulate air pollution (pm) has been suggested to accelerate atherogenesis. objective: we examined the relationship between long-term exposure to traffic emissions and the degree of coronary artery calcification (cac), a measure of atherosclerosis. methods: in a population-based, cross-sectional study, distances between participants' home addresses and major roads were calculated with a geographic information system. annual mean pm . exposure at the residence was derived from a small-scale geostatistical model.
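the inverse-variance fixed-effect pooling used in the neuraminidase-inhibitor review above can be reproduced in a few lines. the study-level relative risks and confidence intervals below are invented placeholders, not the review's (elided) values.

import numpy as np

rr = np.array([0.26, 0.39, 0.18, 0.45])       # hypothetical study RRs
ci_low = np.array([0.12, 0.18, 0.07, 0.22])
ci_high = np.array([0.55, 0.84, 0.47, 0.93])

log_rr = np.log(rr)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE from the 95% CI width
w = 1.0 / se**2                                        # inverse-variance weights

pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print("pooled RR %.2f (95%% CI %.2f-%.2f)" % (
    np.exp(pooled),
    np.exp(pooled - 1.96 * pooled_se),
    np.exp(pooled + 1.96 * pooled_se)))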
cac, assessed by electron-beam computed tomography, was modelled with linear regression by proximity to major roads, controlling for background pm . air pollution and individual-level risk factors. results: of the participants, lived within m of a major road. background pm . ranged from . to . µg/m³ (mean . ). mean cac values were strongly dependent on age, sex and smoking status. reducing the distance to major roads by % led to an increase in cac of . % ( %ci . - . %) in the unadjusted model and . % ( %ci − . - . ) in the adjusted model. stronger effects (adjusted model) were seen in men ( . %, %ci − . - . ) and male non-smokers ( . %, %ci − . - . ). conclusions: this study provides epidemiologic evidence that long-term exposure to traffic emissions is an important risk factor for coronary calcification.

abstract background: this polymorphism has been associated with risk factor levels and, in one study, with a reduced risk of acute myocardial infarction (ami). yet, the risk relation has not been confirmed. objectives: we investigated the role of this polymorphism in the occurrence of ami, coronary heart disease (chd) and stroke in healthy dutch women. design and methods: a case-cohort study in a prospective cohort of initially healthy dutch women, followed from until january st . results: we applied a cox proportional hazards model with an estimation procedure adapted for case-cohort designs (see the sketch below). a lower ami (n = ) risk was found among carriers of the ala allele (n = ) compared with those with the more common pro/pro genotype (hazard ratio = . ; % ci, . to . ). no relation was found for chd (n = ; hr . ; % ci, . - . ) or for stroke (n = ; hr . ; % ci, . - . ). in our data, little evidence was found for a relation between pparg and risk factors. conclusion and discussion: this study shows that the pro ala polymorphism in the pparg gene is modestly related to a reduced risk of ami. no statistically significant relation was found for chd and stroke.

abstract background: pseudo-cluster randomization was used in a services evaluation trial because individual randomization risked contamination and cluster randomization risked selection bias due to expected treatment-arm preferences of the recruiting general practitioners (gps). gps were randomized into two groups. depending on this randomization, participants were randomized in majority to one study arm: intervention:control : or intervention:control : . objectives: to evaluate the internal validity of pseudo-cluster randomization. do gps have treatment-arm preferences? what is the effect on allocation concealment and selection bias? design and methods: we compared the baseline characteristics of participants to study selection bias. using a questionnaire, gps indicated their treatment-arm preferences on a visual analogue scale (vas) and the allocation proportions they believed were used to allocate their patients over the treatment arms. results: gps preferred allocation to the intervention (vas . (sd . ); - : indicates strongly favoring the intervention arm). after recruitment, % of gps estimated that a randomization ratio of : was used. the participants showed no relevant differences at baseline. conclusion and discussion: gps profoundly preferred allocation to the intervention group. few indications of allocation disclosure or selection bias were found in the dutch easycare trial. pseudo-cluster randomization proves to be a valid randomization method.
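a minimal cox proportional hazards sketch in the spirit of the case-cohort analysis above, fitted on simulated full-cohort data with lifelines. the prentice-type reweighting that a true case-cohort analysis adds to the partial likelihood is omitted here, and all names and effect sizes are invented.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
carrier = rng.integers(0, 2, n)          # e.g. carrier of the protective allele
baseline_hazard = 0.05
# carriers get a hazard ratio of ~0.6 in this simulation
t = rng.exponential(1 / (baseline_hazard * np.where(carrier, 0.6, 1.0)))
censor = rng.exponential(15, n)          # administrative censoring times

df = pd.DataFrame({
    "T": np.minimum(t, censor),          # observed time
    "E": (t <= censor).astype(int),      # 1 = event, 0 = censored
    "carrier": carrier,
})
cph = CoxPHFitter()
cph.fit(df, duration_col="T", event_col="E")
print(cph.summary)                       # exp(coef) column is the hazard ratio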
abstract background: epidemiological studies rely on self-reporting to acquire data on participants, although such data are often of limited reliability. the aim here is to assess nuclear magnetic resonance (nmr) based metabonomics for the evaluation of self-reported data on paracetamol use. method: four in-depth -hour dietary recalls and two timed -hour urine collections were obtained for each participant in the intermap study. a mhz h nmr spectrum was acquired for each urine specimen (n = , ). training and test sets involved two strata, i.e., paracetamol metabolites present or absent in the urine spectra, selected from all population samples by a principal component analysis model. the partial least squares-discriminant analysis (pls-da) model based on a training set of samples was validated by a test set (n = ); a sketch of this classification approach follows below. the model correctly predicted stratum for of samples ( %) after removal of outliers not fitting the model; sensitivity . %, specificity %. this model was used to predict paracetamol status in all intermap specimens. it identified participants ( . %) who underreported analgesic use, of whom underreported analgesic use at both clinical visits. conclusion: nmr-based metabonomics can be used as a tool to enhance the reliability of self-reported data.

abstract background: in patients with asthma, the decline in forced expiratory volume in one second (fev ) is accelerated compared with non-asthmatics. objective: to investigate long-term prognostic factors of fev change in asthmatics from the general population. methods: a cohort of asthmatics ( - years old) was identified in the frame of the european community respiratory health survey ( / ) and followed up in / . spirometry was performed on both occasions. the annual fev decrease (Δfev ) was analysed by multi-level regression models, according to age, sex, height, bmi, occupation, family history of asthma, hospitalization for asthma (baseline factors); cumulative time of inhaled corticosteroid (ics) use and annual weight gain during the follow-up; and lifetime pack-years smoked. results: when adjusting for all covariates, ics use for > years significantly reduced Δfev with respect to non-users, by . ( %ci: . - . ) ml/year. Δfev was . ( . - . ) ml/year lower in women than in men. it increased by . ( . - . ) ml/year for every additional year of patient age and by . ( . - . ) ml/year for every additional kg/year in the rate of weight gain. conclusion: long-term ics use (> years) seems to be associated with a reduced Δfev over a -year follow-up. body weight gain seems a crucial factor in determining lung function decrease in asthmatics.

abstract background: the effectiveness of screening can be predicted by episode sensitivity, which is estimated from interval cancers following a screen. full-field digital or cr-plate mammography is increasingly introduced into mammography screening. objectives: to develop a design to compare performance and validity between screen-film and digital mammography in a breast cancer screening program. methods: interval cancer incidence was estimated by linking screening visits from - at an individual level to the files of the cancer registry in finland. these data were used to estimate the study size requirements for analyzing differences in episode sensitivity between screen-film and digital mammography in a randomized setting. results: the two-year cumulative incidence of interval cancers per screening visits was estimated to be .
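a sketch of the pls-da classification used in the nmr metabonomics abstract above: pls regression on a binary class label, thresholded at 0.5. random data stand in for binned spectra, and the class-separation signal is synthetic.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n, p = 400, 200                          # samples x spectral bins
y = rng.integers(0, 2, n)                # paracetamol metabolites: yes/no
X = rng.normal(size=(n, p)) + y[:, None] * 0.3   # class-shifted signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
pred = (pls.predict(X_te).ravel() > 0.5).astype(int)  # PLS-DA decision rule

tp = np.sum((pred == 1) & (y_te == 1)); fn = np.sum((pred == 0) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0)); fp = np.sum((pred == 1) & (y_te == 0))
print("sensitivity", tp / (tp + fn), "specificity", tn / (tn + fp))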
to allow the maximum acceptable difference in episode sensitivity between the screen-film and digital arms to be % ( % power, . significance level, : randomization ratio, % attendance rate), approximately women need to be invited (the power calculation is sketched below). conclusion: only fairly large differences in episode sensitivity can be explored within a single randomized study. in order to reduce the degree of non-inferiority between screen-film and digital mammography, meta-analyses or pooled analyses with other randomized data are needed.

abstract according to the literature, up to % of colorectal cancers worldwide are preventable by dietary change. however, the results of epidemiologic studies are not consistent across countries. the objective of this study is to evaluate the role of dietary nutrients in colorectal cancer risk in poland. the hospital-based case-control study was carried out in - . in total, histologically confirmed cancer cases and controls were recruited. adjustment for age, sex, education, marital status, multivitamin use, alcohol consumption, cigarette smoking, family history and energy consumption was done by logistic regression modelling. the low tertile of daily intake in the control group was defined as the reference level. a lower colorectal cancer risk was found in cases with a high daily intake of dietary fiber (or = , ; %ci: , - , ) and vitamin e (or = , ; %ci: , - , ). on the other hand, an increased risk for high monosaccharide consumption was observed. the risk pattern was not changed after additional adjustment for physical activity and body mass index. the results of the present study support the protective role of dietary fiber and some antioxidative vitamins in the etiology of colorectal cancer. additionally, they suggest that high consumption of monosaccharides may lead to elevated risk of the investigated cancers.

abstract assessment of nutrition is very difficult in every population, but in children there is the additional question of whether a child can properly recognize and recall the foods that have been eaten. the aim of this study was to assess whether a dietary recall administered to adolescents can be used in epidemiological studies on nutrition. subjects were children, - years old, and their caretakers. a -h recall was used to evaluate the children's nutrition. both child and caretaker were asked to recall all products, drinks and dishes eaten by the child during the day before the recall. the statistical analyses were done separately for each meal. we noticed statistically significant differences in the intake of energy and almost all nutrients from lunch. the observed spearman rank correlation coefficients between child and caretaker ranged from . for vitamin c up to . for intake of carbohydrates. only calcium intake ( . vs. . mg/day) differentiated the groups for breakfast, and β-carotene for supper. the study showed that a recall with adolescents could be a helpful source of data for population-level research. however, such data should not be used to examine the individual nutritional habits of children; information about dinner especially can be biased.

abstract background: acute bronchitis is one of the most common diagnoses made by primary care physicians. in addition to antibiotics, chinese medicinal herbs may be a potential medicine of choice. objectives: this review aims to summarize the existing evidence on the comparative effectiveness and safety of chinese medicinal herbs for treating uncomplicated acute bronchitis.
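the study-size reasoning in the screening abstract above can be reproduced with a standard two-proportion power calculation. the interval-cancer risks below are placeholders, not the study's (elided) values, and attendance and randomization-ratio adjustments are left out.

from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# hypothetical 2-year cumulative interval-cancer risks per screening visit
p_film, p_digital = 0.0020, 0.0030

es = proportion_effectsize(p_film, p_digital)   # Cohen's h effect size
n_per_arm = NormalIndPower().solve_power(effect_size=es, alpha=0.05,
                                         power=0.80, ratio=1.0)
print("women needed per arm: %.0f" % n_per_arm)

because the event is rare, the required sample is very large, which is why the abstract concludes that only fairly large differences in episode sensitivity can be detected in a single trial.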
methods: we searched the cochrane central register of controlled trials, medline, embase, the chinese biomedical database and other sources. we included randomised controlled trials comparing chinese medicinal herbs with placebo, antibiotics or other western medicine for treating uncomplicated acute bronchitis. at least two authors extracted data and assessed trial quality. main results: four trials reported the time to improvement of cough, fever, and rales; two trials reported the proportion of patients with improved signs and symptoms; thirteen trials analyzed data on global assessments of improvement. one trial reported adverse effects during treatment. conclusions: there are insufficient quality data to recommend the routine use of chinese herbs for acute bronchitis. the benefit found in individual studies and in this systematic review could be due to publication bias and study design limitations. in addition, the safety of chinese herbs is unknown, though adverse events are rarely reported.

abstract design and methods: patients with definite ms, classified as dead or alive at january st , were included in this retrospective observational study. the influence of demographic and clinical variables was assessed with kaplan-meier and cox methods. standardised mortality ratios were computed to compare the patients' mortality with that of the french general population (the smr computation is sketched below). results: a total of patients were included ( men, women). the mean age at ms onset was ± years and the mean follow-up duration was ± years ( patient-years). by , deaths had occurred ( per patient-years). male gender, progressive course, polysymptomatic onset and a high relapse rate were related to a worse prognosis. ms did not increase the number of deaths in our cohort compared to the general french population ( expected), except for highly disabled patients ( observed, expected). conclusion: this study gives precise insights into mortality from multiple sclerosis in west france.

mattress dust. methods: we performed nested case-control studies within ongoing birth cohort studies in germany, the netherlands, and sweden and selected approximately sensitised and non-sensitised children per country. we measured levels of bacterial endotoxin, β( → )-glucans, and fungal extracellular polysaccharides (eps) in dust samples collected from the children's mattresses. results: combined across countries, higher amounts of dust and higher endotoxin, β( → )-glucan, and eps loads of mattress dust were associated with a significantly decreased risk of sensitization to inhalant allergens, but not food allergens. after mutual adjustment, only the protective effect of the amount of mattress dust remained significant [odds ratio ( % confidence interval) . ( . - . )]. conclusion: higher amounts of mattress dust might decrease the risk of allergic sensitization to inhalant allergens. the effect might be partly attributable to endotoxin, β( → )-glucans, and eps. it is not possible to distinguish with certainty which component relates to the effect, since microbial agent loads are highly correlated with the amount of dust and with each other.

abstract background: postmenopausal hormone therapy (ht) increases mammographic density, a strong breast cancer risk factor, but effects vary across women. objective: to investigate whether the effect of ht use on changes in mammographic density is modified by polymorphisms in the estrogen (esr ) and progesterone receptor (pgr) genes.
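a standardised mortality ratio with an exact poisson confidence interval, as used in the multiple-sclerosis mortality abstract above. the observed and expected counts are invented for illustration.

from scipy.stats import chi2

def smr_ci(observed, expected, alpha=0.05):
    """SMR = O/E with exact (chi-square based) Poisson limits for O."""
    lo = chi2.ppf(alpha / 2, 2 * observed) / 2 if observed > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return observed / expected, lo / expected, hi / expected

smr, lo, hi = smr_ci(observed=32, expected=32.9)
print("SMR %.2f (95%% CI %.2f-%.2f)" % (smr, lo, hi))

an smr near 1 with a confidence interval spanning 1, as in this made-up example, matches the abstract's finding that overall mortality was not increased relative to the general population.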
design and methods: information on ht use, dna and two consecutive mammograms were obtained from ht users and never users of ht in the dutch prospect-epic and the english epic-norfolk cohorts. mammographic density was assessed using a computer-assisted method. changes in density between mammograms before and during ht use were analyzed using linear regression. results: a difference in percent density change between ht users and never users was seen in women with the esr pvuii pp or pp genotype ( . %; p < . ), but not in those with the pp genotype ( . %; p = . ). similar effects were observed for the esr xbai and the pgr + g/a polymorphisms. the pgr progins polymorphism did not appear to make women more susceptible to the effects of ht use. discussion and conclusion: our results suggest that specific polymorphisms in the esr and pgr genes may make women more susceptible to the effects of ht use on mammographic density.

abstract background: there is a paucity of data on the cancer risk of turkish migrant children in germany. objectives: to identify cancer cases of turkish origin in the german childhood cancer registry (gccr) and to compare the relative incidence of individual cancers among turkish and non-turkish children. design and methods: we used a name algorithm to identify children of turkish origin among the , cancer cases below years of age registered since . we calculated proportional cancer incidence ratios (pcir) stratified by sex and time period. results: the name algorithm performed well (high sensitivity and specificity), and turkish childhood cancers were identified. overall, the relative frequency of tumours among turkish and non-turkish children is similar. there are specific sites and cancers for which pcirs differ; these will be reported during the conference. conclusion: our study is the first to show differences in the relative frequency of cancers among turkish and non-turkish children in germany. discussion: case-control studies could help to explain whether the observed differences in the relative frequency of cancers are due to differences in genetic disposition, lifestyle or socio-economic status.

mutations in the netherlands cohort study on diet and cancer. data from , participants, cases and , subcohort members were analysed for a follow-up period between . and . years after baseline. adjusted gender-specific incidence rate ratios (rr) and % confidence intervals (ci) were calculated over tertiles of folate intake in case-cohort analyses. high folate intake did not reduce overall colon cancer risk. however, in men only, it was inversely associated with apc− colon tumours (rr . , % ci . - . for the highest versus the lowest tertile of folate intake), but positively associated with apc+ colon tumours (highest vs. lowest tertile: rr . , ci . - . ). folate intake was neither associated with overall rectum cancer risk, nor with rectum cancer when apc mutation status was accounted for. we observed opposite associations between folate intake and colon cancer risk with or without apc mutations in men, which may implicate a distinct mutated-apc pathway mediated by folate intake in men.

abstract background and objectives: ten years after completion of the first serum bank of the general population to evaluate the long-term effects of the national immunisation programme (nip), a new serum collection is desirable.
the objective is to provide insight into age-specific estimates of immunity to childhood diseases and estimates of the incidence of infectious diseases with a frequently subclinical course. design and methods: a two-stage cluster sampling technique was used to draw a nationwide sample. in each of five geographic regions, eight municipalities were randomly selected proportionally to their size. within each municipality, an age-stratified sample of individuals ( - yr) will be drawn from the population register. in addition, eight municipalities with lower immunization coverage will be selected to obtain insight into the immune status of persons who often refuse vaccination on religious grounds. furthermore, oversampling of migrants will be performed to study whether their immune status is satisfactory. participants will be asked to fill in a questionnaire and to allow blood to be taken. extra blood will be taken for a genetic study. results and conclusion: the design of a population-based serum collection aimed at the establishment of a representative serum bank will be presented.

abstract background: during the last decade, the standard of diabetes care evolved to require more intensive management focussing on multiple cardiovascular risk factors. treatment decisions for lipid-lowering drugs should be based on cholesterol and blood pressure levels. objectives: to investigate the influence of hba c, blood pressure and cholesterol levels on subsequent intensification of lipid-lowering therapy between - . design and methods: we conducted a prospective cohort study including , type diabetes patients who had at least two consecutive annual visits to a diabetes nurse. treatment intensification was measured by comparing drug regimens per year, and was defined as initiation of a new drug class or a dose increase of an existing drug. results: between - , the prevalence of lipid-lowering drug use increased from % to %. rates of intensification of lipid-lowering therapy remained low in poorly controlled patients ( % to %; tc/hdl ratio > ). intensification of lipid-lowering therapy was only associated with the tc/hdl ratio (age-adjusted or = . ; %ci . - . ), and this association became slightly stronger over time. blood pressure was not found to be a predictor of the intensification of lipid-lowering therapy (or = . ). conclusion: hypercholesterolemia management intensified between - , but therapy intensification was only triggered by elevated cholesterol levels. more attention to multifactorial risk assessment is needed.

abstract background: there are no standard severity measures that can classify the range of illness and disease seen in general practice. objectives: to validate new scales of morbidity severity against age, gender, deprivation and poor physical function. design and methods: in a cross-sectional design, morbidity data for consulters in a -month period were linked to their physical function status. there were english older consulters ( years +) and dutch consulters ( years +). consulters for morbidities classified on four gp-defined ordinal scales of severity ('chronicity', 'time course', 'health care use' and 'patient impact on activities of daily living') were compared to consulters for other morbidities, by age group, gender, and dichotomised deprivation and physical function scores. results: for both countries, on all scales, there was an increasing association between morbidity severity and older age, female gender, more deprivation (minimum p < .
) and poor physical function (all trends p < . ). the estimates for categories within, for example, the 'chronicity' scale were ordered as follows: 'acute' (unadjusted odds ratio . ), 'acute-on-chronic' ( . ), 'chronic' ( . ) and 'life-threatening' ( . ). conclusions: the new validated measures of morbidity severity indicate physical health status and offer the potential to optimise general practice care.

hospitalization or death. calibration and discriminative capacity were estimated. results: among episodes of lrti in elderly patients with dm, endpoints occurred (attack rate %). reliability of the model was good (goodness-of-fit test p = . ). the discriminative properties of the original rule were acceptable (area under the receiver-operating characteristic curve (auc): . , % ci: . to . ). conclusion: the prediction rule for the probability of hospitalization or death, derived from an unselected elderly population with lrti, appeared to have acceptable discriminative properties in diabetes patients and can be used to target management of these common diseases.

abstract confounding by indication is a major threat to the validity of nonrandomized studies on treatment effects. we quantified such confounding in a cohort study on the effect of statin therapy on acute respiratory disease (ard) during influenza epidemics, in the umc utrecht general practitioner research database among persons aged ≥ years. the primary endpoint was a composite of pneumonia or prednisolone-treated ard during epidemic, non-epidemic and summer seasons. to quantify confounding, we obtained unadjusted and adjusted estimates of associations for outcome and control events. in all, , persons provided , person-periods; statin therapy was used in . %, and in , person-periods an outcome event occurred. without adjustment, statin therapy was not associated with the primary endpoint during influenza epidemics (relative risk [rr] . ; % confidence interval [ %ci]: . - . ). after applying multivariable generalized estimating equations (gee) and propensity score analysis, the rrs were . ( % ci: . - . ) and . ( % ci: . - . ); both adjustments are sketched below. the findings were consistent across relevant strata. in non-epidemic influenza and summer seasons the rr approached . , while statin therapy was not associated with control event rates. the observed confounding in the association between statin therapy and acute respiratory outcomes during influenza epidemics masked a potential benefit of more than %.

abstract background: despite several advances in the treatment of schizophrenia, the currently available pharmacotherapy does not change the course of illness or prevent functional deterioration in a substantial number of patients. therefore, research efforts into alternative or adjuvant treatment options are needed. in this project, called the 'aspirine trial', we investigate the effect of the anti-inflammatory drug acetylsalicylic acid as an add-on to regular antipsychotic therapy on the symptoms of schizophrenia. objectives: the objective is to study the efficacy of acetylsalicylic acid in schizophrenia on positive and negative psychotic symptoms, immune parameters and cognitive functions. design and methods: a randomized placebo-controlled double-blind add-on trial of inpatients and outpatients with schizophrenia, schizophreniform or schizoaffective disorder is performed. patients are randomized : to either months of mg acetylsalicylic acid per day or months of placebo, in addition to their regular antipsychotic treatment. all patients receive pantoprazole treatment for gastroprotection.
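a sketch of the two adjustments named in the statin confounding-by-indication study above: a gee over repeated person-periods and a propensity-score-adjusted outcome model. the data, the single confounder and the effect sizes are all simulated; note the logistic models below yield odds ratios, whereas the abstract reports relative risks.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 4000
df = pd.DataFrame({
    "person": np.repeat(np.arange(n // 2), 2),   # two periods per person
    "comorbidity": rng.normal(size=n),
})
# sicker people are more likely to receive statins (the confounding)
df["statin"] = rng.binomial(1, 1 / (1 + np.exp(-df["comorbidity"])))
df["event"] = rng.binomial(1, 0.05 + 0.03 * (df["comorbidity"] > 1))

# 1) GEE with an exchangeable working correlation over person-periods
gee = smf.gee("event ~ statin + comorbidity", groups="person", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()

# 2) propensity score (probability of statin given covariates) as a covariate
df["ps"] = smf.logit("statin ~ comorbidity", data=df).fit(disp=False).predict()
ps_adj = smf.logit("event ~ statin + ps", data=df).fit(disp=False)

print("GEE adjusted OR:", np.exp(gee.params["statin"]))
print("PS-adjusted OR :", np.exp(ps_adj.params["statin"]))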
participants are recruited from various major psychiatric hospitals in the netherlands. the outcomes of this study are -month changes in psychotic and negative symptom severity, cognitive function, and several immunological parameters. status: around participants have been randomized. no interim analysis was planned.

abstract background: congenital cmv infection is the most prevalent congenital infection worldwide. epidemiology and outcome are known to vary with socio-economic background, but few data are available on epidemiology and outcome in a developing country, where the overall burden of infectious diseases is high. objective: to determine the prevalence, risk factors and outcome of congenital cmv infection in an environment with a high infectious disease burden. methods: as part of an ongoing birth cohort study, baby and maternal samples were collected at birth and tested with an in-house pcr for the presence of cmv. standardised clinical assessments were performed by a paediatrician. placental malaria was also assessed. follow-up is ongoing till the age of years. preliminary results: the prevalence of congenital cmv infection was / ( . %). the infected children were more often first-born babies ( . % vs . %, p < . ). while no seasonality was observed, placental malaria was more prevalent among congenitally infected children ( . % vs . %, p = . ). no symptomatic babies were detected. conclusion: this prevalence of congenital cmv is much higher than reported in industrialised countries, in the absence of obvious clinical pathology. further follow-up is needed to assess the impact on response to vaccinations, growth, and morbidities.

of wheeze or cough at night in the first years. data on respiratory symptoms and dda were collected by yearly questionnaires. in total, symptomatic children with and without an early dda were included in the study population. results: fifty-one percent of the children with, and % of the children without, an early dda had persistent respiratory symptoms at age . persistence of symptoms was associated with parental atopy, eczema, nose symptoms without a cold, or a combination of wheeze and cough in the first years. conclusions: monitoring the course of symptoms in children with risk factors for persistent symptoms, irrespective of a diagnosis of asthma, may contribute to early recognition and treatment of asthma.

abstract little is known about the response mechanisms of survivors of disasters. objective: to examine selective non-response and to investigate whether attrition has biased the prevalence estimates among survivors of a disaster. design and methods: a longitudinal study was performed after the explosion of a fireworks depot in enschede, the netherlands. survivors completed a questionnaire weeks (t ), months (t ) and years post-disaster (t ). prevalence estimates resulting from multiple imputation were compared with estimates resulting from complete-case analysis. results: non-response differed between native dutch and non-western immigrant survivors. for example, native dutch survivors who participated at t only were more likely to have health problems at t , such as depression, than native dutch who participated at all three waves (or = . , % ci: . - . ). in contrast, immigrants who participated at t only were less likely to have depression at t (or = . , % ci: . - . ). conclusion and discussion: among native dutch survivors, the imputed estimates of t health problems tended to be underestimated when compared with the complete-case estimates.
the imputed t estimates among immigrants were unaffected or somewhat overestimated when compared with the complete-case estimates. multiple imputation is a useful statistical technique to examine whether selective non-response has biased prevalence estimates (a minimal imputation sketch follows below).

background: several epidemiologic studies have shown decreased colon cancer risk in physically active individuals. objectives: this review provides an update of the epidemiologic evidence for the association between physical activity and colon cancer risk. we also explored whether study quality explains discrepancies in results between different studies. methods: we included cohort (male n = ; female n = ) and case-control studies (male n = ; female n = ) that assessed total or leisure-time activities in relation to colon cancer risk. we developed a specific methodological quality scoring system for this review. due to the large heterogeneity between studies, we refrained from statistical pooling. results: in males, the cohort and case-control studies led to different conclusions: the case-control studies provide strong evidence for a decreased colon cancer risk in the physically active, while the evidence in the cohort studies is inconclusive. these discrepant findings can be attributed either to misclassification bias in cohort studies or to selection bias in case-control studies. in females, the small number of high-quality cohort studies precludes a conclusion, and the case-control studies indicate an inverse association. conclusion: this review indicates a possible association between physical activity and reduced colon cancer risk in both sexes, but the evidence is not yet convincing.

abstract background/objectives: radiotherapy after lumpectomy is commonly applied to reduce recurrence of breast cancer but may cause acute and late side effects. we determined predictive factors for the development of late toxicity in a prospective study of breast cancer patients. methods: late toxicity was assessed using the rtog/eortc classification among women receiving radiotherapy following lumpectomy, after a mean follow-up time of months. predictors of late toxicity were modelled using cox regression in relation to observation time, adjusting for age, bmi and the maximum biologically effective dose at the skin. results: ( . %) patients presented with telangiectasia and ( . %) patients with fibrosis. we observed a strong association between the development of telangiectasia and fibrosis (p < . ). increasing patient age was a risk factor for telangiectasia and fibrosis (p for trend . and . , respectively). boost therapy (hazard ratio (hr) . , % ci . - . ) and acute skin toxicity (hr . , % ci . - . ) significantly increased the risk of telangiectasia. the risk of fibrosis was elevated among patients with atopic diseases (hr . , % ci . - . ). discussion: our study revealed several risk factors for late complications of radiotherapy. further understanding of differences in response to irradiation may enable individualized treatment and improve cosmetic outcome.

doctor-diagnosed asthma and respiratory symptoms (age ) were available for (rint) and (no) children. results: the discriminative capacities of rint and exhaled no were statistically significant for the prediction of doctor-diagnosed asthma, wheeze (rint only) and shortness of breath (rint only). due to the low prevalence of disease in this general population sample, the positive predictive values of both individual tests were low.
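a minimal chained-equations multiple-imputation sketch (statsmodels' mice) in the spirit of the disaster follow-up study above, with a follow-up score deleted preferentially among symptomatic participants. the waves, scores and model are invented.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(4)
n = 1500
t1 = rng.normal(size=n)                        # wave-1 symptom score
t3 = 0.6 * t1 + rng.normal(scale=0.8, size=n)  # wave-3 symptom score
drop = rng.random(n) < 1 / (1 + np.exp(-t1))   # sicker at t1 -> more dropout
t3[drop] = np.nan

df = pd.DataFrame({"t1": t1, "t3": t3})
imp = mice.MICEData(df)                        # chained-equation imputer
mi = mice.MICE("t3 ~ t1", sm.OLS, imp)
result = mi.fit(10, 10)                        # 10 burn-in, 10 imputed datasets
print(result.summary())                        # Rubin-combined estimates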
however, the positive predictive value of the combination of increased rint (cut-off . kpa·l⁻¹·s) and exhaled no (cut-off ppb) was % for the prediction of doctor-diagnosed asthma, with a negative predictive value of %. combinations of rint or exhaled no with atopy of the child showed similar results. conclusions: the combination of rint, exhaled no and atopy may be useful to identify high-risk children, for monitoring the course of their symptoms and to facilitate early detection of asthma (the dependence of predictive values on prevalence is sketched below).

abstract background: in , a cargo aircraft crashed into apartment buildings in amsterdam, killing people and destroying apartments. an extensive, troublesome aftermath followed, with rumours of toxic exposures and health consequences. objectives: we studied the long-term physical health effects of occupational exposure to this disaster among professional assistance workers. design and methods: in this historical cohort study we compared the firefighters and police officers who were occupationally exposed to this disaster (i.e. who reported one or more disaster-related tasks) with their nonexposed colleagues (n = , and n = , respectively), using regression models adjusted for background characteristics. data collection took place from january to march and included various clinical blood and urine parameters (including blood count and kidney function) and questionnaire data on occupational exposure, physical symptoms, and background characteristics. the overall response rate was %. results: exposed workers reported various physical symptoms (including fatigue, skin and musculoskeletal symptoms) significantly more often than their nonexposed colleagues. in contrast, no consistent significant differences between exposed and nonexposed workers were found regarding clinical blood and urine parameters. discussion and conclusion: this epidemiological study demonstrates that professional assistance workers involved in a disaster are at risk for long-term unexplained physical symptoms.

abstract background and objectives: recent studies indicate that women with cosmetic breast implants have a significantly increased risk of suicide. the reasons for this elevated risk are not known. it has been suggested that women with cosmetic breast implants differ in their characteristics and have more mental problems than women in the general population. the aim of this study was to identify possible associations between physical or mental health and postoperative quality of life among finnish women with cosmetic breast implants. design and methods: information was collected from the patient records of women, and structured questionnaires were mailed to women of the same cohort. data were analysed using pearson chi-square tests and logistic regression modelling. results: although the effects of implantation on postoperative quality of life in different areas were mainly reported as positive or neutral, % of the women reported a decreased state of health. postoperative dissatisfaction and decreased quality of life were significantly associated with diagnoses of depression (p = . ) and the local complication capsular contracture (p < . ). conclusion: our results are consistent with previous studies finding most cosmetic breast surgery patients satisfied after implantation. however, this study brings new information on the associations between depression, capsular contracture and decreased quality of life.

abstract cancer and its treatments often produce significant persistent morbidities that reduce quality of life (qol) in cancer survivors.
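why a test with good sensitivity and specificity can still have a low positive predictive value in a general-population sample, as noted for rint and exhaled no above: ppv depends strongly on prevalence. the sensitivity and specificity below are invented.

def predictive_values(sens, spec, prev):
    """Bayes' rule for PPV and NPV at a given disease prevalence."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

for prev in (0.05, 0.20, 0.50):
    ppv, npv = predictive_values(sens=0.80, spec=0.90, prev=prev)
    print("prevalence %.2f -> PPV %.2f, NPV %.2f" % (prev, ppv, npv))

at 5% prevalence the ppv is low despite the good test characteristics, which is exactly the situation the abstract describes and the reason it combines several criteria.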
research indicates that both physical exercise and psycho-education might enhance qol. therefore, we developed a -week multidisciplinary rehabilitation program that combines physical training with psycho-education. the aim of the present multicenter study is to determine the effect of multidisciplinary rehabilitation on qol as compared to no treatment and, additionally, to physical training alone. furthermore, we will explore which variables are related to successful outcome (socio-demographic, disease-related, physiological, psychological and environmental characteristics). participants are needed to detect a medium effect. at present, cancer survivors have been randomised to either the multidisciplinary or the physical rehabilitation program or a -month waiting-list control group. outcome assessment will take place before, halfway through, directly after, and months following the intervention, by means of questionnaires. physical activity will be measured before, halfway through and directly after rehabilitation using maximal and submaximal cycle ergometer testing and muscle strength measurement. the effectiveness of multidisciplinary rehabilitation will be determined by analysing changes between groups from baseline to post-intervention using multiple linear and logistic regression. positive evaluation of multidisciplinary rehabilitation may lead to implementation in usual care.

continuous event recorders (cer) have proven to be successful in diagnosing causes of palpitations but may affect patient qol and increase anxiety. objectives: to determine qol and anxiety in patients presenting with palpitations, and to evaluate the burden of the cer on qol and anxiety in patients presenting to the general practitioner. methods: randomized clinical trial in general practice. the short form- (sf- ) and state-trait anxiety inventory (stai) were administered at study inclusion, weeks and months. results: at baseline, patients with palpitations (n = ) reported lower qol and more anxiety than a healthy population, for both males and females. there were no differences between the cer arm and usual gp care at weeks. at months the usual-care group (n = ) showed minimal qol improvement and less anxiety compared to the cer group (n = ). type of diagnosis did not account for any of these reported differences. conclusion: anxiety decreases and qol increases in both groups at -week and -month follow-up. hence the cer is a safe and effective diagnostic tool, applicable for all patients with palpitations in general practice.

abstract background: the clinical benefits of statin therapy are accepted, but statin safety profiles have been under scrutiny, particularly for the most recently introduced statin, rosuvastatin, relating to serious adverse events involving muscle, kidney and liver. objective: to study the association between statin use and the incidence of hospitalizations for rhabdomyolysis, myopathy, acute renal failure and hepatic impairment (outcome events) in real life. methods: in and , , incident rosuvastatin users, , incident other-statin users and , patients without statin prescriptions from the pharmo database of > million dutch residents were included in a retrospective cohort study. potential cases of hospitalization for myopathy, rhabdomyolysis, acute renal failure or hepatic impairment were identified using icd- -cm codes and validated using hospital records.
results: there were validated outcome events in the three cohorts, including one case each of myopathy (other-statin group) and rhabdomyolysis (non-treated group). there were no significant differences in the incidence of outcome events between rosuvastatin and other-statin users. discussion: this study indicated that the number of outcome events is less than per person-years. rosuvastatin does not lead to an increased incidence of rhabdomyolysis, myopathy, acute renal failure or hepatic impairment compared to other statins.

aim: the aim of the study was to assess the influence of insulin resistance (ir) on the occurrence of coronary artery disease (cad) in middle-aged women with normal glucose tolerance (ngt). material and methods: in - , women aged - , participants of the polish multicenter study on diabetes epidemiology, were examined. anthropometric, biochemical (fasting lipids, fasting and post-glucose-load plasma glucose and insulin) and blood pressure determinations were performed. ir was defined as a matsuda index (irmatsuda) below the lower quartile of the irmatsuda distribution in the ngt population. a questionnaire examination of lifestyle and of present and past diseases was performed. results: ir was observed in % of all examined women and in . % of those with ngt. cad was diagnosed in , % of all examined women and in , % of those with ngt. the relative risk of cad related to ir in ngt and normotensive women was , ( % ci: , - , ) (p < . ). regular menstruation was observed in , % of cad women. irmatsuda was not different for cad menstruating and non-menstruating women (respectively , ± , and , ± , ). conclusion: in middle-aged, normotensive and normal glucose tolerant women, ir seems to be an important risk factor for cad.

abstract background: in germany, primary prevention at the population level is provided by general practitioners (gps). little is known about gps' strategies to identify patients at high risk for vascular diseases using standardised risk scores. objectives: we studied gp attitudes and current practice in using risk scores. methods: a cross-sectional survey was conducted among gps in north rhine-westphalia, germany, using mailed self-administered questionnaires on attitudes and current practice in the identification of patients at high risk for vascular diseases. results: in , gps participated in the study. . % of gps stated that they knew the framingham score, . % the procam score and . % the score score. . % of gps reported regular use of standardised risk scores to identify patients at high risk for vascular diseases, most frequently the procam score ( . %), followed by the score score ( . %) and the framingham score ( . %). the main reasons for not using standardised risk scores were their assumed rigid assessment of individual patients' risk profiles ( . %), time-consuming application ( . %) and higher confidence in one's own work experience ( . %). conclusion: use of standardised risk scores to identify patients at high risk for vascular diseases is common among gps in germany. however, more educational work might be useful to strengthen gps' belief in the flexible application of standardised risk scores in medical practice.

among epilepsy patients than in the general population, but the effects of specific antiepileptic drugs on birth rate are not well known. objectives: to estimate the birth rate in epilepsy patients on aed treatment or without aeds, and in a population-based reference cohort without epilepsy.
design and methods: patients (n = , ) with reimbursement for aeds for the first time between and , and with information on their aed use, were identified from the databases of the social insurance institution of finland. a reference cohort without epilepsy (n = , ) and information on live births were identified from the finnish population register centre. the analyses were performed using poisson regression modelling (sketched below). results: birth rate was decreased in epilepsy patients relative to the reference cohort without epilepsy in both genders, regardless of aed use. relative to untreated patients, women on any of the aeds had non-significantly lower birth rates. among men, birth rate was decreased in those on oxcarbazepine (rr = . , % ci = . , . ), but was not clearly lower among those on carbamazepine (rr = . , % ci = . , . ) or valproate (rr = . , % ci = . , . ) when compared to untreated patients. conclusion: our results suggest that birth rate is decreased among epilepsy patients on aeds, more so in men.

abstract background: hereditary hemochromatosis (hh), characterised by excessive iron absorption, subsequent iron storage and progressive clinical features, can, when diagnosed at an early stage, be successfully treated. the high prevalence of the c y mutation in the hfe gene in the hh patient population may motivate genetic screening. objectives: in first-degree relatives of c y homozygotes we studied the gender- and age-related biochemical penetrance of the hfe genotype to define a high-risk population eligible for screening. design and methods: one thousand and six first-degree family members of probands with clinically overt hfe-related hh from five medical centres in the netherlands were approached. data on levels of serum iron parameters and hfe genotype were collected. elevation of serum ferritin was defined using centre-specific normal values by age and gender. results: among the participating relatives, the highest serum iron parameters were found in male c y-homozygous siblings aged > years: % had elevated levels of serum ferritin. generally, male gender and increased age were related to higher iron values. discussion and conclusion: genetic screening for hh is most relevant in male and elderly first-degree relatives of patients with clinically overt hfe-related hh, enabling regular investigation of iron parameters in homozygous individuals.

abstract background: nosocomial infection causes increased hospital morbidity and mortality rates. although handwashing is known to be the most important action in its prevention, the adherence of health care workers to recommended hand hygiene procedures is extremely poor. objective: to evaluate compliance with hand hygiene recommendations among health care workers of a tertiary hospital in barcelona, after a course on hand hygiene had been given to all nurses in the hospital during the previous year. methods: by means of undeclared observation, compliance with the center for disease control recommendations (handwashing or disinfecting, not solely glove exchange) at opportunities for hand hygiene was registered, in procedures of diverse risk level for infection, in both physicians and nurses. results: in opportunities for hand hygiene carried out by health care workers, compliance with recommendations was . %. adherence differed between wards ( . % in intensive care units, . % in medical wards and . % in surgical wards) and slightly between health care workers ( . % in physicians, . % in nurses).
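a sketch of the rate analysis used in the antiepileptic-drug birth-rate study above: poisson regression with log person-years as an offset, so exponentiated coefficients are rate ratios. the cohort below is simulated with a single treatment indicator.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 3000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),            # e.g. on an AED vs untreated
    "pyears": rng.uniform(1, 10, n),             # follow-up person-years
})
rate = 0.08 * np.where(df["treated"], 0.7, 1.0)  # simulated rate ratio ~ 0.7
df["births"] = rng.poisson(rate * df["pyears"])

# log person-years as offset turns counts into rates
model = smf.glm("births ~ treated", data=df, family=sm.families.Poisson(),
                offset=np.log(df["pyears"])).fit()
print("rate ratio:", np.exp(model.params["treated"]))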
discussion: in conclusion, after one year of an educational intervention, adherence to hand hygiene recommendations remains very low. these results underline the need to reconsider the type of interventions implemented.

type of comorbidity affects qol most. objectives: we studied whether qol differed in subjects with dm with and without comorbidities. in addition, we determined differences by type of comorbidity. design and methods: cross-sectional data of dm patients, participants of a population-based dutch monitoring project on risk factors for chronic disease (morgen), were analyzed. qol was measured by the short form- . we compared the means of the subdimensions for dm patients with one comorbidity (cardiovascular diseases (cvd), musculoskeletal diseases (msd) and asthma/copd) to dm patients without this comorbidity, by regression analyses adjusted for age and sex. results: the prevalences of cvd, msd and asthma/copd were . %, . %, and . %. all comorbidities were associated with lower qol, especially for physical functioning. the mean difference ( % ci) was .

abstract background: the extent or increase of ueds has been suggested repeatedly, but the scientific literature has never before been systematically studied. objectives: a systematic appraisal of the worldwide incidence and prevalence rates of upper extremity disorders (ueds) available in the scientific literature was executed, to gauge the range of these estimates in various countries and to determine whether the rates are increasing over time. design and methods: studies that recruited at least people, collected data by using questionnaires, interviews and/or physical examinations, and reported incidence or prevalence rates of the whole upper extremity including the neck, were included. results: no studies were found with regard to the incidence of ueds, and studies that reported prevalence rates of ueds were included. the point prevalence ranged from . - %; the -month prevalence ranged from . - %. one study reported on the lifetime prevalence ( %). we did not find evidence of a clear increasing or decreasing pattern over time. it was not possible to pool the data, because the definitions used for ueds differed enormously. conclusions: there are substantial differences in reported prevalence rates of ueds. the main reason for this is the absence of a universally accepted way of labelling or defining ueds.

abstract background: the absolute number of women diagnosed with breast cancer in the netherlands increased from , in to , in . likewise, the age-standardized rate increased from . to . per , women. besides the current screening programme, changes in risk profile could be a reason for the increased incidence. objective: we studied the changes in breast cancer risk factors for women in nijmegen. methods: in the regional screening programme in nijmegen, almost , women aged - years filled in a questionnaire about risk factors in - . similar questions were applied in the nijmegen biomedical study in , in which women aged - years participated. the median age in both studies was years. results: the frequency of a first-degree relative with breast cancer was . % and . % in and , respectively. none of the other risk factors, such as age at first birth ( . % resp. . %), nulliparity ( . % resp. . %), age at menarche ( . % resp. . %), age at menopause ( . % resp. . %) and obesity ( . % resp. . %), changed over time. conclusion: the distribution of risk factors hardly changed and is unlikely to explain the rise in breast cancer incidence from onwards.
abstract background: a single electronic clinical history system has been developed in the bac (basque autonomous community) for general use in all health centres, making it possible to collect information online on acute health problems as well as chronic ailments. method: the prevalence of diabetes, high blood pressure and copd (chronic obstructive pulmonary disease) was estimated using icd- -cm diagnoses made by primary care physicians. an estimate was also made of the prevalence of hypercholesterolemia, based on the results of analyses requested by physicians. results: in , , patients (out of a total population of , , ) were assessed for serum cholesterol levels. based on this highly representative sample, it was estimated that . % had serum cholesterol levels above mg/dl. the prevalence of diabetes mellitus in people over the age of was . %. the prevalence of high blood pressure in people over was %. discussion: the primary care database makes it possible to access information on problems related to chronic illnesses. knowing the prevalence of diabetes enables doctors to analyse all aspects of the services used by the diabetic population. it also makes it possible to monitor analytical data in real time and evaluate health service outcomes.

examinations were used to assess risk factors for diabetes. cases (n = ) were matched on age and sex to controls (n = ) who were not treated with antidiabetic drugs. logistic regression was used to calculate odds ratios (or). results: the or of incident diabetes for acei use versus non-acei use was . ( %ci: . - . ). for ace dd homozygotes the or was . ( %ci: . - . ) and for ace i-allele carriers . ( %ci: . - . ). the interaction or was . ( %ci: . - . ); this product-term analysis is sketched below. the agt and at r genotypes did not modify the association between acei use and diabetes.

abstract background: lignans have antioxidant and estrogen-like activity and may therefore lower cardiovascular and cancer risk. objective: we investigated whether the intake of four plant lignans (lariciresinol, pinoresinol, secoisolariciresinol, matairesinol) was inversely associated with coronary heart disease (chd), cardiovascular diseases (cvd), cancer, and all-cause mortality. design: the zutphen elderly study is a prospective cohort study in which men aged - y were followed for years. lignan intake was estimated using a recently developed database and related to mortality using cox proportional hazards analysis. results: median total lignan intake in was µg/d. beverages such as tea and wine, vegetables, bread, and fruits were the major lignan sources. total lignan intake was not related to mortality. however, matairesinol was inversely associated with chd, cvd, cancer, and all-cause mortality. multivariate-adjusted rrs ( % ci) per sd increase in intake were . ( . - . ) for chd, . ( . - . ) for cvd, . ( . - . ) for cancer, and . ( . - . ) for all-cause mortality. conclusions: total lignan intake was not associated with mortality. the intake of matairesinol was inversely associated with mortality from chd, cvd, cancer, and all causes. we cannot rule out that this is due to an associated factor, such as wine consumption.

abstract despite the drastic increase in the amount of research into neighbourhood-level contextual effects on health, studies contrasting these effects between different domains of health within one contextual setting are strikingly sparse.
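a sketch of the gene-by-drug interaction odds ratio reported in the ace-inhibitor case-control analysis above, obtained from a logistic product term. genotype frequencies and effect sizes are invented, and the age/sex matching of the original design is ignored here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 4000
df = pd.DataFrame({
    "acei": rng.integers(0, 2, n),       # ACE-inhibitor use
    "dd": rng.binomial(1, 0.25, n),      # DD genotype carrier
})
# simulate an effect of ACEI use that is stronger in DD carriers
lp = -2.0 + 0.3 * df["acei"] + 0.1 * df["dd"] + 0.5 * df["acei"] * df["dd"]
df["diabetes"] = rng.binomial(1, 1 / (1 + np.exp(-lp)))

fit = smf.logit("diabetes ~ acei * dd", data=df).fit(disp=False)
# the exponentiated product term is the interaction odds ratio
print("interaction OR:", np.exp(fit.params["acei:dd"]))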
in this study we use multilevel logistic regression models to estimate neighbourhood-level variations in physical health functioning (pcs) and mental well-being (ghq) in the helsinki metropolitan area and to assess the causes of these differences. the individual-level data are based on a health survey of - year-old employees of the city of helsinki (n = , response rate %). the metropolitan area is divided into neighbourhoods, which are characterised using a number of area-level indicators (e.g. unemployment rate). our results show a moderate but systematic negative effect of indicators of neighbourhood deprivation on physical functioning, whereas for mental health the effect is absent. these effects were strongest for the proportion of manual workers; the odds ratio for poor physical functioning was . for respondents living in areas with a low proportion of manual workers. part of this effect was mediated by differences in health behaviour. analyses of cross-level interactions show that individual-level socioeconomic differences in physical health are smallest in the most deprived areas, somewhat contradicting the results of earlier studies.

abstract background: second-eye cataract surgery is beneficial; nevertheless, there is a considerable proportion of unmet need. objective: to estimate the proportion of second-eye cataract surgery in the public health system of catalonia, and to explore differences in utilisation by patients' gender, age, and region of residence. methods: a total of , senile cataract surgeries performed between and were included. the proportions observed were adjusted through independent logarithmic regression models for each study factor. results: the proportion of second-eye surgery showed an increasing trend (r . %) from . % ( % ci . ; . ) in november to . % ( % ci . ; . ) in december , and its projection to years was , % ( % ci . ; . ). the proportion of second-eye surgery was % ( % ci . ; . ) greater in women than in men. patients years or older had the lowest proportion ( . %; % ci . ; . ), which nevertheless increased during the period, unlike that of patients aged less than years. differences among regions were moderate and decreased throughout the period. conclusions: if the observed trends persist, there will be a substantial proportion of unmet need for bilateral surgery. we predict greater use of second-eye surgery by older patients.

abstract background: persistence with bisphosphonates is suboptimal, which could limit the prevention of fractures in daily practice. objectives: to investigate the effect of long-term persistent bisphosphonate usage on the risk of osteoporotic fractures. methods: the pharmo database, including drug-dispensing and hospital discharge records for more than two million subjects in the netherlands, was used to identify new female bisphosphonate users aged > years from jan ' to jun ' . persistence with bisphosphonates was determined using the method of catalan. a nested matched case-control study was performed (conditional logistic regression for such matched sets is sketched below). cases had a first hospitalization for an osteoporotic fracture (index date). controls were matched : to cases on year of inclusion and received a random index date. the association with fracture risk was assessed for one- and two-year persistent bisphosphonate use prior to the index date. analyses were adjusted for differences in patient characteristics. results: , bisphosphonate users were identified and had a hospitalization for osteoporotic fracture during follow-up. one-year persistent bisphosphonate use resulted in a % lower fracture rate (or .
; % ci . - . ) whereas two year persistent use resulted in a % lower rate (or . ; % ci . - . ). conclusion and discussion: these results emphasize the importance of persistent bisphosphonate usage to obtain the maximal protective effect of treatment.

abstract background: in the who recommended all countries to add hepatitis b (hbv) vaccination to their national immunization programs. the netherlands is a low hbv-endemic country and therefore adopted a vaccination policy targeted towards high-risk groups. methods: during , epidemiological data and blood samples were collected from all reported patients with an acute hbv infection. a fragment of the s-gene was sequenced and phylogenetically analysed to clarify transmission patterns between risk groups. results: of hbv cases reported, % were infected through sexual contact ( % homo-/bisexual, % heterosexual). for patients, samples were available for genotyping. phylogenetic analysis identified genotypes: a ( %), b ( %), c ( %), d ( %), e ( %) and f ( %). of men who have sex with men (msm), % were infected with genotype a. among heterosexuals, all genotypes were found. in many cases, genotypes b-f were directly or indirectly related to countries abroad. only injecting drug user was found (genotype a). conclusion: genotype a is predominant in the netherlands, including among most of the msm. migrant hbv carriers play an important role in the dutch hbv epidemic. genotyping provides insight into the spread of hbv among high-risk groups. this information will be used to evaluate the vaccination policy in the netherlands.

abstract background: excess weight might affect the perception of both physical and mental health in women. objective: to examine the relationship between body mass index (bmi) and hrqol in women aged - years in a rural zone of galicia. design and methods: population-based cross-sectional study covering women, personally interviewed, from villages. hrqol was assessed with the sf- questionnaire, through personal interviews. each scale of the sf- was dichotomised into suboptimal or optimal hrqol using previously defined cut-offs. odds ratios (or) obtained from logistic regression summarize the relationship of bmi with each scale, adjusting for sociodemographic variables, sedentary leisure time, number of chronic diseases and sleeping hours. results: . % of women were obese (bmi = kg/m ) and . % were overweight ( kg/m ). the frequency of suboptimal physical function was higher among overweight (adjusted or: . ; % ci: . - . ) and obese women (adjusted or: . ; % ci: . - . ). furthermore, obese women had a higher frequency of suboptimal scores on the general health scale (adjusted or: . ; % ci: . - . ). no differences were observed in mental health scores among women in different bmi categories. conclusion: in women from rural villages, overweight is associated with worse hrqol in physical function and general health.

abstract background: pneumococcal vaccination among the elderly is recommended in several western countries. objectives: we estimate the cost-effectiveness of a hypothetical vaccination campaign among the + general population in the lazio region (italy). methods: a cohort was followed during a -year timeframe. we estimated the incidence of invasive pneumococcal disease, in the absence of vaccine, based on actual surveillance and hospital data. the avoided deaths and cases were estimated from the literature according to trial results. health expenditures included: costs of the vaccine program, inpatient and some outpatient costs.
cost-effectiveness was expressed as net healthcare costs per episode averted and per life-year gained (lyg) and was estimated at baseline and in deterministic and stochastic sensitivity analyses. all parameters were age-specific and varied according to literature data. results: at baseline, net costs per event averted and per lyg at prices were, respectively, e , ( % ci: e , -e , ) and , ( % ci: e , -e , ). in the sensitivity analysis, bacteraemic pneumonia incidence and vaccine effectiveness increased the net cost per lyg by % and % in the worst-case scenario, and decreased it to e , in the best case. conclusions: the intervention was not cost saving. the uncertainties concerning invasive pneumococcal disease incidence and vaccine effectiveness make the cost-effectiveness estimates unstable.

abstract background: spatial data analysis can detect possible sources of heterogeneity in the spatial distribution of incidence and mortality of diseases. moreover, small-area studies have greater capacity to detect local effects linked to environmental exposures. objective: to estimate the patterns of cancer mortality at the municipal level in spain using smoothing techniques in a single spatial model. design and methods: cases were deaths due to cancer, registered at a municipal level nation-wide for the period - . expected cases for each town were calculated using overall spanish mortality rates, and standard mortality ratios were computed. to plot the maps, smoothed municipal relative risks were calculated using the besag, york and mollié model and markov chain monte carlo simulation methods. as an example, maps for stomach and lung cancer neoplasms are shown. results: it was possible to obtain the posterior distribution of relative risk by a single spatial model including towns and the adjacencies. maps showed the distinct patterns for both cancer locations. conclusion: the municipal atlas allows edge local effects to be avoided, improving the detection of spatial patterns. discussion: bayesian modelling is a very efficient way to detect spatial heterogeneity in cancer and other causes of death.

abstract background: little is known about the impact of socioeconomic status (ses) on outcomes of surgical care. objectives: we estimated the association between ses and outcomes of selected complex elective surgical procedures. methods: using hospital discharge registries (icd-ix-cm codes) of milan, bologna, turin and rome, we identified patients undergoing cardiovascular operations (coronary artery bypass grafting, valve replacement, carotid endarterectomy, repair of unruptured thoracic aorta aneurysm) (n = , ) and cancer resections (pancreatectomy, oesophagectomy, liver resection, pneumonectomy, pelvic evisceration) (n = , ) in four italian cities, - . an area-based income index was calculated. post-operative mortality (in-hospital or within days) was the outcome. logistic regression adjusted for gender, age, residence, comorbidities, concurrent and previous surgeries. results: high-income patients were older and had fewer comorbidities. mortality varied by surgery type (cabg , %, valve , %, endarterectomy , %, aorta aneurysm , %, cancer . %). low-income patients were more likely to die after cabg (or = . ).

abstract background: an important medical problem of renal transplant patients who receive immunosuppression therapy is the development of a malignancy during long-term follow-up. however, existing studies are not in agreement over whether patients who undergo renal transplantation have an increased risk of melanoma.
objective: the aim of this study was to determine the incidence of melanoma in renal transplantation patients in the northern part of the netherlands. methods: we linked a cohort of patients who received a renal transplantation in the university medical centre groningen between and with the cancer registry of the comprehensive cancer centre north-netherlands, to identify all melanoma patients in this cohort. results: only patient developed a melanoma following the renal transplantation; no significant increase in the risk of melanoma was found. conclusion: although several epidemiologic studies have shown that the risk of melanoma is increased in renal transplantation patients who receive immunosuppression therapy to prevent allograft rejection, this increased risk was not found in the present study. the lower level of immunosuppressive agents given in the netherlands might be responsible for this low incidence.

abstract background: socio-economic health inequalities are usually studied using self-reported income, although the validity of self-reports is uncertain. objectives: to compare self-reports of income by respondents to health surveys with their income according to tax registries, and to determine to what extent the choice of income measure influences the health-income relation. methods: around . respondents from the dutch permanent survey on living conditions were linked to data from dutch tax and housing registries of . both self-reported and registry-based measures of household equivalent income were calculated and divided into deciles. the association with less-than-good self-assessed health was studied using prevalence rates and odds ratios. results: around % of the respondents did not report their income. around % reported an income deciles lower or higher than the actual income value. the relation between income and health was influenced by the choice of income measure. larger health inequalities were observed with self-reports compared to registry-based measures. while a linear health-income relation was found using self-reported income, a curvilinear relation (with the worst health in the second-lowest decile) was observed for registry-based income. conclusion: the choice of income source has a major influence on the health-income relation that is found in inequality research.

abstract background: while many health problems are known to affect immigrant groups more than the native dutch population, little is known about health differences within immigrant groups. objectives: to determine the association between self-assessed health and socioeconomic status (ses) among people of turkish, moroccan, surinamese and antillean origin. methods: data were obtained from a social survey held among immigrants aged - years in the netherlands, with almost respondents per immigrant group. ses differences in the prevalence of 'poor' self-assessed health were measured using prevalence rate ratios estimated with regression techniques. results: within each immigrant group, poor health was much more common among those with low ses. the health of women was related to their educational level, occupational position, household income, financial situation and (to a lesser extent) their parents' education. similar relationships were observed for men, except that income was the strongest predictor of poor health. the health differences were about as large as those known for the native dutch population. conclusion and discussion: migrant groups are not homogeneous.
also within these groups, low ses is related to poor general health. in order to identify the subgroups where most health problems occur, different socioeconomic indicators should be used.

abstract background: genetic damage quantification can be considered a biomarker of exposure to genotoxic agents and an early-effect biomarker regarding cancer risk. objectives: to assess genetic damage in newborns and its relationship with anthropometric and sociodemographic variables, maternal tobacco consumption and pollution perception. design and methods: the bio-madrid study recruited trios (mother/father/newborn) from areas in madrid to assess the impact of pollutants in humans. parents answered a questionnaire about socio-economic characteristics, pregnancy, life-style and perception of pollution. genetic damage in newborns was measured with the micronucleus (mn) test in peripheral lymphocytes. poisson regression models were fitted using mn frequency per binucleated cells as the dependent variable. explanatory variables included sex, parents' age, tobacco, area and reported pollution level. results: the mean frequency of mn was . per (range: - ). no differences were found regarding area, sex and maternal tobacco consumption. mn frequency was higher in underweight newborns and in those residing near heavy-traffic roads.

in recent years minimally invasive surgery procedures underwent rapid diffusion, and laparoscopic cholecystectomy has been among the first to be introduced. after its advent, increasing rates of overall and laparoscopic cholecystectomy have been observed in many countries. we evaluated the effect of the introduction of the laparoscopic procedure on the rates of cholecystectomy in the friuli venezia giulia region, performing a retrospective study. from regional hospital discharge data we selected all records with a procedure code of laparoscopic (icd cm: ) or open ( ) cholecystectomy and a diagnosis of uncomplicated cholelithiasis (icd cm: . ; . ; , ) or cholecystitis ( , ; , ), in any field, from to . in the -year study period, the number of overall cholecystectomies increased from to (+ , %), mainly owing to the marked increase of laparoscopic interventions from procedures ( , % of overall cholecystectomies) to ( , %). rates of laparoscopic cholecystectomies increased from , to , per admitted patients with a diagnosis of cholelithiasis or cholecystitis. the introduction of laparoscopic cholecystectomy was followed not only by a shift towards laparoscopically performed interventions but also by an increase in overall cholecystectomies in the friuli venezia giulia region.

abstract background: although a diminished-dose scheme of -valent pneumococcal conjugate vaccination (pcv ) may offer protection against invasive pneumococcal disease, it might affect pneumococcal carriage and herd immunity. long-term memory has to be evaluated. objective: to compare the influence of and -dose pcv vaccination schemes on pneumococcal carriage, transmission, herd immunity and anti-pneumococcal antibody levels. methods: in a prospective, randomized, controlled trial, infants are randomly allocated to receive pcv at ages and months; at ages , and months; or at the age of months only. nasopharyngeal (np) swabs are regularly obtained from infants and family members. the np swabs are cultured by conventional methods and pneumococcal serotypes are determined by quellung reaction. antibody levels are obtained at and months from infants in groups i and ii and from infants in group iii.
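the sample-size statement that follows rests on the standard formula for comparing two proportions; a minimal sketch, with the carriage proportions, alpha and power chosen purely for illustration:

from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, beta=0.20):
    # fleiss-type two-proportion sample size, no continuity correction
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(1 - beta)
    pbar = (p1 + p2) / 2
    num = (z_a * (2 * pbar * (1 - pbar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

print(n_per_group(0.50, 0.40))  # several hundred infants per arm for a 10-point difference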
one thousand infants are needed to detect a % difference in pneumococcal carriage (α = . , β = . ) between the three groups. results: so far, infants have been included. preliminary results show that prior to vaccination pneumococcal carriage was %. conclusion: this trial will provide insight into the effects of a diminished-dose scheme on herd immunity and long-term anti-pneumococcal antibody development.

abstract background: oil spills cause major environmental damage and acute health problems in affected populations. objectives: to assess the impact of the prestige oil spill on the hrqol of the exposed population. design and methods: we selected residents in coastal areas heavily affected by the oil spill and residents in unaffected inland villages through random sampling, stratified by age and sex. hrqol was measured with the sf- questionnaire in personal interviews. individual exposure was also explored. mean differences in sf- scores > points were considered 'clinically relevant'. odds ratios (or) summarized the association between area of residence (coast vs inland) and suboptimum hrqol (below the th percentile), adjusting for possible confounders. results: neither clinically relevant nor statistically significant differences were observed in most of the sf- scales regarding place of residence or individual exposure. worse scores (inland = , ; coast = , ; p< , )

abstract background: patient comorbidities are usually measured and controlled for in health care outcome research. hypertension is one of the most commonly used comorbidity measures. objectives: this study aims to assess underreporting of hypertension in ami patients, and to analyze the impact of coding practices across italian regions and hospital types. methods: a cohort of ami hospitalisations in italy from november to october was selected. patients with a previous hospital admission reporting a diagnosis of complicated hypertension within the preceding months were studied. a logistic model was constructed. both crude and adjusted probabilities of reporting hypertension in ami admissions were estimated, depending on the number of diagnosis fields compiled in discharge abstracts and the presence of other diseases. results: in . % of patients hypertension was not reported. the probability of reporting hypertension increased with the number of compiled diagnosis fields (adjusted ors range: . - . ). there were no significant differences among italian regions, while private hospitals' reporting was less accurate. disorders of lipoid metabolism were more likely to be coded together with hypertension (adjusted or: . ). conclusions: information from both ami and previous hospitalisations would be needed to include hypertension in a comorbidity measure.

abstract background: angiotensin converting enzyme inhibitors (acei) should be considered the standard initial treatment of systolic heart failure. this treatment is not recommended in patients with hypotension, although systolic blood pressure figures around - mmhg during treatment are allowed if the patient remains asymptomatic. objectives: to determine the proportion of patients with systolic heart failure receiving treatment with acei, and the proportion of these patients with signs of hypotension. design and methods: the electronic clinical records of all the patients diagnosed with systolic heart failure were reviewed. the electronic information system covers approximately % of the population of the basque country.
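a registry query of this kind, combining a code-based case definition with each patient's last available blood-pressure reading, might look like the following pandas sketch; the file name, column names and clinical thresholds are all hypothetical:

import pandas as pd

records = pd.read_csv("ehr_extract.csv")   # one row per patient contact
hf_ids = set(records.loc[records["dx_code"].isin({"428", "428.0"}),
                         "patient_id"])    # illustrative heart-failure codes

# keep each patient's most recent record (last available bp determination)
last = (records.sort_values("visit_date")
               .drop_duplicates("patient_id", keep="last")
               .set_index("patient_id"))
hf = last.loc[sorted(hf_ids)]

on_acei = hf["on_acei"] == 1
low_bp = (hf["sbp"] < 100) | (hf["dbp"] < 60)   # assumed cut-offs
print(f"acei prescribed: {on_acei.mean():.1%}; "
      f"treated despite low bp: {(on_acei & low_bp).mean():.1%}")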
diagnosis of heart failure was defined as the presence of any of the following cie- codes: or . or . . to evaluate blood pressure, the last available determination was considered. results: out of patients with left heart failure, ( . %) had been prescribed acei. among the patients with blood pressure lower than mmhg (systolic) or lower than mmhg (diastolic), ( . %) were also receiving this treatment. conclusions: acei are clearly underprescribed in the basque country for the treatment of heart failure. attention should be given to the group at risk of hypotension.

abstract background: epidemiologic studies have shown an association between c-reactive protein (crp) and cardiovascular endpoints in population samples. methods: in a longitudinal study of myocardial infarction (mi) survivors, crp was measured repeatedly (up to times) within a period of months. data on disease history and lifestyle were collected at baseline. we examined the association between different variables and the level of crp using a random-effects model. results: in total, crp samples were collected in athens, augsburg, barcelona, helsinki, rome and stockholm. mean levels of crp were . , . , . , . , . , . [mg/l], respectively. body mass index (bmi) and chronic bronchitis (ever diagnosed) had the largest effect on crp ( % (for kg/m ) and % change from the mean level, respectively, p< . ). age classes showed a cubic function with a minimum at ages to . glycosylated hemoglobin (hba c) < . % as a measure of long-term blood glucose control and being male were found to be protective (- % and - %, respectively, p< . ). conclusion: it was shown that bmi and history of bronchitis are important in predicting the level of crp. other variables, like alcohol intake, play a minor role in this large sample of mi patients.

abstract background: during the last decades a remarkable increase in incidence rates of malignant lymphoma was seen. although some reasons are known or suspected, underlying risk factors are not well understood. objectives: we studied the influence of medical radiation (x-ray, radiotherapy and scintigraphy) on the risk of malignant lymphoma. methods: we analysed data from a population-based case-control study with incident lymphoma cases in germany from - . after informed consent, cases were pair-matched with controls recruited from registration offices by age, gender and study region. data were collected in a personal interview. we analysed the data using conditional logistic regression. results: the linear model shows an or = . /msv due to x-ray exposure and or = . ( %-ci = . - . ) comparing higher with lower exposure. radiotherapy shows an or = . (n = cases). there is no association between all lymphomas and scintigraphies, but in the subgroup containing multiple myeloma, cll, malt- and marginal-cell lymphoma we found an or = . ( %-ci = . - . ) in the multivariate model. discussion: no excess risk was observed for x-ray examinations. ionising radiation may increase risk for specific lymphoma subgroups. however, it should be noted that numbers in the subgroups are small and that radiation dose may be somewhat inaccurate as no measurements were available.

abstract background: varus-alignment (bow-leggedness) is assumed to correlate with knee osteoarthritis (oa), but it is unknown whether varus-alignment precedes the oa or results from it. objective: to assess the relationship between varus-alignment and the development, as well as progression, of knee oa.
methods: , participants in the rotterdam study were selected. knee oa at baseline and at follow-up (mean follow-up . years) was defined as kellgren & lawrence (k&l) grade , and progression of oa as an increase in k&l grade. alignment was measured by the femoro-tibial angle on baseline radiographs. multivariable logistic regression for repeated measurements was used. results: of , knees, . % showed normal alignment, . % varus-alignment, and . % valgus-alignment. comparison of high varus-alignment versus normal, low and intermediate varus-alignment together showed a two-fold increase in the development of knee oa (or = . ; %ci = . - . ). the risk of progression was higher in the high varus group compared to the normal, low and intermediate varus group (or = . ; %ci = . - . ). stratification for overweight gave similar odds ratios in the overweight group, but weaker odds ratios in the non-overweight group. conclusion: a higher value of varus-alignment is associated with the onset and progression of knee oa.

abstract background: echocardiographic image quality in copd patients can be hampered by hyperinflated lungs. cardiovascular magnetic resonance imaging (cmr) may overcome this problem and provides accurate and reproducible information about the heart without geometric assumptions. objective: to determine the value of easily assessable cmr parameters compared to other diagnostic tests in identifying heart failure (hf) in copd patients. design and methods: participants were recruited from a cohort of copd patients = years. a panel established the diagnosis of hf during consensus meetings using all diagnostic information, including echocardiography. in a nested case-control study design, copd patients with hf (cases) and a random sample of copd patients without hf (controls) received cmr. the diagnostic value of cmr for diagnosing hf was quantified using univariate and multivariate logistic modelling and roc-area analyses. results: four easily assessable cmr measurements had significantly more added diagnostic value beyond clinical items (roc-area . ) than amino-terminal pro b-type natriuretic peptide (roc-area . ) or electrocardiography (roc-area . ). a 'cmr model' without clinical items had an roc-area of . . conclusion: cmr has excellent capacities to establish a diagnosis of heart failure in copd patients and could be an alternative to echocardiography in this group of patients.

abstract background: the prevalence of overweight (i.e., body mass index [bmi] >= kg/m ) is increasing. new approaches to address this problem are needed. objectives: ) to assess the effectiveness of distance counseling (i.e., by phone and e-mail/internet) on body weight and bmi in an overweight working population; ) to assess differences in effectiveness of the two communication methods. design and methods: overweight employees ( % male; mean age . ± . years; mean bmi . ± . kg/m ) were randomized to a control group receiving general information on overweight and lifestyle (n = ), a phone-based intervention group (n = ) and an internet-based intervention group (n = ). the intervention took months and used a cognitive behavioral approach, addressing physical activity and diet. the primary outcome measures, body weight and bmi, were measured at baseline and at six months. statistical analyses were performed with multiple linear regression. results: the intervention groups (i.e., phone and e-mail combined) lost . kg (bmi reduced by . kg/m ) over the control group (p = . ). the phone group lost . 
kg more than the internet group (p = . ).

abstract objective: although an inverse education-mortality gradient has been shown in the general population, little is known about this trend in groups with higher risks of death. we examine differences in mortality by education and hiv status among injecting drug users (idus) before and after the introduction of highly active antiretroviral therapy (haart) in . methods: community-based cohort study of idus recruited in aids prevention centres.

abstract background: pancreatic cancer is an aggressive cancer with low survival time, with health-related quality of life (hrqol) being of major importance. objectives: the aim of our study was to assess both generic and disease-specific hrqol in patients with pancreatic cancer. methods: patients with suspected pancreatic cancer were consecutively included at admission to the hospital. hrqol was determined with the disease-specific european organization for research and treatment of cancer (eortc) health status instrument and the generic euroqol (eq- d). results: a total of patients (mean age years ± , % men) were admitted with suspected pancreatic cancer. of these patients, ( %) had pancreatic cancer confirmed as the final diagnosis. hrqol was significantly impaired in patients with pancreatic cancer for most eortc and eq- d scales in comparison to norm populations. the eq- d visual analogue scale (vas) and utility values were significantly correlated with the five functional scales, with the global health scale and with some but not all of the eortc symptom scales/items. conclusions: hrqol was severely impaired in patients with pancreatic cancer. there was a significant correlation between most eortc and eq- d scales. our results may facilitate further economic evaluations and aid health policy makers in resource allocation.

abstract background: organised violence has a health impact both on those who experience the violence directly and on those affected indirectly. the number of people affected by mass violence is alarming. substantial knowledge on the long-term health impact of organised violence is of importance for public health and for epidemiology. objectives: to investigate research results on the long-term mental health impact of organised violence. design and methods: a search of papers for the keywords genocide, organised violence, transgenerational effects, mental health was carried out in pubmed, science citation index and psycinfo. results: the systematic review on the long-term health impact of genocide showed that exposure to organised violence has an impact on mental health. methodological strengths and weaknesses varied between studies. the mental health consequences found were associated with the country of research and the time of study. overall, the data showed that organised violence has a transgenerational impact on the mental health of individuals and societies. conclusion: longitudinal studies have to be carried out to gain further insight into the long-term health effects of organised violence. discussion: research results on the mental health effects of organised violence have to be analysed in the context of changing concepts of illness.

overweight is increasing and associated with various health problems. there are no well-structured primary care programs for overweight available in the netherlands. therefore, we developed a -month multidisciplinary treatment program in a primary care setting.
the aim of the present study is to determine the feasibility and efficacy of a multidisciplinary treatment program on weight loss and risk profile in an adult overweight population. hundred participants of the utrecht health project are randomised to either a dietetic group or a dietetic plus physiotherapy group. the control group consists of another participants recruited from the utrecht health project and receives routine health care. body weight, waist circumference, blood pressure, serum levels, energy intake and physical activity are measured at baseline, halfway and at the end of the treatment program. feasibility of the treatment program is assessed by response, compliance and program-associated costs and workload. efficacy is determined by analysing changes in outcome measures between groups over time using t-tests and repeated-measures anova. the treatment program is considered effective with at least a % difference in mean weight change over time between groups. positive evaluation of the multidisciplinary treatment program for overweight may lead to implementation in routine primary health care.

abstract background: examining patients' quality of life (qol) before icu admission makes it possible to compare and analyse its relation with other variables. objectives: to analyse the qol of patients admitted to a surgical icu before admission and study its relation with baseline characteristics and outcome. design and methods: the study was observational and prospective in a surgical icu, enrolling all patients admitted between november and april . baseline characteristics of patients, history of comorbidities and the quality of life survey score (qolss) were recorded. assessment of the relation between each variable or outcome and the total score of the qolss was performed by multiple linear regression. results: the total qolss demonstrated worse qol in patients with hypertension, cerebrovascular disease or renal insufficiency, in the severely ill (as measured by saps and asa physical status), and in older patients. there was no relation between qol and longer icu los. conclusions: preadmission qol correlates with age, severity of illness, comorbidities and mortality rates but is unable to predict longer icu stay. discussion: the qolss appears to be a good indicator of outcome and severity of illness.

abstract background: transient loss of consciousness (tloc) has a cumulative lifetime incidence of %, and can be caused by various disorders. objectives: to assess the yield and accuracy of initial evaluation (standardized history, physical examination and ecg), performed by attending physicians in patients with tloc, using follow-up as a gold standard. design and methods: adult patients presenting with tloc to the academic medical centre between february and may were included. after initial evaluation, physicians made a certain, likely or no initial diagnosis. when no diagnosis was made, additional cardiological testing, expert history taking and autonomic function testing were performed. the final diagnosis, after years of follow-up, was determined by an expert committee. results: patients were included. after initial evaluation, % of the patients were diagnosed with a certain and % with a likely cause for their episodes. overall diagnostic accuracy was % ( %ci - %); % ( %ci - %) for the certain diagnoses and % ( %ci - %) for the likely diagnoses. conclusion and discussion: attending physicians make a diagnosis in % of patients with tloc after initial evaluation, with high accuracy.
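the confidence intervals quoted for diagnostic accuracy are binomial intervals for a proportion; a minimal wilson-interval sketch (the counts are invented for illustration):

from math import sqrt

def wilson_ci(k, n, z=1.96):
    p = k / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

print(wilson_ci(85, 100))  # 85 correct diagnoses out of 100 -> roughly (0.77, 0.91)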
the use of abundant additional testing can be avoided in many patients.

abstract background: the possibility of an influenza pandemic is one of the major public health challenges of today. risk perceptions among the general public may be important for successful public health measures to better control an outbreak situation. objectives: we investigated risk perception and efficacy beliefs related to an influenza pandemic in the general population in countries in europe and asia. design and methods: telephone interviews were conducted in . risk perception of an influenza pandemic was measured on a -point scale and outcome- and self-efficacy on a -point scale (low-high). the differences in risk perception by country, sex and age were assessed with a general linear model including interaction effects. results: , persons were interviewed. the mean risk perception of flu was . and was significantly higher in europe ( . ) compared to asia ( . ) (p< . ) and higher in women ( . ) than men ( . ) (p< . ). outcome- and self-efficacy were lower in europe than in asia. conclusion: in europe, higher risk perceptions and lower efficacy beliefs were found as compared to asia. in developing preparedness plans for an influenza pandemic, specific attention should therefore be paid to risk communication and how perceived self-efficacy can be increased.

abstract background: increased survival of patients with cf has prompted interest in their hrqol. objectives: . to measure hrqol and its predictors in cf patients cared for at the bambino gesù children's hospital in rome; . to assess the psychometric properties of the italian version of the cf-specific hrqol instrument (cystic fibrosis questionnaire, cfq). design and methods: cross-sectional survey. all cf patients aged years or more were asked to complete the cfq (age-specific format). psychological distress was assessed through standardized questionnaires in patients (achenbach and general health questionnaire, ghq) and their parents (ghq and sf- ). results: one hundred and eighteen patients ( males, females, age range to years) participated in the study (response rate %). internal consistency of the cfq was satisfactory (cronbach alpha from . to . ); all item-test correlations were greater than . . average cfq standardized scores were very good in all domains (> on a - scale), except perceived burden of treatments ( ) and degree of socialization ( ). multiple regression analysis was performed to identify factors associated with different hrqol dimensions. conclusion: support interventions for these patients should concentrate on finding a balance between the need to prevent infections and the promotion of adequate, age-appropriate social interactions.

abstract background: the metabolic syndrome (metsyn) - a clustering of metabolic risk factors with diabetes and cardiovascular diseases as the primary clinical outcomes - is thought to be highly prevalent with an enormous impact on public health. to date, consistent data for germany are missing. objective: the study was conducted to examine the prevalence of the metsyn (according to the ncep atp iii definition) among german patients in primary care. methods: the germany-wide cross-sectional study ran for two weeks in october , with randomly selected general practitioners included. blood glucose and serum lipids were analyzed, waist circumference and blood pressure assessed, and data on smoking, dietary and exercise habits, and regional and sociodemographic characteristics collected.
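the ncep atp iii definition referred to above flags the metabolic syndrome when at least three of five criteria are met; a sketch using the original atp iii cut-offs (these should be checked against the study protocol, which is not reproduced here):

def metsyn_atp3(waist_cm, tg_mgdl, hdl_mgdl, sbp, dbp, glucose_mgdl, male):
    criteria = [
        waist_cm > (102 if male else 88),       # abdominal obesity
        tg_mgdl >= 150,                         # raised triglycerides
        hdl_mgdl < (40 if male else 50),        # low hdl cholesterol
        sbp >= 130 or dbp >= 85,                # raised blood pressure
        glucose_mgdl >= 110,                    # raised fasting glucose (original cut-off)
    ]
    return sum(criteria) >= 3

print(metsyn_atp3(105, 180, 38, 135, 80, 112, male=True))  # True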
abstract background: excessive infant crying is a common and often stress-inducing problem that can ultimately result in child abuse. from previous research it is known that maternal depression during pregnancy is related to excessive crying, but so far little attention has been paid to paternal depression. objective: we studied whether paternal depression is independently associated with excessive infant crying. design and methods: in a prospective multiethnic population-based study we obtained depression scores of , mothers and , fathers at weeks of pregnancy using the brief symptom inventory, and information on the crying behaviour of , infants at months. we used logistic regression analyses in which we adjusted for depression of the mother, level of education, smoking and alcohol use. results: paternal depressive symptomatology was related to the widely used wessel's criteria for excessive crying (adjusted odds ratio . , . - . ). conclusion: our findings indicate that paternal depressive symptomatology might be a risk factor for excessive infant crying. discussion: genetic as well as other direct (e.g. interaction between father and child) or indirect (e.g. marital distress or poor circumstances) mechanisms could explain the observed association.

abstract background: in studying the genetic background of congenital anomalies, the comparison of affected cases to non-affected controls is a popular method. investigation of case-parent triads uses observations of cases and their parents exclusively. methods: both the case-control approach and the log-linear case-parent triads model were applied to spina bifida (sb) cases and their parents ( triads) and controls in an analysis of the impact of the c t and a c mthfr polymorphisms on the occurrence of sb. results: observed frequencies of the tt genotype were , % in sb children, , % in mothers, , % in fathers and , % in controls, and of the cc genotype , % in sb children, , % in mothers, , % in fathers and , % in controls. neither genotype frequency in sb triads differed significantly from controls. the case-control approach showed a nonsignificant increase in the risk of having sb for t-allele carriers in either homozygous (or = , ) or heterozygous form (or = , ) and for c-allele carriers in heterozygous form (or = , ). the log-linear model revealed a significant relative risk of sb in children with both the tt and ct genotypes (rr = , and rr = , , respectively). the child's genotype at a c and the mother's genotypes did not contribute to the risk. conclusions: the case-parent triads approach adds new information regarding the impact of parental imprinting on congenital anomalies.

abstract background: previous studies showed an association of autonomic dysfunction with coronary heart disease (chd) and with depression, as well as an association of depression with chd. however, there is limited information on autonomic dysfunction as a potential mediator of the adverse effect of depression on chd. objectives: to examine the role of autonomic dysfunction as a potential mediator of the association of depression with chd. design/methods: we used data on participants aged - years from the ongoing population-based cross-sectional carla study ( % male). time- and frequency-domain parameters of heart rate variability (hrv) as a marker of autonomic dysfunction were calculated. prevalent myocardial infarction (mi) was defined as self-reported physician-diagnosed mi or a diagnostic minnesota code in the electrocardiogram. depression was defined based on the ces-d depression scale.
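the time-domain hrv parameters mentioned above are simple summaries of the rr-interval series; a minimal numpy sketch over simulated intervals (the carla study's actual processing pipeline is not described here):

import numpy as np

rng = np.random.default_rng(1)
rr = rng.normal(800, 40, 300)                 # simulated rr intervals in ms

sdnn = rr.std(ddof=1)                         # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))    # beat-to-beat variability
print(f"sdnn = {sdnn:.1f} ms, rmssd = {rmssd:.1f} ms")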
logistic regression was used to assess associations between depression, hrv and mi. results: in age-adjusted logistic regression models, there was no statistically significant association of hrv with depression, of depression with mi, or of hrv with mi in men and women. discussion/conclusion: the present analyses do not support the hypothesis of an intermediate role of autonomic dysfunction on the causal path from depression to chd.

abstract background: hypertension is an established risk factor for cardiovascular disease. however, the prevalence of untreated or uncontrolled hypertension is often high (even in populations at high risk). objectives: to assess the prevalence of untreated and of uncontrolled hypertension in an elderly east german population. design and methods: preliminary data from a cross-sectional, population-based examination of men and women aged - years were analysed. systolic (sbp) and diastolic blood pressure (dbp) were measured, and physician-diagnosed hypertension and use of antihypertensive drugs were recorded. the prevalence of hypertension was calculated according to age and sex. results: of all participants, . % were hypertensive ( . % of men, . % of women). of these, . % were untreated, . % treated but uncontrolled, and . % controlled. women were more often properly treated than men. the prevalence of untreated hypertension was highest in men aged - years ( . %) and lowest in men and women aged >= years ( . %). uncontrolled hypertension increased with age in both sexes. conclusion and discussion: in this elderly population, there is a high prevalence of untreated and uncontrolled hypertension. higher awareness in the population and among physicians is needed to prevent sequelae such as cardiovascular disease.

abstract background: exposure to pesticides is a potential risk factor for subfertility, which can be measured by time-to-pregnancy (ttp). as female greenhouse workers constitute a major group of workers exposed to pesticides at childbearing age, a study was performed among these and a non-exposed group of female workers. objectives: to measure the effects of pesticide exposure on time-to-pregnancy. design and methods: data were collected through postal questionnaires with detailed questions on ttp, lifestyle factors, and work tasks (e.g. application of pesticides, re-entry activities, and work hours) during the six months prior to conception of the most recent pregnancy. associations between ttp and exposure to pesticides were studied in cox's proportional hazards models among female greenhouse workers and referents. results: the initial adjusted fecundability ratio (fr) for greenhouse workers versus referents was . ( %ci: . - . ). this fr proved to be biased by the reproductively unhealthy worker effect. restricting the analyses to full-time workers only gave an adjusted fr of . ( %ci: . - . ). among primigravidous greenhouse workers, an association was observed between prolonged ttp and gathering flowers (fr = . , %ci: . - . ). conclusion and discussion: this study adds some evidence to the hypothesis of adverse effects of pesticide exposure on time-to-pregnancy, but more research is needed.

abstract background: hfe-related hereditary hemochromatosis (hh) is an iron overload disease for which screening is recommended to prevent morbidity and mortality. however, discussion has arisen on the clinical penetrance of the hfe-gene mutations. objective: in the present study the morbidity and mortality of families with hfe-related hh are compared to a normal population.
methods: c y-homozygous probands with clinically overt hfe-related hh and their first-degree relatives filled in a questionnaire on health, diseases and mortality among relatives. laboratory results on serum iron parameters and hfe genotype were collected. the self-reported morbidity, family mortality and laboratory results were compared with an age- and gender-matched subpopulation of the nijmegen biomedical study (nbs), a population-based survey conducted in the eastern part of the netherlands. results: two hundred and twenty-eight probands and first-degree relatives participated in the hefas. serum iron parameters were significantly elevated in the hefas population compared to the nbs controls. also, the morbidity within hefas families was significantly increased for fatigue, hypertension, liver disease, myocardial infarction, osteoporosis and rheumatism. mortality among siblings, children and parents of hefas probands and nbs participants was similar. discussion: the substantially elevated morbidity within hefas families justifies further exploration of a family cascade screening program for hh in the netherlands.

abstract objectives: to evaluate awareness levels and the effectiveness of warning labels on cigarette packs among portuguese students enrolled in the th to th grades. design and methods: a cross-sectional study was carried out in may ( ) in a high school population ( th- th grades) in the north of portugal (n = ). a confidential self-reported questionnaire was administered. warning label effectiveness was evaluated by changes in smoking behaviour and cigarette consumption during the period between june/ (before the implementation of the tobacco warning labels in portugal) and may/ . continuous variables were compared by the t-test for paired samples and the kruskal-wallis test. crude and adjusted odds ratios and confidence intervals were calculated by logistic regression analysis. results: the majority of students ( . %) had a high level of awareness of warning label content. this knowledge was significantly associated with school grade and current smoking status. none of these variables was significantly associated with changes in smoking behaviour. although not reaching statistical significance, the majority of teenagers ( . %) increased or maintained their smoking pattern. awareness level was not associated with decreases in smoking prevalence or consumption. conclusions: current warning labels are ineffective in changing smoking behaviour among portuguese adolescents.

abstract background: injuries are an important cause of morbidity. the presence of pre-existing chronic conditions (pecs) has been shown to be associated with higher mortality. objectives: the aim of this study is to evaluate the association between pecs and the risk of death in elderly trauma patients. methods: an injury surveillance system, based on the integration of the emergency, hospital, and mortality databases of the lazio region (year ), was used. patients were elderly people seen at the emergency departments and hospitalised. pecs were evaluated on the basis of the charlson comorbidity index (cci). to measure the effect of pecs on the probability of death, we used logistic regression. results: patients were admitted to the hospital. overall, . % of the injured subjects were affected by one or more chronic conditions. the risk of death for non-urgent and urgent patients increased with increasing cci score.

abstract background: c-reactive protein (crp) has been shown to predict prognosis in heart failure (hf).
objective: to assess the variability of crp over time in patients with stable hf. methods: we measured high-sensitivity crp (hscrp) times ( -week intervals) in patients with stable hf. patients whose hscrp was > mg/dl or whose clinical status deteriorated were excluded. two consecutive hscrp measurements were available for patients: men, mean (sd) age . ( . ) years, % with depressed left ventricular systolic function. forty-four patients had a third measurement. using the cutoff point of . mg/dl for prediction of adverse cardiac events, we assessed the proportion of patients who changed risk category. results: median (p -p ) baseline hscrp was . mg/dl ( . - . ). hscrp varied widely, particularly at higher levels. the th and th percentiles of differences between the first two measurements were - . mg/dl and + . mg/dl. the correlation coefficient between these measurements was . , p< . . eleven ( %) patients changed risk category, kappa = . , p< . . among patients whose first two measurements were concordant, . % changed category in the third measurement, kappa = . , p< . . conclusion: large variability in hscrp in stable hf may decrease the validity of risk stratification based on single measurements. it remains to be demonstrated whether the pattern of change over time adds predictive value in hf patients.

abstract background: instrumental variables can be used to adjust for confounding in observational studies. this method has not yet been applied with censored survival outcomes. objectives: to show how instrumental variables can be combined with survival analysis. design and methods: in a sample of patients with type- diabetes who started renal-replacement therapy in the netherlands between and , the effect of pancreas-kidney transplantation versus kidney transplantation on mortality was analyzed using region as the instrumental variable. because the hospital could not be chosen with this type of transplantation, patients can be assumed to be naturally randomized across hospitals. we calculated an adjusted difference in survival probabilities for every time point, including the appropriate confidence interval (ci %). results: the -year difference in survival probabilities between the two transplantation methods, adjusted for measured and unmeasured confounders, was . (ci %: . - . ), favoring pancreas-kidney transplantation. this is substantially larger than the intention-to-treat estimate of . (ci %: . - . ), where policies are compared. conclusion and discussion: instrumental variables are not restricted to uncensored data, but can also be used with a censored survival outcome. hazard ratios with this method have not yet been developed. the strong assumptions of this technique apply similarly with survival outcomes.

the sir of coronary heart disease was . [ %ci: . - . ] and remained significantly increased up to years of follow-up. cox regression analysis showed a . -fold ( % ci, . - . ) increased risk of congestive heart failure after anthracyclines and a . -fold ( % ci, . - . ) increased risk of coronary heart disease after radiotherapy to the mediastinum. conclusion: the incidence of several cardiac diseases was strongly increased after treatment for hl, even after prolonged follow-up. anthracyclines increased the risk of congestive heart failure and radiotherapy to the mediastinum increased the risk of coronary heart disease.

abstract background: the concept of reproductive health is emerging as an essential need for health development.
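the instrumental-variable survival contrast described two abstracts above can be illustrated with a wald-type estimator: the intention-to-treat difference in kaplan-meier survival probabilities divided by the difference in treatment uptake between the levels of the instrument. a sketch on simulated data (the actual covariate adjustment and confidence-interval construction used in the study are not reproduced):

import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
n = 2000
z = rng.integers(0, 2, n)                     # instrument, e.g. region
treat = rng.random(n) < np.where(z == 1, 0.7, 0.3)
time = rng.exponential(np.where(treat, 12.0, 8.0))
event = rng.random(n) < 0.8                   # ~20% censored
time = np.where(event, time, time * rng.random(n))

def surv_at(mask, t=5.0):
    km = KaplanMeierFitter().fit(time[mask], event[mask])
    return float(km.predict(t))

itt = surv_at(z == 1) - surv_at(z == 0)       # intention-to-treat contrast
iv = itt / (treat[z == 1].mean() - treat[z == 0].mean())
print(f"itt difference: {itt:.3f}; iv-scaled difference: {iv:.3f}")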
objectives: to ascertain the opinions of parents, teachers and students about the teaching of reproductive health issues to students of middle and high schools. design and methods: focus group discussions (fgds) were chosen as a qualitative research method. a series of group discussions with the participation of persons ( students, teachers, and parents) was held. each group included to persons. results: all the participants noted a true need for education on puberty health in order to prepare pre-adolescent students for the psychological and somatic changes of puberty. however, a few fathers and a group of mothers believed that education on family planning is not suitable for students. a need for education on aids and marital problems for students was the major concern in all groups. the female students emphasized a need for counselling programmes in the pre-marital period. conclusion: essentials of puberty health, family planning, aids and marital problems should be taught in middle and high schools in order to narrow the knowledge gap of the students.

abstract background: the association between social support and hypertension in pregnancy remains controversial. objective: the objective of this study was to investigate whether the level of social support is a protective factor against preeclampsia and eclampsia. design and methods: a case-control study was carried out in a public high-risk maternity hospital in rio de janeiro, brazil. between july and may , all cases, identified at diagnosis, and controls, matched on gestational age, were included in the study. participants were interviewed about clinical history and socio-demographic and psychosocial characteristics. the principal exposure was the level of social support available during the pregnancy, measured using the medical outcomes study scale. adjusted odds ratios were estimated using multivariate conditional logistic regression. results: multiparous women with a higher level of social support had a lower risk of presenting with preeclampsia and eclampsia (or = . ), although this association was not statistically significant ( % ci . - . ). in primiparous women, a higher level of social support was seen amongst cases (or = . ; % ci . - . ). an interaction between level of social support and stressful life events was not identified. these results contribute to increased knowledge of the relationship between preeclampsia and psychosocial factors in low-income pregnant and puerperal women.

abstract background: current case-definitions for cfs/me are designed for clinical use and are not appropriate for health needs assessment. a robust epidemiological case-definition is crucial in order to achieve rational allocation of resources to improve service provision for people with cfs/me. objectives: to identify the clinical features that distinguish people with cfs/me from those with other forms of chronic fatigue and to develop a reliable epidemiological case-definition. methods: primary care patient data on unexplained chronic fatigue were assessed for symptoms, exclusionary and comorbid conditions and demographic characteristics. cases were assigned to disease and non-disease groups by three members of the chief medical officer's working group on cfs/me (reliability: cronbach's alpha . ). results: preliminary multivariate analyses were conducted, and classification and regression tree analysis included a -fold cross-validation approach to prevent overfitting.
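a classification tree with k-fold cross-validation of the kind just described can be sketched as follows; the data are simulated and the choice of ten folds and tree depth are assumptions for illustration:

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=8, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # shallow tree limits overfitting
scores = cross_val_score(tree, X, y, cv=10)                 # 10-fold cross-validation
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")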
the results suggested that there were at least four strong discriminating variables for cfs/me, with 'post-exertional malaise' being the strongest predictor. risk and classification tables showed an overall correct classification rate of . %. conclusion: the analyses demonstrated that the application of the combination of the four discriminating variables (the de facto epidemiological case-definition) and predefined comorbid conditions had the ability to differentiate between cfs/me and non-cfs/me cases.

abstract background: infection with high-risk human papillomavirus (hpv) is a necessary cause of cervical cancer. vaccines against the most common types (hpv , hpv ) are being developed. relatively little is known about factors associated with hpv or hpv infection. we investigated associations between lifestyle factors and hpv and hpv infection. methods: uk women aged - years with a recent abnormal cervical smear underwent hpv testing and completed a lifestyle questionnaire. hpv and hpv status was determined using type-specific pcrs. associations between lifestyle factors and hpv status were assessed by multivariate logistic regression models. results: . % ( %ci . %- . %) of women were hpv -positive. . % ( % ci . %- . %) were hpv -positive. for both types, the proportion testing positive decreased with increasing age, and increased with increasing grade of cytological abnormality. after adjusting for these factors, significant associations remained between (i) hpv and marital, employment, and smoking status and (ii) hpv and marital status and contraceptive pill use. gravidity, ethnicity, barrier contraceptive use and socio-economic status were not related to either type. conclusions: we identified modest associations between several lifestyle factors and hpv and hpv . studies of this type help elucidate hpv natural history in different populations and will inform the development of future vaccine delivery programmes.

in ageing men testosterone levels decline; cognitive function, muscle and bone mass, sexual hair growth, libido and sexual activity also decline, and the risk of cardiovascular disease increases. we set up a double-blind, randomized, placebo-controlled trial to investigate the effects of testosterone supplementation on functional mobility, quality of life, body composition, cognitive function, vascular function and risk factors, and bone mineral density in older hypogonadal men. we recruited men with serum testosterone levels below nmol/l and ages - years. they were randomized to either four capsules of mg testosterone undecanoate (tu) or placebo daily for weeks. primary endpoints are functional mobility and quality of life. secondary endpoints are body composition, cognitive function, aortic stiffness and cardiovascular risk factors, and bone mineral density. effects on prostate, liver and hematological parameters will be studied with respect to safety. the measure of effect will be the difference in change from baseline visit to final visit between tu and placebo. we will study whether the effect of tu differs across subgroups of baseline waist girth, testosterone level, age, and level of the outcome under study. at baseline, mean age, bmi and testosterone levels were (yrs), (kg/m ) and (nmol/l), respectively.

abstract in a student population, the prevalence of caries was , %. objectives: to compare the efficiency of two types of oral health education programmes and adherence to tooth brushing. study design: case-control study: youngsters took part, in the case group.
a health education programme was carried out in schools and included two types of strategies: a participative strategy (learning based on the colouring of dental plaque) for the case group, and a traditional strategy (oral expository method) for the control group. at the outcome evaluation of the programmes, oral health was assessed through the cpo index, adherence to tooth brushing and the oral hygiene index (iho). results: in the initial dental exam the average iho was , . three months after the application of the oral health programme, there was a general decrease in the average iho to , . discussion and conclusion: in the case group the decrease was greater: from , to , . the students who attended an oral health education session based on the colouring of dental plaque showed a lower average iho and greater knowledge. this may be due to the teaching session being more active, participative and demonstrative.

abstract background: violence perpetrated by adolescents is a major problem in many societies. objectives: the aim of this study is to examine high school students' violent behaviour and to identify predictors. design and methods: a cross-sectional study was conducted in timis county, romania between may-june . the sample consisted of randomly selected classes, stratified proportionally according to grades - , high school profile, and urban and rural environment. the students completed a self-administered questionnaire in their classroom. a weighting factor was applied to each student record to adjust for non-response and for the varying probabilities of selection. results: a total of students were included in the survey. during the last months, . % of adolescents were involved in a physical fight outside school and . % on school property.

abstract background: drug use by adolescents has become an increasing public health problem in many countries. objectives: the aim of this study is to identify the prevalence of drug use and to examine high school students' perceived risks of substance use. design and methods: a cross-sectional study was conducted in timis county, romania between may-june . the sample consisted of randomly selected classes, stratified proportionally according to grades - , high school profile, and urban and rural environment. the students completed a self-administered questionnaire in their classroom. eighteen items regarding illicit drug use suggesting different intensities of use were listed. the response categories were 'no risk', 'slight risk', 'moderate risk', 'great risk' and 'don't know'. results: a total of students were included in the survey. the lifetime prevalence of any illicit drug was . %. significant beliefs associated with drug use are: trying marijuana once or twice (p< . ), smoking marijuana occasionally (p = . ), trying lsd once or twice (p = . ), trying cocaine once or twice (p = . ), trying heroin once or twice (p = . ). conclusion: the overall drug use prevalence is small. however, use of some drugs once or twice is not seen as a very risky behaviour.

abstract background: the health ombudsman service was created in ceará, brazil, in , with the objective of receiving user opinions about public services. objectives: to describe user profiles, evaluating their satisfaction with health services and the ombudsman service itself. design and methods: a cross-sectional, exploratory study with a random sample of users who had used the service in the last three months. the data were analyzed with the epi info program.
results: women were those who used the service most ( . %). users sought the service for complaints ( . %), guidance ( . %) and commendations ( . %). users made the following complaints about health services: lack of care ( . %), poor assistance ( . %) and lack of medication ( . %). in relation to the ombudsman service, the following failures were mentioned: lack of autonomy ( . %), delay in solving problems ( . %) and too few ombudsmen ( . %). conclusion: participation of the population in use of the service is small. the service does not satisfy the expectations of users; it is necessary to publicize the service and to try to establish an effective partnership between users and ombudsmen, so that the population finds in the ombudsman service an instrument to put social control into effect and improve the quality of health services.

in chile, the rates of breast cancer and diabetes have dramatically increased in the last decade. the role of insulin resistance in the development of breast cancer, however, remains unexplored. we conducted a hospital-based case-control study to assess the relationship between insulin resistance (ir) and breast cancer in chilean pre- and postmenopausal women. we compared women, - y, with incident breast cancer diagnosed by biopsy and controls with normal mammography. insulin and glucose were measured in blood, and ir was calculated by the homeostasis model assessment method. anthropometric measurements and socio-demographic and behavioural data were also collected. odds ratios (ors) and % confidence intervals (cis) were estimated by multivariate logistic regression. the risk of breast cancer increased with age. ir was significantly associated with breast cancer in postmenopausal women (or = . , %ci = . - . ), but not in premenopausal women (p> . ). socioeconomic status and smoking appeared as important risk factors for breast cancer. obesity was not associated with breast cancer at any age (p> . ). in these women, ir increased the risk of breast cancer only after menopause. overall, these results suggest a different risk pattern for breast cancer before and after menopause. keywords: insulin resistance; breast cancer; chile.
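the homeostasis model assessment (homa) index used above to quantify insulin resistance has a simple closed form; the snippet below computes it. the formula, fasting glucose (mmol/l) times fasting insulin (µu/ml) divided by 22.5, is the standard homa-ir definition rather than a detail reported in the abstract, and the example values are invented.

```python
# homa-ir (homeostasis model assessment of insulin resistance):
# fasting glucose [mmol/l] * fasting insulin [microU/ml] / 22.5.
# the 22.5 constant is the standard normalisation of the original model.
def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    return glucose_mmol_l * insulin_uu_ml / 22.5

# invented example: glucose 5.5 mmol/l, insulin 12 microU/ml
print(round(homa_ir(5.5, 12.0), 2))  # 2.93
```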
abstract background: previous european community health indicators (echi) projects have proposed a shortlist of indicators as a common conceptual structure for health information. the european community health indicators and monitoring (echim) project is a -year project to develop and implement health indicators and to develop health monitoring. objectives: our aim is to assess the availability and comparability of the echi-shortlist indicators in european countries. methods: four widely used health indicators were evaluated: i) perceived general health; ii-iii) prevalence of any and of certain chronic diseases or conditions; iv) limitations in activities of daily living (adl). our evaluation of available sources for these indicators is based on the european health interview & health examination surveys database ( surveys) [...]

in chile, breast cancer, obesity and sedentary behaviour rates are increasing. the role of specific nutrients and exercise in the risk of breast cancer remains unclear. the aim of the present study was to evaluate the role of fruit and vegetable intake and physical activity in the prevention of breast cancer. we undertook an age-matched case-control study. cases were women with histologically confirmed breast cancer, and controls were women with normal mammography admitted to the same hospital. a structured questionnaire was used to obtain dietary information, and measurement of physical activity was obtained from the international physical activity questionnaire. odds ratios (ors) and % confidence intervals (cis) were estimated by conditional logistic regression adjusted for obesity, socioeconomic status and smoking habit. a significant association was found with fruit intake (or = . , %ci = . - . ). the consumption of vegetables (or = . , %ci = . - . ) and moderate (or = . , %ci = . - . ) and high physical activity (or = . , %ci = . - . ) were not observed to be protective factors. in conclusion, the consumption of fruit is protective against breast cancer. these findings need to be replicated in chile to support the role of diet and physical activity in breast cancer and their subsequent contribution to public health policy. keywords: diet; physical activity; breast cancer; chile.

the role of trace elements in the pathogenesis of liver cirrhosis and its complications is still not clearly understood. serum concentrations of zinc, copper, manganese and magnesium were determined in patients with alcoholic liver cirrhosis and healthy subjects by means of a plasma sequential spectrophotometer. serum levels of zinc were significantly lower (median . vs . µmol/l, p = . ) in patients with liver cirrhosis in comparison to controls. serum levels of copper were significantly higher in patients with liver cirrhosis ( . vs . µmol/l, p< . ), as was manganese ( . vs . µmol/l, p = . ). the concentration of magnesium was not significantly different between patients with liver cirrhosis and controls ( . vs . mmol/l, p = . ). there was no difference in trace element concentrations between child-pugh groups. zinc levels were significantly lower in patients with hepatic encephalopathy in comparison to cirrhotic patients without encephalopathy ( . vs . µmol/l, p = . ). manganese was significantly higher in cirrhotic patients with ascites in comparison to those without ascites ( . vs . µmol/l, p = . ). correction of trace element concentrations might have a beneficial effect on complications, and perhaps on the progression, of liver cirrhosis. it would be advisable to provide trace element analysis as a routine.

abstract background: respiratory tract infections (rti) are very common in childhood, and knowledge of pathogenesis and risk factors is required for effective prevention. objective: to investigate the association between early atopic symptoms and the occurrence of recurrent rti during the first years of life. design and methods: in the prospective prevention and incidence of asthma and mite allergy birth cohort study, children were followed from birth to the age of years. information on atopic symptoms, potential confounders, and effect modifiers like passive smoking, daycare attendance and presence of siblings was collected at ages months and year by parental questionnaires. information on rti was collected at ages , , , and years. results: children with early atopic symptoms, i.e. itchy skin rash and/or eczema or doctor-diagnosed cow's milk allergy at year of age, had a slightly higher risk of developing recurrent rti (aor . ( . - . ) and . ( . - . ), respectively). the association between atopic symptoms and recurrent rti was stronger in children whose mother smoked during pregnancy and who had siblings (aor . ( . - . )) [...]

the aim of the study was to assess the relative risk (rr) of obesity and abdominal fat distribution for insulin resistance (ir), diabetes, hyperlipidemia and hypertension in the polish population.
materials and methods: subjects aged - were randomly selected and invited to the study. anthropometric and blood pressure examinations were performed in participants. fasting lipids, and glucose and insulin (fasting and after glucose load), were determined. ir was defined as the upper quartile of the homa-ir distribution for the normal glucose-tolerant population. results: overweight and obesity were observed in , % and , % of subjects. visceral obesity was found in subjects ( , % of men and , % of women). the rr of ir in obesity was , ( % ci: , - , ); for obese subjects below age it was , ( % ci: , - , ). in men with visceral obesity, the rr of ir was highest for men aged below . the rr of diabetes increased with body weight; in obese subjects with abdominal fat distribution it was , ( %ci: , - , ). the same was observed for hypertension and hyperlipidemia. conclusions: obesity and abdominal fat distribution seem to be important risk factors for ir, diabetes, hypertension and hyperlipidemia, especially in the younger age groups.

abstract background: the role of age as an effect modifier in cardiovascular risk remains unclear. objective: to evaluate age-related differences in the effect of risk factors for acute myocardial infarction (ami). methods: in a population-based case-control study, with data collected by trained interviewers, consecutive male cases of first myocardial infarction (participation rate %) and randomly selected male community-dwelling controls (participation rate %) were compared. effect-measure modification was evaluated by the statistical significance of a product term of each independent variable with age. unconditional logistic regression was used to estimate ors in each age stratum (< years/> years). results: there was a statistically significant interaction between education (> vs. < years), sports practice, diabetes and age: the adjusted (education, ami family history, dyslipidemia, hypertension, diabetes, angina, waist circumference, sports practice, alcohol and caffeine consumption, and energy intake) ors ( %ci) were respectively . ( . - . ), . ( . - . ) and . ( . - . ) in younger, and . ( . - . ), . ( . - . ) and . ( . - . ) in older participants. conclusions: in males, age has a significant interaction with education, sports practice and diabetes in the occurrence of ami. the effect is evident in the magnitude, but not in the direction, of the association.
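the effect-modification test described above amounts to adding a product (interaction) term to the regression; the sketch below shows one way to code it, along with stratum-specific adjusted odds ratios. the file name, variable names and the reduced covariate list are hypothetical stand-ins for the study's variables.

```python
# sketch of the effect-measure-modification analysis: a product term between
# a risk factor and an age stratum in unconditional logistic regression, plus
# stratum-specific adjusted odds ratios. all names are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ami_case_control.csv")  # hypothetical file
# 'case' = 1 for first ami, 0 for control; 'older' = binary age stratum

inter = smf.logit("case ~ diabetes * older + education + sports", data=df).fit()
print(inter.pvalues["diabetes:older"])    # significance of the product term

for stratum, sub in df.groupby("older"):  # adjusted or in each age stratum
    m = smf.logit("case ~ diabetes + education + sports", data=sub).fit(disp=0)
    print(stratum, np.exp(m.params["diabetes"]))
```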
abstract: there are few studies on the role of diet in lung cancer etiology. we therefore calculated both squamous cell and small cell carcinoma risks in relation to the frequency of consumption of vegetables, cooked meat, fish and butter in silesian males in an industrial area of poland. in the case-control study, the studied population comprised men with squamous cell carcinoma, men with small cell carcinoma, and healthy controls. multivariate logistic regression was employed to calculate lung cancer risk in relation to the simultaneous influence of dietary factors. the relative risk was adjusted for age and smoking. we observed a significant decrease in lung cancer risk related to more frequent consumption of raw vegetables, cooked meat and fish; however, a stronger protective effect was reported for squamous cell carcinoma. frequent fish consumption significantly decreases the risk, especially in cigarette smokers. frequent consumption of pickles lowers squamous cell carcinoma risk in all cases, but small cell carcinoma risk only in smokers. the presence of butter, cooked meat, fish and vegetables in the diet significantly decreases lung cancer risk, especially in smokers. the association between diet and lung cancer risk is more pronounced for squamous cell carcinoma.

abstract background: in functional disease research, selection mechanisms need to be studied to assure the external validity of trial results. objective: we compared demographic and disease-specific characteristics, history, co-morbidity and psychosocial factors of patients diagnosed, approached and randomised for a clinical trial analysing the efficacy of fibre therapy in irritable bowel syndrome (ibs). design and methods: in primary care, patients had been diagnosed with ibs by their gp in the past two years. characteristics were compared between ( ) randomised patients (n = ); ( ) patients who did not give their informed consent (n = ); ( ) patients who decided not to participate (n = ); and ( ) those not responding to the mailing (n = ). results: the groups showed no significant differences in age and gender ( % females, mean age years, s.d. ). patients consulting their gp for the trial, compared to patients not attending their gp, showed significantly more severe ibs symptoms, more abdominal pain during the previous three months, and a longer history of ibs (p< , ). randomised patients had more comorbidity (p = , ). conclusion and discussion: patients included in this ibs trial differ from non-participating and excluded patients mainly in ibs symptomatology, history and comorbidity. this may affect the external validity of the trial results.

abstract objectives: to evaluate smoking prevalence among teenagers and identify associated social-behavioral factors. study design and methods: a cross-sectional study was carried out in may in a high school population ( th- th grades) in the north of portugal (n = ). a confidential self-reported questionnaire was administered. crude and adjusted odds ratios and confidence intervals were calculated by logistic regression analysis. results: overall smoking prevalence was . % (boys = . %; girls = . %) (or = . ; ci % = . - . ; p< . ). smoking prevalence was significantly and positively associated with gender, smoking parents, school failure and school grade. in the group of students with smoking relatives, smoking was significantly associated with parents who smoke near the student (or = . ; ci % = . - . ; p< . ); in the secondary-grade group ( th- th grades), smoking was significantly associated with belonging to 'non-science courses' (or = . ; ci % = . - . ; p = . ). conclusions: smoking is a growing problem among portuguese adolescents, increasing with age and prevailing among males, although major increases have been documented in the female population. parents' behaviours and habits have an important impact on their children's smoking behaviour. school failure is also an important factor associated with smoking. there is a need for further prevention programmes that should include families and consider students' social environment.

abstract background: the social environment of schools can contribute to the etiology of health behaviors. objective: to evaluate the role of school context in substance abuse in youth. design: a cross-sectional study was carried out in , using a self-completed, classroom-administered questionnaire. subjects: from a representative sample of students, a sub-sample of students was selected (including / classes with at least persons without missing data)*.
methods: substance abuse was measured by current tobacco smoking, and episodes of drunkenness and marijuana use in the lifetime. an overall index was created as the main independent variable, ranging from - (cronbach's alpha = . ). class membership, type of school, gender, place of domicile, and school climate were included as contextual variables, measured at the individual or group level. results: at the individual level, the mean index was equal to . (sd = . ), and ranged from . in general comprehensive schools to . in basic vocational schools, and from . to . across separate classes. about . % of the total variance in this index may be attributed to differences between classes. conclusion: individual differences in substance abuse in youth could be partly explained by factors at the school level. * project no po d .

abstract background: rates of c-section in brazil are very high, . % in . brazil illustrates an extreme example of the medicalization of birth. c-section, like any major surgery, increases the risk of morbidity, which can persist long after discharge from hospital. objectives: to investigate how social, reproductive, prenatal care and delivery factors interact after hospital discharge, influencing post-partum complications. design and methods: a cross-sectional study of women gathered information through home interviews and clinical examination during the post-partum period. a hierarchical logistic regression model of factors associated with post-partum complications was applied. results: physical and emotional post-partum complications were almost twice as high among women having a c-section. most of this effect was associated with lower socioeconomic conditions, whose influence was mainly explained by longer duration of delivery (even in the presence of medical indications) and less social support when returning home. conclusion: the risk of c-section complications is higher among women from the lower socioeconomic strata. social inequalities mediate the association between type of delivery and post-partum complications. discussion: c-section complications should be taken into account when decisions concerning type of delivery are made. social support after birth, from the public health sector, has to be provided for women in socioeconomic deprivation.

the relationship between unemployment and increased mortality has previously been reported in western countries. the aim of this study was to assess the influence of changes in the unemployment rate on survival in the general population of northern poland at the time of economic transition. to analyze the association between unemployment and risk of death, we collected survival data from death certificates and data on rates of unemployment from regions of gdansk county for the period - . the kaplan-meier method and cox proportional hazards model were used in univariate and multivariate analyses. the change of unemployment (percentage) in the year of death in the area of residence, sex and educational level ( categories) were included in the multivariate analysis. the change of unemployment rate was associated with significantly worse overall survival: hazard ratio . , % confidence interval . to . . the highest risk associated with the change of unemployment in the area of residence was for death from congenital defects (hazard ratio . , % confidence interval . to . ) and for death from cardiovascular diseases (hazard ratio . , % confidence interval . to . ) [...]
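the survival machinery named above (kaplan-meier curves and a cox proportional-hazards model) can be reproduced in a few lines with the lifelines package; the sketch below assumes a person-level table whose file and column names are invented for illustration.

```python
# sketch of the kaplan-meier / cox analysis described above, using lifelines.
# 'time' is follow-up time, 'event' is the death indicator; the covariates
# mirror those listed in the abstract. file and column names are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.read_csv("gdansk_survival.csv")

kmf = KaplanMeierFitter().fit(df["time"], event_observed=df["event"])
print(kmf.median_survival_time_)

cph = CoxPHFitter()
cph.fit(
    df[["time", "event", "unemployment_change", "sex", "education_level"]],
    duration_col="time",
    event_col="event",
)
cph.print_summary()  # hazard ratios with 95% confidence intervals
```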
abstract background: there is no evidence from randomized trials as to whether or not educational interventions improve voluntary reporting systems in terms of quantity or quality. objectives: evaluation of the effectiveness of educational outreach visits aimed at improving adverse drug reaction (adr) reporting by physicians. design and methods: a cluster-randomized controlled trial covering all health system physicians in northern portugal. four spatial clusters assigned to the intervention group (n = ) received outreach visits tailored to training needs detected in a previous study, and clusters were assigned to the control group (n = ). the main outcome was the total number of reported adrs; the secondary outcome was the number of serious, unexpected, high-causality and new-drug-related adrs. follow-up was conducted for a period of months. results: the intervention increased reports as follows: total adrs, . -fold (p< . ); serious adrs, . -fold (p = . ); high-causality adrs, . -fold (p< . ); unexpected adrs, . -fold (p< . ); and new-drug-related adrs, . -fold (p = . ). the intervention had its maximum effect during the first four months ( . -fold increase, p< . ), yet the effect was nonetheless maintained over the four-month periods post-intervention (p = . ). discussion and conclusion: physician training based on academic detailing visits improves reporting quality and quantity. this type of intervention could result in sizeable improvements in voluntary reporting in many countries.

[...] there was no evidence of differences in absolute indications between the years. conclusion: most of the increase in rates in the period may be attributable to relative and non-medical indications. discussion: policies to promote the rational use of c-sections should take into account the role played by obstetricians' convenience and the increased medicalization of birth in cesarean rates.

abstract background: the changing environment has led to unhealthy dietary habits and low physical activity in children, resulting in overweight/obesity and related comorbid conditions. objective: idefics is a five-year multilevel epidemiological approach, proposed under the sixth eu framework, to counteract the threatening epidemic of diet- and lifestyle-induced morbidity by evidence-based interventions. design and methods: a population-based cohort of . children, to years old, will be established in nine european countries to investigate the aetiology of these diseases. culturally adapted multi-component intervention strategies will be developed, implemented and evaluated prospectively. results: idefics compares regional, ethnic and sex-specific distributions of the above disorders and their key risk factors in children within europe. the impact of sensory perception and genetic polymorphisms, and the role of internal/external triggers of food choice and children's consumer behaviour, are elucidated. risk profile inventories for children susceptible to obesity and its co-morbid conditions are identified. based on controlled intervention studies, an evidence-based set of guidelines for health promotion and disease prevention is developed. conclusions: provision of effective intervention modules, easy to implement in larger populations, may reduce future obesity-related disease incidence. discussion: transfer of feasible guidelines into practice requires the involvement of health professionals, stakeholders and consumers.

abstract background: non-medically indicated cesarean deliveries increase morbidity and health care costs.
brazil has one of the highest rates of caesarean sections in the world. variations in rates are positively associated with socioeconomic status. objectives: to investigate factors associated with cesarean sections in public and private sector wards in south brazil. design and methods: cross-sectional data from post-partum interviews and clinical records of consecutive deliveries ( in the main public and in a private maternity hospital) were analyzed using logistic regression. results: multiple regression showed privately insured women having much higher cesarean rates than those delivering in public sector wards (or = . ; ci %: . - . ). obstetricians' individual rates varied from % to %. doctors working in both the public and private sectors had higher rates of cesarean in private wards (p< . ). wanting and having a cesarean was significantly more common among privately insured women. conclusion: women from wealthier families are at higher risk of cesarean, particularly those wanting this type of delivery and whose obstetrician works in the private sector. discussion: women potentially at lower clinical risk are more likely to have a caesarean. the obstetricians' role and women's preferences must be further investigated to tackle this problem.

abstract background: in the netherlands, bcg vaccination is offered to immigrant children and children of immigrant parents in order to prevent severe tuberculosis. the effectiveness of this policy has never been studied. objectives: assessing the effectiveness of the bcg vaccination policy in the netherlands. design and methods: we used data on the size of the risk population per year (from statistics netherlands), the number of children with meningitis or miliary tuberculosis in the risk population per year, and the vaccination status of those cases (from the netherlands tuberculosis register) over the period - . we estimated the vaccine efficacy and the annual risk of acquiring meningitis or miliary tuberculosis by log-linear modelling, treating the vaccination coverage as missing data. results: in the period - , cases of meningitis or miliary tuberculosis were registered. the risk for unvaccinated children of acquiring such a serious tuberculosis infection was . ( %ci . - . ) per per year; the reduction in risk for vaccinated children was % ( %ci - %). conclusion and discussion: this means that, discounting future effects at %, ( %ci: - ) extra children should be vaccinated to prevent one extra case of meningitis or miliary tuberculosis. given that bcg vaccination is relatively inexpensive, the current policy could even be cost-saving.
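the number-needed-to-vaccinate arithmetic in the conclusion above is, plausibly, one over the discounted expected cases prevented per vaccinated child; the snippet below implements that reading with placeholder inputs, since the study's actual risk, efficacy, horizon and discount rate are elided in this copy and the exact computation is not spelled out.

```python
# back-of-the-envelope number needed to vaccinate (nnv): the reciprocal of
# the discounted expected cases prevented per vaccinated child. all inputs
# are placeholders, not the study's figures.
def nnv(annual_risk: float, efficacy: float, years: int, discount: float) -> float:
    prevented = sum(
        annual_risk * efficacy / (1.0 + discount) ** t for t in range(years)
    )
    return 1.0 / prevented

print(round(nnv(annual_risk=1e-4, efficacy=0.8, years=10, discount=0.04)))
```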
abstract background: psychotic symptom experiences in the general population are frequent and often long-lasting. objectives: the zurich cohort study offered the opportunity to differentiate the patterns of psychotic experiences over a span of years. design and methods: the zurich study is based on a stratified community sample of persons born in (women) and (men). the data were collected at six time points since . we examined variables from two subscales of the scl- -r ('paranoid ideation' and 'psychoticism') using factor analysis, cluster analysis and polytomous logistic regression. results: two new subscales were derived, representing 'thought disorders' and 'schizotypal signs'. a continuously high symptom load on one of these subscales (both subscales) was found in % ( . %) of the population. cannabis use was the best predictor of a continuously high symptom load on the 'thought disorders' subscale, whereas several variables representing adversity in childhood/youth were associated with a continuously high symptom load on the 'schizotypal signs' subscale. conclusion and discussion: psychotic experiences can be divided into at least two different syndromes: thought disorders and schizotypal signs. despite similar longitudinal course patterns and similar outcomes, these syndromes rely on different risk factors, thus possibly defining separate pathways to psychosis.

abstract background: the reasons for the rise in asthma and allergies remain unclear. to identify influential factors, several european birth cohort studies on asthma and allergic diseases have been initiated since . objective: the aim of one work package within the global allergy and asthma european network (ga len), sponsored by the european commission, was to identify and compare european birth cohorts specifically designed to examine asthma and allergic diseases. methods: for each study, we collected detailed information (mostly by personal visits) regarding the recruitment process, study setting, follow-up rates, subjective/objective outcomes and exposure parameters. results: by june , we had assessed european birth cohort studies on asthma and allergic diseases. the largest recruited over children each. most studies determined specific immunoglobulin e levels to various allergens or used the isaac questionnaire for the evaluation of asthma or allergic rhinitis symptoms. however, the assessment of other objective and subjective outcomes (e.g. lung function or definitions of eczema) was rather heterogeneous across the studies. conclusions: due to the unique cooperation within the ga len project, a common database was established containing the study characteristics of european birth cohorts on asthma and allergic diseases. the possibility of pooling data and performing meta-analyses is currently being evaluated.

abstract background: birth weight is an important marker of health in infancy and of health trajectories later in life. social inequality in birth weight is a key component of population health inequalities. objective: to comparatively study social inequality in birth weight in denmark, finland, norway, and sweden from to . design and methods: as part of the nordic collaborative project on health and social inequality in early life (norchase), register-based data covering all births in all involved countries ( - ) were linked with national registries on parental socioeconomic position, covering a host of different markers including income, education and occupation. also, nested cohort studies provide the opportunity to test hypotheses of mediation. results: preliminary results show that social inequality in birth weight, small for gestational age, and low birth weight has increased in denmark throughout the period. preliminary results from finland, norway and sweden will also be presented. discussion: cross-country comparisons pose several methodological challenges. these include characterizing the societal context of each country so as to correctly interpret inter-country differences in social gradients, along with dealing with differences in the data collection methods and classification schemes used by different national registries. strategies for influencing policy will also be discussed.

abstract background: modifying the availability of suicide methods is a major issue in suicide prevention.
objectives: we investigated changes in the proportion of firearm suicides in western countries since the s, and their relation to changes in legislation and regulatory measures. design and methods: data from previous publications, from the who mortality database, and from the international crime victims survey (icvs) were used in a multilevel analysis. results: multilevel modeling of longitudinal data confirmed the effect of the proportion of households owning firearms on firearm suicide rates. several countries stand out with an obvious decline in firearm suicides since the s: norway, the united kingdom, canada, australia, and new zealand. in all of these countries, legislative measures were introduced that led to a decrease in the proportion of households owning firearms. conclusion and discussion: the spread of firearms is a main determinant of the proportion of firearm suicides. legislative measures restricting the availability of firearms are a promising option in suicide prevention.

abstract background: fatigue is a non-specific but frequent symptom in a number of conditions, for which correlates are unclear. objectives: to estimate socio-demographic and clinical factors determining the magnitude of fatigue. methods: as part of a follow-up evaluation of a cohort of urban portuguese adults, socio-demographic and clinical variables for consecutive participants were collected through personal interview. lifetime history of chronic disease diagnoses was inquired about (depression, cancer, cardiovascular, rheumatic, and respiratory conditions), anthropometry was measured, and haemoglobin determined. krupp's -item fatigue severity scale was applied, and severe fatigue was defined as a mean score over . mean age (sd) was . ( . ) and . % of participants were female. logistic regression was used to compute adjusted odds ratios, and attributable fractions were estimated using the formula ar = 1 - Σ(ρj/orj). results: adjusted for age and clinical conditions, female gender (or = . , %ci: . - . ) and education (under years of schooling: or = . , %ci: . - . ) were associated with severe fatigue. obesity (or = . , %ci: . - . ) and diagnosed cardiovascular disease (or = . , %ci: . - . ) also increased fatigue. attributable fractions were . % for gender, . % for education, . % for obesity, and . % for cardiovascular disease. conclusion: gender and education have a large impact on severe fatigue, as do, to a lesser extent, obesity and cardiovascular disease.
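the attributable-fraction formula quoted above is reconstructed here as the standard case-distribution form (miettinen/bruzzi), af = 1 - Σ(ρj/orj), where ρj is the proportion of cases in exposure level j and orj its adjusted odds ratio; the snippet below implements that definition with invented numbers.

```python
# attributable fraction, af = 1 - sum_j(rho_j / or_j), where rho_j is the
# proportion of cases in exposure level j and or_j its adjusted odds ratio.
# values below are invented for illustration.
def attributable_fraction(case_props, odds_ratios):
    return 1.0 - sum(rho / orj for rho, orj in zip(case_props, odds_ratios))

# two levels: unexposed (or = 1) holding 60% of cases, exposed (or = 2.5)
print(attributable_fraction([0.6, 0.4], [1.0, 2.5]))  # 0.24
```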
abstract introduction: analysis of infant mortality allows the identification of factors contributing to death and assessment of child health care quality. objective: to study characteristics of infant and fetal mortality using data from a committee for the prevention of maternal and infant mortality in sobral, brazil. methods: all cases of infant death between and were analyzed. medical records were reviewed and mothers interviewed. using a tool to identify preventable deaths (seade classification, brazil), the committee characterized causes of death. meetings with governmental groups involved in family health care took place to identify factors contributing to death. results: in , infant mortality decreased from . to . . in the next years there was an increase from . to . . the increase in was due to respiratory illnesses; in , it was due to diarrhea. analysis of preventable deaths indicated a reduction from to deaths that could have been prevented by adequate gestational care, and an increase in deaths preventable by early diagnosis and treatment. conclusion: pre-natal and delivery care improved, whereas care for children less than yr old worsened. analysis of death causes allowed a reduction of the infant mortality rate to .

abstract objective: to identify dietary patterns and their association with the metabolic syndrome. design and methods: we evaluated non-institutionalised adults. diet was assessed using a semi-quantitative dietary frequency questionnaire, and dietary patterns were identified using principal components analysis followed by cluster analysis (k-means method) with bootstrapping (choosing the clusters presenting the lowest intra-cluster variance). metabolic syndrome (mets) was defined according to the ncep-atp-iii. results: the overall prevalence of the metabolic syndrome was . %. in the population sample, clusters were identified: in females, . healthy, . milk/soup, . fast food, and . wine/low calories; and in males, . milk/carbohydrates, . codfish/soup, . fast food, and . low calories. in males, using milk/carbohydrates as the reference and adjusting for age and education, high blood pressure (or = . ; %ci: . - . ) and high triglycerides (or = . ; %ci: . - . ) were associated with the fast food pattern, and the low calories pattern presented a higher frequency of high blood pressure (or = . ; %ci: . - . ). in females, after age and education adjustment, no significant association was found between the dietary patterns identified and either the metabolic syndrome or its individual features. conclusion: we found no specific dietary pattern associated with an increased prevalence of the metabolic syndrome. however, a fast food diet was significantly more frequent in males with dyslipidemia and high blood pressure.
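the pattern-identification pipeline described above (principal components on food-frequency items, then k-means, keeping the partition with the lowest intra-cluster variance) is sketched below with scikit-learn; repeated random initialisations stand in for the bootstrap step, and the file name, column layout, component count and number of clusters are all assumptions.

```python
# sketch of the dietary-pattern pipeline: standardise food-frequency items,
# reduce with pca, then run k-means many times and keep the solution with
# the lowest intra-cluster variance (inertia). all settings are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

ffq = pd.read_csv("food_frequency.csv")  # hypothetical file, items as columns
X = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(ffq))

best = min(
    (KMeans(n_clusters=4, n_init=1, random_state=s).fit(X) for s in range(100)),
    key=lambda km: km.inertia_,
)
print(pd.Series(best.labels_).value_counts())  # sizes of the dietary patterns
```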
abstract aim: to determine the prevalence of stress urinary incontinence (sui) before and during pregnancy and following childbirth, and to analyse the impact of a health education campaign about sui prevention following childbirth in the viana district, portugal. methods: participants (n = ), interviewed during hospitalization after birth and two months later at health centres, were divided into two groups: a first group not exposed and a second group exposed to a health education campaign. the second group was encouraged to perform an exercise programme and given a 'sui prevention-treatment' brochure approved by the regional health authority. results: sui prevalence was . % ( %ci: . - . ) before pregnancy, . % ( %ci: . - . ) during pregnancy and . % ( %ci: . - . ) four weeks after birth. less than half of the women with sui sought help from healthcare professionals. statistically significant differences were found between groups: the level of sui knowledge and the practice of pelvic floor muscle re-education exercises were higher in the exposed group ( . and . times, respectively). conclusions: sui affects a great number of women, but only a small percentage reveal it. this campaign improved women's knowledge and modified their behaviors. healthcare professionals must be aware of this reality, providing an early and continuous intervention that would optimise the verified benefits of this campaign.

abstract background: social inequalities have been associated with poorer developmental outcomes, but little is known about the role of the area of residence. objectives: to examine whether the housing infrastructure of the area modifies the effect of the socio-economic conditions of families on child development. design and methods: a community-based survey of under-fives in southern brazil applied hierarchical multi-level linear regression to investigate determinants of child development, measured by a score from the denver developmental screening test. results: in multivariable models, the mean child development score increased with maternal and paternal education and work qualification, family income and better housing, and was higher when the mother was in paid work (all p< . ). paternal education had an effect in areas of lower housing quality only; the effects of occupational status and income in these areas were twice as large as in better-provided areas (likelihood test for all interactions p< . ). this model explained % of the variation in developmental score between the areas of residence. conclusion: the housing quality and sanitation of the area modified the effects of socioeconomic conditions on child development. discussion: housing and sanitation programs are potentially beneficial in decreasing the negative effect of social disadvantage on child development.

abstract background: it is known that both genetic and environmental factors are involved in the early development of type diabetes (t d), and that incidence varies geographically. however, we still need to explain why there is variation in incidence. objectives: in order to better understand the role of non-genetic factors, we decided to examine whether the prevalence of newborns with high-risk genotypes or islet autoantibodies varies geographically. design and methods: the analysis was performed on a cohort of newborns born to non-diabetic mothers between september and august who were included in the diabetes prediction in skåne study (dipis) in sweden. neighbourhoods were defined by administrative boundaries, and variation in prevalence was investigated using multi-level regression analysis. results: we observed that the prevalence of newborns with islet autoantibodies differed across the municipalities of skåne (s = . , p < . ), with the highest prevalence found in wealthy urban areas. however, there was no observed difference in the prevalence of newborns with high-risk genes. conclusion and discussion: newborns born with autoantibodies to islet antigens appear to cluster by region. we suggest that non-genetic factors during pregnancy may explain some of the geographical variation in the incidence of t d.

abstract background: risk assessment is a science-based discipline used for decision making and regulatory purposes, such as setting acceptable exposure limits. estimation of the risks attributed to exposure to chemical substances has traditionally been mainly the domain of toxicology. it is recognized, however, that human, epidemiologic data, if available, are to be preferred to data from laboratory animal experiments. objectives: how can epidemiologic data be used for (quantitative) risk assessment? results: we described a framework to conduct quantitative risk assessment based on epidemiological studies. important features of the process include a weight-of-the-evidence approach, estimation of the optimal exposure-risk function by fitting a regression model to the epidemiological data, estimation of the uncertainty introduced by potential biases and missing information in the epidemiological studies, and calculation of excess lifetime risk through a life table to take competing risks into account. sensitivity analyses are a useful tool to evaluate the impact of assumptions and the variability of the underlying data.
conclusion and discussion: many types of epidemiologic data, ranging from published, sometimes incomplete data to detailed individual data, can be used for risk assessment. epidemiologists should, however, better facilitate such use of their data.

abstract background: high-virulence h. pylori (hp) strains and smoking increase the risk of gastric precancerous lesions. their association with specific types of intestinal metaplasia (im) in infected subjects may clarify gastric carcinogenesis pathways. objectives: to quantify the association between types of im and infection with high-virulence hp strains (simultaneously caga+, vacas and vacam ) and current smoking. design and methods: male volunteers (n = ) underwent gastroscopy and completed a self-administered questionnaire. participants were classified based on mucin expression patterns in biopsy specimens (antrum, body and incisura). hp vaca and caga were directly genotyped by pcr/reverse hybridization. data were analysed using multinomial logistic regression (reference: normal/superficial gastritis), with models including hp virulence, smoking and age. results: high-virulence strains increased the risk of all im types (complete: or = . , %ci: . - . ; incomplete: or = . , %ci: . - . ; mixed: or = . , %ci: . - . ), but smoking was only associated with an increased risk of complete im (or = . , %ci: . - . ). compared to non-smokers infected with low-virulence strains, infection with high-virulence hp increased the risk of im similarly for smokers (or = . , %ci: . - . ) and non-smokers (or = . , %ci: . - . ). conclusion: gastric precancerous lesions, with different potential for progression, are differentially modulated by hp virulence and smoking. the risk of im associated with high-virulence hp is not further increased by smoking.

abstract background: in may , the portuguese government created the basic urgency units (buu). these buu must serve at least . persons, be open hours per day, and be at a maximum of minutes' distance for all users. objectives: to determine the optimal location of buu, considering the existing health centers, in the viseu district, north portugal. methods: from a matrix of distances between population and health centers, an accessibility index was created (the sum of distances traveled by the population to reach a buu). location-allocation models were used to create simulations based on the p-median model, the maximal covering location problem (mclp) and the set covering location problem (sclp). the solutions were ranked by weighting the variables of accessibility ( %), number of doctors in the health centers ( %), equipment ( %), distance/time ( %) and total number of buu ( %). results: the best solution has buu and doctors, serves users, and has an accessibility index of . km. conclusions: it proved impossible to satisfy all the criteria for the creation of a buu. in some areas with low population density, to gather at least persons in a buu, the travel time is necessarily more than hour.
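the p-median model named above is a small integer program: open p facilities and assign every demand zone to an open one so that total population-weighted distance is minimised. the toy formulation below uses the pulp modelling library; the zones, distances and value of p are invented, and the study's multi-criteria weighting scheme is not reproduced.

```python
# toy p-median formulation: choose p facilities minimising total
# population-weighted distance. demands, distances and p are made up.
import pulp

demand = {"a": 120, "b": 80, "c": 60}          # population per zone
sites = ["s1", "s2"]
dist = {("a", "s1"): 5, ("a", "s2"): 9, ("b", "s1"): 7,
        ("b", "s2"): 3, ("c", "s1"): 12, ("c", "s2"): 4}
p = 1

prob = pulp.LpProblem("p_median", pulp.LpMinimize)
open_ = pulp.LpVariable.dicts("open", sites, cat="Binary")
assign = pulp.LpVariable.dicts("assign", dist.keys(), cat="Binary")

# objective: population-weighted assignment distance
prob += pulp.lpSum(demand[i] * dist[i, j] * assign[i, j] for (i, j) in dist)
prob += pulp.lpSum(open_[j] for j in sites) == p   # open exactly p units
for i in demand:                                    # every zone assigned once
    prob += pulp.lpSum(assign[i, j] for j in sites) == 1
for (i, j) in dist:                                 # only to an open unit
    prob += assign[i, j] <= open_[j]

prob.solve()
print([j for j in sites if open_[j].value() == 1])
```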
background: a prospective observational study of fatigue in staff working a day/ off/ night/ off roster of -hour shifts was conducted at a fly-in/fly-out fertilizer mine in remote northern australia. objectives: to determine whether fatigue in staff increased from the start to the finish of a shift, with the number of consecutive shifts, and on day shifts compared to night shifts. methods: data from sleep diaries, the mackworth clock test and the swedish occupational fatigue inventory were obtained at the start and finish of each shift from august to november . results: a total of staff participated in the study. reaction times, sleepiness and lack-of-energy scores were highest at the finish of nights to . reaction times increased significantly at both the start and finish of day onwards, and at the finish of night . reaction times and lack of motivation were highest during night shifts. conclusions: from the above results, a disturbed diurnal rhythm and decreased motivation during night shifts, and a roster of more than eight consecutive shifts, can be inferred as the primary contributors to staff fatigue. discussion: the implications for changes to workplace practices and environment will be discussed.

the aim of this survey was to assess the impact of a meta-analysis comparing resurfacing with non-resurfacing of the patella on the daily practice of experts. participants in this study were experts who had participated in a previous survey on personal preferences regarding patella resurfacing. these experts in the field of patella resurfacing were identified by a thorough search of medline, an internet search (with the google search engine), and personal references from the identified experts. participants in the 'knee arthroplasty trial' (kat) in the united kingdom were also included. two surveys were sent to the participants, one before and one after the publication of the meta-analysis. the response rate was questionnaires, or %. the vast majority of responders were not persuaded to change their practice after reading the meta-analysis. this is only in part due to the fact that best evidence and practice coincide. other reasons given are methodology-related, an observation which is shared by the authors of the review, and which forces the orthopedic community to improve its research methodology. reasons such as 'i do not believe in meta-analysis' demand either a fundamental discussion or that the reader take evidence-based medicine more seriously.

abstract background: patients with type diabetes (dm ) have a - -fold increased risk of cardiovascular disease. delegating routine tasks and computerized decision support systems (cdss) such as the diabetes care protocol (dcp) may improve the treatment of the cardiovascular risk factors hba c, blood pressure and cholesterol. dcp includes consultation hours exclusively scheduled for dm patients, rigorous delegation of routine tasks in diabetes care to trained paramedics, and software to support medical management. objective: to investigate the effects of dcp, used by practice assistants, on the risk of coronary heart disease in patients with dm . design and methods: in an open-label pragmatic trial in general practices with patients, hba c, blood pressure and cholesterol were examined before and prospectively one year after implementation of dcp. the primary outcome was the change in the -year ukpds coronary heart disease (chd) risk estimate. results: the median -year ukpds chd risk estimate improved significantly from . % to . %. hba c decreased from . % to . %, systolic blood pressure from . to . mmhg, and total cholesterol from . to . mmol/l (all p< . ). conclusion: delegating routine tasks in diabetes care to trained paramedics and using cdss improves the cardiovascular risk of dm patients.
tuberculosis in exposed immigrants by tuberculin skin test, ifn-g tests and epidemiologic characteristics. abstract background: currently, immigrants in western countries are only investigated for active tuberculosis (tb) by use of a chest x-ray. recent latent tuberculosis infection (ltbi) is hard to diagnose in this specific population because the only available test method, the tuberculin skin test (tst), has a low positive predictive value (ppv). recently, interferon-gamma (ifn-g) tests have become available that measure cellular responses to specific m. tuberculosis antigens and might have a better ppv. objective: to determine the predictive value of the tst and two different ifn-g tests, combined with epidemiological characteristics, for developing active tb in immigrants who are close contacts of smear-positive tb patients. methods: in this prospective cohort study, close contacts will be included. demographic characteristics and exposure data are investigated. besides their normal examination, they will all have a tst. two different ifn-g tests will be done in those with a tst induration of ≥ mm. these contacts will be followed for years to determine the occurrence of tb. results: since april , municipal health services have started with the inclusion. preliminary results on the predictive value of the tst, both ifn-g tests and epidemiological characteristics will be presented.

abstract background: different factors contribute to the quality of emergency department (ed) care of an injured patient. objective: to determine factors influencing disagreement between er diagnoses and those assigned at hospital admission in injured patients, and to evaluate whether disagreement between the diagnoses could have worsened the outcome. methods: all er visits to the emergency departments of the lazio region for unintentional injuries followed by hospitalisation in were included. concordant diagnoses were established on the basis of the barell matrix cells. logistic regression was used to assess the role of individual and er care factors on the probability of concordance. a logistic regression where death within days was the outcome and concordance the determinant was also used. results: , injury er visits were considered. in . % of cases, the er and discharge diagnoses were concordant. higher concordance was found with increasing age and in less urgent cases. factors influencing concordance were: the hour of the visit, er level, initial outcome, and length of stay in hospital. patients who had non-concordant diagnoses had a % higher probability of death. conclusions: a correct diagnosis at first contact with the emergency room is associated with lower mortality.

[...] methods: a cohort of consecutive patients treated for secondary peritonitis were sent the post-traumatic stress syndrome inventory (ptss- ) and the impact of events scale-revised (ies-r) - years following their surgery for secondary peritonitis. results: of the patients operated upon between and , questionnaires were sent to the long-term survivors, of whom % responded (n = ). ptsd-related symptoms were found in % of patients by both questionnaires. patients admitted to the icu (n = ) were significantly older, with higher apache-ii scores, but reported similar ptsd symptomatology scores compared to non-icu patients (n = ). traumatic memories during the icu and hospital stay were most predictive of higher scores.
adverse memories did not occur more often in the icu group than in the hospital-ward group. conclusions: long-term ptsd-related symptoms in patients with secondary peritonitis were very [...]

[...] in the netherlands. design and methods: we used the population-based databases of the netherlands cancer registry, the eindhoven cancer registry (ecr) and the central bureau of statistics. patients from the ecr were followed until - - for vital status, and relative survival was calculated. results: the number of breast cancer cases increased from in to . in , an annual increase of . % (p< . ). the death rate decreased , % annually (p< . ), which resulted in deaths in . the relative -yr survival was less than % for patients diagnosed in the seventies; this increased to over % for patients diagnosed since , and patients with stage i disease even have a % -yr relative survival. conclusion: the alarming increase in breast cancer incidence is accompanied by a serious improvement in survival rates. this results in a large number of women (ever) diagnosed with breast cancer, about , in , of whom % demand some kind of medical care.

abstract background: nine % of the population in the netherlands belongs to non-western ethnic minorities. their perceived health is worse, and their health care use different, from dutch natives. objectives: which factors are associated with ethnic differences in self-rated health? which factors are associated with differences in utilisation of gp care? methods: during one year, all contacts with gps were registered. adult surinamese, antillean, turkish, moroccan and dutch responders were included (total n: . ). we performed multivariate analyses of determinants of self-rated health and of the number of contacts with gps. results: self-rated health differs from that of native dutch in surinamese/antillean (or . ) and turkish/moroccan patients (or . / . ), especially turkish/moroccan females. more turks visit the gp at least once a year (or . ); fewer surinamese (or . ) and antillean patients (or . ) visit their gp than the dutch do. people from ethnic minorities in good health visit their gp more often ( . - . consults per year vs. . ). incidence rates of acute respiratory infections and chest complaints were significantly higher than in the dutch. conclusions: ethnicity is independently associated with self-rated health. higher use of gp care by ethnic minorities in good health points towards possible inappropriate use of resources.

the future: do they fulfil it? first results of the limburg youth monitoring project [...]

abstract background: incidence of coronary heart disease (chd) and stroke can be estimated from local, population-based registers. it is unclear to what extent local register data are applicable at a nationwide level. therefore, we compared german register data with estimates derived with the who global burden of disease (gbd) method. methods: incidence of chd and stroke was computed with the gbd method using official german mortality statistics and prevalences from the german national health survey. results were compared to estimates from the kora/monica augsburg register (chd) and the erlangen stroke project in southern germany. results: gbd estimates and register data showed good agreement: chd (age group - years) , (gbd) versus , (register), and stroke (all ages) , versus , incident cases per year.
chd incidence among all age groups was estimated with the gbd method to be , per year (no register data available). chd incidence in men and stroke incidence in women were underestimated with the gbd method as compared to register data. conclusions: the gbd method is a useful tool to estimate the incidence of chd and stroke. the computed estimates may be seen as a lower limit for incidence data. differences between gbd estimates and register data are discussed.

abstract background: children with mental retardation (mr) are a vulnerable, little-studied population. objectives: to investigate psychopharmacotherapy in children with mr and to examine possible factors associated with psychopharmacotherapy. methods: participants were recruited through all facilities for children with mental retardation in friesland, the netherlands, resulting in participants, - years old, including all levels of mental retardation. the dbc and the pdd-mrs were used to assess general behavior problems and pervasive developmental disorders (pdd). information on medication was collected through a parent interview. logistic regression was used to investigate the relationship between psychotropic drug use and the factors dbc, pdd, housing, age, gender and level of mr. results: % of the participants used psychotropic medication. the main factors associated with receiving psychopharmacotherapy were pdd (or . ) and dbc score (or . ). living away from home and mr level also played a role, whereas gender and age did not. dbc score was associated with clonidine, stimulants and anti-psychotics. pdd was the main factor associated with anti-psychotic use (or . ). discussion: psychopharmacotherapy is especially prevalent among children with mr and comorbid pdd and general behavior problems. although many psychotropic drugs are used off-label, specific drugs were associated with specific psychiatric or behavior problems.

abstract background: increased survival in children with cancer has raised interest in the quality of life of long-term survivors. objective: to compare educational outcomes of adult survivors of childhood cancer and healthy controls. methods: retrospective cohort study including a sample of adult survivors ( ) treated for childhood cancer in the three existing italian paediatric research hospitals. controls ( ) were selected among siblings, relatives or friends of survivors. when these controls were not available, a search was carried out in the same area of residence as the survivors through random digit dialling. data collection was carried out through a telephone-administered structured questionnaire. results: significantly more survivors than controls needed school support (adjusted odds ratio (oradj) . , % ci . - . ); failed at least a grade after disease onset (oradj . , % ci . - . ); achieved a lower educational level (oradj . , % ci . - . ); and did not reach an educational level higher than their parents' (oradj . , % ci . - . ). subjects' age, sex, parents' education and area of residence were taken into account as possible confounders. conclusions: these findings suggest the need to provide appropriate school support to children treated for childhood cancer.
abstract background: in italy, supplementation with folic acid (fa) in the periconceptional period to prevent congenital malformations (cms) is quite low. the national health institute recently ( ) launched a programme to improve awareness of the role of fa in reducing the risk of serious defects, also by providing . mg fa tablets free of charge to women planning a pregnancy. objectives: we analysed cms that are or may be sensitive to fa supplementation in order to establish an adequate baseline to allow an assessment of the impact of fa in the coming years, and to investigate spatial differences among cms registries, time trends and time-space interactions. design and methods: data collected over - by the italian registries belonging to eurocat and icbdsr on births and induced abortions with neural tube defects, ano-rectal atresia, omphalocele, oral clefts, and cardiovascular, limb reduction and urinary system defects. results: all the cms showed statistically significant differences among registries, with the exception of ano-rectal atresia. the majority of cms by registry showed stable or increasing trends over time. conclusions: results show the importance of fa intake during the periconceptional period. differences among registries also indicate the need to have a baseline for each registry to follow trends over time.

abstract: country-specific resistance proportions are more biased by variable sampling and ascertainment procedures than incidence rates. within the european antimicrobial resistance surveillance system (earss), resistance incidence rates and proportions can be calculated. in this study, the association between antimicrobial resistance incidence rates and proportions, and the possible effect of differential sampling of blood cultures, was investigated. in , earss collected routine antimicrobial susceptibility test data from invasive s. aureus isolates, tested according to standard protocols. denominator information was collected via a questionnaire. the spearman correlation coefficient and linear regression were used for statistical analysis. this year, of hospitals and of laboratories from of earss countries responded to the questionnaire. they reported of, overall, , s. aureus isolates. in the different countries, mrsa proportions ranged from < % to %, and incidence rates per , patient days from . e- to . e- . overall, the proportions and rates were highly correlated. blood culturing rates only influenced the relationship between mrsa resistance proportions and incidence rates for eastern european countries. in conclusion, resistance proportions seem to be very similar to resistance incidence rates in the case of mrsa. nevertheless, this relationship appears to be dependent on the level of blood culturing.
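the association analysis above reduces to a rank correlation (plus a linear fit) between two country-level vectors; the sketch below shows the computation with scipy, using invented placeholder values rather than the earss figures.

```python
# spearman rank correlation and a linear fit between mrsa resistance
# proportions and incidence rates per 1,000 patient-days, as described
# above. the five country values are invented placeholders.
from scipy.stats import linregress, spearmanr

proportion = [0.02, 0.11, 0.25, 0.33, 0.44]  # mrsa share of s. aureus
incidence = [0.01, 0.06, 0.14, 0.20, 0.31]   # per 1,000 patient-days

rho, p = spearmanr(proportion, incidence)
fit = linregress(proportion, incidence)
print(rho, p, fit.slope, fit.intercept)
```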
by leveraging cohort-level clinical data, patient-level hospital data, and census-level epidemiological data, we develop an integrated four-step approach, combining descriptive, predictive and prescriptive analytics. first, we aggregate hundreds of clinical studies into the most comprehensive database on covid- to paint a new macroscopic picture of the disease. second, we build personalized calculators to predict the risk of infection and mortality as a function of demographics, symptoms, comorbidities, and lab values. third, we develop a novel epidemiological model to project the pandemic's spread and inform social distancing policies. fourth, we propose an optimization model to reallocate ventilators and alleviate shortages. our results have been used at the clinical level by several hospitals to triage patients, guide care management, plan icu capacity, and re-distribute ventilators. at the policy level, they are currently supporting safe back-to-work policies at a major institution and equitable vaccine distribution planning at a major pharmaceutical company, and have been integrated into the us center for disease control's pandemic forecast. a major financial institution in south america is applying our infection risk calculator to determine how employees can safely return to work. a major hospital system in the united states planned its intensive care unit (icu) capacity based on our forecasts, and leveraged our optimization results to allocate ventilators across hospitals when the number of cases was rising. our epidemiological predictions are used by a major pharmaceutical company to design a vaccine distribution strategy that can contain future phases of the pandemic. they have also been incorporated into the us center for disease control's forecasts ( ). early responses to the covid- pandemic have been inhibited by the lack of available data on patient outcomes. individual centers released reports summarizing patient characteristics. yet, this decentralized effort makes it difficult to construct a cohesive picture of the pandemic. to address this problem, we construct a database that aggregates demographics, comorbidities, symptoms, laboratory blood test results ("lab values", henceforth) and clinical outcomes from clinical studies released between december and may , made available on our website for broader use. the database contains information on , covid- patients ( . % of the global covid- patients as of may , ), spanning mainly europe ( , patients), asia ( , patients) and north america ( , patients). to our knowledge, this is the largest dataset on covid- . a. data aggregation. each study was read by an mit researcher, who transcribed numerical data from the manuscript. the appendix reports the main transcription assumptions. each row in the database corresponds to a cohort of patients; some papers study a single cohort, whereas others study several cohorts or sub-cohorts. each column reports cohort-level statistics on demographics (e.g., average age, gender breakdown), comorbidities (e.g., prevalence of diabetes, hypertension), symptoms (e.g., prevalence of fever, cough), treatments (e.g., prevalence of antibiotics, intubation), lab values (e.g., average lymphocyte count), and clinical outcomes (e.g., average hospital length of stay, mortality rate). we also track whether the cohort comprises "mild" or "severe" patients (mild and severe cohorts are only a subset of the data). due to the pandemic's urgency, many papers were published before all patients in a cohort were discharged or deceased.
accordingly, we estimate the mortality rate from discharged and deceased patients only (referred to as "projected mortality"). figure a uses a similar nomenclature. d. discussion and impact. our database is the largest available source of clinical information on covid- assembled to date. as such, it provides new insights on common symptoms and the drivers of the disease's severity. ultimately, this database can support guidelines from health organizations, and contribute to ongoing clinical research on the disease. another benefit of this database is its geographical reach. results highlight disparities in patients' symptoms across regions. these disparities may stem from (i) different reporting criteria; (ii) different treatments; (iii) disparate impacts across different ethnic groups; and (iv) mutations of the virus since it first appeared in china. this information contributes to early evidence on covid- mutations ( , ) and on its disparate effects on different ethnic groups ( , ). the insights derived from this descriptive analysis highlight the need for personalized data-driven clinical indicators. yet, our population-level database cannot be leveraged directly to support decision-making at the patient level. we have therefore initiated a multi-institution collaboration to collect electronic medical records from covid- patients and develop clinical risk calculators. these calculators, presented in the next section, are informed by several of our descriptive insights. notably, the disparities between severe patients and the rest of the patient population inform the choice of the features included in our mortality risk calculator. moreover, the geographic disparities suggest that data from asia may be less predictive when building infection or mortality risk calculators designed for patients in europe or north america, motivating our use of data from europe. throughout the covid- crisis, physicians have made difficult triage and care management decisions on a daily basis. oftentimes, these decisions could only rely on small-scale clinical tests, each requiring significant time, personnel and equipment, and thus could not be easily replicated. once the burden on "hot spots" ebbed, hospitals began to aggregate rich data on covid- patients. this data offers opportunities to develop algorithmic risk calculators for large-scale decision support, ultimately facilitating a more proactive and data-driven strategy to combat the disease globally. we have established a patient-level database of thousands of covid- hospital admissions. using state-of-the-art machine learning methods, we develop a mortality risk calculator and an infection risk calculator. together, these two risk assessments provide screening tools to support critical care management decisions, spanning patient triage, hospital admissions, bed assignment and testing prioritization. table . count and prevalence of symptoms among covid- patients, in aggregate, broken down into mild/severe patients, and broken down per continent (asia, europe, north america). mild and severe patients only form a subset of the data, and so do patients from asia, europe and north america. a "-" indicates that fewer than patients in a subpopulation reported on this symptom.
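To make the cohort-level "projected mortality" estimate described above concrete, the minimal sketch below computes it from resolved outcomes only. The column names and counts are assumptions for illustration; the paper's transcription and aggregation pipeline is far more involved.

```python
import pandas as pd

# Hypothetical cohort-level rows, one per study cohort, with counts of
# resolved outcomes transcribed from each paper (names/values are assumptions).
cohorts = pd.DataFrame({
    "cohort_id":  ["study1", "study2-severe", "study2-mild"],
    "deceased":   [12, 40, 3],
    "discharged": [88, 160, 97],
})

# Projected mortality: deaths as a fraction of patients with a resolved
# outcome (deceased or discharged), ignoring patients still hospitalized.
cohorts["projected_mortality"] = cohorts["deceased"] / (
    cohorts["deceased"] + cohorts["discharged"]
)
print(cohorts)
```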
discriminative ability of the proposed models: we report in the appendix average results across all random data partitions, as well as threshold-based metrics († hm hospitals patients were not included, since no negative case data was available). c. discussion and impact. the models with lab values provide algorithmic screening tools that can deliver covid- risk predictions using common clinical features. in a constrained healthcare system or in a clinic without access to advanced diagnostics, clinicians can use these models to rapidly identify high-risk patients to support triage and treatment decisions. the models without lab values offer an even simpler tool that could be used outside of a clinical setting. in strained healthcare systems, it can be difficult for patients to obtain direct advice from providers. our tool could serve as a pre-screening step to identify personalized infection risk, without visiting a testing facility. while the exclusion of lab values reduces the auc (especially for infection), these calculators still achieve strong predictive performance. our models provide insights into risk factors and biomarkers related to covid- infection and mortality. our results suggest that the main indicators of mortality risk are age, bun, crp, ast, and low oxygen saturation. these findings validate several population-level insights from section and are in agreement with clinical studies: prevalence of shortness of breath ( ), elevated levels of crp as an inflammatory marker ( , ), and elevated ast levels due to liver dysfunction in severe covid- cases ( , ). turning to infection risk, the main indicators are crp, leukocytes, calcium, ast, and temperature. these findings are also in agreement with clinical reports: an elevated crp generally indicates an early sign of infection and implies lung lesions from covid- ( ), elevated levels of leukocytes suggest cytokine release syndrome caused by the sars-cov- virus ( ), and lowered levels of serum calcium signal a higher rate of organ injury and septic shock ( ). since our findings agree with clinical observations, our calculators can be used to support clinical decision making, although they are not intended to substitute clinical diagnostics or medical expertise. when lab values are not available, the widely accepted risk factors of age, oxygen saturation, temperature, and heart rate become the key indicators for both risk calculators. we observe that mortality risk is higher for male patients (blue in figure b) than for female patients (red), confirming clinical reports ( , ). an elevated respiratory frequency becomes an important predictor of infection, as reported in ( ). these findings suggest that demographics and vitals provide valuable information in the absence of lab values. however, when lab values are available, these other features become secondary. a limitation of the current mortality model is that it does not take into account medication and treatments during hospitalization. we intend to incorporate these in future research to make these models more actionable. furthermore, these models aim to reveal associations between risks and patient characteristics but are not designed to establish causality. overall, we have developed data-driven calculators that allow physicians and patients to assess mortality and infection risks in order to guide care management, especially with scarce healthcare resources.
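As an illustration of the "with lab values" versus "without lab values" calculators discussed above, the sketch below trains a gradient-boosted classifier on synthetic data and compares AUC. The feature names, the generated data, and the choice of scikit-learn's GradientBoostingClassifier are assumptions for illustration only; the paper's actual pipeline (imputation, model selection, interpretation) is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Synthetic patient-level features; names are assumptions for illustration.
X = {
    "age": rng.normal(65, 15, n),
    "oxygen_saturation": rng.normal(94, 4, n),
    "crp": rng.lognormal(3.0, 1.0, n),   # lab value
    "bun": rng.lognormal(2.8, 0.5, n),   # lab value
}
# Synthetic outcome driven by age and oxygen saturation only.
logit = 0.06 * (X["age"] - 65) - 0.15 * (X["oxygen_saturation"] - 94)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

def auc_for(features):
    """Train a gradient-boosted classifier on the given features; return AUC."""
    Xmat = np.column_stack([X[f] for f in features])
    Xtr, Xte, ytr, yte = train_test_split(Xmat, y, test_size=0.2, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
    return roc_auc_score(yte, model.predict_proba(Xte)[:, 1])

print("with lab values:   ", auc_for(["age", "oxygen_saturation", "crp", "bun"]))
print("without lab values:", auc_for(["age", "oxygen_saturation"]))
```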
these calculators are being used by several hospitals within the asst cremona system to support triage and treatment decisions, alleviating the toll of the pandemic. our infection calculator also supports safety protocols for banco de credito del peru, the largest bank in peru, to determine how employees can return to work. the inverse tangent function provides a concave-convex relationship, capturing three phases of government response. in phase i, most activities continue normally as people adjust their behavior. in phase ii, the infection rate declines sharply as policies are implemented. in phase iii, the decline in the infection rate reaches saturation. the parameters t and k can respectively be thought of as the start date and the strength of the response. ultimately, delphi involves parameters that define the transition rates between the states. we calibrate six of them from our clinical outcomes database (section ). delphi has also been used by the hartford healthcare system (the major hospital system in connecticut, us) to plan its icu capacity, and by a major pharmaceutical company to design a vaccine distribution strategy that can most effectively contain the next phases of the pandemic. b. delphi-presc: toward re-opening society. to inform the relaxation of social distancing policies, we link policies to the infection rate using machine learning. specifically, we predict the values of γ(t), obtained from the fitting procedure of delphi-pred. for simplicity and interpretability, we consider a simple model based on regression trees ( ) and restrict the independent variables to the policies in place. we classify policies based on whether they restrict mass gatherings, schools and/or other activities (referred to as "others", and including business closures, severe travel limitations and/or closing of non-essential services). we define a set of seven mutually exclusive and collectively exhaustive policies observed in the us data: (i) no measure; (ii) restrict mass gatherings; (iii) restrict others; (iv) authorize schools, restrict mass gatherings and others; (v) restrict mass gatherings and schools; (vi) restrict mass gatherings, schools and others; and (vii) stay-at-home. we report the regression tree in the appendix, obtained from state-level data in the united states. this model achieves an out-of-sample r² of . , suggesting a good fit to the data. as expected, more stringent policies lead to lower values of γ(t). the results also provide comparisons between various policies; for instance, school closures seem to induce a stronger reduction in the infection rate than restricting "other" activities. more importantly, the model quantifies the impact of each policy on the infection rate. we then use these results to predict the value of γ(t) as a function of the policies (see appendix for details), and simulate the spread of the disease as states progressively loosen social distancing policies. figure d plots the projected case count in the state of new york (ny) for different policies (we report a similar plot for the death count in the appendix). note that the stringency of the policies has a significant impact on the pandemic's spread and ultimate toll. for instance, relaxing all social distancing policies on may can increase the cumulative number of cases in ny by up to % by september. using a similar nomenclature, figure e shows the case count if all social distancing policies are relaxed on may vs. may .
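The inverse-tangent response curve and the policy regression tree described above can be sketched as follows. The normalization of γ(t) used here (roughly 2 before the response, 0 at saturation, with any constant scale absorbed into the baseline infection rate), the one-hot policy encoding, and all numeric values are assumptions for illustration; the paper's fitted parameters are not reproduced.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gamma(t, t0, k):
    """Government-response modulation of the infection rate.

    Concave-convex inverse-tangent shape: roughly constant in phase I
    (t << t0), declining sharply in phase II (t near t0), and saturating
    in phase III (t >> t0). t0 is the start date and k the strength of
    the response. This exact normalization is an assumption."""
    return (2.0 / np.pi) * np.arctan(-(t - t0) / k) + 1.0

print(gamma(np.array([0.0, 55.0, 60.0, 65.0, 120.0]), t0=60.0, k=5.0))

# Linking policies to fitted modulation values with a regression tree, in the
# spirit of DELPHI-PRESC. The fitted gamma values below are synthetic.
policies = ["no measure", "restrict mass gatherings", "restrict others",
            "restrict mass gatherings and schools",
            "restrict mass gatherings, schools and others", "stay-at-home"]
X = np.eye(len(policies))            # one-hot encoding, one row per policy
gamma_fit = np.array([1.6, 1.1, 1.2, 0.7, 0.5, 0.3])
tree = DecisionTreeRegressor(max_depth=3).fit(X, gamma_fit)
print(dict(zip(policies, tree.predict(X))))
```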
note that the timing of the policies also has a strong impact: a two-week delay in re-opening society can greatly reduce a resurgence in ny. the road back to a new normal is not straightforward: results suggest that the disease's spread is highly sensitive to both the intensity and the timing of social distancing policies. as governments grapple with an evolving pandemic, delphi-presc can be a useful tool to explore alternative scenarios and ensure that critical decisions are supported with data. we model ventilator pooling as a multi-period resource allocation over s states and d days. the model takes as input ventilator demand in state s and day d, denoted v_{s,d}, as well as parameters capturing the surge supply from the federal government and the extent of inter-state collaboration. we formulate an optimization problem that decides on the number of ventilators transferred from state s to state s′ on day d, and on the number of ventilators allocated from the federal government to state s on day d. we propose a bi-objective formulation. the first objective is to minimize ventilator-day shortages; for robustness, we consider both projected shortages (based on demand forecasts) and worst-case shortages (including a buffer in the demand estimates). the second objective is to minimize inter-state transfers, to limit the operational and political costs of inter-state coordination. mixed-integer optimization provides modeling flexibility to capture spatial-temporal dynamics and the trade-offs between these various objectives. we report the mathematical formulation of the model, along with the key assumptions, in the appendix. we estimate the fraction of hospitalized patients put on a ventilator, which we use to estimate the demand for ventilators. we also obtain the average length of stay from our clinical outcomes database (figure ). we discuss these trade-offs further in the appendix. a similar model has been developed to support the redistribution of ventilators across hospitals within the hartford healthcare system in connecticut, using county-level forecasts of ventilator demand obtained from delphi-pred. this model has been used by a collection of hospitals in the united states to align ventilator supply with projected demand at a time when the pandemic was on the rise. looking ahead, the proposed model can support the allocation of critical resources in the next phases of the pandemic, spanning ventilators, medicines, personal protective equipment, etc. since epidemics do not peak in each state at the same time, states whose infection peak has already passed or lies weeks ahead can help other states facing immediate shortages at little cost to their constituents. inter-state transfers of ventilators occurred in an isolated fashion through april; our model proposes an automated decision-making tool to support these decisions systematically. as our results show, proactive coordination and resource pooling can significantly reduce shortages, thus increasing the number of patients that can be treated without resorting to extreme clinical recourse with side effects (such as splitting ventilators). this paper proposes a comprehensive data-driven approach to address several core challenges faced by healthcare providers and policy makers in the midst of the covid- pandemic. we have gathered and aggregated data from hundreds of clinical studies, electronic health records, and census reports.
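Returning to the ventilator pooling model described above, the following is a much-simplified sketch using PuLP, with a weighted-sum version of the bi-objective (ventilator-day shortages plus a transfer penalty) on a made-up three-state instance. The paper's formulation is a mixed-integer model with robust worst-case shortage terms and operational constraints that are omitted here; all numbers are placeholders.

```python
from pulp import LpMinimize, LpProblem, LpVariable, lpSum

# Hypothetical instance: 3 states, 4 days (all demand/supply numbers invented).
states, days = range(3), range(4)
base = [50, 30, 20]                                  # initial ventilators per state
demand = [[40, 60, 70, 55], [35, 30, 25, 20], [10, 15, 30, 40]]
fed_supply = 15                                      # federal surge units per day
lam = 0.1                                            # weight on inter-state transfers

m = LpProblem("ventilator_pooling", LpMinimize)
x = LpVariable.dicts("transfer", (states, states, days), lowBound=0)  # s -> s', day d
f = LpVariable.dicts("federal", (states, days), lowBound=0)
short = LpVariable.dicts("shortage", (states, days), lowBound=0)

# Weighted bi-objective: total ventilator-day shortage plus a transfer penalty.
m += lpSum(short[s][d] for s in states for d in days) + \
     lam * lpSum(x[s][t][d] for s in states for t in states for d in days)

for d in days:
    m += lpSum(f[s][d] for s in states) <= fed_supply   # federal budget per day
    for s in states:
        # Ventilators available in state s on day d: base stock plus cumulative
        # federal allocations and net inter-state transfers up to day d.
        avail = base[s] + lpSum(f[s][dd] +
                                lpSum(x[t][s][dd] - x[s][t][dd] for t in states)
                                for dd in days if dd <= d)
        m += short[s][d] >= demand[s][d] - avail

m.solve()
print("total shortage:", sum(short[s][d].value() for s in states for d in days))
```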
we thank sophia xing and cynthia zheng from our extended team for helpful discussions. references:
- persistence of coronaviruses on inanimate surfaces and its inactivation with biocidal agents
- high contagiousness and rapid spread of severe acute respiratory
- how will country-based mitigation measures influence the course of the covid- epidemic?
- economic effects of coronavirus outbreak (covid- ) on the world economy. available at ssrn
- the global macroeconomic impacts of covid- : seven scenarios. cama work
- covid- forecasts
- check if you have coronavirus symptoms
- clinical characteristics of coronavirus disease in china
- clinical characteristics of covid- in
- factors associated with hospitalization and critical illness among , patients with covid- disease in new york city
- phylogenetic network analysis of sars-cov- genomes
- an nucleotide deletion in sars-cov- orf a identified from sentinel surveillance in arizona
- hospitalization rates and characteristics of patients hospitalized with laboratory-confirmed coronavirus disease - covid-net, states
- racial and ethnic disparities in sars-cov- pandemic: analysis of a covid- observational registry for a diverse us metropolitan population
- positive rt-pcr test results in patients recovered from covid-
- missing value estimation methods for dna microarrays
- xgboost: a scalable tree boosting system. in proceedings of the nd acm sigkdd international conference on knowledge discovery and data mining
- a unified approach to interpreting model predictions. in advances in neural information processing systems
- from local explanations to global understanding with explainable ai for trees
- unique epidemiological and clinical features of the emerging novel coronavirus pneumonia (covid- ) implicate special control measures
- the covid- epidemic
- chest ct features of covid- in rome, italy
- clinical features of patients infected with novel coronavirus in wuhan, china
- c-reactive protein levels in the early stage of covid-
- covid- infection: the perspectives on immune responses
- serum calcium as a biomarker of clinical severity and prognosis in patients with coronavirus disease : a retrospective cross-sectional study
- covid- : risk factors for severe disease and death
- neutrophil-to-lymphocyte ratio as an independent risk factor for mortality in hospitalized patients with covid-
- clinical course and risk factors for mortality of adult inpatients with covid- in china: a retrospective cohort study. the lancet
- projecting the transmission dynamics of sars-cov- through the postpandemic period
- classification and regression trees
- effects of prone positioning on lung protection in patients with acute
- the standard of care of patients with ards: ventilatory settings and rescue therapies for refractory hypoxemia
- intubation and ventilation amid the covid- outbreak: wuhan's experience
- critical supply shortages - the need for ventilators and personal protective equipment during the covid- pandemic
- stockpiling ventilators for influenza pandemics
- a model of supply-chain decisions for resource sharing with an application to ventilator allocation to combat covid-

key: cord- - swjzic authors: nan title: scientific opinion on the public health hazards to be covered by inspection of meat from sheep and goats date: - - journal: efsa j doi: . /j.efsa. . sha: doc_id: cord_uid: swjzic a risk ranking process identified toxoplasma gondii and pathogenic verocytotoxin-producing escherichia coli (vtec) as the most relevant biological hazards for meat inspection of sheep and goats. as these are not detected by traditional meat inspection, a meat safety assurance system using risk-based interventions was proposed. further studies are required on t. gondii and pathogenic vtec. if new information confirms these hazards as a high risk to public health from meat from sheep or goats, setting targets at carcass level should be considered. other elements of the system are risk categorisation of flocks/herds based on improved food chain information (fci), classification of abattoirs according to their capability to reduce faecal contamination, and use of improved process hygiene criteria. it is proposed to omit palpation and incision from post-mortem inspection in animals subjected to routine slaughter. for chemical hazards, dioxins and dioxin-like polychlorinated biphenyls were ranked as being of high potential concern. monitoring programmes for chemical hazards should be more flexible and based on the risk of occurrence, taking into account fci, which should be expanded to reflect the extensive production systems used, and the ranking of chemical substances, which should be regularly updated and include new hazards. control programmes across the food chain, national residue control plans, feed control and monitoring of environmental contaminants should be better integrated. meat inspection is a valuable tool for surveillance and monitoring of animal health and welfare conditions. omission of palpation and incision would reduce detection effectiveness for tuberculosis and fasciolosis at animal level. surveillance of tuberculosis at the slaughterhouse in small ruminants should be improved and encouraged, as this is in practice the only surveillance system available. extended use of fci could compensate for some, but not all, of the information on animal health and welfare lost if only visual post-mortem inspection is applied. following a request from the european commission to the european food safety authority (efsa), the panel on biological hazards (biohaz) was asked to deliver a scientific opinion on the public health hazards to be covered by the inspection of meat from sheep and goats. the panel was supported by the efsa panels on contaminants in the food chain (contam) and animal health and welfare (ahaw) in the preparation of this opinion.
briefly, the main risks for public health that should be addressed by meat inspection were identified and ranked; the strengths and weaknesses of the current meat inspection system were evaluated; recommendations were made for inspection methods fit for the purpose of meeting the overall objectives of meat inspection for hazards currently not covered by the meat inspection system; and recommendations for adaptations of inspection methods and/or frequencies of inspections that provide an equivalent level of protection were made. in addition, the implications for animal health and animal welfare of any changes proposed to current inspection methods were assessed. sheep and goats were considered together, unless otherwise stated. decision trees were developed and used for priority ranking of biological and chemical hazards present in meat from sheep and goats. for biological hazards, the ranking was based on the magnitude of the human health impact, the severity of the disease in humans and the evidence supporting the role of meat from sheep and goats as a risk factor for disease in humans. the assessment was focused on the public health risks that may occur through the handling, preparation for consumption and/or consumption of meat from these species. the term 'priority' was considered more appropriate than 'risk' for categorizing the biological hazards associated with meat from small ruminants, given that a significant amount of data, both on the occurrence of the hazards and on the fraction of human cases attributable to meat from small ruminants, was not available. risk ranking of chemical hazards into categories of potential concern was based on the outcomes of the national residue control plans (nrcps), as defined in council directive / /ec for the period - , and of other testing programmes, as well as on substance-specific parameters such as the toxicological profile and the likelihood of the occurrence of residues and contaminants in sheep and goats. based on the ranking for biological hazards, toxoplasma gondii and pathogenic verocytotoxin-producing escherichia coli (vtec) were classified as high priority for public health regarding meat inspection of small ruminants. the remaining hazards were classified as of low public health relevance, based on available data, and were therefore not considered further. for chemical hazards, dioxins and dioxin-like polychlorinated biphenyls (dl-pcbs) were ranked as being of high potential concern owing to their known bioaccumulation in the food chain, their frequent findings above maximum levels (mls), particularly in sheep liver, and in consideration of their toxicological profile; all other substances were ranked as of medium or lower concern. it should be noted that the ranking of hazards into specific risk categories is based on current knowledge and available data, and the ranking should therefore be updated regularly, taking account of new information and data and including 'new hazards'. particular attention should be given to potential emerging hazards of public health importance. the main elements of the current meat inspection system include analysis of food chain information (fci), ante-mortem examination of animals and post-mortem examination of carcasses and organs. the assessment of the strengths and weaknesses of the current meat inspection was based on its contribution to the control of the meat-borne human health hazards identified in sheep and goats. a number of strengths and weaknesses of the current inspection system were identified.
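The decision-tree logic used for the biological priority ranking above can be illustrated with a minimal rule-based sketch. The coarse categorical inputs and the branching below are simplifications for illustration only, not EFSA's actual decision trees.

```python
def rank_biological_hazard(health_impact, severity, meat_evidence):
    """Illustrative priority ranking for a meat-borne hazard.

    Inputs are coarse categories ("high"/"low") for: magnitude of the
    human health impact, severity of disease in humans, and strength of
    the evidence that sheep/goat meat is a risk factor. This branching
    is an assumption, not EFSA's published tree.
    """
    if meat_evidence == "low":
        return "low priority"   # meat-borne role not supported by the data
    if health_impact == "high" or severity == "high":
        return "high priority"
    return "low priority"

# consistent with the classification reported in the text:
print(rank_biological_hazard("high", "high", "high"))  # high priority (e.g. t. gondii)
print(rank_biological_hazard("high", "low", "low"))    # low priority (e.g. campylobacter spp.)
```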
currently, the use of fci for food safety purposes is limited for small ruminants, mainly because the data it contains are very general and do not address specific hazards of public health importance. however, fci could serve as a valuable tool for risk management decisions and could be used for risk categorisation of farms or batches of animals. to achieve this, the system needs further development to include additional information important for food safety, including the definition of appropriate and standardized indicators for the main public health hazards identified above. ante-mortem and post-mortem inspections of sheep and goats enable the detection of observable abnormalities and provide a general assessment of animal/herd health, which, if compromised, may lead to a greater public health risk. visual inspection of live animals and carcasses can detect animals heavily contaminated with faeces, which increase the risk of cross-contamination during slaughter and may constitute a food safety risk if the animals are carrying hazards of public health importance. if such animals or carcasses are dealt with adequately, this risk can be reduced. visual detection of faecal contamination on carcasses can also be an indicator of slaughter hygiene, but other approaches to verify this should be considered. post-mortem inspection can also detect non-meat-borne hazards of public health significance, such as echinococcus granulosus, that can be present in carcasses or offal from small ruminants. ante-mortem and post-mortem inspection also have the potential to detect new diseases, which may be of direct public health significance. with regard to chemical hazards, it was noted that current procedures for sampling and testing are, in general, well established and coordinated, including follow-up actions subsequent to the identification of non-compliant samples. the regular sampling and testing for chemical residues and contaminants is an important disincentive for the development of undesirable practices, and the prescriptive sampling system allows for equivalence in the control of eu-produced sheep and goat meat. the current combination of animal traceability, ante-mortem inspection and gross tissue examination can support the collection of appropriate samples for residue monitoring. the main weakness of ante-mortem and post-mortem inspection is that they are not able to detect any of the public health hazards identified as the main concerns for food safety. in addition, given that the current post-mortem procedures involve palpation and incision of some organs, the potential for cross-contamination of carcasses exists. for chemical hazards, a major weakness is that, with very few exceptions, the presence of chemical hazards cannot be identified by current ante-/post-mortem meat inspection procedures at the slaughterhouse level, and there is a lack of sufficient cost-effective and reliable screening methods. in addition, sampling is mostly prescriptive rather than risk- or information-based. there is limited ongoing adaptation of the sampling and testing programmes to the results of the residue monitoring programmes, poor integration between the testing of feed materials for undesirable substances and the nrcps, and sampling under the nrcps reflects only a part of the testing done by a number of mss, the results of which should be taken into consideration.
as neither of the main public health hazards associated with meat from small ruminants can be detected by traditional visual meat inspection, other approaches are necessary to identify and control these microbiological hazards. a comprehensive meat safety assurance system for small ruminants, combining a range of preventive measures and controls applied both on the farm and at the slaughterhouse in a longitudinally integrated way, is the most effective approach to control the main hazards in the context of meat inspection. information on the biological risks associated with the consumption of meat from sheep or goats is sometimes scant and unreliable. in order to facilitate decision making, harmonised surveys are required to establish values for the prevalence of the main hazards, t. gondii and pathogenic vtec, at flock/herd, live animal and carcass level in individual mss. epidemiological and risk assessment studies are also required to determine the specific risk to public health associated with the consumption of meat from small ruminants. if these studies confirm a high risk to public health through the consumption of meat from sheep or goats, consideration should be given to the setting of clear and measurable eu targets at the carcass level. to meet these targets and criteria, a variety of control options for the main hazards are available, at both farm and abattoir level. flock/herd categorisation according to the risk posed by the main hazards is considered an important element of an integrated meat safety assurance system. this should be based on the use of farm descriptors and historical data in addition to batch-specific information. farm-related data could be provided through farm audits using harmonised epidemiological indicators (heis) to assess the risk and protective factors for the flocks/herds related to the given hazards. in addition, classification of abattoirs according to their capability to prevent or reduce faecal contamination of carcasses can be based on two elements: (1) the process hygiene, as measured by the level of indicator organisms on the carcasses (i.e. process hygiene criteria); and (2) the use of operational procedures and equipment that reduce faecal contamination, as well as industry-led quality systems. there are a variety of husbandry measures that can be used to control t. gondii on sheep and goat farms, but at present these are impractical to implement on most farms. a number of post-processing interventions are effective in inactivating t. gondii, such as cooking, freezing, curing, high-pressure and irradiation treatments, although further research is required to validate these treatments in meat from small ruminants. there are also a variety of husbandry measures that can be used to reduce the levels of vtec on farms, but their efficacy is not clear in small ruminants. there are also a number of challenges that need to be overcome regarding the setting of targets for pathogenic vtec, including the difficulties in identifying husbandry factors that can be used to classify farms according to pathogenic vtec risk, the intermittent nature of shedding, and the problems with the interpretation of monitoring results due to the difficulty of correctly identifying pathogenic vtec. the main sources of vtec on sheep and goat carcasses are the fleece/hide and the viscera. to control incoming faecal contamination, only clean animals should be accepted for slaughter.
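As a sketch of element (1) of the abattoir classification above, process hygiene can be tracked through indicator-organism counts on carcasses. The thresholds and classification bands below are hypothetical placeholders, not the regulatory process hygiene criteria.

```python
import statistics

def classify_process_hygiene(log10_ecoli_cfu_cm2, m=2.5, M=3.5):
    """Classify an abattoir's process hygiene from mean log10 E. coli counts
    (cfu/cm^2) on sampled carcasses. Thresholds m and M are hypothetical
    placeholders, not the values laid down in legislation."""
    mean_log = statistics.mean(log10_ecoli_cfu_cm2)
    if mean_log <= m:
        return "satisfactory"
    if mean_log <= M:
        return "acceptable - review slaughter hygiene"
    return "unsatisfactory - corrective action required"

# illustrative daily sampling results from one slaughter line:
print(classify_process_hygiene([1.8, 2.1, 2.6, 2.0]))  # -> satisfactory
```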
there are also a number of measures that can help reduce the spillage or leakage of digestive contents onto the carcass, and post-processing interventions to control pathogenic vtec are also available. these include hot water and steam carcass-surface treatments. risk categorisation of slaughterhouses should be based on trends in data derived from process hygiene assessments and from hazard analysis critical control point programmes. improvement of slaughter hygiene through technological and managerial interventions should be sought in slaughterhouses with repeatedly unsatisfactory performance. fci can be improved by including information on participation in quality assurance schemes and by greater feedback to the primary producer, as this would likely result in the production of healthier animals. ante-mortem inspection assesses the general health status of the animals and helps to detect animals heavily contaminated with faeces on arrival at the slaughterhouse, so no adaptations of the existing visual ante-mortem inspection are required. routine post-mortem examination cannot detect the meat-borne pathogens of public health importance. palpation of the lungs, the liver, the umbilical region and the joints, and incision of the liver, could contribute to the spread of bacterial hazards through cross-contamination. for these reasons, palpation and incision should be omitted in animals subjected to routine slaughter. sheep and goat production in the eu is largely extensive in nature, involving frequent trading of animals and nomadic flocks. these differences in husbandry systems and feeding regimes result in different risks for the occurrence of chemical residues and contaminants. extensive periods on pasture or as nomadic flocks, and the use of slaughter collection dealerships, may preclude detailed lifetime fci. regarding chemical hazards, it is recommended that fci should be expanded for sheep and goats produced in extensive systems to provide more information on the specific environmental conditions where the animals are produced, and that future monitoring programmes should be based on the risk of occurrence of chemical residues and contaminants, taking into account the completeness and quality of the fci supplied and the ranking of chemical substances into categories of potential concern, a ranking that needs to be regularly updated. control programmes for chemical residues and contaminants should be less prescriptive, with sufficient flexibility to adapt to results of testing, and should include 'new hazards'; the test results for sheep and goats should be presented separately. the 'new' chemical hazards identified are largely persistent organic pollutants that have not been comprehensively covered by, or included in, the sampling plans of the current meat inspection. there is a need for improved integration of sampling, testing and intervention protocols across the food chain, nrcps, feed control and monitoring of environmental contaminants. a series of further recommendations are made in relation to chemical hazards, dealing with control measures, testing and analytical techniques, and also on data collection and source attribution studies for biological hazards, as well as on methods of detection of viable t. gondii in meat and on assessing the effect of the omission of palpation and incision on the risk posed by non-meat-borne zoonoses.
the implications for surveillance of animal health and welfare of the changes proposed to the current meat inspection system were evaluated quantitatively and qualitatively. the proposed changes related to biological hazards included shorter transport and lairage times, improved collection of food chain information, and omission of palpation and incision at post-mortem inspection in animals subjected to routine slaughter. recommendations on chemical hazards included the ranking system for chemical substances of potential concern and its updating, the use of food chain information to help facilitate risk-based sampling strategies, and the inclusion of 'new hazards' in control programmes for chemical residues and contaminants. from the quantitative assessment, a change to visual-only inspection caused a significant reduction in the probability of detection of detectable cases of fasciolosis and tuberculosis in goats. with regard to exotic diseases, clinical surveillance had a greater sensitivity for detecting foot and mouth disease than slaughterhouse surveillance. a change in post-mortem protocol to a visual-only system did not significantly reduce the detection of any welfare conditions. following the qualitative analysis, it was concluded that a change to visual inspection (which implies no palpation) would reduce detection effectiveness for tuberculosis. surveillance of tuberculosis at the slaughterhouse in small ruminants should be improved and encouraged, as this is in practice the only surveillance system available in these species. the detection of tuberculosis in small ruminants should be adequately recorded and followed up at the farm level. moving to a visual-only meat inspection system would decrease the sensitivity of inspection for fasciolosis at animal level; however, it would be sensitive enough to identify most, if not all, affected herds. therefore, the consequences of the change would be of low relevance. the feedback to farmers of fasciola hepatica detected at meat inspection should be improved, to allow farmer information to support rational on-farm fluke management programmes. the qualitative analysis suggested that the proposal for shortened transport and lairage times would be beneficial to improving the welfare of small ruminants. food chain information should include animal welfare status in order to complement the slaughterhouse surveillance systems (ante-mortem and post-mortem inspection), and the latter could be used to identify on-farm welfare status. regulation (ec) no / of the european parliament and of the council lays down specific rules for the organisation of official controls on products of animal origin intended for human consumption. inspection tasks within this regulation include:
- checks and analysis of food chain information
- animal welfare
- specified risk material and other by-products
- laboratory testing
the scope of the inspection includes monitoring of zoonotic infections and the detection or confirmation of certain animal diseases, without necessarily having consequences for the placing on the market of meat. the purpose of the inspection is to assess whether the meat is fit for human consumption in general and to address a number of specific hazards, in particular: transmissible spongiform encephalopathies (ruminants only), cysticercosis, trichinosis, glanders (solipeds only), tuberculosis, brucellosis, contaminants (e.g. heavy metals), and residues of veterinary drugs and unauthorised substances or products.
during their meeting on november , the chief veterinary officers (cvos) of the member states agreed on conclusions on the modernisation of sanitary inspection in slaughterhouses, based on the recommendations issued during a seminar organised by the french presidency from to july . the cvo conclusions have been considered in the commission report on the experience gained from the application of the hygiene regulations, adopted on july . council conclusions on the commission report were adopted on november , inviting the commission to prepare concrete proposals allowing the effective implementation of modernised sanitary inspection in slaughterhouses while making full use of the principle of the 'risk-based approach'. in accordance with article of regulation (ec) no / , the commission shall consult efsa on certain matters falling within the scope of the regulation whenever necessary. efsa and the commission's former scientific committee on veterinary measures relating to public health have issued in the past a number of opinions on meat inspection considering specific hazards or production systems separately. in order to guarantee a more risk-based approach, an assessment of the risk caused by specific hazards is needed, taking into account the evolving epidemiological situation in member states. in addition, methodologies may need to be reviewed, taking into account risks of possible cross-contamination, trends in slaughter techniques and possible new inspection methods. the scope of this mandate is to evaluate meat inspection in order to assess the fitness of the meat for human consumption and to monitor food-borne zoonotic infections (public health) without jeopardising the detection of certain animal diseases or the verification of compliance with rules on animal welfare at slaughter. if and when the current methodology is considered not to be the most satisfactory for monitoring major hazards for public health, additional methods should be recommended, as explained in detail under points and of the terms of reference. the objectives of the current legal provisions, aimed at carrying out meat inspection on a risk-based analysis, should be maintained. in order to ensure a risk-based approach, efsa is requested to provide scientific opinions on meat inspection in slaughterhouses and, if considered appropriate, at any other stages of the production chain, taking into account implications for animal health and animal welfare in its risk analysis. in addition, relevant international guidance should be considered, such as the codex code of hygienic practice for meat (cac/rcp - ), and chapter . on control of biological hazards of animal health and public health importance through ante- and post-mortem meat inspection, as well as chapter . on slaughter of animals, of the terrestrial animal health code of the world organisation for animal health (oie). the following species or groups of species should be considered, taking into account the following order of priority identified in consultation with the member states: domestic swine, poultry, bovine animals over six weeks old, bovine animals under six weeks old, domestic sheep and goats, farmed game and domestic solipeds. in particular, efsa, in consultation with the european centre for disease prevention and control (ecdc), is requested within the scope described above to:
1. identify and rank the main risks for public health that should be addressed by meat inspection at eu level. general (e.g.
sepsis, abscesses) and specific biological risks as well as chemical risks (e.g. residues of veterinary drugs and contaminants) should be considered. differentiation may be made according to production systems and age of animals (e.g. breeding compared to fattening animals).
2. assess the strengths and weaknesses of the current meat inspection methodology and recommend possible alternative methods (at ante-mortem or post-mortem inspection, or validated laboratory testing within the frame of traditional meat inspection or elsewhere in the production chain) at eu level, providing an equivalent achievement of overall objectives; the implications for animal health and animal welfare of any changes suggested in the light of public health risks to current inspection methods should be considered.
3. if new hazards currently not covered by the meat inspection system (e.g. salmonella, campylobacter) are identified under tor , then recommend inspection methods fit for the purpose of meeting the overall objectives of meat inspection. when appropriate, food chain information should be taken into account.
4. recommend adaptations of inspection methods and/or frequencies of inspections that provide an equivalent level of protection within the scope of meat inspection or elsewhere in the production chain, which may be used by risk managers in case they consider the current methods disproportionate to the risk, e.g. based on the ranking as an outcome of terms of reference or on data obtained using harmonised epidemiological criteria (see annex ). when appropriate, food chain information should be taken into account.
the scope of the mandate is to evaluate meat inspection in a public health context; animal health and welfare issues are covered with respect to the possible implications of adaptations/alterations to current inspection methods, or the introduction of novel inspection methods, proposed by this mandate. issues other than those of public health significance but that still compromise the fitness of the meat for human consumption (regulation (ec) no / , annex i, section ii, chapter v) are outside the scope of the mandate. examples of these include sexual odour or meat discolouration. transmissible spongiform encephalopathies (tses) are also outside the scope of the mandate. the impact of changes to meat inspection procedures on the occupational health of abattoir workers, inspectors, etc. is outside the scope of the mandate. additionally, hazards representing primarily occupational health risks, the controls related to any hazard at any meat chain stage beyond the abattoir, and the implications for environmental protection are not dealt with in this document. in line with article of regulation (ec) no / , the european commission has recently submitted a mandate to efsa (m- - ) to cover different aspects of meat inspection. the mandate comprises two requests: one for scientific opinions and one for technical assistance. the european food safety authority (efsa) is requested to issue scientific opinions related to inspection of meat in different species. in addition, technical assistance has been requested on harmonised epidemiological criteria for specific hazards for public health that can be used by risk managers to consider adaptation of the meat inspection methodology. meat inspection is defined by regulation / .
the species or groups of species to be considered are: domestic swine, poultry, bovine animals over six weeks old, bovine animals under six weeks old, domestic sheep and goats, farmed game and domestic solipeds. taking into account the complexity of the subject, and that consideration has to be given to zoonotic hazards, animal health and welfare issues and chemical hazards (e.g. residues of veterinary drugs and chemical contaminants), the involvement of several efsa units was necessary. more specifically, the mandate was allocated to the biological hazards panel (biohaz), which prepared this scientific opinion with the support of the animal health and welfare (ahaw) and contaminants in the food chain (contam) panels. in addition, the delivery of the technical assistance was allocated to the biological monitoring (biomo), scientific assessment support (sas), and dietary and chemical monitoring (dcm) units of the risk assessment and scientific assistance directorate. this scientific opinion therefore concerns the assessment of meat inspection in sheep and goats, and it includes the answer to the terms of reference proposed by the european commission. owing to the complexity of the mandate, the presentation of the outcome does not follow the usual layout. for ease of reading, the main outputs from the three working groups (biohaz, contam and ahaw) are presented at the beginning of the document. the scientific justifications for these outputs are found in the various appendices, as endorsed by their respective panels, namely biological hazards (appendix a), chemical hazards (appendix b), and the potential impact that the proposed changes envisaged by these two could have on animal health and welfare (appendix c). differentiation may be made according to production systems and age of animals (e.g. breeding compared to fattening animals). based on the priority ranking, the hazards were classified as follows:
- toxoplasma gondii and pathogenic verocytotoxin-producing escherichia coli (vtec) were classified as high priority for sheep/goat meat inspection.
- the remaining identified hazards, bacillus anthracis, campylobacter spp. (thermophilic) and salmonella spp., were classified as low priority, based on available data.
as new hazards might emerge and/or hazards that presently are not a priority might become more relevant over time or in some regions, both the hazard identification and the risk ranking should be revisited regularly to reflect this dynamic epidemiological situation. particular attention should be given to potential emerging hazards of public health importance. a multi-step approach was used for the identification and ranking of chemical hazards. evaluation of the - national residue control plans (nrcps) outcomes for sheep and goats indicated that only . % of the total number of results was non-compliant for one or more substances listed in council directive / /ec. potentially higher exposure of consumers to these substances from sheep and goat meat takes place only incidentally, as a result of mistakes or non-compliance with known and regulated procedures. available data, however, do not allow for a reliable assessment of consumer exposure.
ranking of chemical residues and contaminants in domestic sheep and goats, based on predefined criteria relating to bioaccumulation, toxicological profile and likelihood of occurrence, and taking into account the findings from the nrcps for the period - , was as follows:
- dioxins and dioxin-like polychlorinated biphenyls (dl-pcbs) were ranked as being of high potential concern owing to their known bioaccumulation in the food chain, their frequent findings above mls, particularly in sheep liver, and in consideration of their toxicological profile.
- stilbenes, thyreostats, gonadal (sex) steroids, resorcylic acid lactones and beta-agonists, especially clenbuterol, were ranked as being of medium potential concern.
- chloramphenicol and nitrofurans were ranked as being of medium potential concern, as they have proven toxicity for humans, they are effective as antibacterial treatments for sheep/goats, and non-compliant samples are found in most years of the nrcps.
- non-dioxin-like polychlorinated biphenyls (ndl-pcbs) bioaccumulate, and there is a risk of exceeding the mls, but they were ranked in the category of medium potential concern because they are less toxic than dioxins and dl-pcbs.
- the chemical elements cadmium, lead and mercury were allocated to the medium potential concern category, taking into account the number of non-compliant results reported under the nrcps and their toxicological profile.
- all other substances listed in council directive / /ec were ranked as of low or negligible potential concern, owing to the toxicological profile of these substances at residue levels in edible tissues, to the very low or non-occurrence of non-compliant results in the nrcps - , and/or to the natural occurrence in sheep and goats of some of these substances.
strengths:
- ante-mortem and post-mortem inspection of sheep and goats enable the detection of observable abnormalities. in that context, they are an important activity for monitoring animal health and welfare. they provide a general assessment of animal/herd health, which if compromised may lead to a greater public health risk. visual inspection of live animals and carcasses can also detect animals heavily contaminated with faeces. such animals increase the risk of cross-contamination during slaughter and may consequently constitute a food safety risk if carrying hazards of public health importance. if such animals or carcasses are dealt with adequately, this risk can be reduced. visual detection of faecal contamination on carcasses can also be an indicator of slaughter hygiene, but other approaches to verify slaughter hygiene should be considered.
- post-mortem inspection can also detect non-meat-borne hazards of public health significance that can be present in carcasses or offal from small ruminants. ante-mortem and post-mortem inspection also have the potential to detect new diseases if these have clinical signs, which may be of direct public health significance.
to achieve this, the system needs further development to include additional information important for food safety, including definition of appropriate and standardized indicators for the main public health hazards identified in section of appendix a. -ante-and post-mortem inspection is not able to detect any of the public health hazards identified as the main concerns for food safety. it would therefore be expected that more efficient procedures might be implemented to monitor the occurrence of non-visible hazards. in addition, given that the current post-mortem inspection procedures involve palpation and incision of some organs, the potential for cross-contamination of carcasses exists. strengths of the current meat inspection methodology for chemical hazards are as follows: -the current procedures for sampling and testing are a mature system, in general well established and coordinated including follow-up actions subsequent to the identification of non-compliant samples. -the regular sampling and testing for chemical residues and contaminants in the system is an important disincentive to the development of undesirable practices. -the prescriptive sampling system allows for equivalence in the control of eu-produced sheep and goat meat. any forthcoming measures have to ensure that the control of imports from third countries remains equivalent to the controls within the domestic market. -the current combination of animal traceability, ante-mortem inspection and gross tissue examination can support the collection of appropriate samples for residue monitoring. weaknesses of the current meat inspection methodology for chemical hazards are as follows: -a weakness of the system is that presence of chemical hazards cannot be identified by current ante-/post-mortem meat inspection procedures at the slaughterhouse level, indicating the need for further harmonization of the risk reduction strategies along the entire food chain. -integration between testing of feed materials for undesirable contaminants and the nrcps in terms of communication and follow-up testing strategies or interventions is currently limited. moreover, a routine environmental data flow is not established and keeping habits for sheep and goats provides opportunities for feed coming in without a clear feed chain history. -under the current system, sampling is mostly prescriptive rather than risk-or informationbased. it appears that individual samples taken under the nrcp testing programme may not always be taken as targeted samples, as specified under council directive / / ec, but sometimes may be taken as random samples. -there is a lack of sufficient cost-effective and reliable screening methods and/or the range of substances prescribed/covered by the testing is sometimes limited. -there is limited flexibility to adopt emerging chemical substances into the nrcps and limited ongoing adaptation of the sampling and testing programme to the results of the residue monitoring programmes. in addition, sampling under the nrcps reflects only a part of testing done by a number of mss, the results of which should be taken into consideration. -sheep and goats may not be subject to surveillance over their lifetime at the same level as is the case for other food animal categories such as pigs, poultry and, to a large extent, bovine animals due to the traditional nomadic/outdoor farming systems. as shown in the comisurv assessment, a change to visual only inspection would cause a significant reduction in the probability of detection (i.e. 
non-overlapping % probability intervals) of detectable cases of fasciolosis and of tuberculosis in goats. small ruminants are usually not subjected to official tuberculosis eradication campaigns, and farm controls are only performed on premises where cattle and goats are kept together or in flocks/herds that commercialise raw milk. surveillance for small ruminant tuberculosis at present relies on meat inspection of sheep and goats slaughtered for human consumption, or on other limited diagnostic surveillance activities. as is the case with tuberculosis in bovines, the contribution of meat inspection surveillance of tuberculosis in small ruminants is to support the detection of flocks/herds with tuberculosis. detection of tuberculosis in individual animals is merely the first step in improving the effectiveness of flock/herd surveillance and, for any given flock/herd, the flock/herd sensitivity will increase with the number of animals slaughtered (see the sketch after this section). in recent years, tuberculosis has been reported in small ruminants in several eu countries, and most information derives from recognition of tuberculous lesions at the slaughterhouse and from laboratory reports. although small ruminants are not considered to represent a significant reservoir for the persistence of bovine tuberculosis in cattle, it is still possible that infected sheep and goat herds could act as vectors of infection for other domestic and wild animals. therefore, surveillance and control of tuberculosis in domestic small ruminants does have consequences for the overall surveillance and control of tuberculosis. feedback to farmers on fasciola hepatica detected at meat inspection is currently limited, and the additional risk to animal health/welfare for this disease caused by a change to a visual-only meat inspection method is probably low. implementation of welfare assessment protocols using appropriate animal-based indicators within the clinical and slaughterhouse (ami + pmi) surveillance system would improve the welfare of small ruminants. extended use of food chain information has the potential to compensate for some, but not all, of the information on animal health and welfare that would be lost if visual-only post-mortem inspection is applied. food chain information is a potentially effective tool for performing more targeted ante-mortem and post-mortem inspection tasks in the slaughterhouse, which may increase the effectiveness of those tasks in detecting conditions of animal health and animal welfare significance. the currently ineffective flow of information between primary production and the slaughterhouse reduces the ability to detect animal diseases and animal welfare conditions at the slaughterhouse and, because farmers are not made aware of slaughterhouse findings, limits possible improvements in animal health and welfare standards at the farm. the conclusions and recommendations on chemical hazards were reviewed by the ahaw working group, and none was considered to have an impact on animal health and welfare surveillance and monitoring. as neither of the main public health hazards associated with meat from small ruminants can be detected by traditional meat inspection, other approaches are necessary to identify and control these microbiological hazards.
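the statement above, that flock/herd sensitivity increases with the number of animals slaughtered, can be made concrete with a standard surveillance-theory formula. this is a minimal sketch under assumptions not stated in the opinion: each inspected animal from an infected flock is detected independently with probability se × p, where se is the per-animal sensitivity of meat inspection for the condition and p is the within-flock prevalence.

```python
# minimal sketch: flock-level surveillance sensitivity when n animals from a
# flock are inspected. se = per-animal inspection sensitivity, p = within-flock
# prevalence; detections are assumed independent (a simplifying assumption).
#   flock_sensitivity = 1 - (1 - se * p) ** n
def flock_sensitivity(se: float, p: float, n: int) -> float:
    """Probability that at least one infected animal is detected."""
    return 1.0 - (1.0 - se * p) ** n

# illustration: even a low per-animal sensitivity accumulates over many animals
for n in (1, 10, 50):
    print(n, round(flock_sensitivity(se=0.3, p=0.2, n=n), 3))
# 1 0.06 ; 10 0.461 ; 50 0.955 -> flock sensitivity rises with n, as stated
```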
a comprehensive meat safety assurance system for meat from small ruminants, combining a range of preventive measures and controls applied both on the farm and at the slaughterhouse in a longitudinally integrated way, is the most effective approach to control the main hazards in the context of meat inspection. information on the biological risks associated with the consumption of meat from sheep or goats is sometimes scant and unreliable. in order to facilitate decision making, harmonised surveys are required to establish values for the prevalence of the main hazards, t. gondii and pathogenic vtec, at flock/herd, live animal and carcass level in individual member states. epidemiological and risk assessment studies are also required to determine the specific risk to public health associated with the consumption of meat from small ruminants. in the event that these studies confirm a high risk to public health through the consumption of meat from sheep or goats, consideration should be given to the setting of clear and measurable eu targets at the carcass level. to meet these targets and criteria, a variety of control options for the main hazards are available, at both farm and abattoir level. flock/herd categorisation according to the risk posed by the main hazards is considered an important element of an integrated meat safety assurance system. this should be based on the use of farm descriptors and historical data in addition to batch-specific information; one possible scoring approach is sketched below. farm-related data could be provided through farm audits using harmonised epidemiological indicators (heis) to assess the risk and protective factors for the flocks/herds related to the given hazards. classification of abattoirs according to their capability to prevent or reduce faecal contamination of carcasses can be based on two elements: (1) the process hygiene, as measured by the level of indicator organisms on the carcasses (i.e. process hygiene criteria); and (2) the use of operational procedures and equipment that reduce faecal contamination, as well as industry-led quality systems. as mentioned in section . of appendix a, further studies are necessary to determine with more certainty the risk of acquiring t. gondii through consumption of meat from small ruminants. in addition, the lack of tests that can easily identify viable cysts in meat is a significant drawback. further, if there is a high prevalence in the animal population, this will hamper the development of systems based on risk categorisation of animals. for these reasons, the setting of targets for t. gondii is not recommended at the moment. there are a variety of animal husbandry measures that can be used to control t. gondii on sheep and goat farms, but at present these are impractical to implement on most farms. a number of post-processing interventions might be effective in inactivating t. gondii, such as cooking, freezing, curing and high-pressure and irradiation treatments. however, most of the information available for these treatments originates from research in pigs, so further research is required to validate these treatments in meat from small ruminants. there are also a variety of animal husbandry measures that can be used to reduce the levels of vtec on infected farms, but their efficacy is not clear in small ruminants.
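as a hedged illustration of the flock/herd categorisation idea described above, the sketch below scores a flock from a few farm descriptors and its testing history. the indicator names, weights and cut-off are hypothetical; actual harmonised epidemiological indicators (heis) would have to be defined and validated.

```python
# illustrative sketch: risk categorisation of a flock/herd from farm-audit
# descriptors plus historical monitoring data. all indicator names, weights
# and cut-offs are assumptions for illustration; they are not efsa heis.
def categorise_flock(descriptors: dict, history: dict) -> str:
    score = 0
    # farm descriptors (hypothetical risk/protective factors)
    if descriptors.get("outdoor_access"):        score += 1
    if descriptors.get("cats_on_farm"):          score += 1  # a t. gondii exposure route
    if not descriptors.get("quality_assurance"): score += 1
    # historical data: past positive batches weigh more heavily
    score += 2 * history.get("positive_batches_last_year", 0)
    return "higher-risk" if score >= 3 else "lower-risk"

print(categorise_flock(
    {"outdoor_access": True, "cats_on_farm": True, "quality_assurance": True},
    {"positive_batches_last_year": 1},
))  # -> "higher-risk"
```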
in addition, there are a number of challenges that need to be overcome regarding the setting of targets for pathogenic vtec, including the difficulties in identifying husbandry factors that can be used to classify farms according to pathogenic vtec risk, the intermittent nature of shedding, and the problems with the interpretation of monitoring results caused by the difficulty of correctly identifying pathogenic vtec. the two main sources of vtec on sheep and goat carcasses are the fleece/hide and the viscera. to control faecal contamination from the fleece or hide, only clean animals should be accepted for slaughter, as currently required by eu legislation. there are also a number of measures that can help reduce the spillage or leakage of digestive contents onto the carcass, particularly rodding of the oesophagus and bagging of the rectum. post-processing interventions to control vtec are also available; these include hot water and steam pasteurisation. risk categorisation of slaughterhouses should be based on trends in data derived from process hygiene assessments and from hazard analysis and critical control point programmes. improvement of slaughter hygiene through technological and managerial interventions should be sought in slaughterhouses with repeatedly unsatisfactory performance. dioxins and dl-pcbs, which accumulate in food-producing animals, have been ranked as being of high potential concern. as these substances have not yet been comprehensively covered by the sampling plans of the current meat inspection (nrcps), they should be considered as 'new' hazards. in addition, for a number of chemical elements used as feed supplements and for organic contaminants that may accumulate in food-producing animals, only limited data regarding residues in sheep and goats are available. this is the case, in particular, for brominated flame retardants, including polybrominated diphenyl ethers (pbdes) and hexabromocyclododecanes (hbcdds), and for perfluorinated compounds (pfcs), including (but not limited to) perfluorooctane sulphonate (pfos) and perfluorooctanoic acid (pfoa). fci can be improved by including information on participation in quality assurance schemes and by giving greater feedback to the primary producer, as this would probably result in the production of healthier animals. ante-mortem inspection assesses the general health status of the animals and helps to detect animals heavily contaminated with faeces on arrival at the slaughterhouse. taking these factors into consideration, and given that current methods do not increase the microbiological risk to public health, no adaptations to the existing visual ante-mortem inspection procedure are required. although visual examination contributes by detecting visible faecal contamination, routine post-mortem examination cannot detect the meat-borne pathogens of public health importance. palpation of the lungs, the liver, the umbilical region and the joints, and incision of the liver, could contribute to the spread of bacterial hazards through cross-contamination. for these reasons, palpation and incision should be omitted in animals subjected to routine slaughter. sheep and goat production in the eu is marked by being largely extensive in nature, involving frequent trading of animals and nomadic flocks. this involves differences in husbandry systems and feeding regimes, resulting in different risks for chemical substances and contaminants.
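post-processing interventions such as hot water or steam pasteurisation are usually quantified as a log10 reduction of surface contamination. the short sketch below shows only that arithmetic; the numeric values are assumptions for illustration, not performance figures from this opinion.

```python
# illustrative arithmetic only: surviving concentration after a decontamination
# step achieving r log10 reductions. the example values are assumptions.
def surviving_count(initial_cfu_per_cm2: float, log10_reduction: float) -> float:
    return initial_cfu_per_cm2 * 10 ** (-log10_reduction)

# e.g. a hypothetical 2-log hot-water treatment on a carcass surface carrying
# 1000 cfu/cm2 of a faecal indicator organism would leave about 10 cfu/cm2
print(surviving_count(1000, 2.0))  # -> 10.0
```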
extended periods on pasture and/or in nomadic flocks, and the use of slaughter collection dealerships, may preclude detailed lifetime fci. similarly, in these situations, the level of feedback from the slaughterhouse and authorities to farmers regarding the results of residue testing may be suboptimal. there is less concern about fci from dairy sheep and goats, as they are reared under more intensive and controlled conditions. better integration of results from official feed control with residue monitoring seems essential to indicate whether monitoring of residues in slaughter animals needs to be directed to particular substances. therefore, there is a need for improved integration of sampling, testing and intervention protocols across the food chain, nrcps, feed control and environmental monitoring. to provide a better evidence base for future risk ranking of hazards, initiatives should be instigated to: improve and harmonise the collection of data on the incidence and severity of human diseases caused by relevant hazards; systematically collect data for source attribution; and collect data to identify and risk rank emerging hazards that could be transmitted through handling, preparation and consumption of sheep and goat meat. source attribution studies are needed to determine the relative importance of meat and to ascertain the role of the different livestock species as sources of t. gondii and pathogenic vtec for humans. methods should be developed to estimate the amount of viable t. gondii tissue cysts in meat, especially in meat cuts that are commonly consumed. the effect of the omission of palpation and incision on the risk posed by non-meat-borne zoonoses such as echinococcus granulosus and fasciola hepatica should be assessed, particularly in those regions where these hazards are endemic. fci should be expanded for sheep and goats produced in extensive systems to provide more information on the specific environmental conditions where the animals are produced. it is recommended that sampling of sheep and goats should be based on the risk of occurrence of chemical residues and contaminants and on the completeness and quality of the fci supplied; one possible weighting is sketched below. regular updating of the ranking of chemical substances in sheep and goats, as well as of the sampling plans, should occur, taking into account any new information regarding the toxicological profile of chemical residues and contaminants, usage in sheep and goat production, and the actual occurrence of individual substances in sheep and goats. control programmes for chemical residues and contaminants should be less prescriptive, with sufficient flexibility to adapt to results of testing, should include 'new hazards', and the test results for sheep and goats should be presented separately. there is a need for improved integration of sampling, testing and intervention protocols across the food chain, nrcps, feed control and monitoring of environmental contaminants. the development of analytical techniques covering multiple analytes and of new biologically based testing approaches should be encouraged and incorporated into the residue control programmes. for prohibited substances, testing should be directed, where appropriate, towards the farm level and, in the case of substances that might be used illicitly for growth promotion, control measures, including testing, need to be refocused to better identify the extent of abuse in the eu.
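to illustrate what risk- and fci-based (rather than purely prescriptive) sampling could look like in practice, the sketch below allocates a fixed testing budget across batches using a simple weight that increases with past non-compliance and decreases with fci quality. the weighting function and the example figures are assumptions for illustration only.

```python
# illustrative sketch of risk-based sampling: allocate residue-testing effort
# across batches in proportion to a simple risk weight combining prior
# non-compliance findings and fci quality. the weighting is an assumption.
def sampling_weight(prior_noncompliance_rate: float, fci_quality: float) -> float:
    """fci_quality in [0, 1]; poorer fci => higher weight (more sampling)."""
    return (0.5 + prior_noncompliance_rate) * (2.0 - fci_quality)

batches = {"A": (0.00, 0.9), "B": (0.02, 0.9), "C": (0.00, 0.2)}
weights = {k: sampling_weight(*v) for k, v in batches.items()}
total = sum(weights.values())
samples = {k: round(100 * w / total) for k, w in weights.items()}  # budget: 100 samples
print(samples)  # batch C (poor fci) and batch B (past findings) get more samples
```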
in addition, control measures for prohibited substances should not rely exclusively on nrcp testing, but should include veterinary inspection during the production phase and the use of biological methods and biomarkers suitable for the identification of abuse of such substances in sheep and goat production in the eu. data collected during clinical and slaughterhouse (ante-mortem and post-mortem inspection) surveillance should be utilised more effectively to improve animal welfare at farm level. slaughterhouse surveillance of tuberculosis in small ruminants should be improved and encouraged, as this is in practice the only surveillance system available. the detection of tuberculosis in small ruminants should be adequately recorded and notified, and followed by control measures at the farm level. the current lack of feedback of post-mortem inspection results to the farmer prevents the instigation of a fluke management programme, which could be detrimental to animal health and welfare; an improvement in this feedback of information is recommended. welfare surveillance systems should become an integral part of the food chain information. an integrated system should be developed whereby food chain information for public health and for animal health and welfare can be used in parallel. farmers should also be given more effective background information on the animal diseases and welfare conditions of key concern that may affect their livestock, and on why it is important to provide this information to the slaughterhouse through food chain information. following a request from the european commission, the panel on biological hazards (biohaz) was asked to deliver a scientific opinion on the public health hazards to be covered by inspection of meat for several animal species, with the contribution of the panel on contaminants in the food chain (contam) and the panel on animal health and welfare (ahaw). briefly, the main risks for public health that should be addressed by meat inspection were identified and ranked; the strengths and weaknesses of the current meat inspection were evaluated; and recommendations were made for inspection methods capable of meeting the overall objectives of meat inspection for hazards currently not covered by the meat inspection system, as well as recommendations for adaptations of inspection methods and/or frequencies of inspection that provide an equivalent level of protection. in addition, the implications for animal health and animal welfare of any changes proposed to current inspection methods were assessed. this opinion covers the inspection of meat from sheep and goats. the biohaz panel considered sheep and goats together. a decision tree was used for priority ranking of meat-borne hazards present in meat from sheep and goats. the ranking was based on the magnitude of the human health impact, the severity of the disease in humans and the evidence supporting the role of meat from sheep and goats as a risk factor for disease in humans. the assessment was focused on the public health risks that may occur through the handling, preparation and/or consumption of meat from these species. the term 'priority' was considered more appropriate than 'risk' for categorising the hazards associated with meat from small ruminants, given that a significant amount of data, on both the occurrence of the hazards and the fraction of human cases attributable to meat from small ruminants, was not available.
based on the priority ranking, the hazards were classified as follows: toxoplasma gondii and pathogenic vtec were classified as high priority for sheep/goat meat inspection. the remaining identified hazards, bacillus anthracis, campylobacter spp. (thermophilic) and salmonella spp., were classified as low priority, based on available data. as new hazards might emerge and/or hazards that presently are not a priority might become more relevant over time or in some regions, both the hazard identification and the risk ranking should be revisited regularly to reflect this dynamic epidemiological situation. particular attention should be given to potential emerging hazards of public health importance. the main elements of the current meat inspection system include analysis of fci, ante-mortem examination of animals and post-mortem examination of carcasses and organs. the assessment of the strengths and weaknesses of the current meat inspection was based on its contribution to the control of the meat-borne human health hazards identified in sheep and goats. a number of strengths and weaknesses of the current system were identified. currently, the use of food chain information (fci) for food safety purposes is limited for small ruminants because the data it contains are very general and do not address specific hazards of public health importance. however, fci could serve as a valuable tool for risk management decisions and could be used for risk categorisation of farms or batches of animals. to achieve this, the system needs further development to include additional information important for food safety, including the definition of appropriate and standardised indicators for the main public health hazards identified above. ante-mortem and post-mortem inspections of sheep and goats enable the detection of observable abnormalities and provide a general assessment of animal/herd health, which, if compromised, may lead to a greater public health risk. visual inspection of live animals and carcasses can detect animals heavily contaminated with faeces, which increase the risk of cross-contamination during slaughter and may constitute a food safety risk if the animals are carrying hazards of public health importance. if such animals or carcasses are dealt with adequately, this risk can be reduced. visual detection of faecal contamination on carcasses can also be an indicator of slaughter hygiene, but other approaches to verify this should be considered. post-mortem inspection can also detect non-meat-borne hazards of public health significance, such as echinococcus granulosus, that can be present in carcasses or offal from small ruminants. ante-mortem and post-mortem inspection also have the potential to detect new diseases, which may be of direct public health significance. the main weakness of ante-mortem and post-mortem inspection is that they are not able to detect any of the public health hazards identified as the main concerns for food safety. in addition, given that the current post-mortem procedures involve palpation and incision of some organs, the potential for cross-contamination of carcasses exists. as neither of the main public health hazards associated with meat from small ruminants can be detected by traditional visual meat inspection, other approaches are necessary to identify and control these microbiological hazards.
a comprehensive meat safety assurance system for small ruminants, combining a range of preventive measures and controls applied both on the farm and at the slaughterhouse in a longitudinally integrated way, is the most effective approach to control the main hazards in the context of meat inspection. information on the biological risks associated with the consumption of meat from sheep or goats is sometimes scant and unreliable. in order to facilitate decision making, harmonised surveys are required to establish values for the prevalence of the main hazards, t. gondii and pathogenic vtec, at flock/herd, live animal and carcass level in individual mss. epidemiological and risk assessment studies are also required to determine the specific risk to public health associated with the consumption of meat from small ruminants. if these studies confirm a high risk to public health through the consumption of meat from sheep or goats, consideration should be given to the setting of clear and measurable eu targets at the carcass level. to meet these targets and criteria, a variety of control options for the main hazards are available, at both farm and abattoir level. flock/herd categorisation according to the risk posed by the main hazards is considered an important element of an integrated meat safety assurance system. this should be based on the use of farm descriptors and historical data in addition to batch-specific information. farm-related data could be provided through farm audits using harmonised epidemiological indicators (heis) to assess the risk and protective factors for the flocks/herds related to the given hazards. in addition, classification of abattoirs according to their capability to prevent or reduce faecal contamination of carcasses can be based on two elements: (1) the process hygiene, as measured by the level of indicator organisms on the carcasses (i.e. process hygiene criteria); and (2) the use of operational procedures and equipment that reduce faecal contamination, as well as industry-led quality systems. there are a variety of husbandry measures that can be used to control t. gondii on sheep and goat farms, but at present these are impractical to implement on most farms. a number of post-processing interventions are effective in inactivating t. gondii, such as cooking, freezing, curing, and high-pressure and irradiation treatments, although further research is required to validate these treatments in meat from small ruminants. there are also a variety of husbandry measures that can be used to reduce the levels of vtec on farms, but their efficacy is not clear in small ruminants. there are also a number of challenges that need to be overcome regarding the setting of targets for pathogenic vtec, including the difficulties in identifying husbandry factors that can be used to classify farms according to pathogenic vtec risk, the intermittent nature of shedding, and the problems with the interpretation of monitoring results caused by the difficulty of correctly identifying pathogenic vtec. the main sources of vtec on sheep and goat carcasses are the fleece/hide and the viscera. to control incoming faecal contamination, only clean animals should be accepted for slaughter. there are also a number of measures that can help reduce the spillage or leakage of digestive contents onto the carcass, and post-processing interventions to control vtec are also available; these include hot water and steam pasteurisation.
risk categorisation of slaughterhouses should be based on trends in data derived from process hygiene assessments and from hazard analysis and critical control point programmes. improvement of slaughter hygiene through technological and managerial interventions should be sought in slaughterhouses with repeatedly unsatisfactory performance. fci can be improved by including information on participation in quality assurance schemes and by greater feedback to the primary producer, as this would likely result in the production of healthier animals. ante-mortem inspection assesses the general health status of the animals and helps to detect animals heavily contaminated with faeces on arrival at the slaughterhouse, so no adaptations to the existing visual ante-mortem inspection are required. routine post-mortem examination cannot detect the meat-borne pathogens of public health importance. palpation of the lungs, the liver, the umbilical region and the joints, and incision of the liver, could contribute to the spread of bacterial hazards through cross-contamination. for these reasons, palpation and incision should be omitted in animals subjected to routine slaughter. a series of recommendations were made on data collection, source attribution studies, methods of detection of viable t. gondii in meat, and on assessing the effect of the omission of palpation and incision on the risk posed by non-meat-borne zoonoses. assessing current meat inspection systems for sheep and goats with the aim of introducing improvements requires a common understanding of the term "meat inspection". however, as discussed previously (efsa, , ), it seems that there is no precise, universally agreed definition of meat inspection. the term meat inspection is not described specifically in current european union (eu) legislation (regulation (ec) no / ) or in the codex alimentarius's code of hygienic practice for meat (cac/rcp - ); rather, there are references to elements of the inspection process for meat, such as ante- and post-mortem inspections and food chain information. consequently, the current understanding of the term meat inspection is probably based more on its practical application, and is somewhat intuitive, rather than resting on a specific, formal definition. the biohaz panel defined the main scope of this scientific opinion as identifying and ranking the most relevant public health risks associated with meat from sheep and goats, assessing the strengths and weaknesses of the current meat inspection system, proposing alternative approaches for addressing current meat safety risks, and outlining a generic framework for inspection, prevention and control for important hazards that are not sufficiently covered by the current system. outside the scope of the opinion were:
- microbiological hazards representing only occupational health risks;
- transmissible spongiform encephalopathies (tses);
- issues other than those of public health significance, but which still compromise the fitness of meat for human consumption (for example, quality issues such as dark, firm and dry (dfd) meat).
as the eu regulations do not include different inspection requirements for sheep and goats, both species are considered together, but any important differences between these species are considered when necessary. in this document, the term small ruminant is used to refer to a combination of sheep and goats.
in order to evaluate any important differences in meat inspection procedures between countries and/or regions, as well as between species, the biohaz panel was supported by input provided during a technical hearing on meat inspection of small ruminants, during which experts from several stakeholder organisations presented information that had previously been requested by means of a questionnaire. following the hearing, an event report was compiled (efsa, ). the conclusions from this report are referred to when relevant. chemical hazards and associated meat safety risks in small ruminants are considered in a separate part of this opinion (see appendix b). although highest priority is given to the public health aims of the improvements of the biological/chemical meat safety system, any implications for animal health and welfare of the proposed changes were assessed (see appendix c). furthermore, issues related to epidemiological indicators and associated sampling/testing methodologies for hazards dealt with in this opinion were addressed by the biological monitoring unit in a separate document (efsa, ). the structure of the eu small ruminant farming industry has already been described in an efsa opinion (efsa, ). briefly, sheep farming takes place in many areas of europe because sheep are able to live in a wide range of environments, even those hostile to other animals. goats are generally reared in extensive systems, traditionally in less developed areas, such as mountains or arid regions, and are often reared with sheep, especially in southern europe. milk sheep and goats are reared in similar systems, either grazed near the farm or kept housed, with the milk used in most cases for cheese production. meat production in europe reflects these diverse farming systems. lamb meat production originates from sheep milk farms or from farms raising meat breeds. in the mediterranean countries, the lambs from milk farms are slaughtered at approximately one month of age (suckling lambs; the same applies to goat kids). in some of these countries, lambs from meat breeds are generally slaughtered at - days of age and represent the majority of total national lamb meat production. in northern countries, the rearing systems usually produce heavier lambs that may be slaughtered at six or more months of age. the proportion of sheep raised for wool production has steadily decreased over time, but it is still significant in parts of the eu. sheep and goats at the end of their productive life can also be destined for meat production, with the resulting meat usually processed into meat products or exported. although the production and consumption of lamb have decreased in recent years, lamb meat continues to be a traditional product consumed in some countries of the eu, such as the united kingdom, ireland and the mediterranean countries (spain, france, greece and italy). these countries have the largest populations of sheep in the eu. in general, the southern countries produce lighter carcasses (about kg) than the northern ones ( - kg). sheep are relatively small animals, with a lower yield of meat per carcass and higher slaughter and processing costs per unit of meat produced. as a result, sheep meat is relatively expensive in the market compared with other protein sources. the co-products (e.g. hides, wool, offal, feet, tails, etc.) have a major effect on the prices received by producers, and their impact on the profitability of the enterprise is profound (byrne et al., ).
eurostat statistics show that sheep meat production in the eu was over tonnes in , with the united kingdom and spain as the greatest producers (figure ). goat meat production in the eu is concentrated in the southern european countries, especially greece and spain (figure ). there are many forces instigating change in sheep and goat meat production. legislative forces present in the hygiene package and the microbiological regulation have increased meat hygiene service costs through structural and food safety requirements, as well as mandating the provision of traceability and food chain information (palmer, ). commercial considerations, such as lower co-product returns, higher costs of by-product disposal and the sourcing policies of the multiple retailers (using their market power to control margins), have also put pressure on slaughterhouse profitability (palmer, ). in spite of the eu being only about % self-sufficient in sheep meat, the predictions are that eu sheep numbers will continue to decline over the next years. this problem of falling sheep supplies has led to overcapacity in the processing sector (byrne et al., ). the effect of this decline is most acute for large slaughterhouses, which can only be run profitably at certain levels of throughput. given the energy market expectations, greater environmental controls and the pressure on enforcement costs, relief from falling costs looks unlikely (palmer, ). slaughterhouses vary in their structure and operational practices, and these variations, individually and in combination, lead to between-slaughterhouse differences in process hygiene performance and, consequently, in the hygienic status of the final carcass. at the end of the slaughter line, prior to chilling, process hygiene microbiological criteria, as defined in regulation (ec) no / , verify the effectiveness of each plant's food safety management system (which includes good hygiene practice (ghp) and good manufacturing practice (gmp) prerequisite programmes), based on the principles of hazard analysis and critical control point (haccp) systems. generally, smaller slaughterhouses process much smaller quantities of meat for localised markets and operate at a slower line speed. operators in such establishments tend to have a wider skill base than their counterparts in large establishments, owing to the many varied roles they perform. however, small slaughterhouses have reduced investment capital for expenditure on premises, equipment and staff food safety management training. disposal of animal by-products and compliance with the microbiological testing regulation (ec) no / place further financial pressure on these low-throughput businesses. to ameliorate the financial impact of this testing, article of this regulation states that the frequency of this microbiological sampling may be adapted to the nature and size of the food business, based on a standardised risk assessment and authorised by the competent authority. larger slaughterhouses operate more efficiently, with greater separation of duties and better sampling and food safety oversight. these larger units have larger co-product/by-product markets and therefore produce less waste per animal processed. however, the requirement for high-volume throughput with increased slaughter line speed can impinge on operational hygiene and therefore on food safety (food standards agency, a; palmer, ). such differences in structure and operational practices in slaughterhouses of varying size can determine the effectiveness of the food safety management system (motarjemi, ).
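process hygiene criteria of the regulation cited above are assessed against the daily mean of log10 counts from carcass samples, classified between a lower limit m and an upper limit big m. the sketch below shows that logic only; the numeric limits used are placeholders, not the values laid down in the regulation for small ruminant carcasses.

```python
# minimal sketch, assuming the standard process-hygiene logic: compute the
# daily mean of log10 counts from carcass swabs and classify against a lower
# limit m and an upper limit M. the numeric limits here are placeholders.
import math

def daily_mean_log(counts_cfu_cm2: list) -> float:
    return sum(math.log10(c) for c in counts_cfu_cm2) / len(counts_cfu_cm2)

def classify(mean_log: float, m: float, big_m: float) -> str:
    if mean_log <= m:      return "satisfactory"
    if mean_log <= big_m:  return "acceptable"     # improvement of hygiene needed
    return "unsatisfactory"  # review slaughter hygiene and process controls

swabs = [200.0, 500.0, 1000.0, 300.0, 800.0]  # aerobic colony counts, cfu/cm2
print(classify(daily_mean_log(swabs), m=2.5, big_m=4.0))  # -> "acceptable"
```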
hazard identification and risk ranking

a hazard is defined by the codex alimentarius commission (cac) as a "biological, chemical or physical agent or property of food with the potential to cause an adverse health effect". the first step in the hazard identification carried out in this assessment focused on identifying biological hazards occurring in small ruminants and small ruminant meat that can be transmitted to humans, where they may cause disease. hazards were identified based on evidence found in peer-reviewed literature and textbooks, through reported data (e.g. eu summary reports on zoonoses), previous assessments and efsa opinions, and the biohaz panel's and working group's expert knowledge. from this "long" list of identified hazards, the panel excluded those hazards:
- for which no causal relationship between human infections and the handling, preparation and consumption of meat from small ruminants could be documented through targeted literature reviews;
- not presently found in the small ruminant population in the eu.
the final "short" list of identified hazards to be included in the priority ranking consisted of hazards occurring in the eu and for which evidence could be found of foodborne transmission through the handling, preparation and/or consumption of sheep and goat meat; a sketch of this two-criterion filter is given below. in the context of this opinion, references to handling and preparation should be interpreted as handling of sheep and goat meat that occurs immediately prior to consumption, when these activities are carried out by consumers or professional food handlers. based on a review of the scientific literature, a wide range of biological hazards were identified as potential zoonotic hazards related to small ruminants (see table ). of these, the majority were considered not to be small ruminant meat-borne pathogens, as no evidence could be found in the literature to support transmission through handling, preparation or consumption of small ruminant meat (for further information on hazards not included, see annex , and section . . in this appendix for those for which evidence for meat-borne transmission was documented). other potential pathogenic microorganisms were found not to be relevant as they are not considered to be currently present in small ruminants in europe (chandipura virus, cryptococcus neoformans var. neoformans and hepatitis e virus), or, if they are, consumption of meat is not considered a significant source of infection. the latter situation applies in particular to linguatula serrata, for which contact with the final host (canids) is the source for the human cases described in europe. for some of these hazards (e.g. extended-spectrum β-lactamase (esbl)-/ampc-carrying escherichia coli), despite their presence in the animal reservoir, no studies have been conducted to establish whether there is a link between consumption of meat from small ruminants and disease in humans. the presence of mycobacteria has been previously reported in the small ruminant population in the eu (domenis et al., ; malone et al., ; marianelli et al., ). despite these reports, evidence of meat-borne transmission of these pathogens to humans from small ruminants is lacking, so this potential pathway of infection remains unproven in the context of livestock processed through the eu meat inspection system. a more detailed discussion on the potential for meat-borne transmission of mycobacteria can be found in the scientific opinion dealing with bovines (efsa biohaz panel, ).
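the two exclusion criteria that reduce the "long" list to the "short" list amount to a simple filter, sketched below. the hazard records are illustrative and encode only what the surrounding text states about each example.

```python
# illustrative sketch of the two exclusion criteria used to derive the "short"
# list from the "long" list of identified hazards. records are illustrative.
def shortlist(hazards: list) -> list:
    return [h["name"] for h in hazards
            if h["meat_borne_evidence"]          # documented causal link to meat
            and h["present_in_eu_population"]]   # found in eu small ruminants

longlist = [
    {"name": "toxoplasma gondii",  "meat_borne_evidence": True,  "present_in_eu_population": True},
    {"name": "hepatitis e virus",  "meat_borne_evidence": True,  "present_in_eu_population": False},
    {"name": "linguatula serrata", "meat_borne_evidence": False, "present_in_eu_population": True},
]
print(shortlist(longlist))  # -> ['toxoplasma gondii']
```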
the remaining hazards were considered eligible for further assessment and risk ranking (table ). the panel developed a decision tree that was used for the risk ranking of the small ruminant meat-borne hazards according to their risk of causing infection in humans following the handling, preparation and/or consumption of sheep or goat meat (figure ). the cac defines risk as "a function of the probability of an adverse health effect and the severity of that effect, consequential to one or more hazards in a food". in other words, a foodborne risk is a product of the likelihood of occurrence of the hazard and the magnitude and severity of the consequences of the illness it causes on human health. this decision tree was adapted from that presented in the scientific opinion on poultry meat inspection (efsa panel on biological hazards (biohaz), efsa panel on contaminants in the food chain (contam) and efsa panel on animal health and welfare (ahaw), ). however, there are key differences, as follows:
- carcass pathogen prevalence and source attribution are not considered as separate questions, or ranking steps; instead, these two questions are addressed together in a single step. this modification was considered appropriate as there were insufficient data at eu level for qualifying carcass prevalence and source attribution for the given hazards. furthermore, consumption of meat from small ruminants is both lower and more unevenly distributed in the eu relative to that of meat from other animal species such as pigs or poultry. attribution at the population level, as applied in the previous scientific opinions on meat inspection (efsa panel on biological hazards (biohaz), efsa panel on contaminants in the food chain (contam) and efsa panel on animal health and welfare (ahaw), , ), may not provide a sufficiently detailed perspective on the relative risk of different hazards in meat from small ruminants. the risk to consumers of meat from these species, rather than to the population as a whole, was therefore assessed. an added consequence is that the categorisation has been reduced from three to two categories (i.e. the medium category is not used in this opinion).
- the term "priority" has replaced the term "risk" used in the pork and poultry opinions. risk ranking requires a significant amount of data on both the occurrence of the relevant hazards and the fraction of cases of human disease caused by the different hazard-meat species combinations (i.e. source attribution). while there were sufficient data to perform a risk ranking of the hazards associated with pork and poultry, this was not the case for all potential hazards in small ruminants, for which eu-wide baseline surveys and harmonised monitoring do not exist and relevant studies published in the scientific and technical literature are often limited. the term "priority" was therefore considered more appropriate than "risk" for categorising the hazards associated with meat from small ruminants.
based on this, the panel identified the following criteria as important for determining the final priority category: step 1: identifying and excluding those hazards that are introduced, and/or for which the risk for public health requires microbial growth, during steps that take place after carcass chilling.
the reasons for excluding such hazards from further assessment were that: (1) the scope and target of meat inspection are focused on the food safety risks of the carcasses at the end of slaughter, when they are chilled but before they are further processed; and (2) hazards introduced, and/or for which the risk relates exclusively to growth, during post-chilling processes are better controlled later in the food production chain through, for instance, haccp programmes. step 2: to assess the magnitude of the human health impact, as measured by the reported incidence (notification rate) or number of cases. where data allowed, the estimated total number of cases was presented, i.e. adjusting for under-reporting. incidence was considered high if the notification rate in humans at eu level, as reported to ecdc, was equal to or higher than cases in population in any given year. step 3: to assess the severity of the disease in humans, based on mortality. if necessary, severity was also evaluated by comparing disease burden estimates, expressed for example in disability-adjusted life-years (dalys) per cases. the daly metric quantifies the impact of disease on the health-related quality of life of acute diseases and sequelae, as well as the impact of premature deaths. severity was considered high if mortality in humans at eu level, as reported to ecdc, was higher than or equal to . % in more than one year. step 4: evidence supporting the role of meat from small ruminants as a risk factor for disease in humans. for this, the following sources of information were considered:
1. an epidemiological link, based on a significant likelihood that the consumption of meat from the given species is a risk factor for human cases, or on outbreak data;
2. carcass prevalence/farm-level prevalence (prevalence studies);
3. comparative considerations for meat from related species;
4. expert opinion that meat consumption is a risk factor.
the final outcome of this process involved categorising each hazard as high or low priority, as follows (a sketch of this decision logic is given below):
- the priority was characterised as 'high' when a hazard was identified as causing a high incidence and/or severity of illness in humans, and when strong evidence existed for meat from sheep or goats being an important risk factor for human disease. considering the limitations of the data available for the priority ranking, this category could be regarded as combining both the medium and high risk categories of the risk ranking carried out in the poultry meat inspection opinion (efsa panel on biological hazards (biohaz), efsa panel on contaminants in the food chain (contam) and efsa panel on animal health and welfare (ahaw), );
- the priority was characterised as 'low' when a hazard was identified as not associated with a high incidence and a high severity of human disease or if, despite the hazard causing a high incidence and/or severity in humans, the evidence available did not identify meat from sheep or goats as an important risk factor for human disease.
all hazards placed in the low priority category were further evaluated to determine whether this was due to controls currently applied (i.e. any hazard-specific control measure implemented at farm and/or slaughter level before chilling of the carcass, including meat inspection procedures). if this was not the case, the hazard was not considered further. however, if this was the case, the hazard was further considered: the effect of any recommendations regarding the removal of specific control measures or meat inspection activities on these hazards was assessed, and the categorisation of the hazard was reconsidered.
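the decision logic just described can be summarised as a short function. this is a minimal sketch: the numeric cut-offs are left as named parameters because the opinion's exact threshold values are not reproduced here, and the example hazard record is hypothetical.

```python
# minimal sketch of the priority-ranking decision tree described above.
def priority(hazard: dict, incidence_cutoff: float, mortality_cutoff: float) -> str:
    # step 1: exclude hazards whose public health risk arises only through
    # growth or introduction after carcass chilling (handled later, e.g. haccp)
    if hazard["risk_only_post_chilling"]:
        return "excluded from ranking"
    # steps 2-3: magnitude (notification rate) and severity (mortality) of the
    # human health impact, each compared against a cut-off value
    high_impact = (hazard["notification_rate"] >= incidence_cutoff
                   or hazard["mortality_pct"] >= mortality_cutoff)
    # step 4: evidence that meat from small ruminants is an important risk factor
    if high_impact and hazard["meat_is_risk_factor"]:
        return "high priority"
    return "low priority"  # re-evaluated if lowness is due to current controls

vtec = {"risk_only_post_chilling": False, "notification_rate": 0.4,
        "mortality_pct": 0.5, "meat_is_risk_factor": True}
print(priority(vtec, incidence_cutoff=10.0, mortality_cutoff=0.1))  # high priority
```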
figure : flowchart for the priority ranking of different public health hazards, according to the risk of infection through handling, preparation or consumption of sheep and/or goat meat. 'current controls' refers to any hazard-specific control measures implemented at farm and/or slaughterhouse level before chilling of the carcasses.
for the hazards shortlisted (table ), data on the incidence and severity in humans and the prevalence of the pathogens in the carcasses of small ruminants were sought to allow the risk from these microbiological hazards to be ranked, based on the decision tree in figure . see tables , , and for details. the data in table were obtained from the european surveillance system (tessy), covering the years , , and . the data are officially reported to the european centre for disease prevention and control (ecdc) by eu mss; however, some countries do not report on certain diseases, and these were specified. the data were supplied as aggregates from all reporting mss. the data show notification rates of confirmed human disease cases per persons, and the severity of illness in humans. cases include all reported confirmed occurrences of the disease, regardless of the origin of the infection. in fact, establishing the food-related origin of infection is often not possible and is seldom reported. the data on severity include, as a proxy, the proportion of confirmed human cases that died. this information is usually only available for a small proportion of cases. finally, it has to be kept in mind that the surveillance systems are set up differently in the various eu mss, with different case definitions, national or restricted coverage, voluntary or compulsory reporting, focus, target groups, etc., in addition to the fact that only a small fraction of diseased patients is sampled and the causal organism typed and reported to the respective national health institutes. because of all these caveats, the incidence and severity figures quoted here are only approximate and must be considered with caution.
table footnotes: (a) eu population data based on individual ms population sizes reported in eurostat (data extracted: september ); when a given hazard was not reported by an ms to tessy, the population size reported by that ms was also taken out of the calculation of the overall eu population size. (b) calculated as the percentage of cases with fatal outcome over all cases of disease with known outcome, for a given hazard. (c) portugal and greece not reporting. (d) portugal not reporting; for a more detailed review of vtec (including serotype o ) incidence and severity in the eu, see the recently published efsa opinion on vtec-seropathotype and scientific criteria regarding pathogenicity assessment (efsa panel on biological hazards, ). (e) portugal not reporting. (f) s. enterica subsp. enterica serovar typhi and s. paratyphi serovars not included; netherlands not reporting. (g) seroprevalence; belgium, denmark, greece, italy, netherlands, portugal and sweden not reporting; spain reporting through the sentinel system and thus not taken into account; france not reported in at the time of extraction of these data. (h) n.a. = not available.
data presented in tables - relate to the flock/herd and carcass prevalence of the hazards identified in sheep and goats.
they were obtained from monitoring data as reported by the eu mss in the framework of the zoonoses directive ( / /ec), when available. data reported in the period from to were considered. no information was available at carcass level for goats. in these tables, data described as originating from suspect or selective sampling and from clinical investigations were excluded, as they do not, in most cases, represent the actual epidemiological situation. food samples described as collected for haccp and own-check purposes were also excluded because the sampling scheme may be biased. the samples included are described as originating from control and eradication plans and from monitoring and surveillance; consequently, they are supposed to represent the occurrence of the zoonotic agent in the reporting country over the years, based on objective sampling. however, it has to be noted that monitoring and surveillance systems for most zoonotic agents are not fully harmonised between mss. furthermore, data may not necessarily be derived from sampling plans that have a sound statistical design, and may therefore not accurately represent the national situation regarding the true prevalence of zoonoses. data in tables and originate from samples taken from either farms or slaughterhouses.
table footnotes ( - ): (a) n.a., no data available. (b) includes those reported as human pathogenic and non-human pathogenic (i.e. there is no harmonised scheme to discriminate between the two, and the data available do not preclude that they are not human pathogenic). (c) seroprevalence.
listeria monocytogenes and the toxins of bacillus cereus, clostridium botulinum, clostridium perfringens and staphylococcus aureus were all considered to fall within the category of risk related to growth or introduction post-chilling, for different reasons: b. cereus, c. botulinum and c. perfringens (and their spores) and s. aureus are considered ubiquitous bacteria and can be found in a variety of foods. their vegetative forms need temperatures above those used for refrigeration to grow in raw meat to concentration levels of relevance for public health, and thus the risk of disease seems not to be correlated with occurrence in raw meat but rather with improper hygiene and storage that allows the production of toxins. illness caused by l. monocytogenes is usually associated with ready-to-eat products, in which contamination has occurred before or during processing, followed by growth during storage at refrigeration temperatures. based on incidence and severity in humans (table ), flock/herd, animal and carcass prevalence (tables , and ) and other epidemiological evidence, the hazards in table were ranked and categorised according to the flowchart in figure , as described in section . . above. a summary of the outcome is provided in table at the end of this section. none of the hazards identified as low priority were found to be such owing to currently applied controls.
bacillus anthracis
this organism has a worldwide distribution, persisting in the soil in the form of extremely resistant spores for many years.
infection is initiated by the introduction of the spore through a break in the skin or by entry through the mucosa. after ingestion by macrophages at the site of entry, germination to the vegetative form occurs, followed by extracellular multiplication and capsule and toxin production. humans can acquire anthrax by exposure to infected animals, animal products or spores in the soil and, depending on the mode of transmission, can develop one of four distinct clinical forms: respiratory, cutaneous, gastrointestinal and oropharyngeal. human cases of pulmonary anthrax have been linked to the large-scale processing of hides and wool in enclosed factory spaces, where aerosolised anthrax spores may be inhaled. humans also acquire the cutaneous form of anthrax from handling contaminated animal products, such as hides, wool and hair. cases of gastrointestinal anthrax have resulted from the ingestion of raw or undercooked meat (spickler, ) and of well-cooked beef from infected animals (centers for disease control and prevention, ). recently, a case of anthrax possibly acquired through handling or consumption of contaminated beef in a household in romania has been reported (popescu et al., ). consumption of meat (including sheep and goat meat) from carcasses of animals showing clinical signs of anthrax, or of animals that have died from the disease, is the most commonly reported route of foodborne infection resulting in gastrointestinal anthrax.
human incidence: based on eu data, low. anthrax has a low human prevalence in the eu (see table for details). between and , the number of anthrax cases reported to the ecdc ranged from two confirmed cases ( ).
severity of disease: based on eu data, high. the severity of these infections is considered high, and this is supported by the mortality figures in table .
evidence for meat from small ruminants as an important risk factor: no. the organism causes a highly infectious notifiable disease in farmed and wild animals that have grazed on contaminated land or ingested contaminated feed (swartz, ). the livestock species most susceptible, in descending order, are cattle, sheep, horses, pigs, goats and camels (fasanella et al., a). the disease is endemic in most countries in africa and asia (turnbull, ) and in defined regions of other countries. flooding may often concentrate spores of b. anthracis in particular locations. in sheep and goats, the disease is usually peracute, or acute and rapidly fatal, with death occurring in some cases within hours and affected animals showing multiple haemorrhages from natural orifices. although most cases are found dead without showing premonitory signs, pyrexia with temperatures up to °c, along with depression, congested mucosae and petechiae, may be observed ante-mortem. post-mortem findings are characterised by incomplete rigor mortis, widespread ecchymotic haemorrhages and oedema, dark, unclotted blood and blood-stained fluid in body cavities, and severe splenomegaly (quinn et al., ). handling of, or direct contact with, such animals and carcasses is highly dangerous. anthrax is now rare in livestock in the eu. the major enzootic areas are greece, spain, france and southern italy (fasanella et al., ; fouet et al., ). a severe outbreak of anthrax occurred in southern italy in (fasanella et al., b). over days, cattle, sheep, goats, horses and deer died. also in italy, an outbreak of anthrax of similar magnitude was reported among cattle, sheep and horses in .
given the low number of cases of anthrax in the small ruminant population in the eu, the risk of acquiring this disease through consumption of meat from these species can be considered very low. based on the data presented and on the above discussion, the biohaz panel concluded that b. anthracis is a low priority hazard with regard to meat inspection of small ruminants. this result is not due to current controls (i.e. any hazard-specific control measures implemented at farm and/or slaughter level before chilling of the carcasses, including current meat inspection procedures).
campylobacter spp. (thermophilic)
human incidence: based on eu data, high. campylobacteriosis is the most frequently reported zoonotic illness in the eu, with a reported incidence of . confirmed cases per in (table ), and it is estimated that there are nine million cases of illness annually in the eu- (efsa panel on biological hazards, ).
severity of disease: as the incidence is high, the severity does not need to be considered.
campylobacter jejuni is common in the intestines of ruminants, including sheep and lambs. the reported prevalence of campylobacter spp. in sheep and goats can be found in tables , and . at flock level, the prevalence for sheep was . %, while for goats it was . % (at individual animal level, the figures were . % and . %, respectively). with regard to carcasses, no data were available for goats. for sheep, the batch prevalence was %, and at individual sample level it was . %. information from the scientific literature also suggests that campylobacter spp. is often found in small ruminants, with a wide range of prevalences reported. in a study of lambs in the united kingdom, campylobacter spp. was isolated from % of the samples taken from the small intestines (stanley et al., ). on the other hand, sproston et al. ( ) found this bacterium in just % of fresh faecal samples from sheep on a farm in scotland. other studies have reported prevalences somewhere in between these two figures (milnes et al., ; ogden et al., ; oporto et al., ; schilling et al., ). a seasonal variation in the prevalence and the number of campylobacter spp. has also been reported in some studies (milnes et al., ; sproston et al., ). several studies have investigated the presence of campylobacter spp. in carcasses or meat from small ruminants. garcia et al. ( ) investigated the presence of campylobacter spp. on sheep carcasses, with a resulting prevalence of %. the authors concluded that the prevalence on carcasses reflected the occurrence of campylobacter spp. in both wool and faeces. however, there is a significant reduction in detection following chilling, probably owing to both the low temperature and the drying of the carcass (norwegian scientific committee for food safety, ). after swabbing of cm around the circum-anal incision of lamb carcasses before chilling, campylobacter spp. was isolated from eight ( . %) of the carcasses; after a relatively slow chilling process (the air temperature was never below °c), campylobacter spp. was recovered from only one carcass. case-control evidence has identified contact with sheep as a risk factor for human campylobacteriosis, but consumption of meat from these species was not considered a risk factor. an earlier case-control study in households with primary campylobacter spp. infection in the netherlands also failed to identify consumption of mutton as a risk factor (oosterom et al., ). finally, people who had consumed mutton were less likely to become ill with campylobacter spp.
infection in a prospective case-control study of campylobacteriosis carried out in norway (kapperud et al., ). like their sensitive counterparts, antimicrobial-resistant campylobacter spp. involved in human disease are mostly spread through foods, especially poultry meat. as stated in a previous efsa opinion (efsa, ), "a major source of human exposure to fluoroquinolone resistance via food appears to be poultry, whereas for cephalosporin resistance it is poultry, pork and beef that are important, these food production systems require particular attention to prevent spread of such resistance from these sources". there are no indications that resistant strains behave differently in the food chain compared with their sensitive counterparts, hence there is no need to consider these strains separately in the context of meat inspection. based on the presented data, it is concluded that campylobacter spp. are a low public health priority with regard to meat inspection of small ruminants. this ranking is not the result of current controls. verocytotoxin/shiga toxin (vt/stx)-producing e. coli (vtec/stec) are characterised by the ability to produce potent cytotoxins. pathogenic vtec usually also harbour additional virulence factors that are important for the development of the disease in humans (efsa and ecdc, , b). not all vtec strains have been associated with human disease, and there is no single marker or combination of markers that defines a "pathogenic" vtec (efsa panel on biological hazards, ). while stx and eae gene-positive strains are associated with a high risk of more serious illness, other virulence gene combinations and/or serotypes may also be associated with serious disease in humans. for the purposes of this opinion, pathogenic vtec are defined as vtec capable of causing disease in humans. human incidence: based on eu data, low. most reported meat-borne human vtec infections are sporadic cases. in (efsa and ecdc, ), the total number of confirmed vtec cases in the eu was , representing a . % increase compared with , with a fatality rate of . %. table includes data from tessy from to inclusive. in that period the incidence (all vtec serotypes) per population varied between . and . . the data are not easily comparable between eu countries, owing to underlying differences in the national surveillance systems. the concentration of laboratory testing on the o serogroup means that the proportion of non-o strains is largely under-reported (ecdc and efsa, ). data for have to be interpreted with caution, as vtec o :h caused a major outbreak which resulted in confirmed cases, including cases of vtec infection and cases of acute renal failure, known as haemolytic-uraemic syndrome (hus), with deaths reported in eu countries, the united states and canada by the time the epidemic was declared to be over at the end of july (karch et al., ). it has to be noted, however, that the source of the outbreak was sprouted seeds and not meat. severity of disease: based on eu data, high. pathogenic vtec infections can be severe, and are often associated with bloody diarrhoea, but there is a wide clinical spectrum in the association between specific subtypes of pathogenic vtec and the clinical outcome. bloody diarrhoea has been shown to be associated with an increased risk of developing hus and neurological injury, such as paralysis. hus develops in up to % of patients infected with vtec o and is the leading cause of acute renal failure in young children (efsa and ecdc, ).
this is reflected in the severity figures in table and the corresponding classification in table , which are also supported by high disability-adjusted life year (daly) (havelaar et al., a) and quality-adjusted life year (qaly) estimates (hoffmann et al., ) published in the literature. evidence for meat from small ruminants as an important risk factor: yes. pathogenic vtec can be found in the gut of numerous animal species, but ruminants have been identified as a major reservoir of vtec that are highly virulent to humans, in particular vtec o . although cattle are considered to be the most important source of human infections caused by vtec o , these organisms have also been isolated from the intestinal contents of sheep and goats. food of small ruminant origin has been reported as a source of human vtec infections (kosmider et al., ; schimmer et al., ; werber et al., ). transmission occurs through consumption of undercooked meat, unpasteurised dairy products, or water and vegetables contaminated by faeces of carriers. person-to-person transmission has also been documented (rey et al., ). data reported in the frame of the zoonoses directive ( / /ec) from to can be found in tables - . for all vtec serotypes, the reported prevalence was . % and . % for sheep at flock and individual animal level, respectively. for goats, the figures were . % and . %. prevalences for vtec o were much lower across the board. isolation of e. coli o from goats has been reported in studies from several countries, with isolation rates ranging between % and % (cortes et al., ; keen et al., ; orden et al., ; orden et al., ; schilling et al., ). vtec strains have also been detected in sheep, with a similarly wide range of prevalence figures (milnes et al., ; oporto et al., ; prendergast et al., ; pritchard et al., ; schilling et al., ; sekse et al., ). thus it is clear that small ruminants can play an important role in the epidemiology of vtec by shedding these pathogens in their faeces (blanco et al., ; la ragione et al., ). the prevalence can be influenced by the sampling and testing methodology, but these studies nevertheless clearly indicate that pathogenic vtec is present in the small ruminant population in the eu. table includes data from official monitoring of sheep carcasses. the reported prevalence was % at batch level and . % at individual carcass level ( . % for vtec o ). the scientific literature also indicates that sheep and goat carcasses or meat can be contaminated with vtec, albeit generally at lower levels compared with those in the animal reservoir. at the higher end of the range, barlow et al. ( ) in australia and zweifel et al. ( ) in switzerland reported prevalences around % in carcasses and lamb cuts. brooks et al. ( ) reported a prevalence of % in lamb cuts in new zealand, and other, less recent, studies reported much lower prevalences, between % and % (doyle and schoeni, ; heuvelink et al., ; pierard et al., ; samadpour et al., ). it has to be noted that this variation in prevalence could be a result of the different testing methodologies used (e.g. use of polymerase chain reaction (pcr) testing), and the fact that not all these vtec isolates would necessarily be pathogenic to humans. a case-control study on risk factors for human vtec infection in germany identified lamb as an important risk factor (werber et al., ). consumption of dry cured sausages made with sheep meat was identified as the cause of an outbreak of vtec o :h infection in humans (schimmer et al., ; sekse et al., ).
in the latter study, bacteria with the same properties as the patient isolates, including identical dna profiles, were found in five dry cured sausage products and in sheep meat used as raw material in sausage production. e. coli with the same virulence genes, serotypes, biochemical characteristics and dna profiles as those found in patients from the e. coli o :h outbreak were detected in sheep from of farms in norway (brandal et al., ). more recent research in norway and spain comparing virulence characteristics between strains isolated from humans and sheep has suggested that the latter can be an important reservoir for pathogenic vtec (brandal et al., ; sanchez et al., ). the evidence arising from epidemiological or source attribution studies points to a minor role for meat from small ruminants as a source of human cases of vtec, although the model used in one such study was found to underestimate the observed prevalence of vtec in lamb, so this attribution estimate should be interpreted with caution (kosmider et al., ). based on the data (see table ) and the assessment presented above, the biohaz panel concluded that pathogenic vtec can be considered to be of high priority for meat inspection of small ruminants, given the relatively high prevalence of this hazard in the small ruminant population, the epidemiological links to outbreaks in humans and the severity of the disease in humans. human incidence: based on eu data, high. in the eu, s. enterica subsp. enterica serovar enteritidis and s. typhimurium are the serovars most frequently associated with human illness, although the number of reported cases of s. enteritidis has more than halved since . human s. enteritidis cases are most commonly associated with the consumption of contaminated eggs and poultry meat, while s. typhimurium cases are mostly associated with the consumption of contaminated pig, poultry and bovine meat. human salmonellosis is the second-ranking foodborne disease reported in the eu and most european countries, exceeded only by campylobacteriosis (efsa, ; efsa and ecdc, b). a total of confirmed cases were reported from eu mss in through tessy, corresponding to a notification rate of . confirmed cases per population (table , which also includes data on the severity of human disease). accounting for under-reporting, it is estimated that there are six million cases of this illness annually in the eu- (efsa, ; havelaar et al., b). severity of disease: as the incidence is high, the severity does not need to be considered. evidence for meat from small ruminants as an important risk factor: no. the common reservoir of salmonella spp. is the intestinal tract of a wide range of domestic and wild animals, which results in a variety of foodstuffs, of both animal and plant origin, acting as sources of human infections. the organism may be transmitted through direct contact with infected animals, between humans, or from faecally contaminated environments. in animals, subclinical infections are common. the organism may easily spread between animals in a herd or flock without detection, and animals may become intermittent or persistent carriers. the variant s. enterica subsp.
diarizonae iiib .k: , , ( ), which might be referred to as "the sheep variant" owing to its adaptation to sheep, is endemic in sheep in several regions of the world, such as the united kingdom (hall and rowe, ) and norway (norwegian scientific committee for food safety, ) in europe, and canada (greenfield et al., ; pritchard, ) and the united states (weiss et al., ) in north america. however, the overall conclusion is that s. enterica subsp. diarizonae iiib .k: , , ( ) is very rarely demonstrated as a cause of human infections, including in those areas in which the endemic prevalence in sheep is high, such as the united kingdom and norway (norwegian scientific committee for food safety, ). another salmonella spp. variant well adapted to sheep, causing abortion and death of ewes, is s. brandenburg, which is endemic in the south island of new zealand (sabirovic, ), but its human health relevance seems to be limited. eu monitoring data for sheep and goats are presented in tables - , which contain data collected by mss from to . the prevalences reported in herds and in individual animals are . % and . %, respectively, for sheep, and . % and . % for goats. although salmonella spp. is commonly found in live sheep or goats at variable prevalence levels (bonke et al., ; duffy et al., ; duffy et al., ; hjartardottir et al., ; moriarty et al., ; zweifel et al., ), there is a more limited number of studies looking at the occurrence of salmonella spp. in sheep and goat carcasses. a prevalence of . % was reported in individual sheep carcasses in the eu monitoring (see table ). some outbreaks linked to meat from small ruminants can be found in the scientific literature (evans et al., ; hess et al., ; synnott et al., ). these involved unusual consumption patterns (e.g. raw lamb liver) or cross-contamination of raw food ingredients (e.g. yoghurt relish contaminated with carcass blood); it is therefore unclear how significant these events are when assessing the role of sheep or goat meat as a source of salmonella spp. infection. data from epidemiological or source attribution studies suggest that the role of meat from small ruminants as a vehicle for salmonella spp. is limited. the occurrence of antimicrobial resistance among zoonotic salmonella spp. is an increasing problem. antimicrobial-resistant salmonella spp. involved in human disease are, like salmonella spp. in general, mostly spread through foods, predominantly poultry meat, eggs, pork and beef (hald et al., ). as there are no indications that resistant strains behave differently from their sensitive counterparts in the food chain, there is no need to consider these strains separately in the context of meat inspection. fluoroquinolone and cephalosporin resistance are currently considered to be of most public health concern. meat, particularly poultry meat and pork, is recognised as an important source of human exposure to fluoroquinolone-resistant salmonella spp., and high levels of esbl-/ampc-producing salmonella spp. have also been reported in poultry in some eu mss (efsa and ecdc, a). such resistant strains may or may not be associated with a significant level of human infection, depending on the pathogenicity of the strains involved and the opportunity for them to contaminate the food chain (butaye et al., ; de jong et al., ; efsa panel on biological hazards, c; rodriguez et al., ).
the control of antimicrobial-resistant bacteria in food, including poultry meat, is further complicated by the fact that resistance mechanisms can be located on mobile genetic elements such as plasmids and thereby be transferred between different bacterial species, for instance between generally apathogenic e. coli and salmonella spp. based on the data (see table ) and the assessment presented above, the biohaz panel concluded that salmonella spp. is of low priority with regard to meat inspection of small ruminants. this ranking is not the result of current controls. human incidence: based on eu data on congenital toxoplasmosis, low. toxoplasmosis can be contracted by the oral ingestion of oocysts present in cat faeces and the environment, or of tissue cysts present in the meat of infected animals (tenter et al., ). in pregnant women, the parasite can cause congenital infections (abortion, stillbirth, mortality and hydrocephalus in newborns, or retinochoroidal lesions leading to chronic ocular disease), and it can cause complications (lymphadenopathy, retinitis or encephalitis) in immunocompromised individuals such as organ graft recipients and individuals with acquired immune deficiency syndrome (aids) or cancer (efsa, b). in immune-competent individuals, - % of cases of toxoplasma gondii infection are asymptomatic, and the majority of the remainder have only mild, self-limiting symptoms. thus, reports of acute symptomatic t. gondii infection (toxoplasmosis) do not provide a reliable basis for assessing overall disease incidence. given these limitations, reported human disease caused by toxoplasmosis is rare (table ). the prevalence of antibodies to t. gondii in the general population provides an alternative for estimating the number of cases and the disease burden (food standards agency, ). t. gondii seroprevalence is known to vary geographically and with age (montoya and liesenfeld, ). although antibodies are found in - % of adults in the united kingdom, seroprevalence is higher in central europe, and similar or lower in scandinavia ( - %). climate and consumption of raw meat, meat from animals farmed outdoors or frozen meat may be factors that contribute to these variations (kijlstra and jongert, ). seropositivity also varies within countries, being highest in people from rural or small-town backgrounds and lowest in those from urban or suburban areas (food standards agency, ). data showing the variation in seropositivity with age are available from a number of countries. for example, in the netherlands, it was found to range from % at years of age to % at years (efsa, b; hofhuis et al., ). there is evidence of a sharp decrease in seroprevalence over the last years in many populations. for example, in there was a reported seroprevalence of % in france, falling to % in (afssa, ). this decrease is in part attributable to a decrease in infection in childhood, probably associated with increased standards of living, and has also been linked to changes in meat husbandry and consumption. severity of disease: based on eu data for congenital toxoplasmosis, high. owing to the lifelong impact of symptoms related to toxoplasmosis, the burden of disease is high. mead et al. ( ) showed that t. gondii ranked fourth in hospitalisations and third for deaths when compared with other foodborne pathogens. more recent research ranked t.
gondii among the highest of all foodborne pathogens in population burden estimates (daly or qaly), from both an individual and a population perspective (havelaar et al., a; hoffmann et al., ). evidence for meat from small ruminants as an important risk factor: yes. the relative role of t. gondii oocysts in the environment versus tissue cysts in meat and meat products as a source of infection for humans could not be determined by laboratory tests until recently. hill et al. ( ) have developed a test to identify a sporozoite-specific antigen, which will be a useful tool in providing information on the relative importance of oocysts as the agent of infection. until this recent development, source attribution information came from epidemiological studies. in europe, three large case-control studies have pinpointed uncooked meat as the most important risk factor for pregnant women (baril et al., ; cook et al., ; kapperud et al., ). with regard to the prevalence in the animal population, despite t. gondii infection being a major cause of abortion and stillbirth in sheep and goats in the eu, most infections exist subclinically in flocks/herds (dumetre et al., ). in response to natural infection, seropositive sheep have been shown to harbour infectious parasites as tissue cysts (dubey et al., ; kijlstra and jongert, ; opsteegh et al., ). antibodies to t. gondii and tissue cysts persist in infected sheep (dubey, ). this implies that serological tests can be used to estimate the number of animals carrying t. gondii tissue cysts in the meat and thereby indicate the risk for public health (opsteegh et al., b). seroprevalence increases with increasing age (dubey, ; halos et al., ), and sheep and goats are identified as the main source of infected meat in southern european countries (berger et al., ; dumetre et al., ). seroprevalence of t. gondii in sheep can range from % to % in certain european countries (efsa, b). limited data available in slaughtered sheep report seropositive rates of - % in europe (dumetre et al., ; tenter et al., ). seroprevalence in farmed goats in europe ranges from % to % (efsa, b). no data have been published about seroprevalence in slaughtered goats in europe, but findings in goats in non-european countries range from % to % (efsa, b; tenter et al., ). data reported by eu member states under the zoonoses directive ( / /ec), showing a relatively high seroprevalence for this hazard in flocks/herds and individual animals, can be found in tables - . notwithstanding this, significant uncertainty remains regarding this hazard. the prevalence of toxoplasmosis in humans and its importance in terms of overall disease burden still require research. despite the development of recent laboratory procedures, the proportion of human toxoplasmosis attributable to the consumption of sheep meat is unknown. furthermore, the relationship between seropositivity in sheep and the number of viable tissue cysts in edible tissue has yet to be established (food standards agency, ). these uncertainties hinder the development of control procedures for this hazard. with regard to the role of meat from small ruminants as a risk factor for human toxoplasmosis, a prospective case-control study designed to identify preventable risk factors for t. gondii infection in pregnancy, conducted in norway (kapperud et al., ), found eating raw or undercooked mutton to be independently associated with an increased risk of maternal infection (or = . , p = . ). in the case-control study carried out by baril et al.
( ), an odds ratio of . was estimated for the consumption of undercooked or raw mutton/lamb. the same odds ratio was obtained for the consumption of undercooked or raw mutton/lamb in the study carried out by cook et al. ( ). in addition, raw or undercooked lamb meat is regarded as a delicacy in certain countries, such as france, and is therefore considered an important source of infection there (afssa, ). this has recently been corroborated by a report of an outbreak of toxoplasmosis linked to the consumption of undercooked lamb (ginsbourger et al., ). given the high seroprevalence in sheep and goats and the correlation between human infection and animal incidence, t. gondii in sheep and goat meat was considered by the panel to be of high priority for meat inspection of small ruminants within the eu (see table ). based on the priority ranking, the hazards were classified as follows: t. gondii and pathogenic vtec were classified as high priority for sheep/goat meat inspection. the remaining identified hazards, b. anthracis, campylobacter spp. (thermophilic) and salmonella spp., were classified as low priority, based on the available data. as new hazards might emerge and/or hazards that presently are not a priority might become more relevant over time or in some regions, both the hazard identification and the risk ranking should be revisited regularly to reflect this dynamic epidemiological situation. particular attention should be given to potential emerging hazards of public health importance that arise only in small ruminants. to provide a better evidence base for future risk ranking of hazards, initiatives should be instigated to: improve and harmonise data collection on the incidence and severity of human diseases caused by relevant hazards; systematically collect data for source attribution; and collect data to identify and risk rank emerging hazards that could be transmitted through handling, preparation and consumption of sheep and goat meat. protection of public health is the top priority objective of meat inspection. the origin of western european meat inspection goes back to the end of the th century, when it became obvious that meat could play a role in the transmission of disease, particularly tuberculosis, and that the trade in animals, meat and meat products needed some sort of safety and quality assurance (johnson, ; theves, ; von ostertag, ). the first meat inspection act was drawn up in by professor ostertag at the university of berlin. there is no doubt that the meat inspection procedures were highly risk based at that time. ever since, an ante-mortem and post-mortem inspection has been carried out at individual animal level in cattle, and it has been extended to other species. the ante-mortem inspection is a simple clinical examination aimed at identifying sick or abnormal animals, as well as assessing the level of cleanliness of the animals entering the slaughtering process. the post-mortem inspection is a pathological-anatomical examination aiming at detecting and eliminating macroscopic abnormalities that could affect the fitness of meat for human consumption. it is based on visual inspection, palpation, incision and, when required, laboratory examination. post-mortem inspection is laborious and expensive. the previous situation of slaughtering a few animals originating from a farm has evolved into large numbers of uniform, relatively young and healthy animals presented for slaughter, which have a common genetic background and prior history.
at the same time, it is common to find mixed batches of animals arriving at the slaughterhouse, having been assembled at markets, with several farms each contributing a few animals. transport can also increase the level and/or duration of shedding of pathogens, as well as the surface contamination of animals with pathogens via animal-to-animal or environment-to-animal contacts in the vehicle, at the market or in the lairage. therefore, it can be assumed that the food/meat safety risks increase as the number/frequency of movements of animals between farm and slaughter increases (scientific committee on veterinary measures relating to public health (scvmph), ). the current state of meat inspection in the eu and six selected exporting countries outside the eu has been reviewed and summarised recently in an external report. for further, more detailed information on the current eu meat inspection system, the reader is referred to that report. still, irrespective of the meat inspection procedures in place, it is well recognised that small ruminants presented at slaughter can be carriers of zoonotic microorganisms (see section . . above), which cannot be detected during ante- and post-mortem inspections. in the following section, an assessment of the strengths and weaknesses of the current practices for protection of public health will be given. the food chain information (fci) principle includes a flow of information from farm to slaughterhouse in order to help classify the batch of animals according to its expected food safety risk, so that slaughter procedures and/or decisions on fitness for consumption can be adapted to the health status and food safety risk presented by the batch of sheep or goats. regulation (ec) no / also requires feedback of information from the slaughterhouse to farmers, and describes the information that has to be provided (appendix to annex i of the regulation). fci is recorded at the flock/batch level and its minimum content is described in regulation (ec) no / . fci related to primary production of flocks or herds is based on a farmer's declaration. most mss have made available a standardised fci declaration form to farmers (e.g. ireland, the united kingdom, italy, france). fci must be checked for completeness and content as part of ante-mortem inspection. in theory, fci may be used to adapt ante- and/or post-mortem inspections. fci serves as a channel of communication between the primary producer and the inspectors at the slaughterhouse. this, theoretically, facilitates the process of evaluating the health of incoming batches and prevents sick or abnormal animals entering the slaughterhouse, by providing early data on probable disease conditions that may be present in the flock or herd. this is based on information related to the on-farm health status of the animals (occurrence of disease, veterinary treatments, specific laboratory testing). the main benefit of the food chain information is that it may create an awareness among primary producers of the need for high standards of animal health and welfare, proper identification of animals and appropriate use of medicines. by contributing to the overall health of the animals sent to slaughter, such a system should have a positive impact on public health by ensuring that the animals are less likely to carry hazards of public health importance. in practice, ante- or post-mortem inspections of sheep and goats are rarely adapted to take account of fci.
the main reason that current fci is insufficiently utilised is the lack of adequate and harmonised indicators that could help in classifying the animals according to the risk to public health they may pose. the food safety relevance of fci is often limited because the data that it contains are very general and do not address specific hazards of public health importance. also, farmers might not be in a position to properly assess the presence of relevant hazards. feedback of the results of the meat inspection process to farmers is not implemented in all mss to the full extent foreseen in the legislation. the flow of information back to the farm is not straightforward in the absence of a fast and reliable animal movement tracing system, e.g. through the use of electronic individual animal identification linked to a database containing information on the movements of animals (e.g. change of farm, last farm). for example, in ireland, between % and % of small ruminants come to the slaughterhouse from assembly centres or markets (efsa, ). in this case, it is difficult to consider a batch of small ruminants as an epidemiological unit. good feedback to farmers also requires harmonisation of the reasons for condemnation and the systematic use of consistent terminology for each of them. the ante-mortem clinical examination is carried out to evaluate the health and welfare of the animals, and to prevent sick or abnormal animals entering the slaughterhouse. this is a visual-only inspection, consisting of the identification of clinical signs of disease and an assessment of the cleanliness of the incoming animals. it is performed at the individual level in sheep and goats. the public health-related strengths of ante-mortem inspection include inspection of individual animals for signs of disease and animal identification. ante-mortem inspection also helps in identifying dirty animals, as required by current legislation. regulation (ec) no / , annex i requires primary producers to ensure the cleanliness of animals going to slaughter. regulation (ec) no / , annex ii, section ii states that food business operators operating slaughterhouses must have haccp-based intake procedures to guarantee that each animal or, where appropriate, each lot of animals accepted on to the slaughterhouse premises is clean. regulation (ec) no / , annex i, section ii, chapter iii states that animals with hides, skins or fleeces posing an unacceptable risk of contamination to meat during slaughter cannot be slaughtered for human consumption unless they are cleaned beforehand. adjustments can be made to the slaughter process depending on how dirty the batch of sheep or goats is. current pre-slaughter control procedures include: rejection of dirty lots, washing of animals, fleece/hide trimming or clipping (at the farm or at the slaughterhouse, either pre- or post-slaughter), and slaughter of dirty animals at the end of the day (byrne et al., ). dirty animals that are presented for slaughter are rejected at ante-mortem inspection until their fleece/hide condition improves. suppliers are sometimes penalised financially through reduced prices and the costs imposed by the remedial actions required to improve fleece/hide condition. certain countries have adopted such measures as part of a "clean livestock policy". these policies were adopted to meet the requirements of the hygiene package and have proved to be effective in reducing the risk posed by dirty sheep (see section . . below).
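the gap described above is, in essence, an information-modelling problem: fci forms collect general declarations rather than harmonised, hazard-specific indicators. purely as an illustration of what a more structured record could look like, the python sketch below defines a hypothetical fci record and a batch classification rule; all field names, indicators and thresholds are invented for this example and are not taken from regulation (ec) no / or from any ms declaration form.

```python
# Illustrative only: a hypothetical, structured FCI record carrying
# hazard-specific indicators, and a rule mapping it to a scheduling
# category. Field names and cut-offs are invented for this sketch.
from dataclasses import dataclass
from enum import Enum


class BatchRisk(Enum):
    LOW = "low"    # routine slaughter scheduling
    HIGH = "high"  # e.g. logistic slaughter at the end of the day


@dataclass
class FciRecord:
    farm_id: str
    vtec_monitoring_positive: bool    # most recent on-farm monitoring result
    t_gondii_seroprevalence: float    # fraction seropositive in last survey
    fleece_cleanliness_score: int     # 1 (clean) to 5 (heavily soiled)
    recent_clinical_disease: bool
    veterinary_treatments_open: bool  # treatments within withdrawal period


def classify_batch(fci: FciRecord) -> BatchRisk:
    """Map hazard-specific FCI indicators to a slaughter-scheduling category."""
    if (fci.vtec_monitoring_positive
            or fci.t_gondii_seroprevalence > 0.25  # hypothetical cut-off
            or fci.fleece_cleanliness_score >= 4
            or fci.recent_clinical_disease
            or fci.veterinary_treatments_open):
        return BatchRisk.HIGH
    return BatchRisk.LOW
```

any real scheme would, of course, require the indicators and cut-offs to be defined, harmonised and validated at eu level, which is precisely the gap identified above.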
from a public health perspective, ante-mortem examination is of no value in the detection of toxoplasmosis in small ruminants, as animals infected with this previously identified hazard do not show clinical signs. despite the haccp-based intake procedures guaranteeing the health, welfare and cleanliness of animals going for slaughter, it is difficult to identify animals infected with pathogenic vtec and other enteric pathogens. supplying clean animals reduces, but does not prevent, the possibility of introducing this invisible hazard, as infected animals are asymptomatic transient shedders (duffy, ). post-mortem inspection of carcasses is designed to detect, and withdraw from the food chain, any carcass or part thereof that has grossly identifiable abnormalities that could affect the safety or wholesomeness of the meat. the meat inspector examines the external and internal surfaces of the carcass and the internal organs after evisceration for disease conditions and contamination that could make all or part of the carcass unfit for human consumption. generally, inspection procedures consist mainly of visual examination of the carcass and offal. these procedures are described in annex i, section iv, chapter ii of regulation (ec) no / . palpation is compulsory for the liver, the lungs and their lymph nodes. in addition, palpation is mandatory for the umbilical region and joints in young animals. incision is currently required only for the gastric surface of the liver. this procedure can be reduced to a visual-only inspection for sheep less than a year old or goats less than six months of age, if certain conditions are met, as stated in regulation (ec) no / , amending regulation (ec) no / , in the spirit of a risk-based inspection. it is unclear to what extent this derogation is currently used as intended. a more thorough examination, involving palpation and incision of other organs, is performed if abnormalities are detected during visual inspection. table in annex summarises these requirements for post-mortem inspection. ultimately, the production of safe food is the responsibility of the food business operator (fbo), as defined by regulation (ec) no / . post-mortem inspection enables, to a certain extent, detection of lesions related to animal health and welfare, which are not dealt with in this part of the document (see appendix c). for food safety concerns, post-mortem examination can detect visibly contaminated carcasses and offal, which might present an increased food safety risk and indicate a hygienically deficient slaughter process. post-mortem inspection allows for an assessment of the general health status of the animal, which could influence the likelihood that important meat-borne hazards are present in the carcass. classic zoonotic diseases, such as tuberculosis, which can be detected by post-mortem examination, are now controlled in many areas where modern systems of animal husbandry, disease control and animal health care have been introduced. hence, the ability of current post-mortem inspection to detect lesions caused by mycobacteria is only relevant in regions where they are present. post-mortem inspection can also detect other non meat-borne hazards of public health significance that can be present in carcasses or offal from small ruminants. examples of these hazards are echinococcus granulosus and trematode parasites such as fasciola hepatica.
acquisition of these parasites by humans occurs when subjects inadvertently swallow eggs or cysts attached to tainted vegetation, or when they drink contaminated water containing free-floating eggs (e. granulosus) or cysts (f. hepatica) (fried and abruzzi, ). from the public health standpoint, only e. granulosus is still relatively important in some mss (efsa and ecdc, b), while trematode parasites are less commonly reported in humans in the eu. meat inspection contributes to the monitoring of these parasites, as they are routinely detected during post-mortem examination of sheep and goats. this also allows for appropriate disposal of infected organs, thus breaking the life cycle of the parasites. the extent to which meat inspection contributes to reducing the risk to human health posed by these parasites, compared with control measures elsewhere (e.g. anti-parasitic treatments of the final hosts), is not known, so it is difficult to assess the relative importance or effectiveness of this activity in protecting public health. nevertheless, the importance of meat inspection as a monitoring tool has been stressed previously (efsa, ). the slaughter of sheep involves greater challenges than the slaughter of cattle and pigs, since the animal is relatively small and has a wool fleece, increasing the risk of surface contamination at dehiding (buncic, ). as mentioned in section . of this appendix, many challenges are posed by the processing procedure at the slaughterhouse, which has a direct effect on the final microbiological status of the carcass (e.g. line speed, operational hygiene, equipment and training) (hansson, ; palmer, ). in this context, the mandatory bacteriological analysis of carcasses to evaluate slaughter process hygiene is important. the maximum acceptable microbiological values are set in the process hygiene criteria (phc) for the indicators mentioned in regulation (ec) no / . some risks determined at post-mortem examination are under the direct influence of the processor and can be ameliorated by corrective action procedures. in the case of the identified hazard, pathogenic vtec, post-mortem corrective actions may include clipping after stunning and bleeding, adjustments to operational hygiene practices, slowing the slaughter line down and/or adding extra personnel at certain carcass dressing stations, with feedback to producers (see section . . below). the competent authority also verifies the fbo's responsibility to produce safe food, as mandated by regulation (ec) no / , through audit and inspection of the slaughterhouse's food safety management system. in terms of the slaughter process, phc are end-product criteria. compliance with these criteria, in regulation (ec) no / , is one aspect of system compliance verification carried out by the competent authority. more details about phc can be found in section . . . . potential threats to public health associated with slaughtered sheep or goats, including agents such as pathogenic vtec and t. gondii, are carried by animals without signs or lesions. current meat inspection is not designed to detect or eliminate these agents. sometimes, cysts of t. gondii can be macroscopically visible, but it is impossible to distinguish them from sarcocystis cysts, except those of s. ovifelis. visible meat quality-related abnormalities are detectable at post-mortem inspection, but these are not important for human health (see table ).
sometimes, septicaemia and conditions associated with foci of infection in tissue, such as arthritis, bronchopneumonia, mastitis, pleuritis or abscesses, can be detected at post-mortem inspection. some of these are caused by pathogens that might have zoonotic implications (e.g. erysipelothrix rhusiopathiae, s. aureus), but, as explained in section of this appendix, the risk to public health arising from these hazards is not considered to be important and is mostly related to occupational exposure or the way the meat is handled after it leaves the slaughterhouse. other conditions that result in condemnation of offal or carcasses are parasitic lesions. these parasites (c. tenuicollis, sarcocystis, fasciola, dicrocoelium, etc.) are not transmissible via meat consumption. in cases where these abnormalities are observed, the meat must be removed as unfit for human consumption on aesthetic or meat quality grounds. the potential for cross-contamination of carcasses exists whenever palpation and/or incision methods are used in the inspection process. palpation of the liver, the lungs, the umbilical region and the joints, and the incision of the gastric surface of the liver during the post-mortem examination of sheep and goats could contribute to the spread of bacterial hazards of public health importance in small ruminants through cross-contamination. the importance of cross-contamination is not clear in small ruminants, although it has been considered important in other species (walker et al., ). however, it should be borne in mind that incision is compulsory only for the liver, whereas in cattle and pigs incision of muscle is also required, so the level of contamination is likely to be lower in small ruminants than in these species. current legislation foresees more detailed palpation and incision if abnormalities are detected during visual inspection. this could also facilitate cross-contamination of normal carcasses with microbiological hazards of public health importance. judgement of the fitness of meat for human consumption in current post-mortem inspection is based on the identification of "conditions making meat unfit for human consumption" but does not make a clear foodborne risk distinction between different subcategories, i.e. between non-zoonotic conditions making meat unfit (inedible) on aesthetic/meat quality grounds (e.g. repulsive/unpleasant appearance or odour), non-zoonotic conditions making meat unfit in order to prevent the spread of animal diseases (e.g. foot and mouth disease), zoonotic conditions making meat unfit owing to transmissibility to humans via the foodborne route (e.g. toxoplasmosis) and zoonotic conditions making meat unfit owing to transmissibility via routes other than the meat-borne one (e.g. brucellosis); a schematic restatement of these four subcategories is sketched below. the legislation on official controls on fresh meat from (regulation (ec) / , annex i) has a more horizontal approach than the former one (council directive no / , amended by council directives no / and no / ), and also has, in theory, a risk-based approach.
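to make the distinction just described concrete, the sketch below simply restates the four subcategories as a small python enumeration; the categories and the examples in the comments come directly from the text above, while the naming and the code itself are ours and purely illustrative.

```python
# A schematic restatement of the four subcategories of "conditions
# making meat unfit for human consumption" discussed above. Categories
# and examples follow the text; the names are invented for this sketch.
from enum import Enum, auto


class UnfitnessSubcategory(Enum):
    NON_ZOONOTIC_AESTHETIC = auto()       # e.g. repulsive appearance or odour
    NON_ZOONOTIC_ANIMAL_DISEASE = auto()  # e.g. foot and mouth disease
    ZOONOTIC_MEAT_BORNE = auto()          # e.g. toxoplasmosis
    ZOONOTIC_NON_MEAT_BORNE = auto()      # e.g. brucellosis


# Only the meat-borne zoonotic subcategory is a direct foodborne risk;
# as noted above, current judgement criteria do not make this explicit.
FOODBORNE_RELEVANT = {UnfitnessSubcategory.ZOONOTIC_MEAT_BORNE}
```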
however, experience to date is that alternative control regimes, such as visual-only control of young animals (sheep less than a year old and goats less than six months old), are not implemented because the gains are limited, for several reasons: the threshold in terms of implementation of quality assurance systems and extra procedures at herd level is too high; there are logistical challenges connected to the post-mortem meat inspection procedures, as some flocks/herds are certified for visual control while others are not; and alternative control methods are not accepted by some importing countries outside the eu. ante-mortem and post-mortem inspections of sheep and goats enable the detection of observable abnormalities. in that context, they are an important activity for monitoring animal health and welfare. they provide a general assessment of animal/herd health, which, if compromised, may lead to a greater public health risk. visual inspection of live animals and carcasses can also detect animals heavily contaminated with faeces. such animals increase the risk of cross-contamination during slaughter and may consequently constitute a food safety risk if carrying hazards of public health importance. if such animals or carcasses are dealt with adequately, this risk can be reduced. visual detection of faecal contamination on carcasses can also be an indicator of slaughter hygiene, but other approaches to verify slaughter hygiene should be considered. post-mortem inspection can also detect non meat-borne hazards of public health significance that can be present in carcasses or offal from small ruminants. ante-mortem and post-mortem inspection also have the potential to detect new diseases if these have clinical signs, which may be of direct public health significance. currently, the use of food chain information for food safety purposes is limited for small ruminants, mainly because the data that it contains are very general and do not address specific hazards of public health importance. however, fci could serve as a valuable tool for risk management decisions and could be used for risk categorisation of farms or batches of animals. to achieve this, the system needs further development to include additional information important for food safety, including the definition of appropriate and standardised indicators for the main public health hazards identified above (section of this appendix). ante- and post-mortem inspection is not able to detect any of the public health hazards identified as the main concerns for food safety. it would therefore be expected that more efficient procedures might be implemented to monitor the occurrence of non-visible hazards. in addition, given that the current post-mortem procedures involve palpation and incision of some organs, there is potential for cross-contamination of carcasses. as identified by the priority ranking earlier in this opinion, the principal biological hazards associated with meat from small ruminants are t. gondii and pathogenic vtec. the ranking presented in section of this appendix classified all other hazards in the low-risk category. this ranking is provisional because of the limited information available for some of the hazards. neither of the principal hazards identified can be detected by traditional meat inspection, which is focused on the identification of visible abnormalities and issues relating to the health and welfare of the animals on the farm, in transit and at the slaughterhouse before slaughter.
detection and quantification of those hazards in/on sheep or goats and their carcasses is possible only through laboratory testing. the occurrence and levels of t. gondii and pathogenic vtec on carcasses are highly variable, depending on various factors, including particularly: (i) their occurrence in the sheep and goat population before slaughter and the application and effectiveness of related pre-slaughter control strategies; (ii) the extent of direct and/or indirect faecal cross-contamination during slaughter line operation (this does not apply to t. gondii); and (iii) the application and effectiveness of possible interventions to eliminate/reduce them on carcasses (e.g. decontamination). therefore, as far as the presence of these pathogens in/on carcass meat is concerned, the risk reduction strategies, and related controls, are focused on these three aspects. changes are therefore necessary to identify and control microbiological hazards, and this can be most readily achieved by improved use of fci and interventions based on risk. control measures for pathogenic vtec at the slaughterhouse are also likely to be effective against other enteric pathogens, as they would all be controlled by addressing faecal contamination of carcasses. a comprehensive meat safety assurance system for meat from small ruminants, combining a range of preventive measures and controls applied both on the farm and at the slaughterhouse in a longitudinally integrated way, is the most effective approach to control the main hazards in the context of meat inspection. the main responsibility for such a system should be allocated to fbos, with compliance verified by the competent authority. the setting up of such a comprehensive meat safety assurance system at eu level is dependent on the availability of reliable information on the biological risks associated with the consumption of meat from these species. as indicated in the priority ranking section of this opinion (section of this appendix), information on the biological risks associated with the consumption of meat from sheep or goats is sometimes scant and unreliable. consequently, in order to facilitate decision making, harmonised surveys are required to establish values for the prevalence of the main hazards, t. gondii and pathogenic vtec, at flock/herd, live animal and carcass level in individual mss. epidemiological and risk assessment studies are also required to determine the specific risk to public health associated with the consumption of meat from small ruminants. in the event that these studies confirm a high risk to public health through the consumption of meat from sheep or goats, consideration should be given to the setting of clear and measurable eu targets at the carcass level. eu targets to be reached at the national level are already in place for salmonella spp. in breeding flocks of gallus gallus and turkeys and in production flocks of broilers, turkeys and laying hens. similar targets in primary production could also be considered for the main hazards of other species, including small ruminants. the use of specific hazard-based targets (i.e. pathogenic vtec or t. gondii related) for chilled carcasses provides: a measurable and transparent focus for the abattoir meat safety assurance system; an indication of what has to be achieved at earlier steps in the food production chain; information for the purpose of consumer exposure assessment for each hazard; and a
measurable aim for the meat industry in the context of global pathogen reduction programmes. for all these reasons, the chilled carcass targets have to be hazard specific. this, however, may not always be practical (e.g. in very low hazard prevalence situations). therefore, the proper functioning of meat safety quality assurance systems may not rely exclusively on hazard-based testing of the final carcass but also on the general hygiene of the slaughter process. this issue is discussed further in the following sections. in addition, the efsa report (efsa, ) provides information on harmonised epidemiological indicators (heis) and related methodologies for the main hazards; these can be used in studies to establish the prevalence of the main pathogens and to set targets for carcasses, performance criteria for slaughterhouses and targets for incoming small ruminant animals. therefore, this opinion should be used in combination with that report. at farm level, the primary goal is reduction of risk for the main hazards, which can be achieved through preventive measures such as flock/herd health programmes, including biosecurity and good farming practices (gfps) that specifically address the hazards identified in section of this appendix. husbandry practices vary considerably for small ruminants, particularly in the intensity of the rearing system. so, although it is not possible to detect any of the main foodborne zoonotic infections visually at the farm, there are known risk factors that are likely to increase the risk of infection with the main hazards. an important element of an integrated meat safety assurance system is considered to be risk categorisation of flocks or herds, based on the use of farm descriptors and data on clinical disease and use of antimicrobials, in addition to data provided by ongoing monitoring of high-risk hazards, all of which constitute the fci. such data could be provided through farm audits using heis to assess the risk and preventive factors for the flocks or herds related to each of the prioritised microbiological hazards (see the efsa report (efsa, )). ongoing monitoring could be put in place for particular pathogens at eu level if, following the completion of the prevalence studies described earlier, these pathogens are identified as presenting a high risk. an assessment of the historical data over a time period could also be used for adjusting the sampling frequency for the main hazards in order to focus control efforts where the risk is highest. a structured approach to gathering more detailed farm information should become an additional farm-related element of the fci that, in combination with the monitoring results for the main hazards, should form the basis for the risk categorisation of the farms. the frequency of monitoring could be adapted in a cost-effective manner (a minimal sketch of such an adaptive scheme is given below); e.g. there would be no need to sample every batch of animals to be slaughtered if the result is very likely to be "high risk" or "very low risk". thus, animals from higher-risk farms could be systematically directed to, for example, logistic slaughter, or to specific treatments such as decontamination at the slaughterhouse, until these high-risk farms demonstrated a decreased risk following the implementation of adequate on-farm measures. this system could act as an incentive for the primary producer to improve farm standards by means of the reduced monitoring costs associated with low-risk status.
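as an illustration of the kind of cost-effective adjustment of monitoring frequency described above, the following python fragment reduces sampling for farms with a consistently negative history and tightens it where recent results are poor; the window length, frequencies and thresholds are invented for the example and carry no regulatory meaning.

```python
# Illustrative adaptive sampling: decide how many batches from a farm
# may pass between tests, given the farm's recent monitoring history.
# All numbers (window size, intervals, thresholds) are hypothetical.
from collections import deque


def batches_between_samples(history: deque, window: int = 10) -> int:
    """history holds booleans (True = positive monitoring result),
    most recent last; returns the sampling interval in batches."""
    recent = list(history)[-window:]
    if not recent:
        return 1              # no history yet: sample every batch
    positive_rate = sum(recent) / len(recent)
    if positive_rate == 0.0:
        return 5              # consistently negative: sample 1 batch in 5
    if positive_rate < 0.2:
        return 2              # mostly negative: sample every other batch
    return 1                  # high-risk farm: sample every batch


# example: one positive result in the last ten batches -> interval of 2
history = deque([False] * 9 + [True])
print(batches_between_samples(history))
```

the design choice mirrors the incentive described in the text: a farm that maintains low-risk status earns a lower monitoring frequency, and hence lower monitoring costs.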
at slaughterhouse level, the primary goal is risk reduction for the main hazards, which can be achieved through integrated programmes based on gmp, ghp and haccp, including the use of phc. such programmes can include: logistic slaughter based on the risk categorisation of the slaughtered batches (this could be slaughter of higher-risk animals at the end of the day); hygienic practices and technology-based measures aimed at avoiding direct and indirect cross-contamination with the main hazards; and interventions such as the scheduling of higher-risk animals for carcass decontamination or for risk-reducing processes such as heat treatment to reduce pathogenic microorganisms or freezing-based treatments to eliminate parasites such as t. gondii. enteric pathogens are carried in the gastrointestinal tract and/or on the fleece of sheep and goats presented for slaughter, and carcass meat becomes contaminated as a result of direct or indirect cross-contamination that is highly dependent on slaughterhouse technology and the skills of the operators. technical aspects of individual steps of the slaughter process for small ruminants may vary considerably. the order of the processing steps at the slaughterhouse is generally as follows: transport/lairaging, stunning, bleeding, skinning, evisceration and chilling. each of these steps contributes differently to the final microbial load of the carcass. cross-contamination between animals can occur during transport and lairaging and during the slaughter process. contamination occurs particularly during skinning and evisceration. the slaughter of sheep involves greater challenges than the slaughter of cattle and pigs, since the animal is relatively small and has wool. "during sheep de-pelting, it is difficult to achieve the low contamination rates capable of being achieved during cattle de-hiding, as the animal is smaller, the fleece is longer and there is a much greater chance of fleece inrolling and contacting the carcass. therefore, overall, de-skinning is a 'dirtier' procedure in small ruminants than in larger ones" (buncic, ). chilling can help to control the numbers of pathogenic and spoilage microorganisms on carcasses. decontamination treatments for carcasses might be used to reduce the levels of enteric pathogens and can be divided into physical and chemical treatments. physical interventions include water-based treatments, irradiation, ultrasound or freezing. hot water, steam and irradiation effectively reduce the bacterial load. chemical interventions such as treatments with acetic and lactic acid reduce the bacterial load, as observed in poultry (loretz et al., ). some combinations of treatments further enhance the reductions (loretz et al., ). freezing and gamma irradiation can also be effective in eliminating t. gondii in carcasses. however, some of these methods are limited by their practicability, regulatory requirements or acceptability to consumers (acmsf, ). thus, the best way to achieve reductions in carcass contamination is likely to come either from physical decontamination treatments or from technological developments in the process that are designed to improve hygiene, as long as they are acceptable to the industry and the consumer. each slaughterhouse can be viewed as unique, owing to differences in species slaughtered, logistics, processing practices, plant layout, equipment design and performance, standardised and documented procedures, personnel motivation and management, and other factors.
these variations, individually and in combination, lead to between-slaughterhouse differences in risk-reduction capacity and, consequently, in the microbiological status of the final carcass. hansson ( ) indicated that there were significantly higher numbers of aerobic bacteria on ruminant carcasses slaughtered at low-capacity slaughterhouses than at high-capacity slaughterhouses. this difference in carcass microbiological status can be accounted for by better separation of low- and high-risk areas, less variation in evisceration techniques, uniformity of the animals slaughtered, increased specialisation of labour and equipment, and improved measures taken to prevent contamination through effective operational hygiene practices in high-volume slaughter establishments (hogue et al., ; rahkio and korkeala, ). consequently, a risk categorisation of slaughterhouses is also possible, based on the assessment of individual hygiene process performance. for that, a standardised methodology and criteria for the assessment of process hygiene are a prerequisite. with respect to process hygiene, differentiation of abattoirs in current eu regulation is based on the use of process hygiene criteria providing two categories: "acceptably" and "unacceptably" performing abattoirs. however, this differentiation is based solely on carcass testing, and so does not differentiate the abattoirs in terms of the processes but only the end products. more in-depth differentiation, even within each of the two global categories of abattoirs, would be possible if improved process hygiene assessment methodology and indicators were used. the main guiding principle (koutsoumanis and sofos, ) in abattoir process hygiene differentiation is that abattoir phc need to address the initial level of a hazard and the reduction of that hazard during the production process. in the process of creating phc for abattoirs, the possibilities that need to be considered are whether they should be linked to individual stages of the process (e.g. reduction of the occurrence/level of indicator organisms or hazards at one or more selected steps along the slaughter line) or only related to the starting and end points of the process (e.g. reduction of the occurrence/level in/on the final carcass meat compared with that in/on incoming animals). official controls can likewise be made more risk based, by establishing the frequency of the official controls on the basis of pre-defined and objective criteria, and by carrying out the official controls using homogeneous criteria for plants with a comparable risk profile; these criteria take six parameters into account. food safety management systems can combine official control and supervision based on compulsory requirements prescribed by law (haccp, traceability, fci, etc.) and private quality assurance schemes. besides those aspects included in the legislation, abattoirs can voluntarily implement their own quality requirements in the form of certification schemes. certification of production processes is based on auditing and approval by accredited third-party organisations against an accredited standard. these schemes include official requirements but also pay attention to additional, more stringent, quality and safety aspects of the processes and products. at the slaughterhouse, standards are implemented for animal welfare and hygiene, slaughtering, dressing and evisceration, hygiene control, carcass quality and grading, storage conditions, carcass cutting and processing, etc.
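before turning to the voluntary certification schemes discussed next, the guiding principle just described, namely that phc should address both the initial level of a hazard and its reduction during the process, can be illustrated with a small python sketch; the indicator, the 2-log reduction threshold and the example counts are all invented for this illustration and do not reflect any values set in eu legislation.

```python
# Illustrative process-hygiene indicator: the mean log10 reduction in
# an indicator organism (e.g. aerobic colony count) achieved between
# incoming animals and final chilled carcasses. The 2-log threshold
# used to label performance is invented for this sketch.
import math


def mean_log10(counts_cfu: list[float]) -> float:
    """Mean log10 of counts (cfu per cm^2)."""
    return sum(math.log10(c) for c in counts_cfu) / len(counts_cfu)


def process_hygiene_category(incoming_cfu: list[float],
                             carcass_cfu: list[float],
                             min_log_reduction: float = 2.0) -> str:
    reduction = mean_log10(incoming_cfu) - mean_log10(carcass_cfu)
    return "acceptable" if reduction >= min_log_reduction else "unacceptable"


# example: fleece swabs around 1e6 cfu/cm^2, chilled carcasses around 1e3
print(process_hygiene_category([9e5, 1.2e6, 8e5], [1e3, 2e3, 9e2]))
```

unlike the current end-product-only criteria, an indicator of this kind would differentiate abattoirs by what the process itself achieves, not merely by the status of the final carcass.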
the adherence to certification schemes reassures stakeholders (suppliers, clients), government and consumers of the quality and safety of the products, with a view to meeting market demands and consumer satisfaction. retailers and manufacturers are increasingly demanding that their suppliers hold an approved certification. in this sense, slaughterhouses are becoming increasingly important throughout supply chains in integrated food safety management systems. some examples of quality assurance schemes are the international standards organisation (iso) , food safety system certification (fssc) , international food standard (ifs), british retail consortium (brc) and globalgap. in summary, classification of abattoirs according to their capability to prevent or reduce faecal contamination of carcasses can be based on two elements: (i) the process hygiene as measured by the level of indicator organisms on the carcasses (i.e. process hygiene criteria); and (ii) the use of operational procedures and equipment that reduce faecal contamination (as described in section . . . above), as well as industry-led quality systems. information about the risk categorisation of slaughterhouses could then be considered together with the fci when assessing the risk arising from incoming animals. herbivorous animals most likely contract t. gondii infection via ingestion of pasture, hay, forage, feed or surface water contaminated with oocysts shed by infected cats (skjerve et al., ; tenter et al., ). oocysts are very resistant and can survive a range of temperatures in the environment. a continuous input of sporulated oocysts, originating from young infected cats, must be present to sustain the oocyst reservoir in the environment (kijlstra and jongert, ). the risk of environmental oocyst contamination can be addressed by using sterilised feed and bedding, and not allowing sheep and goats outdoor access; however, such husbandry practices are not economically viable in most eu commercial sheep and goat enterprises (kijlstra and jongert, ). removing cats from the farm surroundings, or vaccinating cats, could theoretically lead to a reduction of the oocyst load in the neighbourhood of the farm. in reality, most of these measures would not be practical to implement in most situations at the moment (dubey, ; kijlstra and jongert, ). moreover, the live s strain vaccine used in sheep may revert to a pathogenic strain and is, therefore, not suitable for human use (hiszczynska-sawicka et al., ). an oral vaccine composed of live bradyzoites from an oocyst-negative mutant strain (t- ) has been effective in preventing oocyst shedding by cats in experimental trials, but a vaccine for cats is not yet commercially available (innes et al., ). while the s strain vaccine remains the only one commercially available, there has been significant progress over the last years in the development of vaccines against toxoplasmosis due to technological advances in molecular biology (kur et al., ). a cocktail dna vaccine has been shown to prime the immune system of animals against toxoplasmosis, with increased immune responses being observed after experimental challenge (hoseinian khosroshahi et al., ). in principle, an effective recombinant vaccine against both sexual and asexual stages of the parasite should be able to address all three targets listed above, but this is hampered by stage-specific expression of t. gondii proteins (jongert et al., ). for this reason, the development of vaccines that prevent t. gondii infection in ruminants and/or cats is recommended.
surveillance and monitoring of t. gondii in animals pre-harvest is essential in the control of this parasite, something that is currently not addressed effectively within the eu (efsa, b; opsteegh et al., b). the most feasible surveillance method is the use of indirect serological tests (e.g. enzyme-linked immunosorbent assay, elisa) on live sheep and goats for the detection of t. gondii antibodies, as seropositivity has been correlated with tissue cyst presence in non-vaccinated animals (buxton, ; conde de felipe et al., ; dubey, ; opsteegh et al., b). however, a more practical solution is taking a blood sample during bleeding of the animal at the slaughterhouse, or even freezing a piece of meat and collecting the meat juice during thawing. studies have indicated regional differences in seroprevalence in small ruminants, which can be accounted for by differences in environmental contamination or by factors that influence the level of exposure of sheep to the environment, such as age and farm management (alvarado-esquivel et al., ; opsteegh et al., b). monitoring programmes could help in the risk assessment and categorisation of small ruminants with regard to t. gondii at the slaughterhouse as part of the fci provided. for more details on the different options for indicators of the presence of t. gondii, we refer the reader to the technical specifications on harmonised epidemiological indicators for biological hazards to be covered by meat inspection of small ruminants (efsa, ). against this background of high t. gondii prevalence in the national flocks and herds of small ruminants, a more realistic approach could be to focus efforts on setting up a system to identify negative flocks/herds instead. for example, animals raised exclusively indoors and under controlled husbandry conditions (which would need to include, for example, the exclusion of cats from the farms and the absence of contamination of feed, bedding and water with t. gondii oocysts) would present a much lower risk with regard to t. gondii. when these husbandry conditions are combined with serological testing and the selection of young animals for slaughter, the production of t. gondii-free meat should be a feasible goal. this meat could then be used either for subpopulations at greater risk (e.g. pregnant women or immunocompromised people), or for the preparation of particular dishes that require little cooking of the meat (e.g. agneau rosé in france). at the moment, this system might be practical to implement only for some intensive farms dedicated to milk or cheese production in some mss. a more detailed explanation of how harmonised epidemiological indicators could help in setting up this system is provided in the accompanying report on these indicators mentioned above (efsa, ). there is no way to distinguish t. gondii-infected carcasses from uninfected carcasses during meat inspection (dubey et al., ). similarly, current process hygiene criteria do not address the risk arising from this hazard (or any non-enteric hazard). the presence of t. gondii tissue cysts can be determined only by laboratory methods, particularly by using serological methods. this can be done on farm or at the slaughterhouse. studies on pcr methods to detect and quantify t. gondii in meat samples have shown promise, with detection sensitivities comparable to those of bioassay (opsteegh et al., a).
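as an illustration of how slaughterhouse serology could feed into risk categorisation, the sketch below applies the standard rogan-gladen adjustment for imperfect test accuracy to an apparent elisa seroprevalence; the sensitivity, specificity and sample counts are assumptions for illustration, not figures from this opinion.

```python
# rogan-gladen adjustment of an apparent seroprevalence for imperfect elisa
# sensitivity and specificity; the se/sp values below are assumed, not sourced.

def true_prevalence(apparent: float, sensitivity: float, specificity: float) -> float:
    """Estimate: (AP + Sp - 1) / (Se + Sp - 1), clipped to [0, 1]."""
    est = (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(max(est, 0.0), 1.0)

# e.g. 120 seropositive sheep out of 400 sampled at bleeding, with an assumed
# sensitivity of 0.95 and specificity of 0.97
apparent = 120 / 400
print(f"apparent: {apparent:.3f}, adjusted: {true_prevalence(apparent, 0.95, 0.97):.3f}")
```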
such laboratory techniques allow epidemiological studies to be conducted to determine the seroprevalence of toxoplasmosis in ovine meat and the risks such meat poses to human health (efsa, b; opsteegh et al., b). additional information on sampling and testing methodologies to detect t. gondii can be found in the efsa report on harmonised epidemiological indicators for sheep and goats ( ). given that t. gondii cannot be horizontally transmitted between ruminants, there is no issue of between-animal cross-contamination with t. gondii at slaughter, and therefore separating sheep and goats from negative and positive flocks or herds during transport, lairage and on the slaughter line would not have any impact on the levels of t. gondii. studies have indicated that t. gondii tissue cysts in meat are susceptible to various physical procedures that can take place at the abattoir or beyond. these include heat treatment, freezing, irradiation, high pressure treatment and curing (addition of salt combined with drying) (table ). heat treatment is the most secure method to inactivate the parasite; however, freezing would probably be the most practical risk management option to control t. gondii for the meat industry (kijlstra and jongert, ). most of the information available for these treatments originates from research in pigs, so further research is required to validate these treatments in meat from small ruminants. these treatments would be particularly appropriate for meat cuts that are intended to be consumed rare. the ranking presented in section of this appendix classified pathogenic vtec as high risk. however, it is important to note that measures aimed at controlling this hazard will also probably be effective in reducing the level of other enteric pathogens such as salmonella spp. or campylobacter spp. control of pathogenic vtec at farm level is complicated by the fact that animals are asymptomatic carriers of these organisms; thus, without an active monitoring programme there is no way of knowing which animals are infected and/or shedding at any given time. control activities must therefore be directed at the flock or herd. good management practices such as maintaining stable rearing groups, keeping a closed herd and preventing young animals from having contact with older animals all decrease the spread of vtec on and between farms. a number of studies have reported reductions in bacterial contamination and, in particular, in e. coli o :h levels on carcasses by reducing the level of fleece/hide contamination (hadley et al., ; longstreeth and udall, ). in this context, the provision of a dry lying area for sheep improves hygiene. in outdoor rearing systems, this is achieved by access to sheltered free-draining land, avoiding access to wet or boggy areas. the housed rearing environment is more easily controlled by the producer. the shelter provided, in addition to the effect of good-quality bedding and the ability to influence access to food/water in the housed system, results in pre-slaughter housing being recommended as a clean fleece policy control measure (food standards agency, b). other husbandry practices such as internal parasite control, effective mineral supplementation, regular dagging/crutching and planned pre-slaughter preparation by the producer can have an impact on the on-farm clean sheep policy (food standards agency, b; pugh and baird, ).
although no such information is available for goats, it is probably safe to assume that these principles would also work in this species. controlling diet and feeding before slaughter to minimise digestive upset is essential in ensuring that animals are clean prior to slaughter. the provision of a high-fibre, nutritionally balanced diet, with easily digestible protein, helps develop good faecal consistency (collis et al., ; pugh and baird, ). lush grass, contaminated water sources, overfertilised grassland, excessive concentrate supplementation and root/forage crop consumption prior to slaughter are dietary causes of fleece contamination (food standards agency, b). in addition, in a recent review, pointon et al. ( ) considered the impact of pre-slaughter feed curfews of cattle, sheep and goats on food safety and carcass hygiene in australia. the authors examined the ecology of salmonella spp. and e. coli and the efficacy of on-farm withholding of feed, carried out to reduce soiling during transport, in terms of microbial reduction. they suggested that, to minimise carcass contamination with salmonella spp. and generic e. coli, the animals should be fasted before transport only for a period sufficient to complete faecal expulsion, i.e. hours, but not exceeding hours. they concluded that the implementation of these practices as good agricultural practice is likely to improve effectiveness in terms of reducing pathogens on the carcasses. good management of animal waste is also essential to prevent spread and cross-infection of other animals. animal waste from animals housed indoors generally accumulates as slurry or farmyard manure. vtec survive for extended periods in faecal, slurry, soil and water environments (besser et al., ; bolton et al., ; bolton et al., ; fremaux et al., ; himathongkham et al., ; hutchison et al., a; hutchison et al., b; islam et al., ; mcgee et al., ; o'neill et al., ). current control measures to reduce the pathogen risk in animal waste can be applied before, during or after spreading manure. pre-spreading controls include the provision of proper storage facilities for animal waste to prevent leakage of waste into ground water, and keeping animals away from slurry pits or dung heaps. spreading should not take place in conditions where contamination of a water course is more likely to occur. after manure is spread, the land should not be used for grazing for a certain amount of time (at least one month, or until all visible signs of animal waste have disappeared) (hutchison et al., ). despite there being a range of on-farm measures to control vtec at farm level, the efficacy of such measures in reducing the prevalence (or load) of pathogenic vtec in small ruminants is not clear. transport has also been identified as a risk factor for hide cleanliness (animalia, ; byrne et al., ; food standards agency, b). in compliance with council regulation (ec) no / on the protection of animals during transport and related operations, livestock should be carried in well-ventilated, clean vehicles, at the correct stocking density, with the provision of shelter, bedding and access to food and water where appropriate. these measures, particularly those relating to vehicle facilities, design and journey distances, directly affect fleece/pelt cleanliness. industry standards on stocking densities during transport and lairage also facilitate the requirements of clean livestock policies (anonymous, ; minihan et al., ).
section . . above indicated that categorisation of flocks or herds according to risk can be an important element of an integrated meat safety assurance system. however, for pathogenic vtec there are a number of challenges that need to be overcome for this approach to be feasible, including the difficulties in identifying husbandry factors that can be used to classify farms according to pathogenic vtec risk, the intermittent nature of shedding, and the problems with the interpretation of monitoring results for vtec due to the difficulty of correctly identifying pathogenic vtec. the two main sources of enteric bacteria on sheep and goat carcasses are the fleece/hide and the viscera, but contamination from the former is more common. a number of studies have established a relationship between the dirtiness of sheep at the time of slaughter and the amount of contamination, and therefore the amount of pathogenic bacteria, transferred to the carcass during skinning (duffy et al., ; gerrand, ; hauge et al., a; longstreeth and udall, ). this relationship is addressed by legislation at the production and processing level, within the hygiene package, as previously mentioned in section . . . to meet this requirement for clean fleeces/hides, some mss have adopted formalised "clean livestock policies" to categorise livestock, including sheep and goats, at ante-mortem examination, thereby placing the responsibility of presenting clean animals for slaughter with the producer and the processor (byrne et al., ; hauge et al., a). pre-slaughter washing of sheep is widely used in new zealand (biss and hathaway, ), together with routine shearing at high-risk sites. the clean livestock policy adopted by the food standards agency in the united kingdom has had considerable success in meeting the requirements of the hygiene package. it is based on a visual inspection during unloading or lairaging and the categorisation of the animals as acceptable for slaughter, acceptable for slaughter following shearing or clipping (conducted at the primary producer's expense), or unsuitable for slaughter. extra time spent in lairage, clipping, subsequent reduction in slaughter line speed, separate processing or excessive trimming and rejection of animals all incur additional costs to producers and processors (food standards agency, b). similarly, in , the norwegian meat industry also adopted national guidelines for good hygiene slaughter practices regarding the categorisation of fleece cleanliness for sheep (hauge et al., a). the policy coordinators, in both the united kingdom and norway, communicate the risk of contaminated sheep to the sheep producer, suggesting various husbandry practices, handling methods and pre-slaughter preparation to limit contamination prior to slaughter (animalia, ; food standards agency, b). as part of the norwegian clean sheep policy, developed by the associations of producers and slaughterers, sheep are shorn in the slaughterhouses. if they do not become visually clean after shearing, or if they were already shorn on farm and are contaminated after shearing, the carcasses of these animals are processed on a separate line. this separate processing may include heat treatment of meat products and processing into a restricted range of products, with the farmers receiving a lower price (a reduction of - % in the carcass value). hauge et al. ( a) demonstrated that the measures taken as part of the norwegian policy decreased the risks posed by carcasses and thereby validated the use of such clean sheep policies.
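the three-way ante-mortem categorisation used by such clean livestock policies can be sketched as follows; the numeric fleece dirtiness score and its cut-offs are hypothetical stand-ins for the visual assessment made at unloading or lairaging.

```python
# illustrative decision rule for a clean livestock policy at ante-mortem
# inspection; the 0-5 dirtiness score and the thresholds are assumptions.
from enum import Enum

class Category(Enum):
    ACCEPTABLE = "acceptable for slaughter"
    SHEAR_FIRST = "acceptable following shearing or clipping"
    UNSUITABLE = "unsuitable for slaughter"

def categorise(dirt_score: int) -> Category:
    """Map a visual fleece dirtiness score (0 = clean .. 5 = heavily soiled)
    onto the three policy categories."""
    if dirt_score <= 1:
        return Category.ACCEPTABLE
    if dirt_score <= 3:
        return Category.SHEAR_FIRST
    return Category.UNSUITABLE

for score in (0, 2, 5):
    print(score, categorise(score).value)
```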
the influence of animal cleanliness on small ruminant carcass contamination was investigated by several authors in australia, canada, new zealand, ireland and norway (biss and hathaway, a, b, c; duffy et al., ; gill, ; hauge et al., a; sumner et al., ), but there is contradictory evidence on the impact of measures to improve fleece cleanliness on microbiological contamination of lamb carcasses. roberts ( ) and field et al. ( ) found no effect of shearing of sheep on carcass contamination. some more recent studies have reported that shearing of sheep decreased carcass surface bacterial counts (biss and hathaway, a; collis et al., ; schroder and ring, ). in a study carried out in norway by hauge et al. ( a), a significantly lower level of aerobic plate count (apc) and e. coli was found on carcass surfaces from shorn lambs than from unshorn lambs at skinning. at this sampling point, shearing proved to be effective for reducing microbial loads on carcasses. results in this study showed a trend of increasing contamination of carcasses with increasing time between shearing and slaughter. sheep shorn immediately before slaughter yielded carcasses with the lowest microbial loads with respect to apc. the e. coli results were less definitive, but a similar trend was demonstrated. biss and hathaway ( b), investigating the effect of pre-slaughter washing of lambs on the microbiological and visible contamination of the carcasses at four slaughterhouses in new zealand, showed that total aerobic bacteria and e. coli contamination was greater on carcasses that had been washed before slaughter, irrespective of wool length, and was generally higher on carcasses derived from woolly lambs than on those derived from shorn lambs. other researchers have found that pre-slaughter washing of sheep will only have positive effects if the washed animals are allowed sufficient time to dry before they are slaughtered (newton et al., ; patterson and gibbs, ). many studies have reported difficulties in making valid microbiological comparisons associated with differences in slaughter hygiene due to individual operators, uneven distribution of microorganisms on carcasses, variations between groups of animals, day-to-day variations, and seasonal variations (biss and hathaway, b; hauge et al., a; ingram and roberts, ). this may explain the conflicting results obtained in relation to the effect of shearing or washing on carcass contamination in such studies. irrespective of this conflicting evidence about how best to ensure that incoming animals are clean, it seems necessary to continue accepting only clean animals for slaughter, as currently required by eu legislation, as it can be assumed that the dirtier the animals are in terms of faecal material, the higher the risk of cross-contamination of the slaughter-line environment, including the carcasses. as mentioned above, a second source of enteric bacteria on carcasses is the viscera. during carcass dressing, bacteria are transferred from the gastrointestinal tract to the carcass directly by contact with gut spillage or indirectly via contaminated hands, knives, other equipment and the air. in general, prerequisite gmp and ghp implemented to reduce bacterial contamination will also prevent or reduce carcass contamination with pathogenic vtec, salmonella spp. and other pathogens. during evisceration, the abdominal cavity is opened using a knife and the connective tissue joining the bung and the viscera to the carcass is cut.
rodding (sealing the oesophagus with a crocodile clip, plastic ring or starch cone) may be performed to prevent leakage. the spread of faecal material from the rectum can be prevented or reduced by bagging and tying the bung. the current throat sticking practice in halal slaughter (cutting of blood vessels, oesophagus and trachea) limits the effect of rodding, as the leakage from the oesophagus occurs before rodding can be applied. if a sticking method such as chest sticking is applied, the effect of rodding will be greater, as the oesophagus remains intact, with reduced leakage from the oesophagus until rodding is performed as one of the first steps after bleeding. using this method, contamination from the oesophagus of the wool, skinned surfaces and the abdominal and chest cavity, in addition to the operator's hands, equipment, walls and floor, will be avoided to a high degree. the effect of bagging on the level of e. coli on sheep carcasses has been previously investigated (norwegian scientific committee for food safety, ). although the numbers of carcasses were limited, based on relevant -cm sampling sites (circum-anal incision and pelvic duct), it could be concluded that the use of the plastic bag technique during circum-anal incision and removal of the rectum results in a to log reduction in e. coli (figure ). if a plastic bag is not used and the rectum is inserted in the abdomen, the chances of contamination are greater. the hygienic effect of rodding and bagging will depend on the operator's experience at these critical hygienic positions. skinning and evisceration may also be designated as critical control points (ccps) as part of the haccp programme, the critical limit for both being zero visible faecal contamination on the carcasses. monitoring occurs at the trimming stand, where every carcass is visually inspected. this inspection may be facilitated using the online monitoring system described by tergney and bolton ( ). when faeces or faecal stains are detected, they are immediately removed by trimming. the cause of the breach in hygiene should also be investigated, and secondary corrective actions may require retraining of personnel, replacement of knives, etc. setting and using indicators/criteria for "process hygiene" of slaughterhouses is an integral part of the meat safety assurance system, which specifically targets contamination of the carcasses with enteric pathogens. according to the regulation on microbiological criteria, a microbiological criterion means a criterion defining the acceptability of a product, a batch of foodstuffs or a process, based on the absence, presence or number of microorganisms, and/or on the quantity of their toxins/metabolites, per unit(s) of mass, volume, area or batch. phc included in regulation (ec) no / are defined as criteria indicating the acceptable functioning of the production process. they give guidance on the acceptable implementation of pre-requisite programmes (gmp/ghp) and haccp-based systems to ensure hygienic functioning of slaughterhouse processes, and are applicable only to the product at the end of the manufacturing process (final carcass after dressing but before chilling), not to products placed on the market. in eu countries, phc involve the evaluation of indicators of overall contamination (total viable count of bacteria), indicators of contamination of enteric origin (enterobacteriaceae) and salmonella spp. prevalence. bacteriological analysis of carcasses, as outlined in this regulation, is carried out by the fbo.
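the trimming-stand ccp described above can be represented as a simple monitoring record; the data structure and field names below are hypothetical, sketching how a zero-visible-contamination critical limit might be logged.

```python
# illustrative ccp monitoring record for the trimming stand: every carcass is
# inspected, detections trigger trimming plus a recorded corrective action.
from dataclasses import dataclass, field

@dataclass
class CcpRecord:
    carcass_id: str
    visible_faecal_contamination: bool
    actions: list[str] = field(default_factory=list)

def monitor_carcass(rec: CcpRecord) -> CcpRecord:
    if rec.visible_faecal_contamination:  # critical limit: zero visible contamination
        rec.actions.append("trim contaminated area")
        rec.actions.append("investigate breach: retrain personnel / replace knives")
    return rec

rec = monitor_carcass(CcpRecord("c-0417", visible_faecal_contamination=True))
print(rec.carcass_id, rec.actions)
```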
the bacteriological analysis involves pooled samples from four risk-assessed sampling sites on each of five sampled carcasses. it must be carried out weekly or, depending on the previous results, once a fortnight. phc set an indicative microbial contamination value above which corrective actions are required by the fbo in order to maintain the hygiene of the process in compliance with eu food law. these corrective actions should include the improvement of slaughter hygiene and the review of process controls. the phc communicate the expected outcome of a process, but they neither characterise nor differentiate between the processes themselves. process compliance must be verified by audits of haccp plans and inspections of processing procedures. the competent authority carries out this role on behalf of the member state, as defined by regulation (ec) no / . as phc verify the hygienic functioning of the process rather than the safety of the product, they do not require validation by independent sampling on behalf of the competent authority. microbiological testing alone may convey a false sense of security due to the statistical limitations of sampling plans, particularly in cases where the hazard presents an unacceptable risk at low concentrations and/or low and variable prevalence. in addition, for pathogens other than enteric hazards (e.g. t. gondii), phc do not provide any information about risk. sampling and testing, as required by regulation (ec) no / , is only part of the verification process of the systems in place. these criteria should not be considered in isolation from other aspects of eu food legislation, in particular haccp principles and official controls verifying fbos' compliance (efsa, c). under current eu legislation, one element of the phc indicates the maximum acceptable prevalence of salmonella spp. on carcasses at the end of the slaughter line. the inclusion of this pathogen as a process hygiene criterion for carcasses highlights the importance of salmonella spp. as a foodborne pathogen in the eu and the need for good hygiene measures for controlling it in the abattoir. however, the use of this hazard presents some problems, because the occurrence of salmonella spp. on carcasses depends not only on the process hygiene performance of a given abattoir, but also on the salmonella spp. carriage by incoming animals (or lack of it). hence, when slaughtering batches that are salmonella spp. free or that have a low prevalence, such phc will be satisfied even if the actual process hygiene is inadequate, and vice versa. on the other hand, the current eu salmonella-based process hygiene criterion partly has the nature of a salmonella-related target to be achieved by abattoirs. the important difference is that with the current eu phc the hazard is measured on the carcass before chilling, while with the target-based concept the hazard is measured on the chilled carcass (i.e. just before dispatch onwards to the meat chain). however, the chilled carcass is better suited for assessing consumer exposure, and for the hazard-related target concept, as the prevalence/levels of microbial hazards on the carcass may change during chilling. furthermore, the current salmonella-related eu criteria for chilled carcasses are not clearly linked to other salmonella-related criteria/targets at preceding and/or consecutive steps of the meat chain.
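a weekly phc session of the kind described above might be evaluated against lower (m) and upper (M) limits as sketched below; the numeric limits are placeholders for illustration, not the legislated values.

```python
# illustrative evaluation of a process hygiene criterion: mean log10 cfu/cm2
# of an indicator (e.g. enterobacteriaceae) across the five sampled carcasses,
# compared against assumed lower (m) and upper (M) limits.
M_LOWER, M_UPPER = 1.5, 2.5  # placeholder log10 cfu/cm2 limits

def assess_session(log_counts: list[float]) -> str:
    mean_log = sum(log_counts) / len(log_counts)
    if mean_log <= M_LOWER:
        return "satisfactory"
    if mean_log <= M_UPPER:
        return "acceptable - review trend"
    return "unsatisfactory - improve slaughter hygiene, review process controls"

# five carcasses, each a pooled sample from four sampling sites
print(assess_session([1.2, 1.8, 2.1, 1.6, 2.0]))
```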
in addition, the current eu-legislated phc for abattoirs do not provide information on the ratio between the initial contamination associated with incoming animals and the final contamination associated with carcasses, i.e. on the actual capacity of the process to reduce the incoming contamination, but only on the process outcomes. when the main purpose is to microbiologically characterise the abattoir process itself, which is the subject of this subsection and a prerequisite for related differentiation of abattoirs, this is a significant weakness of the current eu-legislated phc. these shortcomings could be addressed by the setting of specific targets for pathogenic vtec, as described in section . above, instead of using salmonella spp. in phc. in addition, to measure the performance of the slaughter line, phc based on indicator organisms should be implemented, measuring microbial loads in at least two stages of the processing line. this would allow determination of the ratio between indicator organisms on pre-chill carcasses and those found on incoming animals, for a given batch (a sketch of this calculation follows the list below). the phc is considered to be a key component of the proposed meat safety assurance system, so, in that context, careful consideration would need to be given to issues such as the number of samples taken per week, the areas where those samples are taken from (both on carcasses and on incoming animals) and the need for regulatory auditing of the process hygiene assessment (which may include microbial testing, as well as record verification). this more accurate information, based on trends of data derived from process hygiene assessments and from haccp programmes, would enable differentiation ("risk categorisation") of abattoirs with respect to pathogenic vtec which, in turn, would enable different risk management options to be used for different risk categories of abattoirs, including:
- optimisation of the balancing of pathogenic vtec risk categories of small ruminants with risk categories of the abattoirs where they are to be slaughtered.
- optimisation of the decision whether/where additional pathogenic vtec risk-reducing interventions are to be applied (e.g. a carcass decontamination step).
- more stringent requirements for monitoring/verification/auditing programmes for higher risk abattoirs.
- more reliable feedback to the farm of origin on the root of problems with pathogenic vtec on carcasses of small ruminants.
- clearer identification of slaughterhouses where improvement of the slaughtering practices and/or technology is needed.
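the two-stage measurement proposed above amounts to computing, per batch, the log reduction between incoming animals and pre-chill carcasses, and tracking its trend; the figures and category cut-offs in this sketch are assumptions for illustration.

```python
# illustrative two-stage indicator measurement: log10 reduction per batch and
# a trend-based risk categorisation of the abattoir; cut-offs are assumed.
from statistics import mean

def batch_log_reduction(incoming_log: float, prechill_log: float) -> float:
    """Reduction capacity of the process for one batch (log10 cfu/cm2)."""
    return incoming_log - prechill_log

def categorise_abattoir(reductions: list[float]) -> str:
    avg = mean(reductions)
    if avg >= 2.0:
        return "high reduction capacity (lower risk)"
    if avg >= 1.0:
        return "intermediate reduction capacity"
    return "low reduction capacity (higher risk): candidate for intervention"

weekly = [batch_log_reduction(4.1, 1.9), batch_log_reduction(3.8, 2.2),
          batch_log_reduction(4.4, 2.0)]
print(categorise_abattoir(weekly))
```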
small ruminants represent a reservoir of enteric pathogens. in that context, the slaughtering of sheep involves greater challenges because the animal is relatively small and has a wool fleece, thus increasing the risk of surface contamination at dehiding (buncic, ), which might result in suboptimal hygiene during slaughtering compared with the slaughtering of cattle. technological and operational shortcomings, such as an excessively high line speed, the absence of rodding and bagging, and the use of seasonal workers not sufficiently trained for the purpose, are reported as additional challenges in some abattoirs (norwegian scientific committee for food safety, ). accordingly, interventions such as surface pasteurisation using hot water might be considered as one of several options to reduce the bacterial contamination on carcasses. hot water at - °c achieves a . to . log reduction in colony-forming units (cfus)/cm of salmonella spp. on beef carcasses (arthur et al., ; cutter and rivera-betancourt, ). in a study by hauge et al. ( b), lamb carcasses were subjected to hot water pasteurisation at °c for eight seconds. the reduction in e. coli just after pasteurisation was . %, corresponding to . log cfus/cm , and after hours' storage . log cfus/cm . accordingly, surface pasteurisation of sheep carcasses might represent an important and efficient step (ccp) to reduce vtec on the carcasses and the risk of disease among consumers. an automatic surface pasteurisation step is easy to control by measurement of time/temperature, and these results, together with the quality of the process water, might be documented on display and/or on hard copy. the pasteurisation step might be recognised as a ccp in a haccp concept. steam treatment is also allowed in the eu and has been found to reduce bacterial contamination on sheep carcasses by a log cfus/cm , both for enterobacteriaceae (milios et al., ) and for aerobic plate counts (james et al., ). greater reductions of up to . log cfus/cm have been described when using a combination of steam and a hot water wash on sheep carcasses (dorsa et al., ). similar effects have been observed with salmonella spp. counts on beef carcasses, achieving reductions of < . to . log cfus/cm (phebus et al., ; retzlaff et al., ). surface pasteurisation can also be achieved by manual steam vacuum technology. however, the use of this technology depends on skilled and responsible operators, and will require close supervision in order to ensure that the pasteurisation procedure is correctly applied to the whole carcass. the use of manual steam vacuuming was evaluated in a norwegian slaughterhouse, showing a real reduction (median of . log cfus/cm ) (hassan, ). surface pasteurisation of ruminant carcasses is an option that allows dealing with carcasses presenting a greater risk, such as emergency slaughtered or unclean carcasses, without the need, for example, to apply a heat treatment to these carcasses. although not permitted in the eu, a range of specific interventions targeting enteric pathogens such as pathogenic vtec and salmonella spp. are applied in us slaughter plants. these include the application of organic acids. the application of acetic acid to beef carcasses will reduce e. coli counts by . - . log cfus/cm (sofos and smith, ). significant reductions achieved with lactic acid have also been described (efsa panel on biological hazards, ).
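the percentage and log10 reductions quoted for these decontamination treatments are two expressions of the same quantity; the helper below converts between them (the example values are generic: a 2-log reduction always corresponds to a 99 % reduction in cfu).

```python
# converting between log10 reductions and percentage reductions in cfu.
import math

def log_to_percent(log_reduction: float) -> float:
    """e.g. a 2-log reduction corresponds to a 99 % reduction."""
    return (1.0 - 10.0 ** (-log_reduction)) * 100.0

def percent_to_log(percent_reduction: float) -> float:
    return -math.log10(1.0 - percent_reduction / 100.0)

print(f"{log_to_percent(2.0):.1f} %")    # 99.0 %
print(f"{percent_to_log(99.9):.1f} log") # 3.0 log
```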
as neither of the main public health hazards associated with meat from small ruminants can be detected by traditional meat inspection, other approaches are necessary to identify and control these microbiological hazards. a comprehensive meat safety assurance system for meat from small ruminants, combining a range of preventive measures and controls applied both on the farm and at the slaughterhouse in a longitudinally integrated way, is the most effective approach to control the main hazards in the context of meat inspection. information on the biological risks associated with the consumption of meat from sheep or goats is sometimes scant and unreliable. in order to facilitate decision making, harmonised surveys are required to establish values for the prevalence of the main hazards, t. gondii and pathogenic vtec, at flock/herd, live animal and carcass level in individual mss. epidemiological and risk assessment studies are also required to determine the specific risk to public health associated with the consumption of meat from small ruminants. in the event that these studies confirm a high risk to public health through the consumption of meat from sheep or goats, consideration should be given to the setting of clear and measurable eu targets at the carcass level. to meet these targets and criteria, a variety of control options for the main hazards are available, at both farm and abattoir level. flock/herd categorisation according to the risk posed by the main hazards is considered an important element of an integrated meat safety assurance system. this should be based on the use of farm descriptors and historical data in addition to batch-specific information. farm-related data could be provided through farm audits using harmonised epidemiological indicators (heis) to assess the risk and protective factors for the flocks/herds related to the given hazards. classification of abattoirs according to their capability to prevent or reduce faecal contamination of carcasses can be based on two elements: (i) the process hygiene as measured by the level of indicator organisms on the carcasses (i.e. phc); and (ii) the use of operational procedures and equipment that reduce faecal contamination, as well as industry-led quality assurance systems. as mentioned in section . , further studies are necessary to determine with more certainty the risk of acquiring t. gondii through consumption of meat from small ruminants. in addition, the lack of tests that can easily identify viable cysts in meat is a significant drawback. furthermore, if there is a high prevalence in the animal population, this will hamper the development of systems based on risk categorisation of animals. for these reasons, the setting of targets for t. gondii is not recommended at the moment. there are a variety of animal husbandry measures that can be used to control t. gondii on sheep and goat farms, but at present it would not be practical to implement them on most farms. a number of post-processing interventions might be effective in inactivating t. gondii, such as cooking, freezing, curing and high-pressure and γ-irradiation treatments. however, most of the information available for these treatments originates from research in pigs, so further research is required to validate these treatments in meat from small ruminants. there are also a variety of animal husbandry measures that can be used to reduce the levels of vtec on infected farms, but their efficacy is not clear in small ruminants. in addition, there are a number of challenges that need to be overcome regarding the setting of targets for pathogenic vtec, including the difficulties in identifying husbandry factors that can be used to classify farms according to pathogenic vtec risk, the intermittent nature of shedding, and the problems with the interpretation of monitoring results for vtec due to the difficulty of correctly identifying pathogenic vtec. the two main sources of vtec on sheep and goat carcasses are the fleece/hide and the viscera. to control faecal contamination from the fleece or hide, only clean animals should be accepted for slaughter, as currently required by eu legislation. there are also a number of measures that can help to reduce the spillage or leakage of digestive contents onto the carcass, particularly rodding of the oesophagus and bagging of the rectum. post-processing interventions to control pathogenic vtec are also available. these include hot water and steam pasteurisation.
risk categorisation of slaughterhouses should be based on trends of data derived from process hygiene assessments and from hazard analysis critical control point programmes. improvement of slaughter hygiene through technological and managerial interventions should be sought in slaughterhouses with repeatedly unsatisfactory performance. source attribution studies are needed to determine the relative importance of meat, as well as to ascertain the role of the different livestock species as a source of t. gondii and pathogenic vtec. methods should be developed to estimate the amount of viable t. gondii tissue cysts in meat, especially in meat cuts that are commonly consumed.

recommend adaptation of inspection methods that provide an equivalent protection for current hazards

the main rationale behind the concept of fci is that animals sent for slaughter can be categorised into different risk groups based on relevant information from the flock/herd of origin. this enables appropriate measures to be put in place during slaughter to deal with the level of risk identified. although regulation (ec) no / mentions the basic requirements for fci, these are very general and, as a consequence, the information reported in fci is rarely used as described above (section . . of this appendix). there are a number of ways in which fci could be improved. as explained in section . . above, more specific information about the main hazards could be used for assessing the risks associated with batches of animals arriving at the slaughterhouse, resulting in a classification according to these risks. to achieve this, the system needs further development to include additional information important for food safety, including the definition of appropriate and standardised indicators for the main public health hazards identified in section of this appendix. in addition, membership of quality assurance schemes and certification systems can have a positive impact on public health by contributing to the overall health of the animals sent to slaughter. certification procedures at farm level are voluntary tools to ensure compliance with given standards and regulations in the quality assurance system. they are aimed at achieving continuous improvement in production standards by monitoring quality assurance standards or criteria. audits or inspections of farms ensure that the animal (the final product) is being raised and handled in accordance with the standards or guidelines to which producers should adhere. the main areas covered by the standards usually include animal health, welfare and hygiene, identification and traceability, adequate and prudent use of medicines and chemicals at farm level, safety of feed and water, environmental guidelines, hygiene of personnel, processes and infrastructure, and preparation of animals for slaughter. the standards should be regularly updated in line with changes in legislation and with scientific developments. certifications are issued by independent agencies or bodies which confirm that an auditing process has been passed. farmers can also adopt other schemes, such as the guides to good farming practices and recommendations of best practice published by international bodies (e.g. the world organisation for animal health (oie) and the food and agriculture organization of the united nations (fao)).
adherence to such quality schemes and guidelines at farm level has multiple benefits: it provides slaughterhouse operators with useful information about the animals intended to be slaughtered, and this information could be integrated in the fci. it also increases farmers' responsibilities and has a beneficial influence on meat safety and quality. schemes such as the beef and lamb quality assurance scheme in ireland cover a broad area, relating to animal identification and animal health and welfare, and contribute to ensuring that healthy animals enter the slaughterhouse. farmers should be encouraged to participate in these schemes, and information on whether or not a primary producer is a member should be included in the fci. eu regulations (ec) no / and (ec) no / already require that information gathered during meat inspection is fed back to the primary producer. the main value of such feedback relates to animal health and welfare and production-related diseases, such as liver fluke and pneumonia. however, as mentioned previously, use of this information to produce healthier animals would have indirect benefits for public health. from discussions with stakeholders, it is clear that feedback to the producers is very limited in most mss and that there is considerable room for improvement in that area (see the report from the technical hearing on meat inspection of small ruminants). ante-mortem inspection assesses the general health status of the animals on arrival at the slaughterhouse. meat for human consumption should be derived from the slaughter of healthy animals. inspection of animals on arrival at the slaughterhouse will help to enforce acceptable standards of transport and handling. this might indirectly contribute to the maintenance of operating standards that minimise the general risk associated with unhygienic and stressful management of food-producing animals. stress has been shown to be an important factor in the excretion of enteric pathogens such as pathogenic vtec, salmonella spp. and campylobacter spp., so inspection procedures that prevent stress are likely to be beneficial (efsa panel on biological hazards, b). measures to keep the transport-lairaging period as short as possible may be beneficial in terms of reducing cross-contamination with these enteric pathogens. the ante-mortem procedure will help to detect animals heavily contaminated with faeces and other material. measures to prevent excessively dirty animals from entering the slaughter line will help to prevent contamination of the carcasses and may reduce the level of enteric pathogens. taking these factors into consideration, and given that current methods do not increase the microbiological risk to public health and have considerable benefits in relation to the monitoring of animal health and welfare, no adaptations to the existing visual ante-mortem inspection are proposed. in the inspection procedure for sheep and goats, as set out in eu regulation (ec) no / , carcasses are subject to visual inspection only. incision is mandatory for the gastric surface of the liver. palpation is mandatory for the lungs, the bronchial and mediastinal lymph nodes, and the liver and its lymph nodes. in addition, palpation is mandatory for the umbilical region and joints of young animals. palpation of the lungs, liver, umbilical region and joints, and incision of the liver, could contribute to the spread of bacterial hazards through cross-contamination.
although the importance of such cross-contamination has not been studied in small ruminants, it has been considered important in other species. the pathogens of importance for public health cannot be detected by routine post-mortem examination. consequently, palpation of the liver, lungs, umbilical region and joints, and incision of the gastric surface of the liver, do not contribute to preventing the risk to public health arising from the meat-borne hazards identified in this opinion. for these reasons, palpation and incision should be omitted in animals subjected to routine slaughter. visual examination contributes by detecting visible faecal contamination and/or spilled intestinal contents, although it is unclear how sensitive the current system is or what contribution this detection makes towards preventing public health risk. the current legislation foresees palpation and incision if abnormalities are detected during visual inspection. it is recommended that these procedures, if necessary, are carried out separately from the routine inspection of carcasses to prevent cross-contamination. elimination of abnormalities on aesthetic/meat quality grounds can be ensured through a meat quality assurance system, and not through the official meat safety assurance system including meat inspection. any handling should be performed on a separate line and accompanied by laboratory testing as required. palpation and incision currently assist in the identification of zoonotic pathogens that are not meat borne, such as echinococcus granulosus, fasciola hepatica (although cysts are usually visible before incisions are made) and mycobacterium bovis. the removal of palpation and incision as a requirement in the post-mortem procedure for small ruminants could have a significant effect on the monitoring of echinococcus, in particular, as meat inspection is the principal method of detection of this pathogen (efsa, ). in countries where hazards such as echinococcus are present, it might be appropriate to conduct a risk assessment to weigh the benefits to public health of stopping cross-contamination through palpation and incision of viscera against those obtained through monitoring of these non-meat-borne zoonotic hazards. fci can be improved by including information on participation in quality assurance schemes and by giving greater feedback to the primary producer, as this would probably result in the production of healthier animals. ante-mortem inspection assesses the general health status of the animals and helps to detect animals heavily contaminated with faeces on arrival at the slaughterhouse. taking these factors into consideration, and given that current methods do not increase the microbiological risk to public health, no adaptations to the existing visual ante-mortem inspection procedure are required. although visual examination contributes by detecting visible faecal contamination, routine post-mortem examination cannot detect the meat-borne pathogens of public health importance. palpation of the lungs, the liver, the umbilical region and the joints, and incision of the liver, could contribute to the spread of bacterial hazards through cross-contamination. for these reasons, palpation and incision should be omitted in animals subjected to routine slaughter. the effect of this omission on the risk posed by non-meat-borne zoonoses such as e. granulosus, f. hepatica and m. bovis should be assessed, particularly in those countries where these hazards are prevalent.
conclusions

tor : identify and rank the main risks for public health that should be addressed by meat inspection at eu level. general (e.g. sepsis, abscesses) and specific biological risks as well as chemical risks (e.g. residues of veterinary drugs and contaminants) should be considered. differentiation may be made according to production system and age of animals (e.g. breeding compared to fattening animals).

based on the priority ranking, the hazards were classified as follows:
- t. gondii and pathogenic verocytotoxin-producing escherichia coli (vtec) were classified as high priority for sheep/goat meat inspection.
- the remaining identified hazards, b. anthracis, campylobacter spp. (thermophilic) and salmonella spp., were classified as low priority, based on the available data.
as new hazards might emerge, and/or hazards that presently are not a priority might become more relevant over time or in some regions, both the hazard identification and the risk ranking should be revisited regularly to reflect this dynamic epidemiological situation. particular attention should be given to potential emerging hazards of public health importance.

tor : assess the strengths and weaknesses of the current meat inspection methodology and recommend possible alternative methods (at ante-mortem or post-mortem inspection, or validated laboratory testing within the frame of traditional meat inspection or elsewhere in the production chain) at eu level, providing an equivalent achievement of overall objectives; the implications for animal health and animal welfare of any changes suggested in the light of public health risks to current inspection methods should be considered.

ante-mortem and post-mortem inspection of sheep and goats enable the detection of observable abnormalities. in that context, they are an important activity for monitoring animal health and welfare. they provide a general assessment of animal/herd health, which, if compromised, may lead to a greater public health risk. visual inspection of live animals and carcasses can also detect animals heavily contaminated with faeces. such animals increase the risk of cross-contamination during slaughter and may consequently constitute a food safety risk if carrying hazards of public health importance. if such animals or carcasses are dealt with adequately, this risk can be reduced. visual detection of faecal contamination on carcasses can also be an indicator of slaughter hygiene, but other approaches to verify slaughter hygiene should be considered. post-mortem inspection can also detect non-meat-borne hazards of public health significance that can be present in carcasses or offal from small ruminants. ante-mortem and post-mortem inspection also have the potential to detect new diseases if these have clinical signs, which may be of direct public health significance. currently, the use of food chain information (fci) for food safety purposes is limited for small ruminants, mainly because the data that it contains are very general and do not address specific hazards of public health importance. however, fci could serve as a valuable tool for risk management decisions and could be used for risk categorisation of farms or batches of animals. to achieve this, the system needs further development to include additional information important for food safety, including the definition of appropriate and standardised indicators for the main public health hazards identified in section of this appendix.
ante- and post-mortem inspection is not able to detect any of the public health hazards identified as the main concerns for food safety. it would therefore be expected that more efficient procedures might be implemented to monitor the occurrence of non-visible hazards. in addition, given that the current post-mortem procedures involve palpation and incision of some organs, there is potential for cross-contamination of carcasses.

if new hazards currently not covered by the meat inspection system (e.g. salmonella, campylobacter) are identified under tor , then recommend inspection methods fit for the purpose of meeting the overall objectives of meat inspection. when appropriate, food chain information should be taken into account.

as neither of the main public health hazards associated with meat from small ruminants can be detected by traditional meat inspection, other approaches are necessary to identify and control these microbiological hazards. a comprehensive meat safety assurance system for meat from small ruminants, combining a range of preventive measures and controls applied both on the farm and at the slaughterhouse in a longitudinally integrated way, is the most effective approach to control the main hazards in the context of meat inspection. information on the biological risks associated with the consumption of meat from sheep or goats is sometimes scant and unreliable. in order to facilitate decision making, harmonised surveys are required to establish values for the prevalence of the main hazards, t. gondii and pathogenic vtec, at flock/herd, live animal and carcass level in individual member states. epidemiological and risk assessment studies are also required to determine the specific risk to public health associated with the consumption of meat from small ruminants. in the event that these studies confirm a high risk to public health through the consumption of meat from sheep or goats, consideration should be given to the setting of clear and measurable eu targets at the carcass level. to meet these targets and criteria, a variety of control options for the main hazards are available, at both farm and abattoir level. flock/herd categorisation according to the risk posed by the main hazards is considered an important element of an integrated meat safety assurance system. this should be based on the use of farm descriptors and historical data in addition to batch-specific information. farm-related data could be provided through farm audits using harmonised epidemiological indicators (heis) to assess the risk and protective factors for the flocks/herds related to the given hazards. classification of abattoirs according to their capability to prevent or reduce faecal contamination of carcasses can be based on two elements: (i) the process hygiene as measured by the level of indicator organisms on the carcasses (i.e. process hygiene criteria); and (ii) the use of operational procedures and equipment that reduce faecal contamination, as well as industry-led quality assurance systems. as mentioned in section . of appendix a, further studies are necessary to determine with more certainty the risk of acquiring t. gondii through consumption of meat from small ruminants. in addition, the lack of tests that can easily identify viable cysts in meat is a significant drawback. furthermore, if there is a high prevalence in the animal population, this will hamper the development of systems based on risk categorisation of animals. for these reasons, the setting of targets for t. gondii is not recommended at the moment.
there are a variety of animal husbandry measures that can be used to control t. gondii on sheep and goat farms, but at present these would not be practical to implement on most farms. a number of post-processing interventions might be effective in inactivating t. gondii, such as cooking, freezing, curing and high-pressure and γ-irradiation treatments. however, most of the information available for these treatments originates from research in pigs, so further research is required to validate these treatments in meat from small ruminants. there are also a variety of animal husbandry measures that can be used to reduce the levels of vtec on infected farms, but their efficacy is not clear in small ruminants. in addition, there are a number of challenges that need to be overcome regarding the setting of targets for pathogenic vtec, including the difficulties in identifying husbandry factors that can be used to classify farms according to pathogenic vtec risk, the intermittent nature of shedding, and the problems with the interpretation of monitoring results for vtec due to the difficulty of correctly identifying pathogenic vtec. the two main sources of vtec on sheep and goat carcasses are the fleece/hide and the viscera. to control faecal contamination from the fleece or hide, only clean animals should be accepted for slaughter, as currently required by eu legislation. there are also a number of measures that can help to reduce the spillage or leakage of digestive contents onto the carcass, particularly rodding of the oesophagus and bagging of the rectum. post-processing interventions to control vtec are also available. these include hot water and steam pasteurisation. risk categorisation of slaughterhouses should be based on trends of data derived from process hygiene assessments and from hazard analysis critical control point programmes. improvement of slaughter hygiene through technological and managerial interventions should be sought in slaughterhouses with repeatedly unsatisfactory performance.

tor : recommend adaptations of inspection methods and/or frequencies of inspections that provide an equivalent level of protection within the scope of meat inspection or elsewhere in the production chain that may be used by risk managers in case they consider the current methods disproportionate to the risk, e.g. based on the ranking as an outcome of terms of reference or on data obtained using harmonised epidemiological criteria (see annex ). when appropriate, food chain information should be taken into account.

fci can be improved by including information on participation in quality assurance schemes and by giving greater feedback to the primary producer, as this would probably result in the production of healthier animals. ante-mortem inspection assesses the general health status of the animals and helps to detect animals heavily contaminated with faeces on arrival at the slaughterhouse. taking these factors into consideration, and given that current methods do not increase the microbiological risk to public health, no adaptations to the existing visual ante-mortem inspection are required. although visual examination contributes by detecting visible faecal contamination, routine post-mortem examination cannot detect the meat-borne pathogens of public health importance. palpation of the lungs, the liver, the umbilical region and the joints, and incision of the liver, could contribute to the spread of bacterial hazards through cross-contamination.
to provide a better evidence base for future risk ranking of hazards, initiatives should be instigated to:
- improve and harmonise data collection on the incidence and severity of human diseases caused by relevant hazards
- systematically collect data for source attribution
- collect data to identify and risk rank emerging hazards that could be transmitted through handling, preparation and consumption of sheep and goat meat.
source attribution studies are needed to determine the relative importance of meat, as well as to ascertain the role of the different livestock species, as a source of t. gondii and pathogenic vtec for humans. methods should be developed to estimate the amount of viable t. gondii tissue cysts in meat, especially in meat cuts that are commonly consumed.
assessment of the importance of the hazards in table with regard to their potential as zoonotic agents that can be transmitted via consumption of meat from small ruminants.
aeromonas spp.
these bacteria are considered zoonotic, although this characteristic has only been documented in fish. transmission via consumption of meat from small ruminants has not been reported, despite aeromonas being detected in lamb and meat products and having the potential to be a foodborne pathogen (daskalov, ).
these obligate intracellular bacteria are found in sheep, cattle, horses and dogs, as well as deer and rodents in europe (kalinova et al., ), and although they cause human disease, this illness is rare (scharf et al., ). they are transmitted by ticks of the genus ixodes; they therefore do not present a risk to humans via consumption of sheep meat.
arcobacter spp.
the genus arcobacter includes species that can be defined as aerotolerant campylobacter-like organisms. they were first isolated from aborted bovine foetuses. information on the real prevalence and clinical importance of arcobacter is limited because of the absence of routine testing protocols and the fact that most laboratories do not use appropriate culture conditions or do not identify isolates to species level. small ruminants have been found to be carriers of these bacteria in europe (de smet et al., ). recent reports suggest that arcobacters, especially a. butzleri, may be involved in human enteric disease, although the evidence is not conclusive (houf, ). there are no specific epidemiological data establishing a link between arcobacter infection and consumption of meat from small ruminants. in addition, the public health significance of arcobacter remains unclear.
borrelia are transmitted by ticks of the genus ixodes, and infect a wide range of hosts including sheep, although the contribution of these hosts to the maintenance of b. burgdorferi is still not clear (mannelli et al., ). although borrelia are present throughout europe, there is currently no evidence that they can be transmitted via consumption of meat.
sheep and goat brucellosis is a zoonotic infection. brucellosis is caused by some bacterial species belonging to the genus brucella. of the six species known to cause disease in humans, b. melitensis affects goats and sheep, which are its specific animal reservoir. humans are usually infected from direct contact with infected animals or via contaminated food, typically raw milk, cheese made from it or other milk products such as cream and ice cream.
meat is not considered a source of infection, since muscle tissue contains low concentrations of brucella organisms and the survival time in meat seems extremely short. the number of organisms per gram of muscle is small and rapidly decreases as the ph of the meat drops. brucella spp. die off rapidly when incubated at °c in a medium at ph < (international commission on microbial specifications for food (icmsf), ). an exception in survival behaviour seems to be frozen carcasses, in which the organism can survive for years.
c. abortus is known to be transmissible from animals to humans, causing significant zoonotic infections. c. abortus causes the enzootic abortion of ewes (ovine enzootic abortion), which has become recognised as a major cause of loss in sheep (and goats) in europe, north america and africa (pospischil, ). most cases of c. abortus infection are directly associated with exposure to infected sheep or goats, with transmission most probably occurring by mouth following the handling of an infected ewe or lamb or of contaminated clothing (longbottom and coulter, ). the role of meat from small ruminants in the epidemiology of human infection with c. abortus is nevertheless unclear.
c. difficile is a species of anaerobic, spore-forming gram-positive bacteria that causes severe diarrhoea and other intestinal disease when competing bacteria in the gut flora have been eliminated by antibiotic treatment. there are reports of c. difficile being isolated from small ruminants (hunter et al., ; koene et al., ; rieu-lesme and fonty, ; saif and brazier, ). however, there is to date no indication of meat-borne transmission to humans.
c. pseudotuberculosis is the causative agent of caseous lymphadenitis in small ruminants. these bacteria are commonly found in the ruminant population in europe. cases of human lymphadenitis have been described previously (peel et al., ), although these were transmitted via occupational exposure and not through consumption of meat.
c. burnetii has been isolated from a large range of animals including farm animals (e.g. cattle, sheep and goats), wildlife and arthropods. it has a near-worldwide distribution. c. burnetii causes q fever in humans, in whom it was traditionally considered an occupational disease of farm and abattoir workers. airborne transmission is also important, and has played a major role in recent outbreaks. the meat-borne transmission route has so far not been identified as a possibility (georgiev et al., ).
erysipelothrix rhusiopathiae
e. rhusiopathiae is a ubiquitous bacterium which can cause polyarthritis in sheep and lambs. it can also infect humans, in whom it causes either cutaneous (localised or general) or septicaemic disease (wang et al., ). humans usually acquire the infection through contact with infected animals, i.e. erysipelas is considered an occupational disease. meat from small ruminants has not been identified as a vehicle for human infection.
esbls may be defined as plasmid-encoded enzymes found in the enterobacteriaceae that confer resistance to a variety of β-lactam antibiotics, including penicillins and second-, third- and fourth-generation cephalosporins. in contrast, ampc β-lactamases are intrinsic cephalosporinases found on the chromosomal dna of many gram-negative bacteria, which confer resistance to penicillins, second- and third-generation cephalosporins, including β-lactam/inhibitor combinations, and cephamycins (cefoxitin), but usually not to fourth-generation cephalosporins.
a growing number of these ampc enzymes are now plasmid-borne (efsa panel on biological hazards, c). a targeted literature search found references that reported the presence of esbl/ampc-gene-carrying enterobacteriaceae in small ruminants (geser et al., ; snow et al., ), but none indicated transmission of these enzymes to humans via consumption of meat from sheep or goats.
helicobacter pylori
h. pylori was previously known as campylobacter pylori (it is taxonomically related to campylobacter spp. and belongs to the family helicobacteraceae). infection of the stomach by h. pylori is associated with several alterations in gastric mucosal cell proliferation, and with disorders such as chronic gastritis, gastric ulcers, duodenal ulcers and stomach cancer. colonisation of the stomach by h. pylori is well established, and the bacterium is able to withstand digestive enzymes and concentrated hydrochloric acid. h. pylori is believed to be transmitted orally, but no food has as yet been identified as a source. no reservoir other than the human gastric mucosa has been identified for h. pylori. plonka et al. ( ) suggest a zoonotic link to sheep, but no evidence of meat-borne transmission is presented.
although the isolation of k. pneumoniae from small ruminants' meat has been described (brahmbhatt, ; sharma et al., ), no evidence for meat-borne transmission of this pathogen to humans could be found.
l. spiralis has been reported in the small ruminant population in europe and elsewhere (bisias et al., ; savalia et al., ; seixas melo et al., ); however, although it has been considered a potential occupational hazard (heuer et al., ), there is no current evidence that it can be transmitted to humans via consumption of meat from small ruminants.
mycobacterium avium subsp. paratuberculosis
m. avium subsp. paratuberculosis (map), which causes chronic enteritis in all ruminants, is the most prevalent mycobacterium found in small ruminants within the m. avium complex (mac). mac includes eight mycobacteria species and several subspecies with different degrees of pathogenicity, a broad host range and environmental distribution in numerous biotopes including soil, water, aerosols, etc. (alvarez et al., ; biet et al., ). a link between map and the human chronic enteritis crohn's disease has been speculated upon and is supported by several lines of evidence, such as the demonstration of map-specific sequences in crohn's disease-affected tissues. however, at present there is no agreed consensus on any aetiological role for map in crohn's disease (chiodini, ; waddell et al., ; wagner et al., ) and no evidence that it presents a risk via consumption of meat or meat products. the presence of mycobacteria has been previously reported in the small ruminant population in the eu (domenis et al., ; malone et al., ; marianelli et al., ). despite these reports, evidence of meat-borne transmission of these pathogens to humans from small ruminants is lacking, so this potential pathway of infection remains unproven in the context of livestock processed through the eu meat inspection system.
mrsa has been isolated from most food-producing animals and from most meats, including sheep and lamb meat, as well as from milk. where mrsa cc prevalence is high in food-producing animals, people in contact with these live animals (especially farmers and veterinarians, and their families) are at greater risk of colonisation and infection than the general population.
food may be contaminated by mrsa (including cc ): eating and handling contaminated food is a potential vehicle for transmission. there is currently no evidence for an increased risk of human colonisation or infection following contact with or consumption of food contaminated by cc , either in the community or in hospital (efsa, ).
streptococcus spp. have been isolated in small ruminants, most commonly in milk or mastitis samples (pisoni et al., ; zdragas et al., ). zoonotic transmission has been described (poulin and boivin, ), but there is no evidence to date that these organisms can cause meat-borne disease in humans.
foodborne yersiniosis is caused primarily by yersinia enterocolitica, with y. pseudotuberculosis representing a low fraction of isolates (less than %) from reported human cases (efsa and ecdc, b). the majority of y. enterocolitica isolates from food and environmental sources are non-pathogenic types, and therefore discrimination between strains that are pathogenic and non-pathogenic for humans is necessary. no isolates of y. pseudotuberculosis have been reported in food items tested during - (efsa and ecdc, , b). pigs are considered to be the primary reservoir for the human pathogenic types of y. enterocolitica, and these can be isolated from the oral cavity, the submaxillary lymph nodes, the intestine and faeces (nesbakken, ). y. enterocolitica is found in small ruminants, and is considered to be responsible for certain infections in sheep and goats such as enteritis (arnold et al., ; fearnley et al., ; fredriksson-ahomaa et al., ; fukushima et al., ; gourdon et al., ; krogstad, ; mcnally et al., ; milnes et al., ; philbey et al., ; slee and button, ; slee and skilbeck, ; soderqvist et al., ; wojciech et al., ). mcnally et al. ( ) investigated the relationship between carriage of y. enterocolitica in livestock (sheep, cattle and pigs) and human disease, with inconclusive results. the majority of the strains isolated from animal reservoirs differ from the clinical strains found in humans, both biochemically and serologically. so far, pigs are the only species pinpointed as a significant reservoir for pathogenic y. enterocolitica. there is no evidence that sheep and goats are important animal reservoirs for strains involved in human cases, although slee and button ( ) reported the infection of an animal attendant in connection with an outbreak of y. enterocolitica infection in a goat herd in norway. no evidence is currently available that yersinia spp. present a risk via consumption of meat or meat products from sheep or goats.
c. albicans is a fungus that is the causal agent of opportunistic oral and genital infections in humans, and it has also been isolated from sheep and goats, for example in milk samples of goats suffering from mastitis (langoni et al., ) or from sheep droppings (nardoni et al., ). no evidence to date indicates transmission of this fungus to humans via consumption of meat.
cryptococcosis is a rare disease in animals in europe. a few cases have been described in sheep and goats (lung and mammary gland) in australia. the source of the microorganism is largely environmental. no cases of transmission from animal to animal, from animal to man or from man to man (except via corneal transplant) have been described (acha and szyfres, ). c. neoformans is therefore currently considered not relevant in the eu sheep and goat population and not transmissible via meat.
species of microsporidia infecting humans have been identified in water sources as well as in wild, domestic and food-producing farm animals, raising concerns for waterborne, foodborne and zoonotic transmission (didier, ). no evidence could be found in the literature of meat-borne transmission of this hazard from small ruminants to humans.
enterocytozoon bieneusi
e. bieneusi, the species now known to be the most frequent in microsporidial infections of humans, was not discovered until . e. bieneusi has recently been found in the faeces of animals, including pigs, rhesus macaques, cats and cattle. however, the potential reservoirs and the mode of transmission of this pathogen are still unknown (dengjel et al., ). "phylogenetic analysis revealed the lack of a transmission barrier between e. bieneusi from humans and animals (cats, pigs and cattle). thus, e. bieneusi appears to be a zoonotic pathogen" (dengjel et al., ). however, no evidence could be found in the literature for meat-borne transmission of this hazard from small ruminants to humans.
parasites of the genus ascaris have very occasionally been reported in sheep. however, the transmission of these parasites to humans is via ingestion of eggs that are excreted in the faeces of the definitive hosts (e.g. a. suum in pigs and a. lumbricoides in humans); therefore, there is currently no evidence of a link between human ascariasis and the consumption of ruminant meat.
babesia spp. are vector-mediated parasites, and are transmitted by hard ticks (e.g. ixodes, dermacentor, rhipicephalus and hyalomma spp.). in europe, they are found in cattle and rodents, although they have also been reported in sheep (sreter et al., ). human babesiosis is rare in europe, and is only transmitted via tick bites, i.e. there have been no reports of meat-borne transmission to humans from animals.
cerebral coenurosis is caused by the metacestode stage of the cestode t. multiceps, which has canids as the final host. both humans and sheep are intermediate hosts in the life cycle of this parasite, which is present in parts of europe (scala and varcasia, ). infection occurs by ingestion of vegetables or water contaminated with the tapeworm eggs shed by the final host. meat has not been recorded as being involved in transmission of this parasite.
cryptosporidiosis in humans is usually linked to consumption of contaminated water or contact with infected animals, mainly cattle but also young sheep and goats. although its presence in meat is considered possible, a quick review of the literature did not reveal any evidence describing its isolation in meat or any outbreaks caused by consumption of meat from small ruminants.
c. ovis and c. tenuicollis are the larval stages of taenia ovis and taenia hydatigena respectively, found in the intestines of canids. humans can act as intermediate hosts for these cysticerci, but cases are very rare. consumption of meat is not associated with the transmission of these parasites, but they are targeted during meat inspection because cysticerci are visible and render the meat unfit for human consumption on quality grounds.
humans are a dead-end host for e. granulosus and may become infected through accidental ingestion of the eggs, shed in the faeces of infected dogs or other canids. this usually occurs via the ingestion of contaminated food (especially vegetables) or water, and also through accidental soil ingestion or by acquiring the eggs directly from the coat of the definitive host.
meat, however, has not been identified as a vehicle for transmission of e. granulosus.
the trematode fasciola is a parasite of herbivores that can infect humans accidentally, and is commonly found in europe. humans can become infected by ingesting freshwater plants or water containing metacercariae (fried and abruzzi, ). there is currently no evidence of meat-borne transmission of this parasite to humans.
g. intestinalis is a ubiquitous protozoan parasite with a global distribution, which infects humans as well as a wide range of other mammals. g. intestinalis is excreted in faeces, and it is transmitted to humans via contaminated water or fresh vegetables. no evidence is available for a role for meat from small ruminants in transmitting this parasite to humans.
viruses of the family astroviridae are associated with gastroenteritis in birds and mammals, including small ruminants and humans. although a potential zoonotic link has been suggested (jonassen et al., ), the information available in the scientific literature does not point to transmission of astroviruses to humans via consumption of meat.
bdv infections can result in neurological disease that mainly affects horses and sheep in certain areas of germany (bilzer et al., ; durrwald, ; grabner and fischer, ; ludwig et al., ). the endemic area also includes areas of switzerland, austria and the principality of liechtenstein (caplazi et al., ; weissenbock et al., ). bdv received worldwide attention when it was reported that sera and/or cerebrospinal fluids from neuropsychiatric patients can contain bdv-specific antibodies. as infected animals produce bdv-specific antibodies only after virus replication, it was assumed that the broad spectrum of bdv-susceptible species also includes man. however, reports describing the presence of other bdv markers, i.e. bdv-rna or bdv-antigen, in peripheral blood leucocytes or brain tissue of neuropsychiatric patients are highly controversial and, therefore, the role of bdv in human neuropsychiatric disorders is questionable (richt and rott, ). in any case, no evidence of meat-borne transmission has been found.
there is a lack of clarity in relation to the taxonomy of bovine enterovirus (bev). while it appears that bev may be zoonotic, based on a serological survey in turkey, and that sheep and goats in europe are infected, it is likely that the main source of infection for humans is contact with infected animals and/or material contaminated with faeces of infected animals. on the basis of the information obtained from the scientific literature, it is proposed that bev should not be considered for risk ranking.
chandipura virus, a member of the rhabdoviridae family and vesiculovirus genus, has recently emerged as a human pathogen associated with a number of outbreaks of acute encephalitis in different parts of india (basak et al., ). the virus closely resembles vesicular stomatitis virus, and there are reports of antibody detection in small ruminants, also in india (joshi et al., ). there are no reports of this virus being present in the eu or being transmissible to humans via food. the information available in the scientific literature for this virus in the small ruminant reservoir is very limited.
cchf is a tick-borne disease that can also be transmitted to humans through contact with infected tissues or blood from affected (viraemic) livestock, including sheep.
cases of cchf have been reported in butchers and abattoir workers (ergonul, ) as well as in health care workers; it can therefore be considered an occupational disease. currently, there is no evidence of meat-borne transmission, and it has been reported that "meat itself is not a risk because the virus is inactivated by post-slaughter acidification of the tissues and would not survive cooking in any case" (ergonul, ).
hev has been found in both livestock, especially pigs, and humans in europe. the epidemiology of hev is complex, and foodborne transmission of hev from animal products (e.g. pork and pork products) to humans is an emerging concern. however, only very few systematic studies have been performed so far, and therefore the importance of specific food items has not been sufficiently substantiated. although the presence of hev antibodies in sheep has been previously reported in europe (peralta et al., ), there is no evidence that meat from small ruminants has played a role in transmitting the virus to humans (efsa panel on biological hazards, a).
the presence of influenza virus has been occasionally reported in small ruminants (abubakar et al., ; shukla and negi, ; zupancic et al., ). although no information is available for small ruminants, the safety of meat from pigs infected with influenza has been previously assessed, and it was found that these viruses are not known to be transmissible to humans through the consumption of meat (fao/who/oie, ).
orf, also known as contagious ecthyma, is caused by a parapoxvirus and is commonly found in the small ruminant population in europe. this virus is transmitted to humans through direct contact with infected animals and is thus considered an occupational disease (uzel et al., ). meat-borne transmission has not been reported to date.
small ruminants are susceptible to infection with rabies virus, which is present in the wild animal reservoir in europe (mainly in bats and wild canids). cases of rabies in sheep and goats have been occasionally reported in europe (maciulskis et al., ; mudura et al., ), and although experimental oral transmission has been described (bell and moore, ; fischman and ward, ), transmission of this virus to humans through the consumption of meat from small ruminants has not been reported to date.
rift valley fever virus, an rna virus of the family bunyaviridae, causes disease in cattle, sheep and goats, and is transmitted to humans by a wide range of mosquitoes, as well as by handling diseased animals (davies and martin, ). contact with and consumption of meat, as well as other animal products, has been identified as a risk factor for human infection (anyangu et al., ; mohamed et al., ). the presence of this virus has not been reported in europe so far (efsa panel on animal health and welfare (ahaw), ).
rotaviruses are responsible for causing enteritis and diarrhoea in young livestock, including sheep and goats, as well as in humans. some studies using gene sequencing point to a common evolutionary origin for rotavirus strains found in small ruminants and those found in humans (ghosh et al., ; matthijnssens et al., ). this could suggest that there is potential for zoonotic transmission between livestock and humans, or at least that some exchange of viruses has occurred in the past (nakagomi et al., ). it is, however, unclear whether meat-borne transmission is possible, as there are no data in the literature reporting this possibility.
tbe is an infection caused by a flavivirus found in both wild and domestic animals in europe, including small ruminants. humans acquire the infection following the bite of an infected tick. transmission via aerosol and direct contact is also possible, as is transmission by consuming fresh milk from infected animals. however, transmission via consumption of meat has not been described (krauss et al., ).
the main objective of meat inspection is to ensure that meat is fit for human consumption. historically, meat inspection procedures have been designed to control slaughter animals for the absence of infectious diseases, with special emphasis on zoonoses and notifiable diseases. the mandate that meat needs to be fit for human consumption, however, also includes the control of chemical residues and contaminants that could be potentially harmful for consumers. this aspect is not fully addressed by the current procedures. the efsa panel on contaminants in the food chain (contam panel) was asked to identify and rank undesirable or harmful chemical residues and contaminants in meat from sheep and goats. such substances may occur as residues in edible tissues following the exposure of the animals to contaminants in feed materials, as well as following the possible application of non-authorised substances and the application of authorised veterinary medicinal products and feed additives. a multi-step approach was used for the ranking of these substances into categories of potential concern. as a first step, the contam panel considered substances listed in council directive / /ec and evaluated the outcome of the national residue control plans (nrcps) for the period - . the contam panel noted that only . % of the total number of results was non-compliant for one or more substances listed in council directive / /ec. potentially higher exposure of consumers to these substances from sheep and goat meat takes place only incidentally, as a result of mistakes or non-compliance with known and regulated procedures. the available aggregated data indicate a low number of samples that were non-compliant with the current legislation. however, in the absence of substance- and/or species-specific information, such as the tissues used for residue analysis and the actual concentration of a residue or contaminant measured, these data do not allow for a reliable assessment of consumer exposure. independently of the occurrence data reported from the nrcps, other criteria used for the identification and ranking of chemical substances of potential concern included the identification of substances that are found in other testing programmes and that bio-accumulate in the food chain, substances with a toxicological profile of concern, and the likelihood that a substance under consideration will occur in sheep and goat carcasses. taking these criteria into account, the individual compounds were ranked into four categories denoted as being of high, medium, low and negligible potential concern. dioxins and dioxin-like polychlorinated biphenyls (dl-pcbs) were ranked as being of high potential concern owing to their known bioaccumulation in the food chain, the frequent findings above maximum levels (mls), particularly in sheep liver, and in consideration of their toxicological profile.
the following substances were ranked in the category of medium potential concern: stilbenes, thyreostats, gonadal (sex) steroids, resorcylic acid lactones and beta-agonists (especially clenbuterol), because of their toxicity for humans, their efficacy as growth promoters in sheep and goats and the incidence of non-compliant results; chloramphenicol and nitrofurans, because they have proven toxicity for humans, are effective as antibacterial treatments for sheep/goats and non-compliant samples are found in most years of the nrcps; non-dioxin-like polychlorinated biphenyls (ndl-pcbs), because, while they bioaccumulate and there is a risk of exceeding the mls, they are less toxic than dioxins and dl-pcbs; and the chemical elements cadmium, lead and mercury, because of the number of non-compliant results reported under the nrcps and their toxicological profile. residues originating from other substances listed in council directive / /ec were ranked as being of low or negligible potential concern. the contam panel emphasises that this ranking into specific categories of potential concern is based on current knowledge regarding toxicological profiles, usage in sheep and goat production and occurrence as chemical residues and contaminants. where changes in any of these factors occur, the ranking might need amendment. the contam panel was also asked to assess the main strengths and weaknesses of current meat inspection protocols within the context of chemical hazards. it was noted that current procedures for sampling and testing are, in general, well established and coordinated, including follow-up actions subsequent to the identification of non-compliant samples. the regular sampling and testing for chemical residues and contaminants is an important disincentive to the development of undesirable practices, and the prescriptive sampling system allows for equivalence in the control of eu-produced sheep and goat meat. the current combination of animal traceability, ante-mortem inspection and gross tissue examination can support the collection of appropriate samples for residue monitoring. nevertheless, a major weakness is that, with very few exceptions, the presence of chemical hazards cannot be identified by current ante-/post-mortem meat inspection procedures at the slaughterhouse level, and there is a lack of sufficient cost-effective and reliable screening methods. in addition, sampling is mostly prescriptive rather than risk or information based. there is limited ongoing adaptation of the sampling and testing programmes to the results of the residue monitoring programmes; integration between the testing of feed materials for undesirable substances and the nrcps is poor; and sampling under the nrcps reflects only a part of the testing done by a number of mss, the results of which should be taken into consideration. the contam panel was also asked to identify and recommend inspection methods for new hazards. as dioxins and dl-pcbs have not yet been comprehensively covered by the sampling plans of the current meat inspection, they should be considered 'new' hazards, particularly as they have been ranked as being of high potential concern. moreover, for other organic contaminants that may accumulate in food-producing animals and for a number of chemical elements used as feed supplements, only limited data regarding residues in sheep and goats are available.
this is the case, in particular, for brominated flame retardants, including polybrominated diphenylethers (pbdes) and hexabromocyclododecanes (hbcdds), and for perfluorinated compounds (pfcs), including (but not limited to) perfluorooctane sulphonate (pfos) and perfluorooctanoic acid (pfoa). the contam panel concludes that sheep and goat production in the eu is largely extensive in nature, involving frequent trading of animals and nomadic flocks. these differences in husbandry systems and feeding regimes result in different risks for the occurrence of chemical residues and contaminants. extensive periods on pasture or as nomadic flocks, and the use of slaughter collection dealerships, may preclude detailed lifetime food chain information (fci). the contam panel recommends that fci should be expanded for sheep and goats produced in extensive systems to provide more information on the specific environmental conditions where the animals are produced, and that future monitoring programmes should be based on the risk of occurrence of chemical residues and contaminants, taking into account the completeness and quality of the fci supplied and the ranking of chemical substances into categories of potential concern, which needs to be regularly updated. control programmes for chemical residues and contaminants should be less prescriptive, with sufficient flexibility to adapt to the results of testing, should include 'new hazards', and the test results for sheep and goats should be presented separately. there is a need for improved integration of sampling, testing and intervention protocols across the food chain, the nrcps, feed control and the monitoring of environmental contaminants. the development of analytical techniques covering multiple analytes and of new biologically based testing approaches should also be encouraged and incorporated into the residue control programmes. for prohibited substances, testing should be directed where appropriate towards the farm level and, in the case of substances that might be used illicitly for growth promotion, control measures, including testing, need to be refocused to better identify the extent of abuse in the eu.
meat inspection in the eu is specified in regulation (ec) no / . the main objective of meat inspection is to ensure that meat is fit for human consumption. historically, meat inspection procedures have been designed to control slaughter animals for the absence of infectious diseases, with special emphasis on zoonoses and notifiable diseases. the mandate that meat needs to be fit for human consumption, however, also includes the control of chemical residues and contaminants in meat that could be potentially harmful for consumers. this aspect is not fully addressed by the current procedures. for the purposes of this document, 'chemical residues' refers to chemical compounds which result from the intentional administration of legal or illegal pharmacologically active substances, while 'chemical contaminants' refers to chemical compounds originating from the environment. this document aims to identify undesirable or harmful chemical residues and contaminants that may occur in meat from sheep and goats, taking into account the current legislation and the results from the national residue control plans (nrcps) implemented in line with council directive / /ec.
these findings, together with the characteristics of the individual substances and the likelihood that a substance will occur in meat from sheep or goats, were used to rank chemical residues and contaminants into categories of potential concern. four categories were established, constituting a high, medium, low or negligible potential concern. in the second part, the main strengths and weaknesses of current meat inspection protocols were assessed within the context of chemical hazards. the ultimate aim is an overall evaluation of the current strategies for sampling and analytical testing, resulting in recommendations for possible amendments to the current meat inspection protocols. in this opinion, where reference is made to european legislation (regulations, directives, decisions), the reference should be understood as relating to the most current amendment, unless otherwise stated.
sheep (ovis aries) were domesticated from ancestral subspecies of wild mouflon approximately years ago in south-west asia, and by years ago sheep had been transported throughout europe. today, over sheep breeds are recognised worldwide, and europe supports a greater number of breeds than any other continent. sheep are raised for three main purposes: meat, milk and wool. a range of different breeds have therefore been developed over the centuries to suit the land, weather and husbandry conditions in different areas of the eu. in mountain and arid areas, for example, sheep are bred for hardiness and self-reliance (e.g. the scottish blackface). they must be able to survive poor weather and thrive on poor grazing. lowland or grassland breeds, on the other hand (e.g. the suffolk and texel), usually do not cope as well with bad weather or poor-quality feed, but produce higher numbers of lambs that are often better suited for meat production. most lambs are born in late winter or spring. many lambs are born outside, particularly those in mountain flocks. indoor lambing is also common, particularly for lowland flocks. good housing facilities and management are important in order to prevent disease and heat stress problems. most meat-breed sheep are slaughtered and presented for meat inspection as younger stock ("lambs"), from ten weeks up to one year of age. in accordance with commission regulation (ec) no / , a "young ovine animal" means an ovine animal of either gender, not having any permanent incisor teeth erupted and not older than months. sheep have also been raised for milk production for thousands of years. the east friesian type is one of the most common and productive breeds of dairy sheep. europe's commercial dairy sheep industry is concentrated in the countries on or near the mediterranean sea. most sheep milk is used to produce cheese, such as feta, ricotta, manchego and pecorino romano. in france, the lacaune is the breed of choice for making roquefort cheese. dairy sheep kept on small farms are milked seasonally by hand, but more modern sheep dairies use sophisticated machinery for milking. ewes are milked once or twice per day. sheep are also widely kept for wool production, particularly in the united kingdom and spain. wool may range from fine or medium fibre diameters to the coarser wool of specialised breeds producing wool for carpets. some flocks may be kept for both meat and wool production. meat from older sheep carcasses (mutton), derived from cull adult sheep from the dairy or wool industries, is tougher and is not as widely consumed as fresh meat, but may also be used in sausage production.
goats have been associated with man in a symbiotic relationship for up to years. a goat eats little, occupies a small area and produces enough milk to sustain a family. in europe, goat farming is strongly oriented towards milk production, which is mostly used for cheese production. it has been estimated that the eu has . % of the world's goat population, but it produces . % of the goat milk and . % of the goat meat generated in the world annually (casey, ). during the last ten years, the overall eu goat count has diminished. in france, greece and spain, annual goat milk production is , , and million litres, respectively, which comprises % of the total goat milk produced in the eu. france produces a great number of goat's milk cheeses, especially in the loire valley, with examples of french chèvre including bucheron. like sheep, dairy goats kept on small farms are milked seasonally by hand, but modern goat dairies use more sophisticated machinery for milking. does are milked once or twice per day. goats produced for fibre are not common in europe, but small local flocks occur in many member states (mss). the fibre taken from the angora goat breed is called mohair. a single goat produces between four and five kilograms of hair per year, shorn twice a year. cashmere is the valuable fine undercoat found to varying degrees and qualities on all goats except the angora. it grows as a winter down which is shed in early spring, when it is harvested by either shearing or combing. goat meat, or chevon, is not widely consumed in the eu. specialised larger goat meat breeds such as the boer goat are currently only held in small local herds, but crosses of a boer sire and a cashmere-type dam can also be used to provide a suitable carcass. these meat-line goats can grow to slaughter weight ( - kg) in approximately six to nine months on low-quality feeding. again, the meat from older goat carcasses derived from cull goats from the dairy or fibre industries tends to be very tough. meat from older male goats ('billy' goats) can have an offensive odour. the extensive farming practices and generally low economic value of sheep and goats mean that veterinary treatment of individual animals is often limited. sheep and goats are often exposed to parasites, which explains the need for anti-parasitic programmes for the flocks. other veterinary interventions follow normal clinical practice, such as the use of registered mastitis treatments for milking animals, with appropriate withdrawal periods and residue monitoring. it is important to note that, despite recent developments towards large milking goat holdings, sheep and goat production in the eu largely remains extensive in nature, involving frequent trading of animals and nomadic flocks. this involves varied husbandry systems and feeding regimes, resulting in different risks for the occurrence of chemical residues and contaminants. sheep and goat populations in the eu as reported by eurostat are presented in table . in accordance with annex i of regulation (ec) no / , all animals should be inspected prior to slaughter (ante-mortem inspection) as well as after slaughter and evisceration (post-mortem inspection). there are concerns about slaughter outside licensed premises, where animals are not subject to appropriate meat inspection. since january , mandatory identification of small ruminants has been implemented in the eu under regulation (ec) no / . domestic sheep and goats may be presented for slaughter in small numbers or even as individuals.
visual ante-mortem inspection is carried out at the level of the individual animal. extensive periods on pasture or as nomadic flocks, the sale of many sheep and goats at open markets, and the presence of slaughter collection dealerships that may combine small numbers of animals purchased from several farmers mean that there is a level of concern that the food chain information (fci) shared between farmers and the slaughterhouse (where residue data are managed) may be suboptimal. similarly, in these situations, the level of feedback from the slaughterhouse and authorities to farmers regarding the results of residue testing may be suboptimal. here the individual identification of animals, which has now become mandatory, may contribute to more transparency in the future. there is less concern about fci from dairy sheep and goats, as they are reared under more intensive and controlled conditions. fci is the animal's life history data from birth, through all stages of rearing, up to the day of slaughter. in particular, the food business operator (fbo) at the slaughterhouse should receive information related to the veterinary medicinal products (vmps) or other treatments administered to the animals within a relevant period prior to slaughter, together with their administration dates and their withdrawal periods. moreover, any test results for samples taken from the animals within the framework of monitoring and control of residues should also be communicated to the slaughterhouse operators before the arrival of the animals. based on regulation (ec) no / , post-mortem inspection was, and still is, directed primarily at the detection of lesions due to infections, based on observation, palpation and incision. an exception is the mandatory sampling of adult animals for transmissible spongiform encephalopathies (tses). in contrast to bovine animals, tse testing is not directed at individual animals, but is based on a region- and animal-stock-related monitoring system. visual inspection of the carcass (and offal) may allow, in some cases, for the identification of gross alterations in carcass conformation (e.g. abscesses or deposits) and of organ-specific lesions in the kidneys, liver, lungs or other organs that might be indicative of recent use of vmps (with the possibility of non-compliance with withdrawal periods) or of acute or chronic exposure to toxic substances. in most cases, however, exposure to chemical compounds does not result in typical organ lesions. hence it needs to be considered that evidence for the presence of chemical residues and contaminants will in most cases not be apparent during the visual inspection of ovine and caprine carcasses. therefore, the meat inspection approach based on "detect and immediately eliminate", used for biotic (microbiological) hazards in slaughterhouses, is generally not applicable to abiotic hazards. while monitoring programmes (council directive / /ec, described in section . ) may provide a gross indication of the prevalence of undesirable chemical residues and contaminants in ovine and caprine carcasses, the sole intervention at abattoir level is the isolation of a suspect carcass as potentially unfit for human consumption, pending the results of residue testing. council directive / /ec prescribes the measures to monitor certain substances and residues thereof in live animals and animal products. it requires that mss adopt and implement a national residue monitoring programme, also referred to as the national residue control plan (nrcp), for defined groups of substances.
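before turning to how the nrcps are administered, the sketch below makes the role of fci concrete. the field names and the withdrawal-period check are illustrative assumptions chosen for the example; eu legislation does not define a data schema for fci.

```python
# minimal sketch of fci for one slaughter animal; field names and example
# values are hypothetical assumptions, not a schema defined in eu legislation.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Treatment:
    vmp_name: str          # veterinary medicinal product administered
    administered: date     # administration date
    withdrawal_days: int   # withdrawal period prescribed for the product

@dataclass
class FoodChainInformation:
    animal_id: str                         # mandatory individual identification
    birth_date: date                       # life history starts at birth
    treatments: list[Treatment] = field(default_factory=list)
    residue_test_results: list[str] = field(default_factory=list)

def withdrawal_periods_observed(fci: FoodChainInformation, slaughter_day: date) -> bool:
    """true only if every prescribed withdrawal period has elapsed by slaughter."""
    return all(
        t.administered + timedelta(days=t.withdrawal_days) <= slaughter_day
        for t in fci.treatments
    )

fci = FoodChainInformation(
    animal_id="example-0001",
    birth_date=date(2011, 3, 1),
    treatments=[Treatment("anthelmintic drench", date(2011, 9, 1), 28)],
)
print(withdrawal_periods_observed(fci, date(2011, 10, 15)))  # True: 28+ days elapsed
```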
mss must assign the task of coordinating the implementation of the controls to a central public body. this public body is responsible for drawing up the national plan, coordinating the activities of the central and regional bodies responsible for monitoring the various residues, collecting the data and sending the results of the surveys undertaken to the commission each year. the nrcp should be targeted; samples should be taken on-farm and at abattoir level with the aim of detecting illegal treatment or controlling compliance with the maximum residue limits (mrls) for vmps according to commission regulation (eu) no / , with the maximum residue levels (mrls) for pesticides as set out in regulation (ec) no / , or with the maximum levels (mls) for contaminants as laid down in commission regulation (ec) no / . this means that in the national monitoring plans the mss should target those groups of animals/gender/age combinations in which the probability of finding residues is highest. this approach differs from random sampling, in which the objective is to gather statistically representative data, for instance to evaluate consumer exposure to a specific substance. the minimum number of animals to be checked for all kinds of residues and substances must be at least equal to . % of the sheep and goats over three months of age slaughtered the previous year, with the following breakdown (further details on group a and b compounds are presented in section . ; a worked numerical sketch of this allocation follows at the end of this passage):
group a: . % of the total samples. each sub-group of group a must be checked each year using a minimum of % of the total number of samples to be collected for group a; the balance is allocated according to the experience and background information of the ms.
group b: . % of the total samples, of which % must be checked for group b substances, % for group b substances and % for group b substances; the balance must be allocated according to the situation of the ms.
in the case of imports from third countries, chapter vi of council directive / /ec describes the system to be followed to ensure an equivalent level of control on such imports. in particular it specifies (i) that each third country must provide a plan setting out the guarantees which it offers as regards the monitoring of the groups of residues and substances referred to in annex i to the council directive, (ii) that such guarantees must have an effect at least equivalent to those provided for in council directive / /ec, (iii) that compliance with the requirements of and adherence to the guarantees offered by the plans submitted by third countries shall be verified by means of the checks referred to in article of directive / /eec and the checks provided for in directives / /eec and / /eec, and (iv) that mss are required to inform the commission each year of the results of residue checks carried out on animals and animal products imported from third countries, in accordance with directives / /eec and / /eec. in accordance with article of council directive / /ec, the mss are requested, as a follow-up, to provide information on actions taken at regional and national level as a consequence of non-compliant results. the commission sends a questionnaire to the mss to obtain an overview of these actions, for example when residues of non-authorised substances are detected or when the maximum residue limits/maximum levels established in eu legislation are exceeded.
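the worked numerical sketch referred to above is given here. because the percentages in the directive are not reproduced in this text, every figure in the sketch is a placeholder assumption; only the structure of the computation (an overall minimum derived from last year's slaughter throughput, split between group a and group b, with per-sub-group minima and a balance allocated by the ms) follows the description above.

```python
# illustrative nrcp minimum sampling plan; all fractions are placeholder
# assumptions standing in for the directive's figures, which are not
# reproduced in the text above.
def minimum_sampling_plan(slaughtered_last_year: int,
                          overall_fraction: float = 0.0005,  # placeholder
                          group_a_share: float = 0.30,       # placeholder
                          min_per_subgroup: float = 0.05,    # placeholder
                          subgroups_a: int = 6) -> dict:
    total = int(slaughtered_last_year * overall_fraction)
    group_a = int(total * group_a_share)
    group_b = total - group_a
    per_subgroup_a = int(group_a * min_per_subgroup)    # minimum per sub-group
    balance_a = group_a - per_subgroup_a * subgroups_a  # allocated by ms experience
    return {
        "total_samples": total,
        "group_a_samples": group_a,
        "group_a_minimum_per_subgroup": per_subgroup_a,
        "group_a_balance_for_ms_allocation": balance_a,
        "group_b_samples": group_b,
    }

print(minimum_sampling_plan(5_000_000))
```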
in summary, under council directive / /ec the term 'suspect sample' applies to a sample taken as a consequence of: non-compliant results, and/or suspicion of an illegal treatment, and/or suspicion of non-compliance with the withdrawal periods. non-compliant results for a specific substance or group of substances, or for a specific food commodity, should result in intensified controls for this substance/group or food commodity in the plan for the following year. it should be noted that targeted sampling as defined by council directive / /ec aims at monitoring certain substances and residues thereof in live animals and animal products across eu mss. in contrast to monitoring, under suspect sampling a 'suspect' carcass has to be detained at the abattoir until laboratory results confirm or deny conformity with the legislative limits for chemical residues. based on the test results, the carcass can be declared fit or unfit for human consumption. in the first scenario, the carcass is released into the human food chain, whereas in the second case the carcass is disposed of. in addition to the minimum testing requirements which form part of the nrcps, council directive / /ec also establishes the requisites for self-monitoring and co-responsibility on the part of operators. in accordance with article , chapter iii of council directive / /ec, mss shall ensure that the owners or persons in charge of the establishment of initial processing of primary products of animal origin (slaughterhouses) take all necessary measures, in particular by carrying out their own checks, to:
- accept only those animals for which the producer is able to guarantee that withdrawal times have been observed
- satisfy themselves that the farm animals or products brought into the slaughterhouse do not contain residue levels which exceed maximum permitted limits and that they do not contain any trace of prohibited substances or products.
farmers and food processors (including slaughterhouses) must place on the market only:
- animals to which no unauthorised substances or products have been administered or which have not undergone illegal treatment
- animals for which, where authorised products or substances have been administered, the withdrawal periods prescribed for these products or substances have been observed.
tor : identification, classification and ranking of substances of potential concern
in the current eu legislation, chemical residues and contaminants in live animals and animal products intended for human consumption are addressed in council directive / /ec. as one of the objectives of this assessment of current meat inspection protocols is the identification of chemical substances of potential concern that may occur as residues or contaminants in sheep and goats but have not been specifically addressed in council directive / /ec, a more general grouping of chemical substances was chosen, resulting in the following three major groups:
- substances that are prohibited for use in food-producing animals, corresponding to group a substances in council directive / /ec
- veterinary drugs, also denoted vmps, corresponding to groups b and b substances in council directive / /ec
- contaminants, corresponding to group b substances in council directive / /ec.
the first group of chemicals that may occur in edible tissues as residues are those substances prohibited for use in food-producing animals; these substances correspond largely with group a substances in council directive / /ec.
there were different rationales for banning these substances for application to animals, and the list of group a substances comprises compounds that are of toxicological concern (including vmps for which an acceptable daily intake (adi) could not be established), as well as substances having anabolic effects and pharmacologically active compounds that may alter meat quality and/or affect animal health and welfare. a second group of chemicals that may be a source of residues in animal-derived foods are vmps (including antibiotics, anti-parasitic agents and other pharmacologically active substances) and authorised feed additives used in the health care of domestic animals; these substances correspond largely with group b and b substances in council directive / /ec. these substances have been subjected to assessment and pre-marketing approval. with regard to antibacterial agents, it is important to state that the ranking of substances of concern in this part of the document considers only the toxicological concerns related to the presence of residues. other aspects, such as the emergence of antimicrobial resistance, are considered by the efsa panel on biological hazards (biohaz panel) in a separate part of this opinion (see appendix a of the biohaz panel). a third group of chemical substances that may occur in edible tissues of sheep and goats are contaminants that may enter the animal's body mainly via feed, ingested soil, drinking water, inhalation or direct (skin) contact; these substances include the group b substances in council directive / /ec. feed materials can contain a broad variety of undesirable substances comprising persistent environmental pollutants, toxic metals and other elements, as well as natural toxins, including toxic secondary plant metabolites and fungal toxins (mycotoxins). feed producers have to act in compliance with commission directive / /ec, which lists the undesirable substances in feed and feed materials and sets maximum contents in feed materials or complete feedingstuffs. in a recent re-assessment of these undesirable substances in animal feeds, the contam panel re-evaluated the risk related to exposure to these substances for animals. special attention was given to toxic compounds that accumulate or persist in edible tissues, including meat, or that are directly excreted into milk and eggs. the term "dioxins" used in this opinion refers to the sum of polychlorinated dibenzo-p-dioxins (pcdds) and polychlorinated dibenzofurans (pcdfs). as an early warning tool, the european commission has set action levels for dioxins and dl-pcbs in food through commission recommendation / /ec. owing to the fact that their sources are generally different, separate action levels for dioxins and dl-pcbs were established. the action levels for meat and meat products of sheep are . pg who-teq/g fat for dioxins and . pg who-teq/g fat for dl-pcbs. in cases where levels of dioxins and/or dl-pcbs in excess of the action levels are found, it is recommended that mss, in cooperation with fbos, initiate investigations to identify the source of contamination, take measures to reduce or eliminate the source of contamination and check for the presence of ndl-pcbs.
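as a small illustration of how such an action-level check works in practice (the numeric levels are elided in the text above, so the values below are placeholders), a measured result is compared separately against the dioxin and the dl-pcb action level, and an exceedance of either triggers a source investigation rather than, by itself, rejection of the meat:

```python
# illustrative action-level check for dioxins and dl-pcbs in sheep meat;
# the action levels used here are placeholders, not the values set in the
# commission recommendation referred to above.
DIOXIN_ACTION_LEVEL_PG_TEQ_PER_G_FAT = 1.75   # placeholder
DL_PCB_ACTION_LEVEL_PG_TEQ_PER_G_FAT = 1.75   # placeholder

def action_level_check(dioxins_pg_teq_g_fat: float, dl_pcbs_pg_teq_g_fat: float) -> str:
    exceedances = []
    if dioxins_pg_teq_g_fat > DIOXIN_ACTION_LEVEL_PG_TEQ_PER_G_FAT:
        exceedances.append("dioxins")
    if dl_pcbs_pg_teq_g_fat > DL_PCB_ACTION_LEVEL_PG_TEQ_PER_G_FAT:
        exceedances.append("dl-pcbs")
    if not exceedances:
        return "below action levels: no follow-up triggered"
    # an exceedance triggers source investigation, source reduction/elimination
    # and a check for ndl-pcbs, as described in the text above
    return ("exceedance for " + " and ".join(exceedances)
            + ": investigate and reduce the contamination source; check ndl-pcbs")

print(action_level_check(2.4, 0.9))
```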
maximum residue levels for certain elements in sheep and goats are also laid down in regulation (ec) no / of the european parliament and of the council on maximum residue levels of pesticides in or on food and feed of plant and animal origin, related to the use of copper-containing and mercury-containing compounds as pesticides. for copper, the maximum residue levels are mg/kg each for meat and fat, and mg/kg each for liver, kidney and edible offal. for mercury compounds (sum of mercury compounds expressed as mercury), the maximum residue levels are . mg/kg each for meat, fat, liver, kidney and edible offal. a multi-step approach was used for ranking the potential concern of the three groups of substances that are presented in sections . and . . the steps are (a schematic sketch of this ranking follows at the end of this passage):
- evaluation of the outcomes of the nrcps, indicating the number of results that are non-compliant with the current legislation
- evaluation of the likelihood that specific residues or contaminants, including 'new hazards', may be present in sheep and goat carcasses
- consideration of the toxicological profile of the chemical substances.
data from the nrcps are published annually, and these data were considered as the first step for hazard ranking. aggregated data for the outcome of the nrcps for targeted sampling of sheep and goats from to are presented in tables - . results from suspect sampling are not included, as these results are considered not to be representative of the actual occurrence of chemical residues and contaminants. as stated above, suspect sampling arises (i) as a follow-up to the occurrence of a non-compliant result, and/or (ii) on suspicion of illegal treatment at any stage of the food chain, and/or (iii) on suspicion of non-compliance with the withdrawal periods for authorised vmps (articles , and of council directive / /ec, respectively). a non-compliant result refers to an analytical result exceeding the permitted limits or, in the case of prohibited substances, any level measured with sufficient statistical certainty that it can be used for legal purposes. it should be noted that information on the total number of analyses performed for an individual substance is only transmitted by those mss that reported at least one non-compliant result for that substance. therefore, it is not possible to extract from the data supplied complete information on the individual substances from each sub-group tested, or on the number of samples tested for an individual substance where no non-compliant result is reported. in addition, in some cases the same samples were analysed for different substance groups/sub-groups, and therefore the number of substance groups/sub-groups tested is higher than the total number of samples collected from sheep and goats. it is also to be noted that there is a lack of harmonisation regarding the details provided on non-compliant results for the nrcps from the mss. this hampers the interpretation and the evaluation of these data. moreover, in some cases no information is available on the nature of the positive samples (i.e. whether they refer to muscle, liver, kidney, skin/fat or other samples), and the results often give no indication of the actual measured concentrations of residues or contaminants. as a result, in the absence of substance-specific information and of the actual concentration of a residue or contaminant measured, these data do not allow for an assessment of consumer exposure.
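the schematic sketch of the multi-step ranking referred to above follows. the combination rule is an illustrative assumption: the contam panel's actual ranking was an expert, qualitative judgement over the three criteria, not an algorithm.

```python
# schematic sketch of the three-criterion ranking into the four categories of
# potential concern; the scoring rule is an illustrative assumption, not the
# contam panel's method, which was a qualitative expert judgement.
def rank_substance(frequent_non_compliance: bool,
                   occurrence_likely: bool,        # incl. bioaccumulation evidence
                   toxicological_concern: bool) -> str:
    score = sum([frequent_non_compliance, occurrence_likely, toxicological_concern])
    return {3: "high", 2: "medium", 1: "low", 0: "negligible"}[score]

# dioxins/dl-pcbs: findings above mls, bioaccumulation, toxic profile -> "high"
print(rank_substance(True, True, True))
# a substance with rare non-compliance, unlikely occurrence, low toxicity
print(rank_substance(False, False, False))  # -> "negligible"
```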
in addition, particularly in the case of prohibited substances, much of the testing may be done in matrices such as urine, faeces and hair, and so no data on residue levels in edible tissues are available. another problem with interpreting the data provided arises from the failure to clearly identify in all cases (i) the proportion of the total samples tested that are of sheep and that are of goat and (ii) whether a particular non-compliant result refers to a sample from a sheep or from a goat. in spite of the limitations highlighted above, an overall assessment of these data indicates that the percentage of non-compliant results is of a low order of magnitude compared with the total number of samples tested. a summary of the data presented in the previous tables (tables - , published at http://ec.europa.eu/food/food/chemicalsafety/residues/control_en.htm) shows that ( . %) of the samples analysed in the eu nrcps during the period - were non-compliant for one or more substances listed in annex i of council directive / /ec. further details are presented in table . as mentioned above, one sample can be non-compliant for multiple substances, so that the number of non-compliant results is higher than the number of non-compliant samples; for example, for b substances, there were non-compliant results in non-compliant samples. note also that some samples were analysed for several substances in different sub-groups (e.g. the same sample analysed for b a, b b and b c), so a group total represents the number of samples analysed for at least one substance in the group. it should be noted that the data in tables - provide the results for sampling and testing carried out by mss under the terms of council directive / /ec within the nrcps. however, there may be other chemical substances of relevance for control in sheep and goats, particularly in the case of contaminants, which are not included in the nrcps at all or which are not systematically covered by the nrcps. some of these substances are addressed further under tor of this opinion ('new hazards'). of the total number of samples taken for analysis during the period - , . % were taken at farm level while the remaining . % were taken at slaughterhouse level. no information on the types of animals sampled is readily available. the results indicate that: (i) . % of the total samples were non-compliant for one or more substances, with . %, . % and . % being non-compliant for group a, group b /b and group b substances, respectively; (ii) . % of all samples taken at farm level were non-compliant for one or more substances, with . %, . % and . % being non-compliant for group a, group b /b and group b substances, respectively; and (iii) . % of all samples taken at slaughterhouse level were non-compliant for one or more substances, with . %, . % and . % being non-compliant for group a, group b /b and group b substances, respectively. the highest proportion of non-compliant results overall ( . %) was for group b substances (contaminants), largely representing exceedances of the mls/mrls specified for these substances. the proportions of non-compliant results overall for group a (prohibited substances, . %) and for group b /b substances (vmps, . %) largely represent illicit use of prohibited substances and exceedances of the mrls specified for vmps, respectively.
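the farm versus slaughterhouse comparisons drawn in the next paragraph reduce to simple rate arithmetic over these aggregates; a sketch with invented counts (the real figures are in the published nrcp tables):

```python
# Non-compliance percentages broken down by substance group and by the level
# (farm vs slaughterhouse) at which the sample was taken. Counts are invented.

counts = {
    # (level, group): (samples_tested, non_compliant)
    ("farm", "A"):              (10_000, 5),
    ("farm", "B1/B2"):          (12_000, 60),
    ("slaughterhouse", "A"):    (30_000, 45),
    ("slaughterhouse", "B1/B2"): (25_000, 50),
}

def rate(level: str, group: str) -> float:
    tested, nc = counts[(level, group)]
    return 100.0 * nc / tested

for (level, group) in counts:
    print(f"{level:15s} {group:6s} {rate(level, group):.2f}% non-compliant")
```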
an analysis of the results for sampling at farm level compared with slaughterhouse level indicates that, for prohibited substances (group a), the rate of non-compliant results determined for sampling at farm level is considerably lower than that for sampling at slaughterhouse level. the majority ( %) of samples found to be non-compliant for prohibited substances relate to those having anabolic effects (thyreostats, steroids, zeranol, beta-agonists) and only a minority ( %) were non-compliant for substances such as chloramphenicol, nitrofurans and nitroimidazoles. while the incidence of non-compliant results from farm-level sampling is low, such sampling is an integral component of the system for controlling illicit use of prohibited substances in food-producing animals, particularly in the case of substances having anabolic effects. in the case of vmps (group b /b ), the rate of non-compliant results determined at farm level is markedly higher than for sampling at slaughterhouse level. however, slaughterhouse-level sampling is more appropriate for identifying non-compliant samples for vmps, based on compliance with or exceedance of the specified mrls in edible tissues. in the case of contaminants (group b ), the rate of non-compliant results determined for sampling at slaughterhouse level is almost twofold higher than for sampling at farm level. indeed, sampling for group b substances is generally more appropriate at slaughterhouse level, where identification of non-compliant results, based on compliance with or exceedance of specified mrls/mls in edible tissues, can be made. it should be noted also that a direct comparison of data from the nrcp over the years is not entirely appropriate, as the test methods used and the number of samples tested for an individual residue varied between mss, and the specified mrls/mls for some substances may change over time. in addition, there are ongoing improvements in analytical methods, in terms of method sensitivity, accuracy and scope (i.e. the number of substances covered by a method), which affect inter-year and inter-country comparisons. therefore, the cumulative data from the nrcps provide only a broad indication of the prevalence and nature of non-compliant samples. in conclusion, this compilation of data clearly indicates the low prevalence of abiotic hazards (residues and contaminants) in edible tissues of sheep and goats. only . % of the total number of analysed samples was non-compliant for one or more substances listed in annex i of council directive / /ec. based on these results, it can be concluded that potentially higher exposure of consumers to these substances from edible tissues of sheep and goats takes place only incidentally, as a result of mistakes or non-compliance with known and regulated procedures. the available aggregated data indicate the number of samples that were non-compliant with the current legislation. however, in the absence of species- and substance-specific information, such as the tissues used for residue analysis and the actual concentration of a residue or contaminant measured, these data do not allow for a reliable assessment of consumer exposure. while the data from the annual nrcp testing by mss indicate a relatively low incidence of non-compliant results for sheep and goats, there may be human health concerns regarding certain contaminants.
for example, an evaluation undertaken by efsa (efsa contam panel, b) on the risk to public health related to the presence of high levels of dioxins and dioxin-like pcbs in liver from sheep (and deer) concluded that regular consumption of sheep liver would result, on average, in an approximate % increase of the median background exposure to dioxins and dioxin-like pcbs (dl-pcbs) for adults. the study also concluded that, on individual occasions, consumption of sheep liver could result in high intakes exceeding the tolerable weekly intake (twi), and that the frequent consumption of sheep liver, particularly by women of child-bearing age and children, may be a potential health concern. independently of the occurrence data as reported from the nrcps, each substance or group of chemical substances that may enter the food chain was also evaluated for the likelihood that potentially toxic or undesirable substances might occur in sheep and goat carcasses. for prohibited substances and vmps/feed additives, the following criteria were used: (i) the likelihood of the substance(s) being used in an illicit or non-compliant way in sheep and goats (suitability for sheep and goat production; commercial advantages); (ii) the potential availability of the substance(s) for illicit or non-compliant usage in sheep and goat production (allowed usage in third countries; availability in a form suitable for use in sheep and goats; non-authorised supply chain availability ('black market'); common or rare usage as a commercial licensed product); (iii) the likelihood of the substance(s) occurring as residue(s) in edible tissues of sheep and goats based on the kinetic data (pharmacokinetic and withdrawal period data; persistence characteristics; special residue issues, e.g. bound residues of nitrofurans); and (iv) the toxicological profile and nature of the hazard, and the relative contribution of residues in sheep and goats to dietary human exposure. for contaminants, the following criteria were considered: (i) the prevalence (where available) of occurrence of the substances in animal feeds/forages and pastures, and the specific environmental conditions in which the animals are raised; (ii) the level and duration of exposure, tissue distribution and deposition, including accumulation in edible tissues of sheep and goats; and (iii) the toxicological profile and nature of the hazard, and the relative contribution of residues in sheep and goats to dietary human exposure. considering the above-mentioned criteria, a flow-chart approach was used for ranking the chemical residues and contaminants of potential concern. the outcome of the nrcps (indicating the number of non-compliant results), the evaluation of the likelihood that residues of substances of potential concern can occur in sheep and goats, and the toxicological profile of the substances were considered in the development of the general flow chart, presented in figure . figure : general flow chart used for the ranking of residues and contaminants of potential concern that can be detected in sheep and goats. ml, maximum level; mrl, maximum residue limit; nrcp, national residue control plan. notes to the figure: (a) contaminants from the soil and the environment, associated with feed material, are considered to be part of the total feed for the purposes of this opinion; (b) potential concern was based on the toxicological profile and nature of the hazard for the substances; (c) the contam panel notes that the ranking of vmps/feed additives was carried out in the general context of authorised usage of these substances in terms of doses, route of treatment, animal species and withdrawal periods; this ranking is therefore made within the framework of the current regulations and controls and within the context of a low rate of exceedances in the nrcps; (d) the category definitions are provided in the next section.
outcome of the ranking of residues and contaminants of potential concern that can occur in sheep/goat carcasses. four categories were established resulting from the application of the general flow chart: (i) negligible potential concern: substances irrelevant in sheep/goat production (no known use at any stage of production); no evidence for illicit use or abuse in sheep/goats; not or very seldom associated with exceedances of mrls in nrcps; no evidence of occurrence as a contaminant in feed for sheep/goats. (ii) low potential concern: vmps/feed additives which have an application in sheep/goat production and for which residues above mrls are found in control plans, but which are of low toxicological concern; contaminants and prohibited substances with a toxicological profile that does not include specific hazards following accidental exposure of consumers and which are generally not found, or are not found above mls, in sheep/goats. (iii) medium potential concern: contaminants and prohibited substances to which sheep/goats are known to be exposed and/or with a history of misuse, with a toxicological profile that does not entirely exclude specific hazards following accidental exposure of consumers; evidence for residues of prohibited substances being found in sheep/goats; contaminants generally not found in concentrations above the mrls/mls in edible tissues of sheep/goats. (iv) high potential concern: contaminants and prohibited substances to which sheep/goats are known to be exposed and with a history of misuse, with a distinct toxicological profile comprising a potential concern to consumers; evidence for ongoing occurrence of residues of prohibited substances in sheep/goats; evidence for ongoing occurrence of, and exposure of sheep/goats to, feed contaminants.
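read as a decision procedure, the flow chart and the four categories above distil to something like the following sketch; the boolean flags paraphrase the criteria loosely, and a real ranking also weighs kinetic data and the detail of the nrcp outcomes, so this is illustrative only.

```python
# Loose sketch of the four-way ranking flow chart as a decision function.

def rank(relevant_to_sheep_goats: bool,
         evidence_of_exposure_or_misuse: bool,
         residues_found_in_nrcps_or_feed: bool,
         distinct_toxicological_concern: bool) -> str:
    if not relevant_to_sheep_goats and not evidence_of_exposure_or_misuse:
        return "negligible potential concern"   # no known use, no findings
    if not distinct_toxicological_concern and not evidence_of_exposure_or_misuse:
        return "low potential concern"          # e.g. authorised VMPs
    if (evidence_of_exposure_or_misuse and residues_found_in_nrcps_or_feed
            and distinct_toxicological_concern):
        return "high potential concern"         # e.g. dioxins and dl-PCBs
    return "medium potential concern"           # e.g. stilbenes, thyreostats

print(rank(True, True, True, True))     # -> high potential concern
print(rank(True, False, False, True))   # -> medium (toxic, but no findings)
print(rank(False, False, False, False)) # -> negligible potential concern
```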
. . . . substances classified in the category of high potential concern . . . . . contaminants: dioxins, dioxin-like polychlorinated biphenyls (dl-pcbs). in the high potential concern category are dioxins and dioxin-like polychlorinated biphenyls (dl-pcbs), as the occurrence data from the monitoring programmes show a number of incidents due to contamination of feed, such as illegal disposal of dioxin- and dl-pcb-containing waste materials into feed components, or open drying of feed components with dioxin-containing fuel materials. (a) dioxins. dioxins are persistent organochlorine contaminants that are not produced intentionally and have no targeted use, but are formed as unwanted and often unavoidable by-products in a number of thermal and industrial processes. because of their low water solubility and high lipophilic properties, they bioaccumulate in the food chain and are stored in the fatty tissues of animals and humans. the major pathway of human dioxin exposure is via consumption of food of animal origin, which generally contributes more than % of the total daily dioxin intake (efsa, ). a number of incidents in the past years were caused by contamination of feed with dioxins; examples are the feeding of contaminated citrus pulp pellets or kaolinitic clay containing potato peel, and the mixing of compound feed with contaminated fatty acids. all these incidents were caused by grossly negligent or criminal actions and led to widespread contamination of feed and subsequently to elevated dioxin levels in the animals and the foodstuffs produced from them. monitoring programmes also demonstrated that certain food commodities, such as sheep liver, can have high dioxin levels even when not affected by specific contamination sources. in , the contam panel delivered a scientific opinion on the risk to public health related to the presence of high levels of dioxins and dl-pcbs in liver from sheep and deer (efsa contam panel, b). efsa evaluated, inter alia, the dioxin and pcb results from sheep liver and sheep meat samples submitted by eight european countries. almost all sheep meat samples were below the relevant mls set by regulation (ec) no / . however, in more than half of the cases, the corresponding liver samples from the same sheep considerably exceeded the relevant maximum levels. this finding is likely to be associated with differences in the level of biotransformation enzymes in sheep compared with bovine animals. dioxins have a long half-life and are accumulated in various tissues. the findings of elevated levels in food are of public health concern owing to their potential effects on the liver, the thyroid, immune function, reproduction and neurodevelopment (efsa, a, ). the available data indicate that a substantial part of the european population is in the range of, or already exceeding, the twi for dioxins and dl-pcbs. a report on "monitoring of dioxins and pcbs in food and feed" (efsa, ) estimated that between . % and . % of individuals were exposed above the twi of pg teq/kg body weight (b.w.) for the sum of dioxins and dl-pcbs. in addition to milk and dairy products and fish and seafood, meat and meat products also contributed significantly to total exposure. owing to the high toxic potential of dioxins and the incidence of samples of sheep meat and sheep liver exceeding the maximum limits, efforts need to be undertaken to reduce exposure where possible. in summary, based on the high toxicity and the low maximum levels set for meat and fat of sheep (see table ), and considering that food of animal origin contributes significantly (> %) to human exposure, dioxins have been ranked in the category of substances of high potential concern. (b) dioxin-like polychlorinated biphenyls (dl-pcbs). in contrast to dioxins, pcbs had widespread use in numerous industrial applications, generally in the form of complex technical mixtures. owing to their physicochemical properties, such as non-flammability, chemical stability, high boiling point, low heat conductivity and high dielectric constants, pcbs were widely used in industrial and commercial closed and open applications. they were produced for over four decades, from onwards until they were banned, with an estimated total world production of . - . million tonnes. according to council directive / /ec, mss were required to take the necessary measures to ensure that used pcbs were disposed of and that equipment containing pcbs was decontaminated or disposed of by the end of at the latest. earlier experience has shown that illegal practices of pcb disposal may occur, resulting in considerable contamination of animals and foodstuffs of animal origin. on the other hand, monitoring programmes also demonstrated that certain food commodities, such as sheep liver, can have high pcb levels even when not affected by specific contamination sources.
this was demonstrated by efsa in its scientific opinion on the risk to public health related to the presence of high levels of dioxins and pcbs in liver from sheep and deer (efsa contam panel, b). efsa evaluated, inter alia, the dioxin and pcb results from sheep liver and sheep meat samples submitted by eight european countries. for sheep liver, the mean upper-bound concentration for dl-pcbs (expressed as who-teq) was . (range: . - . ) pg who-teq/g fat. the corresponding levels in sheep meat were considerably lower: . (range: . - . ) pg who-teq/g fat (efsa contam panel, b). based on structural characteristics and toxicological effects, pcbs can be divided into two groups. one group consists of congeners that can easily adopt a coplanar structure and have the ability to bind to the aryl hydrocarbon (ah) receptor, thus showing toxicological properties similar to dioxins (effects on the liver, the thyroid, immune function, reproduction and neurodevelopment). this group of pcbs is therefore called "dioxin-like pcbs". the other pcbs do not show dioxin-like toxicity but have a different toxicological profile, in particular with respect to effects on the developing nervous system and neurotransmitter function. this group of pcbs is called "non dioxin-like pcbs" (see below). as dl-pcbs, in general, show lipophilicity, bioaccumulation, toxicity and a mode of action comparable to those of dioxins (efsa, a), these two groups of environmental contaminants are regulated together in european legislation and are considered together in risk assessments. based on the high toxicity, widespread use and potential for improper disposal practices of technical pcb mixtures, dl-pcbs have been ranked in the category of substances of high potential concern.
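the "upper bound" convention in the figures cited above has a simple meaning for left-censored results: non-detects are set to the limit of quantification before averaging (and to zero for the lower bound). a sketch with invented values:

```python
# Lower-bound / upper-bound means for left-censored residue data.
# LOQ and the results are invented for illustration.

LOQ = 0.05  # pg WHO-TEQ/g fat, illustrative

# None marks a result below the LOQ (a non-detect)
results = [0.30, None, 0.12, None, 0.45]

lower = sum(r if r is not None else 0.0 for r in results) / len(results)
upper = sum(r if r is not None else LOQ for r in results) / len(results)

print(f"lower-bound mean = {lower:.3f}, upper-bound mean = {upper:.3f}")
# the true mean lies somewhere between the two bounds
```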
. . . . substances classified in the category of medium potential concern . . . . . prohibited substances: stilbenes, thyreostats, gonadal (sex) steroids, resorcylic acid lactones, beta-agonists, chloramphenicol and nitrofurans. (a) stilbenes. the toxicity of stilbenes is well established (for review, see waltner-toews and mcewen, ) and this has led to their prohibition for use as growth promoters in animals in most countries, based also on their involvement in the baby food scandal in the late s (loizzo et al., ). in particular, diethylstilbestrol is a proven human genotoxic carcinogen (group ; international agency for research on cancer (iarc), ), while sufficient evidence for hexestrol, and limited evidence for dienestrol, of carcinogenicity in animals was found (iarc, ). diethylstilbestrol is associated with cancer of the breast in women who were exposed while pregnant, and also causes adenocarcinoma of the vagina and cervix in women who were exposed in utero; finally, a positive association has been observed between exposure to diethylstilbestrol and cancer of the endometrium, and between in utero exposure to diethylstilbestrol and squamous cell carcinoma of the cervix and cancer of the testis. in , the use of stilbenes in all species of food-producing animals was prohibited in the european community by directive / /eec. diethylstilbestrol, and other stilbenes such as hexestrol and dienestrol, are likely to be available on the black market and, therefore, might be available for illicit use in sheep and goats. no non-compliant results for stilbenes in sheep and goat samples have been reported from the european nrcps - , indicating that abuse of stilbenes in sheep and goat production in the eu is unlikely. considering that stilbenes have proven toxicity for humans, these substances are ranked as of medium potential concern. however, considering that there is no evidence for current use of stilbenes in sheep and goat production and that no non-compliant results have been found over a number of years of nrcp testing, control measures for stilbenes might be focused on identifying any potential future abuse of these substances in sheep and goat production in the eu. (b) thyreostats. thyreostats are a group of substances that inhibit thyroid function, resulting in decreased production of the thyroid hormones triiodothyronine (t ) and thyroxine (t ). enlargement of the thyroid gland has been proposed as a criterion to identify illicit use of these compounds (vos et al., ; vanden bussche et al., ). they are used in human medicine, and in the medicine of non-food-producing animals, to treat hyperthyroidism. the use of thyreostats for animal fattening is based on weight gain caused by filling of the gastrointestinal tract and retention of water in muscle tissues (courtheyn et al., ). synthetic thyreostats include thiouracil, methylthiouracil, propylthiouracil, methimazole, tapazol (methylmercaptoimidazole) and mercaptobenzimidazole (mbi). use of synthetic thyreostats in food-producing animals has been prohibited in the eu since (council directive / /ec). naturally occurring thyreostats include thiocyanates and oxazolidine- -thiones, which are present as glucosinolates in plant material such as the seeds of cruciferae, like rapeseed (efsa, b; vanden bussche et al., ). evidence for the occurrence of thiouracil in the urine of cattle fed a cruciferous-based diet has been demonstrated (pinel et al., ). thyreostats are very widely available on the black market, so there is the possibility of illicit use in sheep/goat production. the results from the european nrcps - show that sheep/goat samples were found to be non-compliant for thyreostats ( non-compliant results out of the total samples analysed for thyreostats). however, it has been shown that the source of the generally low levels of thiouracil determined in urine samples may be exposure of the animals through their diet (le bizec et al., ). some mss reporting the highest numbers of non-compliant samples for thiouracil state that "the presence of thiouracil in low concentrations may be due to the animals eating cruciferous plant material" and that, "in line with scientific evidence, the competent authority has concluded that the residues resulted from dietary factors". thyreostats have been considered to be carcinogenic and teratogenic. while in utero exposure to methimazole or propylthiouracil has been associated with aplasia cutis and a number of other congenital defects (löllgen et al., ; rodriguez-garcia et al., ), an iarc evaluation found inadequate evidence in humans, but limited evidence (in the case of methimazole) and sufficient evidence (in the case of thiouracil, methylthiouracil and propylthiouracil) in experimental animals for carcinogenicity (iarc, ; efsa, b). thyreostats are prohibited substances owing to their potential toxicity to humans and their efficacy as growth promoters in sheep/goats but, considering that the non-compliant results that have been found in most years of nrcp testing have been attributed largely to a dietary source, these substances are ranked as of medium potential concern. control measures for thyreostats might focus on identifying potential abuse of these substances in sheep and goat production in the eu.
(c) gonadal (sex) steroids. a broad range of steroids derived from oestrogens, androgens and progestagens is available and has been used as growth-promoting agents in food-producing animals. there is an extensive body of animal production research demonstrating the efficacy of anabolic steroids, often in combination treatments of an oestrogen and an androgen (or progestagen), as growth promoters. all use of steroids as growth-promoting agents in food-producing animals is banned according to council directive / /ec, as amended by directives / /ec and / /ec. the latter included β-oestradiol in the list of prohibited substances owing to its demonstrated tumour-promoting (epigenetic) and tumour-initiating (genotoxic) properties (russo et al., ). certain uses of β-oestradiol, progesterone and medroxyprogesterone acetate in sheep and/or goats are allowed for therapeutic or zootechnical purposes only (commission regulation (eu) no / ). there is evidence that anabolic steroids are of economic value for farmers, as animals respond to their application with increased growth rate and feed conversion efficiency. anabolic steroids are widely available on the black market, so there is the possibility of illicit use in sheep and goat production. it is not possible, however, to derive an accurate estimate of the level of abuse of anabolic steroids in european sheep and goat production from the nrcp data. there are divergent views on the potential adverse effects for the consumer of residues of anabolic steroids in edible tissues of treated animals. there is concern regarding the carcinogenic effects of oestrogenic substances, and the long-term effects of exposure of prepubescent children to oestrogenic substances. in , the scientific committee on veterinary measures relating to public health (scvph) performed an assessment of the potential risks to human health from hormone residues in bovine meat and meat products (scvph, , , ), particularly as regards the three natural hormones (β-oestradiol, testosterone, progesterone) and the three synthetic analogues (zeranol, trenbolone acetate, melengestrol acetate) that may be legally used as growth promoters in third countries. it was concluded that, taking into account both the hormonal and non-hormonal toxicological effects, the issues of concern include neurobiological, developmental, reproductive and immunological effects, as well as immunotoxicity, genotoxicity and carcinogenicity. in consideration of concerns relating to the lack of understanding of critical developmental periods in human life, as well as uncertainties in the estimates of endogenous hormone production rates and metabolic clearance capacity, particularly in prepubertal children, no threshold level, and therefore no adi, could be established for any of the six hormones. according to iarc, β-oestradiol and steroidal oestrogens are classified as proven human carcinogens (group ), and androgenic (anabolic) steroids as probably carcinogenic to humans (group a); for most progestagens, evidence for human carcinogenicity is inadequate, while that for animals varies from sufficient to inadequate (iarc, ). notwithstanding the toxicological profile of gonadal (sex) steroids, owing to the low prevalence of non-compliant samples from confirmed illicit use in the nrcps, these substances are ranked as of medium potential concern. (d) resorcylic acid lactones (rals). in the eu, zeranol was evaluated together with other hormonal growth promoters by the scvph (scvph, , ).
in these scientific opinions it was concluded that, taking into account both the hormonal and non-hormonal toxicological effects, no adi could be established for any of the six hormones, including zeranol. use of zeranol as a growth promoter in cattle production was widespread in some mss prior to its prohibition in europe in . zeranol is widely available as a commercial product and is used extensively in third countries. hence, it is readily available on the market and there is the possibility of its illicit use in cattle production in the eu. zeranol is derived from, and can also occur as, a metabolite of the mycotoxin zearalenone, produced by fusarium spp. the results from the european nrcps - show sheep/goat samples non-compliant for resorcylic acid lactones (a total of seven non-compliant results out of the total samples analysed). however, it has been shown that the source of the generally low levels of zeranol and its metabolites determined in these samples may be exposure of the sheep/goats to the mycotoxin zearalenone through their diet (efsa, a). some mss reporting non-compliant results for zeranol and its metabolites state that "the residue was found to be as a result of feed contamination on the farm" and that it was "probably attributable to mycotoxin contamination of feed". rals are prohibited substances owing to their potential toxicity to humans and their efficacy as growth promoters in sheep and goats but, considering that the non-compliant results that have been found in some years of nrcp testing have been attributed largely to a dietary source, these substances are ranked as of medium potential concern. control measures for rals might focus on identifying potential abuse of these substances in sheep and goat production in the eu. (e) beta-agonists. beta-agonists, or β-adrenergic agonists, have therapeutic uses as bronchodilatory and tocolytic agents. a wide range of beta-agonists has been developed, such as clenbuterol, salbutamol, cimaterol, terbutaline and ractopamine, and all of these are prohibited for use as growth-promoting agents in food-producing animals in the eu. salbutamol and terbutaline are licensed human medicines indicated for the treatment of asthma and bronchospasm conditions and for the prevention of premature labour, respectively. one of the beta-agonists, clenbuterol, is licensed for therapeutic use in cattle (as a tocolytic agent) and in the treatment of obstructive airway conditions in horses (commission regulation (eu) no / ). other beta-agonists, such as ractopamine, have been approved for use in food-producing animals in a number of third countries. treatment of sheep with beta-agonists, such as clenbuterol, results in increased muscle mass and increased carcass leanness (baker et al., ). the commercial benefits of using beta-agonists in sheep and goat production, particularly in lambs, combined with the availability of these substances, indicate that illicit use of beta-agonists as growth promoters cannot be excluded. an outbreak of collective food poisoning from the ingestion of lamb meat containing residues of clenbuterol has been reported in portugal; the symptoms shown by the intoxicated people may be generally described as gross tremors of the extremities, tachycardia, nausea, headaches and dizziness (barbosa et al., ).
in the light of the known adverse biological effects of beta-agonists in humans, particularly clenbuterol, and the efficacy of such drugs as repartitioning agents in sheep/goats, but considering that no non-compliant results for sheep/goats have been found in the nrcps since , these substances currently are ranked as of medium potential concern. (f) chloramphenicol. chloramphenicol is an antibiotic substance, first used for the treatment of typhoid in the late s. chloramphenicol may produce blood dyscrasias in humans, particularly bone marrow aplasia, or aplastic anaemia, which may be fatal. there is no clear correlation between dose and the development of aplastic anaemia, and the mechanism of induction of aplastic anaemia is not fully understood (watson, ). although the incidence of aplastic anaemia associated with exposure to chloramphenicol is apparently very low, no threshold level could be defined (emea, ). in addition, several studies suggest that chloramphenicol and some of its metabolites are genotoxic (fao/who, ; emea, ). considering the available evidence from in vitro experiments and from animal studies, as well as from a case-control study conducted in china in which there was evidence for the induction of leukaemia in patients receiving long-term treatment with chloramphenicol, iarc classified chloramphenicol as a group a substance (probably carcinogenic to humans) (iarc, ). based on these evaluations, the use of chloramphenicol in food-producing animals is prohibited within the eu to avoid the exposure of consumers to potential residues in animal tissues, milk and eggs; consequently, chloramphenicol is included in table . until its prohibition, chloramphenicol was used on food-producing animals, including sheep and goats, for the treatment of salmonella infections and for the prevention of secondary bacterial infections. currently, chloramphenicol, which is licensed for use as a broad-spectrum bacteriostatic antibacterial in pets and non-food-producing animals in the eu, is used also in some third countries for food-producing animals. hence, chloramphenicol may be available on the black market for illicit use in sheep/goat production. however, the availability for use on food-producing animals of related substances with similar antibacterial properties, thiamphenicol and florfenicol (with no toxicological concern), should mitigate the illicit use of chloramphenicol in sheep/goat production, as these alternative drugs are available as prescription medicines. non-compliant results for chloramphenicol in sheep/goats have been reported in most years' results from the european nrcps - ( non-compliant results), indicating that abuse of chloramphenicol in sheep/goat production in europe may be a continuing occurrence. chloramphenicol has proven toxicity for humans and is effective as an antibacterial treatment for sheep/goats but, considering that lower numbers of non-compliant results have been found in recent years of nrcp testing, chloramphenicol currently is ranked as of medium potential concern. (g) nitrofurans. nitrofurans, including furazolidone, furaltadone, nitrofurantoin and nitrofurazone, are very effective antimicrobial agents that, prior to their prohibition for use on food-producing animals in the eu in , were widely used on livestock (cattle, pigs, sheep and goats), in aquaculture and in bees. various nitrofuran antimicrobials are still applied in human medicine, particularly for the treatment of urinary tract infections.
a characteristic of nitrofurans is the short half-life of the parent compounds and the formation of covalently bound metabolites which, under the acidic conditions of the human stomach, may be released as active agents (hoogenboom et al., ). these covalently bound metabolites are used as marker residues for detecting the illicit use of nitrofurans in animal production. it should be noted that the metabolite semicarbazide (sem) has been shown not to be an unambiguous marker for abuse of the nitrofuran drug nitrofurazone, because the sem molecule may arise from other sources (hoenicke et al., ; samsonova et al., ; bendall, ). nitrofurans are effective in the treatment of bacterial and protozoal infections, including coccidiosis, in food-producing animals. although prohibited for use on food-producing animals in many countries, nitrofurans are likely to be available on the black market for illicit use in sheep/goat production. non-compliant results for nitrofurans in sheep/goats have been reported in most years' results from the european nrcps - , indicating that abuse of nitrofurans in sheep/goat production in europe is a continuing occurrence. a metabolite of furazolidone that can be released from covalently bound residues in tissues has been shown to be mutagenic and may be involved in the carcinogenic properties of the parent compound (emea, a). nitrofurans have proven toxicity for humans and are effective as antibacterials for sheep and goats but, considering that non-compliant results, other than for the marker residue sem, are found only sporadically in nrcp testing, these substances currently are ranked as of medium potential concern. . . . . . contaminants: non dioxin-like pcbs (ndl-pcbs), chemical elements and mycotoxins. (a) non dioxin-like pcbs (ndl-pcbs). the non dioxin-like pcbs (ndl-pcbs) show a different toxicological profile from the dl-pcbs. in , the contam panel performed a risk assessment on ndl-pcbs in food (efsa, a). in its final conclusion, the contam panel stated that no health-based guidance value for humans can be established for ndl-pcbs, because simultaneous exposure to dioxin-like compounds hampers the interpretation of the results of the toxicological and epidemiological studies, and because the database on the effects of individual ndl-pcb congeners is rather limited. there are, however, indications that subtle developmental effects, caused by ndl-pcbs, dl-pcbs, or polychlorinated dibenzo-p-dioxins/polychlorinated dibenzofurans alone, or in combination, may occur at maternal body burdens that are only slightly higher than those expected from the average daily intake in european countries. in its risk assessment, the contam panel decided to use the sum of the six pcb congeners (- , - , - , - , - and - ) as the basis for its evaluation, because these congeners are appropriate indicators for the different pcb patterns in various sample matrices and are most suitable for an assessment of ndl-pcbs on the basis of the available data. moreover, the panel noted that the sum of these six indicator pcbs represents about % of total ndl-pcbs in food (efsa, a). because of their somewhat lower toxicity compared with that of dl-pcbs, ndl-pcbs are classified in the medium potential concern category. (b) chemical elements (heavy metals: cadmium, mercury and lead). among the chemical elements, heavy metals traditionally have gained attention as contaminants in animal tissues, as they may accumulate in certain organs, particularly in the kidneys, over the lifespan of an animal.
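the age dependence of this accumulation can be made concrete with a toy one-compartment model: with a constant daily uptake and very slow elimination, the tissue burden approaches its plateau only after many half-lives, so older animals carry more. all parameter values below are invented for illustration and are not measured toxicokinetic constants.

```python
# Illustrative one-compartment sketch of why renal cadmium rises with age.

import math

def kidney_burden(daily_uptake_ug: float, half_life_days: float,
                  age_days: float) -> float:
    """Accumulated burden (µg) after `age_days` of constant exposure,
    assuming first-order elimination."""
    k = math.log(2) / half_life_days          # elimination rate constant
    return (daily_uptake_ug / k) * (1 - math.exp(-k * age_days))

for age_years in (1, 3, 6):
    b = kidney_burden(daily_uptake_ug=2.0, half_life_days=3650,
                      age_days=365 * age_years)
    print(f"age {age_years} y: burden ~ {b:,.0f} µg")
# the burden grows markedly with age, as the text notes for older animals
```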
kidney tissue from sheep forms a specific dietary component in many european cultures. exposure of animals is commonly related to contaminated feed materials, although there are older reports of accidental intoxication of animals from other sources (paints, batteries). within the framework of the re-evaluation of undesirable substances in animal feeds according to directive / /ec, the contam panel has issued several opinions addressing heavy metals and arsenic in feed materials and the transfer of these elements from feed to edible tissues, milk and eggs. cadmium (efsa, a) is a heavy metal found as an environmental contaminant, both through natural occurrence and from industrial and agricultural sources. cadmium accumulates in humans and animals, causing concentration-dependent renal tubular damage. older animals are expected to have higher concentrations of cadmium accumulated in the kidneys. most of the non-compliant results were for kidney samples, with some non-compliant results for muscle and liver being reported. mercury (efsa, a; efsa contam panel, a) exists in the environment as elemental mercury, inorganic mercury and organic mercury (primarily methylmercury). methylmercury bioaccumulates and biomagnifies along the aquatic food chain. the toxicity and toxicokinetics of mercury in animals and humans depend on its chemical form. elemental mercury is volatile and mainly absorbed through the respiratory tract, whereas its absorption through the gastrointestinal tract is limited ( - %). following absorption, inorganic mercury distributes mainly to the kidneys and, to a lesser extent, to the liver. the critical effect of inorganic mercury is renal damage. in contrast, in animals, as in humans, methylmercury and its salts are readily absorbed in the gastrointestinal tract (> %) and rapidly distributed to all tissues, although the highest concentrations are again found in the kidneys. data from mss indicated the presence of mercury in animal feeds, but the measured concentrations remained below the maximum content set for feed materials. lead (efsa contam panel, ) is an environmental contaminant that occurs naturally and, to a greater extent, from anthropogenic activities such as mining and smelting and battery manufacturing. lead is a metal that occurs in organic and inorganic forms; the latter predominate in the environment. human exposure is associated particularly with the consumption of cereal grains (except rice), cereal and cereal-based products, potatoes, leafy vegetables and tap water. the contribution of sheep and goat meat and offal to human exposure is limited. given the toxicological profile of these elements and the fact that cadmium accumulates in animals and humans, these three elements have been allocated to the group of substances of medium potential health concern. . . . . substances classified in the category of low potential concern . . . . . prohibited substances: nitroimidazoles, chlorpromazine. (a) nitroimidazoles. the -nitroimidazoles, dimetridazole, metronidazole and ronidazole, are a group of drugs having antibacterial, antiprotozoal and anticoccidial properties. owing to the potentially harmful effects of these drugs on human health (emea, b), namely carcinogenicity, mutagenicity, genotoxicity and the occurrence of covalent binding to macromolecules by metabolites with an intact imidazole structure, their use in food-producing animals is prohibited in the eu, the united states, china and other countries.
nitroimidazoles had been used as veterinary drugs for the treatment of cattle, pigs, and sheep and goats. although prohibited for use on food-producing animals, not only in the eu but also in many third countries, nitroimidazoles are likely to be available on the black market for illicit use in animal production, particularly as drugs such as metronidazole are readily available as human medicines. however, there are no clinical conditions in sheep/goats for which nitroimidazoles are particularly appropriate. non-compliant results (two) for nitroimidazoles in sheep/goats have been reported in only one year, and from one ms, in the european nrcps - , suggesting that abuse of nitroimidazoles in sheep/goat production in europe is not widespread. considering that nitroimidazoles have proven toxicity for humans and that they may be effective as antibacterial/antiprotozoal treatments for sheep/goats, these substances might be ranked as of medium potential concern. however, as only occasional non-compliant results have been found over a number of years of nrcp testing, nitroimidazoles currently are ranked as of low potential concern. (b) chlorpromazine. chlorpromazine is a sedative that is also used against motion sickness and as an anti-emetic in pets. its use is banned in food-producing animals, including sheep/goats. chlorpromazine is likely to be available as a black-market substance for illicit use in sheep/goat production. no non-compliant results for chlorpromazine were reported from the nrcps for the period - , indicating that the substance is unlikely to be used illicitly in sheep/goat production in the eu. chlorpromazine is used as an antipsychotic drug in human therapy and has long-term persistence in humans and numerous side effects, including the more common ones of agitation, constipation, dizziness and drowsiness (emea, ). chlorpromazine may be effective as a tranquilliser for sheep/goats but, since no non-compliant results have been found over a number of years of nrcp testing, chlorpromazine currently is ranked as of low potential concern. . . . . . contaminants: organochlorine pesticides, organophosphorus compounds and natural toxins. (a) organochlorine compounds. organochlorine pesticides, such as dichlorodiphenyltrichloroethane (ddt) and its metabolites, hexachlorocyclohexanes (hchs), dieldrin, toxaphene and others, have been assigned to the category of contaminants of low potential concern. the occurrence of residues of these substances has declined over the years because of their long-standing ban, and relatively low levels in animal products can be expected, as shown by the results from the nrcps - , which indicate that results out of the total of samples tested for the category of organochlorine compounds were non-compliant for organochlorine pesticides. (b) organophosphorus compounds. organophosphorus compounds are classified in council directive / /ec as group b b contaminants, although they may be used also as vmps for the therapy of parasitic infestations of sheep and goats. however, their probably infrequent use and short half-life result in these compounds being assigned to the category of low potential concern, or even of negligible potential concern where mrls are not exceeded. results from the nrcps - indicate that results out of the total of samples tested for the category of organophosphorus compounds were non-compliant. (c) natural toxins: mycotoxins and toxic plant secondary metabolites.
(c. ) mycotoxins. mycotoxins comprise a chemically diverse group of secondary metabolites of moulds which may induce intoxication in humans and animals following ingestion of contaminated food or feed materials. the mycotoxins evaluated by the contam panel as undesirable contaminants in animal feeds, including aflatoxins (efsa, b), deoxynivalenol (efsa, c), fumonisins (efsa, b), zearalenone (efsa, a), t- toxin (efsa contam panel, c) and ergot alkaloids (efsa contam panel, b), may pose a risk for animal health and productivity when present in feed materials that are used for sheep and goats over an extended period of time. however, most of the known mycotoxins are efficiently degraded by the rumen microflora and have a short biological half-life. hence, even if residues of mycotoxins are occasionally detected in animal tissues (mainly of monogastric animal species), they do not contribute significantly to human exposure, which is related mainly to the consumption of cereal products, nuts and spices. considering that some mycotoxins, such as aflatoxins, have proven toxicity for humans, some of these substances might be ranked as of medium potential concern. however, since non-compliant results have been found only incidentally (two out of samples) over a number of years of nrcp testing, these substances currently are ranked as of low potential concern. (c. ) toxic plant secondary metabolites (efsa, f). although for several of these substances potential concerns for animal health could be identified following ingestion with feed, none of these natural toxins appeared to accumulate in edible tissues. the limited data on the kinetics of these metabolites do not preclude in all cases a transfer from the feed into animal tissues under certain circumstances of exposure. for example, residues of gossypol in the meat of cattle (and sheep) were demonstrated under experimental conditions (feeding of cotton meal as the main feed component), but such residues are not expected under the conditions of european farming, where cotton seeds or cotton seed by-products are infrequently used and only with limited inclusion rates in feed (efsa, e). other natural substances, such as the fungal metabolite (mycotoxin) zearalenone, are intensively metabolised in the rumen and, following absorption, in the liver and other animal tissues, and this may explain certain non-compliant analytical results. zearalanol (zeranol) is one of these metabolites and is used in certain third countries as a growth-promoting agent owing to its oestrogenic activity (see section . . . . (d)). this applies also to certain thiocyanates and oxazolidinethiones, originating from glucosinolates produced by a broad variety of plants of the brassicaceae family. they target different steps in the synthesis of thyroid hormones, leading eventually to hypothyroidism and enlargement of the thyroid gland (goitre) (efsa, b). again, these natural products may explain some of the non-compliant results found in nrcp testing where treatment of animals with antithyroid agents (thyreostats) has been suspected. recently, an increasing use of herbal remedies, given as so-called alternatives to antibiotics for animals, has been reported, also in ruminants. many of the herbal products contain biologically active substances that are also included in the list of undesirable plant metabolites. however, the remedies are given in low concentrations (lower than the amounts that could be ingested with feed) and for a limited period.
although specific data are lacking, it seems unlikely that residues of these compounds would be found in edible tissues of slaughtered animals. such substances, therefore, are placed in the category of low potential concern within the current classification. vmps, such as antimicrobials, anti-coccidials and anti-parasitics, are used commonly on sheep and goats for prophylactic purposes, particularly prior to turning animals out to grazing (anti-parasitic treatments). therapeutic use of vmps, particularly antimicrobials, may occur in response to a diagnosis of infection in individual animals or in the flock. in general, vmps, except the substances allocated to annex table of regulation (ec) no / , are categorised as being of low potential concern because they have all been subjected to pre-marketing approval, which specifies adis and mrls, with the aim of guaranteeing a high level of safety to the consumer. where exceedances of mrls are found in the residue monitoring programmes (i.e. non-compliant results out of the samples analysed for antibacterials, non-compliant results for anthelmintics out of the samples analysed, and eight non-compliant results out of the samples analysed for anti-coccidials), these are typically of an occasional nature that is not likely to constitute a concern to public health. despite only two non-compliant results being reported out of the samples analysed for corticosteroids, there is concern about their potential illicit use, particularly in fattening lambs. in the negligible potential concern category are the dyes and the prohibited substances colchicine, dapsone, chloroform and aristolochia spp. . . . . . prohibited substances: colchicine, dapsone, chloroform and aristolochia spp. (a) colchicine. colchicine is a plant alkaloid that has been used in veterinary medicine to treat papillomas and warts in cattle and horses by injection at the affected area. a possible contamination of food with colchicine has been identified through consumption of colchicum autumnale in forage by animals such as cattle or sheep and, in this context, colchicine has been determined in the milk of sheep after exposure to c. autumnale (hamscher et al., ). colchicine is genotoxic and teratogenic and may have toxic effects on reproduction. no non-compliant results for colchicine in sheep/goats have been reported from the european nrcps - ; however, it is probable that testing for this substance is not included in the monitoring programmes of many countries. in the absence of evidence for use of colchicine in sheep/goats, colchicine currently is ranked as of negligible potential concern. (b) dapsone. dapsone is a drug used in humans and formerly in veterinary medicine: in human medicine it is used for the treatment of leprosy, malaria, tuberculosis and dermatitis; in veterinary medicine it was used as an intramammary treatment for mastitis, for oral treatment of coccidiosis and for intra-uterine treatment of endometriosis. following scientific assessment by the committee for medicinal products for veterinary use (cvmp), a provisional mrl of µg/kg parent drug was established for muscle, kidney, liver, fat and milk for all food-producing animals (emea, ). further information on teratogenicity and reproductive effects of dapsone was required but, when this was not provided, the substance was recommended for inclusion in annex iv to council regulation (eec) no / (now annex, table , of commission regulation (ec) no / ).
more recently, the cvmp has reviewed the alleged mutagenicity of dapsone in the context of its occurrence as an impurity in vmps containing sulphonamides and concluded that it is not genotoxic (cvmp, ), and efsa has issued a scientific opinion on dapsone as a food-packaging material (compound ), proposing an acceptable level of mg/kg food (efsa, c). in the absence of evidence for use of dapsone in sheep and goats, dapsone currently is ranked as of negligible potential concern. (c) chloroform and aristolochia spp. in the negligible potential concern category are the prohibited substances chloroform and plant remedies containing aristolochia spp., as these are not relevant to sheep/goat production and there is no evidence for use of these substances in sheep/goat production. vmps used in sheep and goat production but with no evidence for residues above mrls being found in monitoring programmes, and vmps irrelevant for sheep and goat production, are ranked as of negligible potential concern. (a) carbamates and pyrethroids. carbamates and pyrethroids are used in animal houses, and occasionally on animals including sheep, for the control of environmental infestations, such as lice and their eggs in buildings. there are no recent incidents of non-compliant results reported in nrcp testing of sheep and goats during the period - , resulting in these substances being assigned to the category of negligible potential concern. (b) sedatives. a range of sedative substances, including barbiturates, promazines, xylazine and ketamine, is licensed for use in sheep, goats and other animal species for sedation and analgesia during surgical procedures or for euthanasia. they are rarely used in sheep and goats. no non-compliant results were found in nrcp testing for the period - . owing to their rapid excretion, these substances generally do not leave detectable residues in muscle and so do not have mrls registered in the eu. animals euthanised with these substances are not allowed to enter the food chain. however, it should be noted that testing for this category of substances is not required under the provisions of council directive / /ec. (c) dyes. there are no indications for the use of dyes such as (leuco-)malachite green in sheep and goats. testing of sheep and goats for this group of substances is not required under council directive / /ec. a summary of the outcome of the ranking is presented in table (mrl, maximum residue limit; nrcp, national residue control plan; psm, plant secondary metabolite; vmp, veterinary medicinal product). the ranking into specific categories of potential concern of prohibited substances, vmps and contaminants presented in this section applies exclusively to sheep and goats and is based on current knowledge regarding the toxicological profiles, usage in sheep and goat production, and occurrence as residues or contaminants, as demonstrated by the data from the nrcps for the - period. where changes in any of these factors occur, the ranking might need amendment. a further forward-looking element is the issue of 'new hazards'. in this context, new hazards are defined as compounds that have been identified as anthropogenic chemicals in food-producing animals and derived products and in humans, and for which occurrence data are scarce. it does not imply that there is evidence for an increasing trend in the concentration of these compounds in food or in human samples.
examples are brominated flame retardants, such as polybrominated diphenyl ethers (pbdes) and hexabromocyclododecanes (hbcdds), or perfluorinated compounds (pfcs), such as perfluorooctane sulphonate (pfos) and perfluorooctanoic acid (pfoa). in , efsa performed a risk assessment on polybrominated diphenyl ethers (pbdes) in food (efsa contam panel, e). pbdes are additive flame retardants which are applied in plastics, textiles, electronic castings and circuitry. pbdes are ubiquitously present in the environment and likewise in biota and in food and feed. eight congeners were considered by the contam panel to be of primary interest, with the highest dietary exposure being to bde- . toxicity studies were carried out with technical pbde mixtures or individual congeners. the main targets were the liver, thyroid hormone homeostasis and the reproductive and nervous systems. pbdes are not genotoxic. the contam panel identified effects on neurodevelopment as the critical endpoint, and derived benchmark doses (bmds) and their corresponding lower % confidence limits for a benchmark response of %, the bmdl , for a number of pbde congeners: bde- , μg/kg b.w.; bde- , μg/kg b.w.; bde- , μg/kg b.w.; bde- , μg/kg b.w. owing to the limitations and uncertainties in the current database, the panel concluded that it was inappropriate to use these bmdls to establish health-based guidance values, and instead used a margin of exposure (moe) approach for the health risk assessment. as the elimination characteristics of pbde congeners in animals and humans differ considerably, the panel used the body burden as the starting point for the moe approach. the contam panel concluded that for bde- , - and - current dietary exposure in the eu does not raise a health concern. for bde- there is a potential health concern with respect to current dietary exposure. the contribution of ovine meat and ovine-derived products to total human exposure is currently not known. as these compounds bioaccumulate in the food chain, they deserve attention and should be considered for inclusion in the nrcps. in , efsa delivered a risk assessment on hbcdds in food (efsa contam panel, f). hbcdds are additive flame retardants, primarily used in expanded and extruded polystyrene for construction and packing materials, and in textiles. technical hbcdd consists predominantly of three stereoisomers (α-, β- and γ-hbcdd); δ- and ε-hbcdd may also be present, but at very low concentrations. hbcdds are present in the environment and likewise in biota and in food and feed. data from the analysis of hbcdds in food samples were provided to efsa by seven european countries, covering the period from to . the contam panel selected α-, β- and γ-hbcdd as of primary interest. as all toxicity studies were carried out with technical hbcdd, a risk assessment of the individual stereoisomers was not possible. the main targets were the liver, thyroid hormone homeostasis and the reproductive, nervous and immune systems. hbcdds are not genotoxic. the contam panel identified neurodevelopmental effects on behaviour as the critical endpoint, and derived a bmdl of . mg/kg b.w. owing to the limitations and uncertainties in the current database, the contam panel concluded that it was inappropriate to use this bmdl to establish a health-based guidance value, and instead used an moe approach for the health risk assessment of hbcdds. as the elimination characteristics of hbcdds in animals and humans differ, the panel used the body burden as the starting point for the moe approach.
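the body-burden-based moe approach just described can be sketched as follows; the key idea is that, because elimination half-lives differ between test animals and humans, exposures are compared as body burdens rather than as daily doses. all numbers here are illustrative placeholders, not the panel's values.

```python
# Sketch of a body-burden-based margin of exposure (MoE) calculation.

import math

def steady_state_body_burden(daily_intake_ng_per_kg: float,
                             half_life_days: float) -> float:
    """Steady-state body burden (ng/kg b.w.) for first-order elimination:
    intake rate times half-life divided by ln(2)."""
    return daily_intake_ng_per_kg * half_life_days / math.log(2)

# body burden at the animal benchmark dose, and the human burden from diet
bmdl_body_burden = steady_state_body_burden(300.0, 30.0)     # assumption
human_body_burden = steady_state_body_burden(0.5, 1_500.0)   # assumption

moe = bmdl_body_burden / human_body_burden
print(f"MoE ~ {moe:.0f}")
# the adequacy of the resulting MoE is judged case by case by the panel
```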
the contam panel concluded that current dietary exposure to hbcdds in the eu does not raise a health concern. the occurrence data reported to efsa have shown that hbcdds could be detected in a limited number of meat samples. as the total number of sheep and goat meat samples analysed for hbcdds is small, and thus the current knowledge about the prevalence and levels of these compounds in edible tissues of ovine animals is limited, their inclusion in the nrcps, even as a temporary measure, should be considered. perfluorinated compounds (pfcs), such as pfos, pfoa and others, have been widely used in industrial and consumer applications, including stain- and water-resistant coatings for fabrics and carpets, oil-resistant coatings for paper products approved for food contact, fire-fighting foams, mining and oil well surfactants, floor polishes, and insecticide formulations. a number of different perfluorinated organic compounds have been widely found in the environment. in , efsa delivered a risk assessment on pfos and pfoa in food (efsa, g). the contam panel established a tdi for pfos of ng/kg b.w. per day, and a tdi for pfoa of . μg/kg b.w. per day. a few data indicated the occurrence of pfos and pfoa in meat samples. however, owing to the low number of data, it has not been possible to perform an assessment of the relative contribution of different foodstuffs to human exposure to pfos and pfoa. a recent study in which contaminated feed was fed to sheep demonstrated the transfer of pfos, pfoa and various other pfcs with different chain lengths into the milk and meat of the sheep (kowalczyk et al., ). as pfcs have found widespread use and are ubiquitously distributed in the environment, but representative data on their occurrence in meat are still limited, intensified monitoring of these compounds in tissues as well as in feed should be considered. besides the heavy metals discussed in section . . . . , attention should also be given to those compounds that may be used as feed supplements (e.g. copper, selenium, zinc), since their correct use cannot be guaranteed. although supplementary feeding of trace elements to sheep and goats at pasture is practised, supplements for sheep are not permitted to contain copper. however, the risk of copper supplementation cannot be ruled out on mixed livestock farms, where supplements containing copper intended for other livestock, e.g. pigs or calves, may be given in error to sheep, resulting in undesirable residues in animal organs, such as the liver. sheep are particularly susceptible to copper toxicity; goats appear to be able to tolerate higher intakes (underwood and suttle, ). in the absence of supplementation, the main source of copper is the pasture, and copper uptake depends on a complex interaction between the copper, molybdenum and sulphate levels in the plants themselves. for example, sheep that consume excess subterranean clover (trifolium spp.) will develop chronic copper accumulation in their tissues as a result of the copper/molybdenum balance in the clover (radostits et al., ). there are also large differences between breeds in susceptibility to copper toxicity (underwood and suttle, ). closer communication of results from official feed control seems essential to decide whether or not analytical monitoring of residues in slaughter animals needs to be directed to these substances, which might be overused or mistakenly used in sheep or goat feeds.
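as an illustration of how a measured residue level is set against a tdi of the kind established above, the sketch below computes an estimated daily intake and expresses it as a fraction of the tdi; the concentration, consumption figure and tdi value are all hypothetical placeholders, not figures from the opinion.

```python
# illustrative sketch: estimated daily intake of a contaminant from meat,
# compared with a tolerable daily intake (TDI); all values are hypothetical.

def daily_intake(conc_ng_per_g: float, consumption_g_per_day: float,
                 body_weight_kg: float) -> float:
    """exposure (ng/kg b.w. per day) = concentration x consumption / body weight."""
    return conc_ng_per_g * consumption_g_per_day / body_weight_kg

intake = daily_intake(conc_ng_per_g=2.0, consumption_g_per_day=50.0, body_weight_kg=70.0)
tdi = 150.0  # ng/kg b.w. per day -- placeholder value, not the TDI cited above
print(f"intake = {intake:.2f} ng/kg b.w. per day ({intake / tdi:.1%} of the TDI)")
```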
in light of the existing regulations and the daily practice of the control of residues/chemical substances in sheep/goat carcasses, the strengths and weaknesses of the current meat inspection methodology can be summarised as follows.

the strengths of the current meat inspection methodology for chemical hazards are: the current procedures for sampling and testing form a mature system, well established and coordinated, subject to regular evaluation and in place across eu mss, with residue testing that is based on common standards for method performance and interpretation of results (commission decision / /ec), laboratory accreditation (iso/iec ) and quality assurance schemes. the residue monitoring programmes are supported by a network of eu and national reference laboratories and by research in the science of residue analysis that serves to provide state-of-the-art testing systems for the control of residues (see annex ). there are well-developed systems and follow-up actions subsequent to the identification of non-compliant samples. as indicated in section . , follow-up on non-compliant samples is typically through intensified sampling (suspect sampling), withholding of slaughter and/or of carcasses subject to positive clearance as compliant, and on-farm investigations potentially leading to penalties and/or criminal prosecutions. the regular sampling and testing for chemical residues is a disincentive to the development of bad practices. there is constant development of new approaches in sampling and testing methodologies, particularly in the area of prohibited substances, directed at identifying illicit use of such substances in animal production; for example, the use of samples other than edible tissues, such as excreta, eyes and fibre, that demonstrate enhanced residue persistence characteristics, and the use of indirect testing procedures, such as genomics, proteomics and metabolomics, to identify treated animals. the prescriptive sampling system allows for equivalence in the control of eu-produced sheep/goat meat. any forthcoming measures have to ensure that the control of imports from third countries remains equivalent to the controls within the domestic market (this issue is addressed further in tor ). the current combination of animal traceability, ante-mortem inspection and gross tissue examination can support the collection of appropriate samples for residue monitoring. however, any indication of misuse or abuse of pharmacologically active substances through visual assessment needs to be confirmed by chemical analysis for potential residues.

the weaknesses of the current meat inspection methodology for chemical hazards are: the presence of chemical hazards cannot be identified by current ante-/post-mortem meat inspection procedures at the slaughterhouse level, indicating the need for further harmonisation of the risk reduction strategies along the entire food chain. at present, there is poor integration between the testing of feed materials for undesirable contaminants and the nrcps in terms of communication and follow-up testing strategies or interventions. moreover, a routine environmental data flow is not established, and keeping habits for sheep and goats provide opportunities for feed coming in without a clear feed chain history. under the current system, sampling is mostly prescriptive rather than risk or information based.
it appears that individual samples taken under the nrcp testing programme may not always be taken as targeted samples, as specified under council directive / /ec, but sometimes may be taken as random samples. there is a lack of sufficiently cost-effective and reliable screening methods, and/or the range of substances prescribed/covered by the testing is sometimes limited. there is limited flexibility to adopt new chemical substances into the nrcps and limited ongoing adaptation of the sampling and testing programme to the results of the residue monitoring programmes. the sampling under the nrcps reflects only a part of the testing done by a number of mss and, therefore, data from the nrcps may not provide the most complete information for certain categories of substances. sheep and goats may not be subject to surveillance over their lifetime at the same level as is the case for other food animal categories, such as pigs, poultry and, to a large extent, bovine animals, owing to their traditional nomadic/outdoor farming systems. current monitoring of residues and contaminants in edible tissues of slaughter sheep/goats is based on council directive / /ec. in turn, the risk ranking as presented under tor is also based largely on the chemical substances listed in council directive / /ec. the outcome of the ranking showed that only a small number of compounds are considered to constitute a high potential concern for consumers. however, considering the recent information available from the re-assessment of undesirable substances in the food chain, covered by more recent efsa opinions from the contam panel, additional compounds have been identified that require attention. prominent examples of such substances are dioxins and dl-pcbs, which were identified as compounds of high potential concern as they bioaccumulate in the food chain, are likely to be found in sheep/goat carcasses and have a toxicological profile that points towards public health concerns even at low (residue) concentrations. in addition, it has been shown that these substances are found in edible tissues of sheep, particularly in sheep liver. other halogenated substances, such as brominated flame retardants, including polybrominated diphenylethers (pbdes) as well as hexabromocyclododecanes (hbcdds), and perfluorinated compounds (pfcs), such as pfos and pfoa, have a different toxicological profile. these compounds bioaccumulate in the food chain and deserve attention, as currently the knowledge about the prevalence and level of residues of these compounds in edible tissues of sheep and goats is limited. chemical elements, such as copper, selenium and zinc, given as feed supplements may be mistakenly provided to sheep and goats, resulting in undesirable residues in animal organs, such as the liver. inclusion of these various substances in the nrcps (even as a temporary measure) should be considered, together with intensified monitoring of feed materials for the presence of these compounds, to support forthcoming decisions on whether or not these substances require continued monitoring in feed materials and/or in slaughter animals. owing to the nature of the husbandry systems applied, sheep and goats are more likely to be exposed to environmental contaminants than other livestock. therefore, any incident giving rise to contamination of the environment may be noted primarily in animals kept outdoors, i.e. in sheep and goats.
it is important to note that sheep and goat production in the eu is largely extensive in nature, involving frequent trading of animals and nomadic flocks. this entails differences in husbandry systems and feeding regimes, resulting in different risks from chemical substances and contaminants. extensive periods on pasture or as nomadic flocks, the sale of many sheep and goats at open markets, and the presence of slaughter collection dealerships that may combine small numbers of animals purchased from several farmers mean that there is a level of concern that the fci shared between farmers and the slaughterhouse (where residue data are managed) may be suboptimal. similarly, in these situations, the level of feedback from the slaughterhouse and authorities to farmers regarding the results of residue testing may be suboptimal. here, the individual identification of animals, which has now become mandatory, may contribute to more transparency in the future. there is less concern about fci from dairy sheep and goats if they are reared under more intensive and controlled conditions. fci should be expanded for sheep and goats produced in extensive systems to provide more information on the specific environmental conditions where the animals are produced. it is recommended that sampling of sheep and goats should be based on the risk of occurrence of chemical residues and contaminants and on the completeness and quality of the fci supplied. to achieve this, better integration of results from official feed control with residue monitoring seems essential to indicate whether monitoring of residues in slaughter animals needs to be directed to particular substances. it should be noted that, for the small ruminant chains, more environmental information should be provided. therefore, there is a need for improved integration of sampling, testing and intervention protocols across the food chain, nrcps, feed control and monitoring of environmental contaminants. moreover, the combination of data from both sheep and goats into one data set is based on the assumption that both food chains are identical. in many cases such an assumption is not justified. a separation of records for the two species is recommended. in addition, there is a need to develop new approaches to the testing of chemical residues and contaminants. recent developments in chemical analytical techniques allow the simultaneous measurement of a broad range of substances; analytical techniques covering multiple analytes should therefore be encouraged and incorporated into feed quality control and national residue control programmes. application of such validated methods for multi-residue analyses, comprising veterinary drugs, pesticides and natural and environmental contaminants, should be encouraged. for prohibited substances, testing should be directed towards the farm level. one of the limitations of the currently applied analytical strategies is the generally poor sensitivity of some screening methods, resulting in the potential failure to detect residues in the low µg/kg range and, therefore, to identify non-compliant samples. new approaches, including molecular biological techniques for the identification of indirect biomarkers of exposure in animals, as well as the development of reliable in vitro assays based on the biological action(s) of the compounds under analysis, are considered to be of additional value.
such approaches may help in detecting molecules of unknown structure or molecules that are not included in the nrcps but share a common mechanism of action, thereby better orienting and rationalising the subsequent chemical analysis. in the case of many of the substances that might be used illicitly for growth-promoting purposes in sheep and goat production, the results of nrcp testing show no non-compliant results (e.g. stilbenes) or indicate that reported non-compliant results may be attributable to dietary sources (e.g. thyreostats, zeranol) or are the result of endogenous production (e.g. gonadal (sex) steroids). therefore, future nrcp testing relating to such substances needs to be reduced and/or refocused, in terms of the range of analytes tested and the appropriateness of the samples taken for testing, to better identify the extent of abuse of growth-promoting substances in sheep and goat production in the eu. in addition, control measures for such substances must not rely exclusively on nrcp testing, but should include veterinary inspection/police activities along the food chain directed at identifying abuse of such substances in sheep and goat production in the eu. finally, it should be noted that any measures taken to improve the efficacy of meat inspection protocols also need to address the compliance of imports to the eu with these strategies. currently, within the prescriptive system for meat inspection and residue monitoring applying in the eu, third countries exporting food products of animal origin to the eu need to demonstrate that they have the legal controls and residue monitoring programmes capable of providing standards of food safety equivalent to those pertaining within the eu. if eu meat inspection moves to a risk-based approach, particular attention will need to be paid to the achievement of equivalent standards of food safety for imported food from third countries. the risk ranking appropriate within the eu in relation to veterinary drugs and contaminants might not be appropriate in third countries to achieve equivalent standards of food safety. rather than requiring that a risk-based monitoring programme applying within eu mss be applied similarly in the third country, an individual risk assessment for each animal product/third country situation may be required, which should be updated on a regular basis.

this section contains conclusions derived from the material discussed in the document, together with recommendations for improvements to meat inspection with regard to chemical hazards within the eu. tor : to identify and rank the main risks for public health that should be addressed by meat inspection at eu level. general (e.g. sepsis, abscesses) and specific biological risks as well as chemical risks (e.g. residues of veterinary drugs and contaminants) should be considered. differentiation may be made according to production systems and age of animals (e.g. breeding compared to fattening animals). as a first step in the identification and ranking of chemical substances of potential concern, the contam panel considered the substances listed in council directive / /ec and evaluated the outcome of the national residue control plans (nrcps) - .
the contam panel noted that only . % of the total number of results was non-compliant for one or more substances listed in council directive / /ec. potentially higher exposure of consumers to these substances from sheep and goat meat takes place only incidentally, as a result of mistakes or non-compliance with known and regulated procedures. the available aggregated data indicate a low number of samples that were non-compliant with the current legislation. however, in the absence of substance- and/or species-specific information, such as the tissues used for residue analysis and the actual concentration of a residue or contaminant measured, these data do not allow for a reliable assessment of consumer exposure. other criteria used for the identification and ranking of chemical substances of potential concern included the identification of substances that are found in other testing programmes and that bioaccumulate in the food chain, substances with a toxicological profile of concern, and the likelihood that a substance under consideration will occur in sheep and goat carcasses. taking into account these criteria, the individual compounds were ranked into four categories, denoted as being of high, medium, low and negligible potential concern. the highest overall proportion of non-compliant results under the nrcps was for group b substances, contaminants ( . %), representing largely exceedances of the maximum residue limits/maximum levels (mrls/mls) specified for these substances. the proportions of non-compliant results overall for group a substances, prohibited substances ( . %), and for group b /b substances, veterinary medicinal products (vmps) ( . %), represent largely illicit use and exceedances of the mrls, respectively. dioxins and dioxin-like polychlorinated biphenyls (dl-pcbs) were ranked as being of high potential concern owing to their known bioaccumulation in the food chain, their frequent findings above mls, particularly in sheep liver, and in consideration of their toxicological profile. stilbenes, thyreostats, gonadal (sex) steroids, resorcylic acid lactones and beta-agonists, especially clenbuterol, were ranked as being of medium potential concern because of their toxicity for humans, their efficacy as growth promoters in sheep and goats and the incidence of non-compliant results. chloramphenicol and nitrofurans were ranked as being of medium potential concern, as they have proven toxicity for humans, they are effective as antibacterial treatments for sheep/goats, and non-compliant samples are found in most years of the nrcps. non-dioxin-like polychlorinated biphenyls (ndl-pcbs) bioaccumulate, and there is a risk of exceedance of the mls, but they were ranked in the category of medium potential concern because they are less toxic than dioxins and dl-pcbs. the chemical elements cadmium, lead and mercury were allocated to the medium potential concern category, taking into account the number of non-compliant results reported under the nrcps and their toxicological profiles. residues originating from other substances listed in council directive / /ec were ranked as of low or negligible potential concern owing to the toxicological profile of these substances at residue levels in edible tissues, to the very low or non-occurrence of non-compliant results in the nrcps - , and/or to the natural occurrence of some of these substances in sheep and goats.
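the decision logic behind this four-category ranking can be illustrated with the simplified rule-based sketch below; the boolean criteria and the rules combining them are a schematic reading of the text above, not the panel's formal procedure.

```python
# schematic sketch of the four-category ranking logic described above;
# the rules are a simplification for illustration only.

def rank_substance(toxic_profile_of_concern: bool,
                   bioaccumulates: bool,
                   non_compliant_findings: bool,
                   relevant_to_small_ruminants: bool) -> str:
    if not relevant_to_small_ruminants:
        return "negligible"  # e.g. substances with no use in sheep/goat production
    if toxic_profile_of_concern and bioaccumulates and non_compliant_findings:
        return "high"        # e.g. dioxins and dl-PCBs
    if toxic_profile_of_concern and non_compliant_findings:
        return "medium"      # e.g. chloramphenicol, nitrofurans, Cd/Pb/Hg
    if non_compliant_findings:
        return "low"         # e.g. VMPs occasionally exceeding MRLs
    return "negligible"

print(rank_substance(True, True, True, True))     # -> high
print(rank_substance(False, False, False, True))  # -> negligible
```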
the low potential concern category includes nitroimidazoles, chlorpromazine, organochlorine pesticides, organophosphorus compounds and natural toxins, as well as vmps exceeding mrls. in the negligible potential concern category are the prohibited substances colchicine, dapsone, chloroform and aristolochia spp., the dyes, as well as vmps occurring below mrls. the contam panel emphasises that this ranking into specific categories of potential concern is based on the current knowledge regarding toxicological profiles, usage in sheep and goat production and occurrence as contaminants or chemical residues, as demonstrated by the data from the nrcps for the - period. future monitoring programmes should be based on the system for the ranking of chemical compounds into potential concern categories as presented in this document. regular updating of the ranking of chemical compounds in sheep and goats, as well as of the sampling plans, should occur, taking into account any new information regarding the toxicological profile of chemical residues and contaminants, usage in sheep and goat production, and the actual occurrence of individual substances in sheep and goats.

tor : to assess the strengths and weaknesses of the current meat inspection methodology and recommend possible alternative methods (at ante-mortem or post-mortem inspection, or validated laboratory testing within the frame of traditional meat inspection or elsewhere in the production chain) at eu level, providing an equivalent achievement of overall objectives; the implications for animal health and animal welfare of any changes suggested in the light of public health risks to current inspection methods should be considered. strengths of the current meat inspection methodology for chemical hazards are as follows: the current procedures for sampling and testing form a mature system, in general well established and coordinated, including follow-up actions subsequent to the identification of non-compliant samples. the regular sampling and testing for chemical residues and contaminants in the system is an important disincentive to the development of undesirable practices. the prescriptive sampling system allows for equivalence in the control of eu-produced sheep and goat meat. any forthcoming measures have to ensure that the control of imports from third countries remains equivalent to the controls within the domestic market. the current combination of animal traceability, ante-mortem inspection and gross tissue examination can support the collection of appropriate samples for residue monitoring. weaknesses of the current meat inspection methodology for chemical hazards are as follows: the presence of chemical hazards cannot be identified by current ante-/post-mortem meat inspection procedures at the slaughterhouse level, indicating the need for further harmonisation of the risk reduction strategies along the entire food chain. integration between the testing of feed materials for undesirable contaminants and the nrcps, in terms of communication and follow-up testing strategies or interventions, is currently limited. moreover, a routine environmental data flow is not established, and keeping habits for sheep and goats provide opportunities for feed coming in without a clear feed chain history. under the current system, sampling is mostly prescriptive rather than risk or information based.
it appears that individual samples taken under the nrcp testing programme may not always be taken as targeted samples, as specified under council directive / /ec, but sometimes may be taken as random samples. there is a lack of sufficiently cost-effective and reliable screening methods, and/or the range of substances prescribed/covered by the testing is sometimes limited. there is limited flexibility to adopt emerging chemical substances into the nrcps and limited ongoing adaptation of the sampling and testing programme to the results of the residue monitoring programmes. in addition, sampling under the nrcps reflects only a part of the testing done by a number of mss, the results of which should be taken into consideration. sheep and goats may not be subject to surveillance over their lifetime at the same level as is the case for other food animal categories, such as pigs, poultry and, to a large extent, bovine animals, owing to their traditional nomadic/outdoor farming systems. meat inspection systems for chemical residues and contaminants should be less prescriptive and more risk and information based, with sufficient flexibility to adapt the residue monitoring programmes to the results of testing.

tor : if new hazards currently not covered by the meat inspection system (e.g. salmonella, campylobacter) are identified under tor , then recommend inspection methods fit for the purpose of meeting the overall objectives of meat inspection. when appropriate, food chain information should be taken into account. dioxins and dl-pcbs, which accumulate in food-producing animals, have been ranked as being of high potential concern. as these compounds have not yet been comprehensively covered by the sampling plans of the current meat inspection (nrcps), they should be considered as 'new' hazards. in addition, for a number of chemical elements used as feed supplements and for organic contaminants that may accumulate in food-producing animals, only limited data regarding residues in sheep and goats are available. this is the case, in particular, for brominated flame retardants, including polybrominated diphenylethers (pbdes) and hexabromocyclododecanes (hbcdds), and perfluorinated compounds (pfcs), including (but not limited to) pfos and pfoa. control programmes for residues and contaminants should include 'new hazards' and take into account information from environmental monitoring programmes which identify chemical hazards to which animals may be exposed.

tor : recommend adaptations of inspection methods that provide an equivalent level of protection within the scope of meat inspection or elsewhere in the production chain, which may be used by risk managers in case they consider the current methods disproportionate to the risk, e.g. based on the ranking as an outcome of tor or on data obtained using harmonised epidemiological criteria. when appropriate, food chain information should be taken into account. sheep and goat production in the eu is largely extensive in nature, involving frequent trading of animals and nomadic flocks. this entails differences in husbandry systems and feeding regimes, resulting in different risks from chemical substances and contaminants. extensive periods on pasture or as nomadic flocks and the use of slaughter collection dealerships may preclude detailed lifetime fci. similarly, in these situations, the level of feedback from the slaughterhouse and authorities to farmers regarding the results of residue testing may be suboptimal.
there is less concern about fci from dairy sheep and goats, as they are reared under more intensive and controlled conditions. better integration of the results from official feed control with residue monitoring seems essential to indicate whether monitoring of residues in slaughter animals needs to be directed to particular substances; there is therefore a need for improved integration of sampling, testing and intervention protocols for domestic sheep and goats across the food chain, nrcps, feed control and environmental monitoring. fci should be expanded for sheep and goats produced in extensive systems to provide more information on the specific environmental conditions where the animals are produced. it is recommended that sampling of sheep and goats be based on the risk of occurrence of chemical residues and contaminants and on the completeness and quality of the fci supplied. the development of analytical techniques covering multiple analytes and of new biologically based testing approaches should be encouraged and incorporated into feed quality control and chemical residue/contaminant testing in the nrcps. the combination of data from both sheep and goats into one data set assumes that both food chains are identical, which is not the case; a separation of the test records for the two species is recommended. for prohibited substances, testing should be directed, where appropriate, towards the farm level. future nrcp testing relating to substances that might be used illicitly for growth-promoting purposes needs to be refocused to better identify the extent of abuse in the eu. in addition, control measures for prohibited substances should not rely exclusively on nrcp testing, but should include veterinary inspection during the production phase and the use of biological methods and biomarkers suitable for the identification of abuse of such substances in sheep and goat production in the eu.

commission decision / /ec specifies the performance criteria for methods, including recovery, accuracy, trueness and precision. the decision also specifies the validation required to demonstrate that each analytical method is fit for purpose. in the case of screening methods, validation requires determination of the performance characteristics of detection limit, precision, selectivity/specificity and applicability/ruggedness/stability. for confirmatory methods, in addition to determination of those performance characteristics, validation also requires determination of the decision limit and trueness/recovery. the analytical requirements for the determination of dioxins, dl-pcbs and ndl-pcbs are laid down in commission regulation (ec) no / . following a criteria approach, analyses can be performed with any appropriate method, provided the analytical performance criteria are fulfilled. while methods such as gc-ms and cell- and kit-based bioassays are allowed for screening purposes, the application of gc/high-resolution ms is mandatory for confirmation of positive results. screening methods include a broad range of methods, such as elisa, biosensor methods, receptor assays, bioassays and biomarkers for the presence of residues of concern.
these screening methods generally use specific binding of the molecular structure of the residue(s) by antibodies or other receptors to isolate and measure the presence of the residues in biological fluids (urine, plasma) or sample extracts. more recently, biomarkers for the use of prohibited substances, such as hormonal growth promoters, have been identified as potential screening tools for these substances. physicochemical methods, such as lc or gc with various detectors, may also be used as screening methods. in the particular case of antimicrobials, microbiological or inhibitory substance tests are widely used for screening. in such tests, using multiple plates/organisms or kit formats, the sample or sample extract is tested for inhibition of bacterial growth. if, after a specific period of incubation, the sample inhibits the growth of the bacteria, it is considered that an antibacterial substance is present in the sample, but the specific substance is not identified. given that this is a qualitative analytical method, misinterpretation of the results cannot be ruled out, and some false positives can occur. microbiological methods are screening methods that allow a high sample throughput, but limited information is obtained about the identity of the substance and its concentration in the sample. when residues are found in a screening test, a confirmatory test may be carried out, which normally involves a more sophisticated testing method providing full or complementary information, enabling the substance to be identified precisely and confirming that the maximum residue limit has been exceeded. with the significant developments in liquid chromatography and in mass spectrometry over the last decade, confirmatory methods are largely ms-based, using triple quadrupole, ion trap and other ms techniques. indeed, with current methodology in a modern residue laboratory with good ms capability, much of the two-step approach of screening followed by confirmatory testing has been replaced by single confirmatory testing. this has been made possible by the greatly enhanced separation capability of ultra-high-performance liquid chromatography (uplc), coupled with sophisticated ms detection systems. the parallel growth in more efficient sample extraction/clean-up methods is an integral part of these advances in confirmatory methods, and such chemistries produce rapid, sometimes (semi-)automated procedures providing multi-residue capability. techniques based on highly efficient sorbent chemistries for solid-phase extraction and techniques such as quechers are examples of these advances. such combinations of uplc-ms/ms methods with appropriate sample extraction/clean-up technologies allow for the unequivocal, quantitative determination of a broad spectrum of substances in a single analytical method. particularly in the area of prohibited substances, the power of ms techniques is being applied to identify hitherto unknown compounds and to distinguish exogenous from endogenous substances. for example, time-of-flight ms provides accurate mass capability and may allow for retrospective analysis of the ms data. the technique of gc-combustion-isotope ratio ms has been utilised to study the 13c/12c ratio of substances in urine samples, where, for example, this ratio differs significantly between endogenous (or natural) and exogenous (or synthetic) testosterone.
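the two-step screen-then-confirm flow described above can be summarised in the following minimal sketch; the screening cut-off, the mrl value and the field names are hypothetical placeholders, not values from any regulation.

```python
# minimal sketch of the screen-then-confirm residue testing flow described
# above; the screening cut-off and MRL values are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Sample:
    sample_id: str
    screen_response: float  # qualitative screening signal (e.g. inhibition test)
    confirmed_conc: float   # concentration from confirmatory MS analysis, ug/kg

SCREEN_CUTOFF = 0.5  # hypothetical screening decision level
MRL = 10.0           # hypothetical maximum residue limit, ug/kg

def assess(sample: Sample) -> str:
    if sample.screen_response < SCREEN_CUTOFF:
        return "compliant (screen negative)"
    # screening is qualitative and may yield false positives, so a suspect
    # result is always followed by quantitative confirmatory testing
    if sample.confirmed_conc > MRL:
        return "non-compliant (confirmed above MRL)"
    return "compliant (screen positive not confirmed)"

print(assess(Sample("A1", 0.8, 12.3)))  # -> non-compliant (confirmed above MRL)
print(assess(Sample("A2", 0.8, 4.0)))   # -> compliant (screen positive not confirmed)
```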
liver examination at slaughter is the most direct, reliable and cost-effective technique for the diagnosis of fasciolosis. moving to a visual only meat inspection system would decrease the sensitivity of inspection for fasciolosis at the animal level; however, it would be sensitive enough to identify most, if not all, affected herds. therefore, the consequences of the change would be of low relevance. the feedback to farmers of fasciola hepatica findings detected at meat inspection should be improved, to allow farmer information to support rational on-farm fluke management programmes. quantitative analysis indicated that the proposed changes to the meat inspection system would not affect the detection of welfare conditions; however, for leg and foot disorders and sheep scab, a combination of the two surveillance components (clinical surveillance and meat inspection) was found to be more effective than either surveillance component on its own. qualitative analysis suggested that the proposal for shortened transport and lairage times would be beneficial to the welfare of small ruminants. food chain information should include animal welfare status in order to complement the slaughterhouse surveillance systems (ante-mortem and post-mortem inspection), and the latter could be used to identify on-farm welfare status. other recommendations on biological and chemical hazards would not have a negative impact on the surveillance of animal diseases and welfare conditions. in this mandate, the ahaw panel and the ad hoc working group (wg) are focusing on the implications for animal health and welfare of any changes to the current meat inspection (mi) system, as proposed by the biological hazards (biohaz) and contaminants in the food chain (contam) panels. "implications for animal health and welfare" relates specifically to the monitoring and surveillance of animal diseases and welfare conditions during mi (that is, inspection at the slaughterhouse before and after slaughter, referred to in this document as ante-mortem (ami) and post-mortem (pmi) inspection, respectively). therefore, the objective of this work was to identify possible effects and to assess the possible consequences for the surveillance and monitoring of animal diseases and welfare conditions if the proposed changes to the mi system were applied. apart from its contribution to assuring public health, current mi also contributes to the surveillance and monitoring of animal diseases and welfare conditions (efsa, ), and may be an important component of the overall monitoring and surveillance system. further, mi offers the only opportunity for monitoring some diseases and welfare conditions at certain stages of a control and eradication programme. therefore, any change in the mi system that could lead to a loss of sensitivity (reduced probability of detection) may compromise surveillance efficacy. in the case of animal welfare, ami and pmi also play a role in the surveillance and monitoring of the welfare of farmed animals; moreover, they provide the only opportunity to assess poor welfare during the transport of animals to the slaughterhouse. small ruminants are subjected to different periods of feed and water restriction, handling and transport prior to arrival at the slaughterhouse. ami begins with the observation of animals at the time of unloading from the transport vehicle, and its purpose is to determine whether animal welfare has been compromised in any way on the farm or during handling and transport. welfare conditions such as fitness to travel, prevalence of injury, lameness and exhaustion, and the cleanliness of the animals are ascertained during ami.
certain other welfare conditions, such as bruising, may not always be detectable during ami but become visible during routine pmi. welfare conditions related to foot and leg disorders are detectable only if the animals are observed while walking, e.g. during unloading or moving to lairage pens, and are also less likely to be detected by visual examination during pmi. when mi detects apparent defects or abnormalities, incision of the relevant joints, tendons and/or muscles could be necessary to determine the presence as well as the severity of foot and leg disorders.

implications for surveillance and monitoring of small ruminant health and welfare of changes to meat inspection as proposed by the biohaz panel

the proposed modifications to the mi system that may have implications for animal health and welfare (see biohaz appendix a for full details) include: shorter transport and lairaging, which may be beneficial in terms of reducing cross-contamination with the pathogens salmonella spp. and human pathogenic escherichia coli (see biohaz appendix a, section . ); changes to address prioritised hazards not currently detected by mi, focusing on improved collection and use of relevant food chain information (fci), including the use of harmonised epidemiological indicators, to provide information for the categorisation of farms, which can be used for, for example, risk-based ami, logistic slaughter and/or decontamination (see biohaz appendix a, sections and . ); and omission of palpation and incision at pmi in animals subjected to routine slaughter (if abnormalities are detected during visual inspection, palpation and incision should be carried out separately from the routine inspection of carcasses to prevent cross-contamination) (see biohaz appendix a, section . ). to assess the impact of the proposed changes to current mi on the overall sensitivity of surveillance and control of animal diseases and welfare conditions, a quantitative assessment was performed based on expert opinion and modelling. an external consortium (comisurv), under the provision of an efsa procurement, performed this work. the detailed methodology, as well as the results and conclusions, together with the assumptions and limitations of the modelling, can be found in the comisurv report for small ruminant mi (hardstaff et al., ). these limitations include: the parameters for the probability of detection were based on expert opinion, and there is therefore uncertainty as to the true range of these values; the limited number of experts available to cover the different subjects needed for the assessment; and variations in the epidemiological situation of the diseases and welfare conditions between countries. a brief description of the methodology that was applied is given below. an initial long list of small ruminant diseases and welfare conditions relevant to the eu was established, based on general textbooks, references and expert opinion. wg experts filtered this list using a decision tree, following methodology and criteria developed for previous opinions (efsa biohaz, contam and ahaw panels, , ). a disease or condition was retained on the list by the wg experts using the following criteria: a high likelihood of detection of the disease or welfare condition at mi, at the age at which animals are presented at the slaughterhouse (if the likelihood was medium or low, or the condition was undetectable, it was excluded from the list); the disease or welfare condition is considered relevant to the eu (conditions not occurring in eu member states (mss) were omitted).
the condition is relevant to animal health and welfare (conditions mainly relevant to public health were not retained, as they should be dealt with by the biohaz panel); and the slaughterhouse surveillance component (ami + pmi) provided by mi is significant for the overall surveillance of the disease or welfare condition (if other surveillance or detection systems are much more effective and highly preferable to mi, the condition was removed from the list). the final list of conditions established by the wg experts to be assessed by the comisurv consortium is shown in table . a total of twenty conditions (eleven diseases and nine welfare conditions) was included in this list. a stochastic model to quantify the monitoring and surveillance effectiveness of mi in small ruminants was developed. a definition of a typical and a mild case for each of the diseases and welfare conditions listed in table was provided by the comisurv experts. typical cases were by definition detectable cases and express more developed clinical signs than mild cases. typical cases were defined as showing the clinical signs and/or lesions that are expected to be observed in more than % of affected or infected small ruminants arriving at slaughter. the mild case of a disease or welfare condition is the form that could be seen at the early stages of the disease, at some point between the subclinical form (without pathological lesions observable through the meat inspection process) and the fully developed (i.e. "typical") form. a mild case is neither typical nor non-detectable; the animal will probably present more subtle signs than in the typical case. as an example, a typical case of echinococcosis would show hydatid cysts in the liver and in the lungs, whereas a mild case would have a low number of small cysts in the liver and lungs. the proportion of affected animals presenting as typical or mild cases, as well as the non-detectable fraction, was estimated (see the comisurv report for details). the most likely detection probability, as well as the th and th percentiles (the probability intervals) of the output distribution, of ami, pmi, and ami and pmi combined was derived for each of the conditions in table , both prior to and following the changes to the mi system proposed by the biohaz panel. the inspection protocols in the current and visual only systems are compared in table . the probability of detection was calculated both for detectable cases (mild and typical) and for all cases: all diseases and welfare conditions listed were evaluated with regard to their probability of being detected at mi (referred to as stage in the comisurv report), and for selected diseases and welfare conditions, surveillance by mi was compared with clinical surveillance. as the inspection tasks aimed at detecting orf do not change in a visual-only system, orf was not discussed further. in addition, for three of the selected diseases and two welfare conditions, considered to be more adversely affected in terms of probability of detection following the proposed changes to the mi system, further modelling was implemented to quantify the effectiveness of monitoring and surveillance within the overall monitoring and surveillance system, both prior to and following the suggested changes to the mi system (referred to as stage in the comisurv report).
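the kind of stochastic model described above can be illustrated with the short monte carlo sketch below; the case fractions and the triangular detection-probability distributions are hypothetical placeholders, not the elicited comisurv values.

```python
# illustrative Monte Carlo sketch of a stochastic detection model of the kind
# described above; all fractions and distributions are hypothetical.

import random

def simulate_detection(n_iter: int = 10_000) -> list:
    p_typical, p_mild = 0.55, 0.30  # hypothetical case fractions; remainder non-detectable
    results = []
    for _ in range(n_iter):
        # detection probabilities drawn from triangular(low, high, mode) distributions
        d_typical = random.triangular(0.70, 0.95, 0.90)
        d_mild = random.triangular(0.10, 0.50, 0.30)
        results.append(p_typical * d_typical + p_mild * d_mild)
    return results

runs = sorted(simulate_detection())
median = runs[len(runs) // 2]  # used here as a stand-in for the most likely value
p5, p95 = runs[int(0.05 * len(runs))], runs[int(0.95 * len(runs))]
print(f"detection probability ~ {median:.2f} (5th-95th percentiles: {p5:.2f}-{p95:.2f})")
```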
the objective for exotic diseases (i.e. foot and mouth disease (fmd)) was to evaluate the probability of detecting at least one infected small ruminant by slaughterhouse inspection relative to other surveillance system components (component sensitivity), which for the purpose of this opinion meant clinical surveillance. for endemic diseases (fasciolosis, lower respiratory tract infection) and welfare conditions (leg and foot disorders including foot rot, and sheep scab), the objective was to calculate the case-finding capacity, i.e. the proportion of infected or affected animals detected by the surveillance components (detection fraction) during both slaughterhouse and clinical surveillance. note that the word surveillance as used in this opinion does not imply that any action is taken to capture, or act upon, the information that is collected; it merely points to the potential of these systems to be used for such purposes. the detection probability for each disease and condition using the current mi system and the visual only system is shown in table (detectable cases) and table a of annex (all cases, including subclinical cases not detectable at the slaughterhouse). a change to visual only inspection caused a significant reduction in the probability of detection (i.e. non-overlapping % probability intervals, stage ) during mi of detectable cases of fasciolosis (with a % reduction in detection probability) and tuberculosis (tb) in goats ( %) (table ). when all cases were considered (see annex , table a), the change to a visual only pmi protocol resulted in a clear reduction in the detection probability for three diseases, tb in goats (with a % reduction in detection fraction), fasciolosis ( %) and pulmonary adenomatosis/maedi-visna ( %), although none of these reductions was significant when the overlap of the probability intervals was considered. values for the probability of detection at ami and for the two proposed pmi scenarios for all cases (detectable and non-detectable cases combined) were also determined for welfare conditions (table and annex , table a, respectively). the probability of detection was significantly higher for ami than for pmi for broken bones, diarrhoea, leg and foot disorders, partial prolapses/hernias and sheep scab. a change in the pmi protocol to a visual only system did not significantly reduce the detection of any welfare condition. pmi had a significantly higher probability of detection than ami for mastitis. the combined slaughterhouse probabilities of detection were higher for many welfare conditions than when the slaughterhouse inspection components were considered separately (table and annex , table a). where this was not the case, i.e. where the detection probability of the combined mi process equalled that of either ami or pmi on its own, this was because the experts had agreed that the respective welfare condition could not be detected at all by one of the two mi steps; the results of the combined mi are then based solely on the results of either ami or pmi. for three welfare conditions (arthritis, broken bones and poor body condition), the visual only pmi protocol also reduced the detection probability of detectable cases, although not significantly. when considering all cases (annex , table a), the probability of detection for the combined inspection was lower than for detectable cases.
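the significance criterion used above (a reduction is called significant when the probability intervals of the two scenarios do not overlap) amounts to the simple check sketched below; the interval endpoints are hypothetical.

```python
# tiny sketch of the non-overlap significance criterion used above;
# the interval endpoints are hypothetical placeholders.

def intervals_overlap(a, b) -> bool:
    """True if intervals a = (lo, hi) and b = (lo, hi) overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

current_mi = (0.60, 0.85)  # hypothetical probability interval, current protocol
visual_mi = (0.20, 0.45)   # hypothetical probability interval, visual only protocol
print("significant reduction:", not intervals_overlap(current_mi, visual_mi))  # -> True
```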
the change in pmi protocols led to a slight reduction in the detection probability of two welfare conditions (arthritis and poor body condition), yet none of these reductions were significant when the overlap of probability intervals was considered. the probability of detection for all detectable cases of diseases and welfare conditions at ami, pmi (two proposed scenarios-current and visual) inspection scenarios with the most likely (ml), th and th percentiles. shaded rows indicate diseases identified as having a significant reduction in detection probability in the visual-only scenario. for the two welfare conditions (leg and foot disorders and sheep scab) included in the overall surveillance analysis (stage ), a combination of the two surveillance components (clinical surveillance and mi) was found to be more effective (detecting a higher fraction of affected animals) than either one of the surveillance component on its own. however, the change in pmi protocol did not greatly affect the detection fraction of these welfare conditions (table ) . with regard to epizootic diseases, clinical surveillance (detection of clinical signs) had a greater sensitivity for detecting fmd than slaughterhouse surveillance, and the sensitivity increased with an increase in population size (table , stage ). a change to a visual only system would not have a negative impact on sensitivity of detection. a qualitative assessment was conducted, based on a literature review and expert opinion from the wg members, for the diseases identified as having a significant reduction in detection probability of detectable cases in the quantitative assessment of the comisurv report (tb in goats and fasciolosis) and welfare conditions. therefore routine inspection, unlike inspection for btb in the bovine, does not differ substantially from the visual only mi being proposed. information regarding the presence of tb is not specifically recorded at pmi. the comisurv report relating to the contribution of meat inspection to animal health surveillance in sheep and goats investigated the probability of detection of specific diseases and welfare conditions for three scenarios: one for inspection tasks as currently required by the legislation; one with visual inspection only; and one in which risk categorisation based on a hypothetical public health risk formed the basis for subsequent inspections. according to the comisurv report, the most likely values for the proportion of non-detectable, mild and typical cases elicited by experts for tb in goats were . , . and . , respectively. the pmi had a significantly higher probability of detection of tb in goats than ami for detectable cases and all cases, and the reduction in the probability of detection of tb in goats was significant for visual only pmi. the probability of detection (most likely values) of tb in goats (table for detectable cases and annex , table a for all cases) for combined ami and pmi was . ( . for all cases) changing to . ( . for all cases) for visual only, which represents a % reduction. as is the case with btb in bovines, the contribution of mi surveillance of tb in small ruminants is to support the detection of flocks/herds with tb, and the detection of individual animals with tb is merely the first step in improving herd surveillance. since more than one sheep or goat per flock/herd is likely to be slaughtered per time period (e.g. 
per year), the flock/herd probability of detection is a function of the individual-animal sensitivity, the number of animals slaughtered from the herd and the within-herd prevalence of tb (a simple formalisation is sketched below). for any given flock/herd, the flock/herd sensitivity will increase with the number of animals slaughtered. officially tuberculosis free (otf) status, however, is not available for small ruminants as it is for bovine herds, so herd status is important in controlling tb in small ruminants, but not in substantiating freedom from tb. for tb in goats, the results of the comisurv report suggest that a change from the current inspection to visual only inspection will reduce the probability of detection of detectable cases. a qualitative risk and benefit assessment of visual only pmi for cattle, sheep, goats and farmed/wild deer, commissioned by the uk food standards agency (fsa) (fsa, a), considered the absolute and relative animal health risk of tb in small ruminants to be negligible when moving to a visual only pmi system compared with the current legal requirements of inspection for sheep and goats (regulation (ec) / ). the main reason for this conclusion is that the current legal pmi requirements for small ruminants are mainly visual and do not require incision of the lungs; incision of the lymph nodes is required only if in doubt after the initial visual inspection. considering that the majority of positive submissions to government laboratories in the united kingdom are associated with lesions in the mediastinal and bronchial lymph nodes, it is likely that the most frequent tb-like lesions in small ruminants (as described above) are not detected under the current traditional pmi requirements, which are initially visual, and would therefore not be detected by visual only inspection either. this lack of sensitivity is aggravated by the current commercial speed of slaughter lines and the limited time available to carry out the inspection of carcasses and offal. in the united kingdom, tb in non-bovine farmed animals is rare. small ruminants are not considered to represent a significant reservoir of the disease for other animals or to be of any significance in the persistence of btb in cattle. although small ruminants are considered spillover hosts, it is still possible that severely infected sheep and goats could act as vectors of infection for other domestic and wild animals. in these circumstances, on-farm identification of possibly sick small ruminants by farmers, together with differential diagnosis from other respiratory diseases and necropsy examination of the lungs and relevant lymph nodes by farm veterinarians, are the most effective control activities.
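a standard way to formalise the flock/herd-level detection probability mentioned at the start of this passage (not taken from the comisurv report) is to assume independent detection across the animals slaughtered from a flock, as in the sketch below; the numeric values are hypothetical.

```python
# standard herd-sensitivity formula, assuming independence between animals:
#   P(flock detected) = 1 - (1 - Se * p) ** n
# where Se is the individual-animal detection sensitivity, p the within-flock
# prevalence and n the number of animals slaughtered per period.

def flock_detection_probability(se: float, prevalence: float, n_slaughtered: int) -> float:
    return 1.0 - (1.0 - se * prevalence) ** n_slaughtered

# hypothetical example: Se = 0.3, within-flock prevalence 10%, 40 animals per year
print(f"{flock_detection_probability(0.3, 0.10, 40):.2f}")  # ~0.70
```

consistent with the text, under these hypothetical values the flock-level probability rises steadily with the number of animals slaughtered, even when the individual-animal sensitivity is modest.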
fasciolosis (liver fluke) in small ruminants has a worldwide distribution and is caused by the trematode parasite fasciola hepatica. the direct losses due to fasciolosis are mortality, liver condemnation and reduced growth rate. disease results from the migration of large numbers of immature flukes through the liver, from the presence of adult flukes in the bile ducts, or both. liver fluke can infect all grazing animals, but is most pathogenic in sheep (armour, ). the incidence of liver fluke is inextricably linked to high rainfall and is particularly high in years when summer rainfall is high, which facilitates the survival and proliferation of the snail intermediate host and the infective parasite stages present in the environment (ollerenshaw, ). changes in recent epidemiological patterns, due to climate change, have resulted in increasing prevalence in northern european countries and the survival of fluke on pasture over winter, exposing sheep to infection for long periods (daniel and mitchell, ). there have been increasing reports of liver fluke disease over the last decade in countries such as the united kingdom and ireland, most likely due to higher than average rainfall and temperatures through the seasons, and greater stock movements (taylor, ). in southern european regions, for example in spain, the infection of snails can occur throughout the year, with a higher infection rate at the end of summer-autumn and at the end of winter, and with sheep excreting eggs throughout the year (manga et al., ). prevalence studies in the north-west of spain have indicated a liver fluke infection rate of approximately % of sheep flocks (ferre et al., ). regulation (ec) / requires that domestic sheep and goats going for human consumption must have visual inspection of the liver and the hepatic lymph nodes, palpation of the liver and its lymph nodes, and incision of the gastric surface of the liver to examine the bile ducts. liver examination at slaughter is the most direct, reliable and cost-effective technique for the diagnosis of fasciolosis (urquhart et al., ). reliance upon clinical signs to diagnose fasciolosis may result in low detection rates (rojo-vázquez et al., ). mi is a convenient means of confirming a suspected herd or flock infestation, assessing the extent of infestation or determining the effectiveness of anthelmintic treatment (kissling and petrey, ). pmi can confirm acute and sub-acute liver damage with liver enlargement, caused by the presence of immature flukes. animals suffering from chronic fasciolosis show deterioration of the carcass, cholangitis, biliary occlusion and hepatic fibrosis, with adult flukes present in the bile ducts. besides the liver, other organs and structures can be found damaged; for example, the periportal and mesenteric lymph nodes may be enlarged and exhibit a brownish colour (rojo-vázquez et al., ). a study by mckenzie ( ) compared the new zealand inspection procedure (observation and palpation of livers) with the european community procedure (observation and incision through the gastric surface of the liver to examine the bile ducts) and found that the new zealand method detected fewer truly infected livers, but that misdiagnosis by inspectors gave more false positives. the gastric surface incision procedure has a specificity of % and a sensitivity of . % to . % (kissling and petrey, ). this underlines the importance of the present mi technique in liver fluke surveillance. effective disease monitoring systems are essential to the provision of reliable information on diseases to producers, thereby protecting a nation's agricultural system and its potential for production (glosser, ). information on fluke infestation at herd level allows farmers to develop and implement control programmes that can attempt to reduce risk factors and to use drugs in a more strategic fashion (fairweather, ). edwards et al. ( ) demonstrated that one-third of farmers would improve their animal husbandry if informed of the mi findings for their lambs.
the comisurv report on the contribution of mi to animal health surveillance determined that there would be a significant difference in detection rates between the current and the visual only mi techniques (a probability of detection of . for detectable cases under the current method compared with . under the visual only method). a reduction in liver fluke surveillance through the use of a less sensitive mi procedure will reduce the quality of information available to producers and thereby directly impact animal health and welfare. the quantitative analysis of detection levels for welfare conditions (see comisurv report) indicated that none of them would be significantly affected by the proposed changes to mi. however, the results also revealed that when both ami and pmi were considered, the probability of detection was high for most welfare conditions. it was also evident that detection of two welfare conditions, i.e. leg and foot disorders (including foot rot) and sheep scab, would be more effective when a combination of clinical and slaughterhouse surveillance systems is used.

leg and foot disorders in sheep are caused by either infectious conditions, i.e. interdigital dermatitis (also known as scald), foot rot and contagious ovine digital dermatitis, or non-infectious conditions such as white line disease (shelly hoof), granulomas, foot abscesses, interdigital fibromas, and foreign bodies such as thorns, wire or soil balls (kaler and green, ; conington et al., a, b; fawc, ). overgrown and misshapen hooves are also attributed to lameness in sheep, and erysipelas can cause outbreaks of lameness in lambs. the importance of routine feet examination in sheep health management is well documented (hodgkinson, ). the farm animal welfare council (fawc) suggested that there is adequate legal protection for sheep suffering from lameness, as the european transport regulation (ec) no 1/2005 prohibits the transport of unfit animals, and specifically includes those that are "injured or present physiological weaknesses or pathological processes" and, in particular, are "unable to move independently without pain or to walk unassisted". the fawc also recommended that the surveillance of lameness in sheep should be undertaken by the uk government, in conjunction with farm assurance schemes, to determine trends in lameness over time; this would also apply to other mss where the prevalence of lameness is high (e.g. more than % of flocks affected at national level). lameness in dairy goats is also a common welfare problem; the abnormalities detected in the united kingdom were horn separation, white line lesions, slippering, abscess of the sole, foreign bodies and granulomatous lesions (hill et al., ). interdigital dermatitis has also been reported as the cause of lameness in goats kept indoors in greece (christodoulopoulos, ). sheep scab is a skin disease caused by the mite psoroptes ovis and has been widely prevalent in europe; it is a major animal welfare, husbandry and economic problem (bisdorff et al., ; bisdorff and wall, ).

the objectives of the ami in the current hygiene legislation, regulation (ec) no 854/2004, are to determine: conditions that might adversely affect human or animal health, paying particular attention to the detection of zoonotic diseases and animal diseases for which animal health rules are laid down in eu legislation, and whether there is any sign that welfare has been compromised.
implementation of welfare assessment protocols using appropriate animal-based indicators during clinical and slaughterhouse (ami + pmi) surveillance would improve the welfare of small ruminants. these welfare surveillance systems should become an integral part of the food chain information (fci).

sheep are thought to be tolerant of being transported and deprived of food and water for long periods (knowles et al., ). it is common practice for farmers to withdraw food on the farm for several hours prior to transport of sheep/lambs to auction markets or slaughterhouses, primarily to reduce soiling. however, dehydration can be a welfare problem during long transport distances/times, especially in high ambient temperatures (knowles, ). recovery from the effects of food and water deprivation is a very slow process, and lairage therefore appears to be of very little benefit; full recovery from hours of transport has been shown to take up to hours (knowles, ). for these reasons, the biohaz panel's proposal for shortened transport and lairage times would be beneficial to animal welfare.

the eu regulation (ec) no 852/2004 on the hygiene of foodstuffs requires slaughterhouse operators to request fci declarations to ensure animals entering the food chain are safe for human consumption. fci is also a good source of information to facilitate the detection in the slaughterhouse of abnormalities indicative of animal health and welfare conditions. fci is recorded at the flock/herd level, and its minimum content is described in regulation (ec) no 853/2004. fci related to primary production of small ruminant herds/flocks is based on a farmer's declaration, and most mss have made a standardised fci declaration form available to farmers. a whole-chain approach to food safety, animal health and animal welfare requires slaughterhouse operators to be provided by livestock producers with information about the animals consigned to slaughter. based on the fci provided, food business operators (fbos) can assess potential hazards presented by the animals and are required to act upon any information recorded on the fci declaration as part of their hazard analysis and critical control point (haccp) plan. this helps the slaughterhouse operator to organise slaughter operations and to ensure that no animals affected by disease or by certain veterinary medicines enter the food chain. quality assurance schemes at primary producer level are voluntary tools operated by independent agencies or bodies to ensure compliance with given standards and regulations. these schemes increase farmers' responsibilities with regard to animal health and welfare and have potential for integration within the fci provided (oie, ). the fci also assists risk management in determining the required inspection procedures, and should be analysed by risk management and used as an integral part of the inspection procedures. the value of the fci in guiding risk management to discriminate between animals subsequently going through different types of inspection procedures should be evaluated. as for any evaluation of (pre-)screening procedures, the sensitivity and specificity of the classification should be estimated. priority should be given to improving test sensitivity, noting that (pre-)screening tests should preferably produce few false negative classifications for the sake of animal disease detection and surveillance (the sketch below puts illustrative numbers on this asymmetry).
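a minimal numeric sketch of this asymmetry, with all figures hypothetical (the text gives no fci screening characteristics): false negatives escape surveillance entirely, whereas false positives only add detailed-inspection workload.

    def screening_outcomes(n_units, prev, se, sp):
        # expected outcomes when an fci-based pre-screen with sensitivity `se`
        # and specificity `sp` is applied to `n_units` consignments at
        # flock-level prevalence `prev` (all values are assumptions)
        infected = n_units * prev
        healthy = n_units - infected
        return {
            "true_pos": se * infected,         # sent on to detailed inspection
            "false_neg": (1 - se) * infected,  # missed -> surveillance gap
            "false_pos": (1 - sp) * healthy,   # extra inspections -> cost only
            "true_neg": sp * healthy,
        }

    print(screening_outcomes(n_units=1000, prev=0.05, se=0.90, sp=0.80))

under these assumptions, dropping specificity from 0.80 to 0.60 roughly doubles the false positives (an inspection-cost issue), while any drop in sensitivity directly grows the false negatives that subsequent inspection never sees.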
test specificity will largely be an economic parameter, since the subsequent inspection of all "fci-positive" animals or groups should detect any false positives not correctly identified during the fci pre-screening.

regulation (ec) no 854/2004 requires that data from the ami and pmi at the slaughterhouse be delivered back to the farmer/producer when the inspections reveal the presence of any disease or condition that might affect public or animal health, or compromise animal welfare. currently this feedback of information to primary producers is not fully implemented in all mss (efsa biohaz, contam and ahaw panels). an effective and efficient flow of information provides valuable information to both the farmer and the fbo, and allows more targeted and effective inspection procedures in the slaughterhouse and effective interventions on the farm; this should contribute to a cycle of continuous improvement with positive implications for animal health and welfare. the effectiveness of this information cycle depends on a reliable animal identification and recording system at the slaughterhouse and an information transfer system to the primary producer. the collection and communication of slaughterhouse inspection results is an opportunity to collect and use data and knowledge applicable to disease control and to the effectiveness of interventions, animal production systems, food safety and animal health/welfare (garcia, ). at national and eu level such data can contribute to disease surveillance (the detection of exotic diseases, monitoring of endemic diseases and identification of emerging diseases) and to targeted animal health and welfare interventions. therefore fci, if consistently and effectively implemented as enshrined within the hygiene package, will form an integral part of a risk-based mi system. extended use of fci has the potential to compensate for some, but not all, of the information on animal health and welfare that would be lost if visual only pmi is applied. for the fci to be effective it should include species-specific indicators for the occurrence of disease and welfare conditions. fci designed for public health purposes may not have an optimal design for the surveillance and monitoring of disease and welfare conditions; therefore, an integrated system should be developed whereby fci for public health and fci for animal health and welfare can be used in parallel, more effectively.

the conclusions and recommendations from the contam panel refer to areas such as the ranking system for chemical substances of potential concern and its updating, the use of fci to help facilitate risk-based sampling strategies, and the inclusion of new hazards in control programmes for chemical residues and contaminants (see contam appendix b for full details). none of these were considered to have an impact on animal health and welfare surveillance and monitoring. as shown in the comisurv assessment, a change to visual only inspection would cause a significant reduction in the probability of detection (i.e. non-overlapping % probability intervals) of detectable cases of fasciolosis and of tuberculosis in goats. clinical surveillance had a greater sensitivity for detecting fmd than slaughterhouse surveillance in the comisurv assessment, although the sensitivity of meat inspection increased with an increase in population size; for fmd, therefore, a change to a visual only system would not have a negative impact on the sensitivity of detection.
as shown in the comisurv assessment, the proposed changes to meat inspection would not greatly affect the probability of detection of any of the welfare conditions analysed. also from the comisurv assessment, for two welfare conditions (leg and foot disorders and sheep scab), a combination of the two surveillance components (clinical surveillance and meat inspection) was shown to be more effective (detecting a higher fraction of affected animals) than either component on its own.

according to regulation (ec) no 854/2004, current inspection in small ruminants includes visual inspection and palpation of the lungs and respiratory lymph nodes; a change to visual inspection would imply that palpation is abandoned. small ruminants are usually not subjected to official tuberculosis eradication campaigns, and farm controls are only performed on premises where cattle and goats are kept together, or in flocks/herds that commercialise raw milk. surveillance for small ruminant tuberculosis at present relies on meat inspection of sheep and goats slaughtered for human consumption, and on other limited diagnostic surveillance activities. as is the case with tuberculosis in bovines, the contribution of meat inspection surveillance of tuberculosis in small ruminants is to support the detection of flocks/herds with tuberculosis. detection of tuberculosis in individual animals is merely the first step in improving the effectiveness of flock/herd surveillance, and for any given flock/herd, the flock/herd sensitivity will increase with the number of animals slaughtered. the results of two recent risk assessments (comisurv report; fsa, a) show that a change from the current inspection to visual only will reduce the probability of detection of tuberculosis in small ruminants. however, the consequences for animal health were considered negligible in the fsa assessment, because current meat inspection does not prescribe routine incision of lymph nodes, and the only inspection task omitted would be palpation of the lungs and respiratory lymph nodes. in recent years tuberculosis has been reported in small ruminants in several eu countries, and most information derives from recognition of tuberculous lesions at the slaughterhouse and from laboratory reports. although small ruminants are not considered to represent a significant reservoir of the disease for the persistence of bovine tuberculosis in cattle, it is still possible that infected sheep and goat herds could act as vectors of infection for other domestic and wild animals. therefore, surveillance and control of tuberculosis in domestic small ruminants does have consequences for the overall surveillance and control of tuberculosis.

liver examination at slaughter is the most direct, reliable, and cost-effective technique for diagnosis of fasciolosis. moving to a visual only meat inspection system would decrease the sensitivity of inspection at animal level for fasciolosis; however, it would remain sensitive enough to identify most, if not all, affected herds (a numeric illustration follows this paragraph). therefore the consequences of the change are low (charleston et al., ). the feedback to farmers regarding fasciola hepatica detected at meat inspection is currently low, and the real risk to animal health/welfare for this disease caused by a change to a visual only meat inspection method is probably low.
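the following sketch makes the herd-level robustness argument concrete, reusing the herd-sensitivity approximation given earlier; the animal-level sensitivities and the within-herd prevalence are hypothetical placeholders, not figures from the comisurv or fsa assessments.

    def herd_sensitivity(se_animal, prevalence, n_slaughtered):
        # probability of detecting at least one case in a flock/herd,
        # assuming individual detections are independent
        return 1 - (1 - se_animal * prevalence) ** n_slaughtered

    # hypothetical animal-level sensitivities: current method vs visual only
    for n in (5, 20, 50, 100):
        current = herd_sensitivity(0.90, 0.30, n)
        visual = herd_sensitivity(0.55, 0.30, n)
        print(n, round(current, 3), round(visual, 3))

even with the assumed drop in animal-level sensitivity, the probability of flagging an affected herd approaches 1 once a few dozen of its animals pass through slaughter (at n = 20, roughly 0.998 versus 0.973 here), which is the sense in which a visual-only system can remain "sensitive enough" at herd level while losing sensitivity at animal level.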
implementation of welfare assessment protocols using appropriate animal-based indicators during clinical and slaughterhouse (ami + pmi) surveillance would improve the welfare of small ruminants. extended use of food chain information has the potential to compensate for some, but not all, of the information on animal health and welfare that would be lost if visual only post-mortem inspection is applied. food chain information is a potentially effective tool for performing more targeted ante-mortem and post-mortem inspection tasks in the slaughterhouse, which may increase the effectiveness of those tasks in detecting conditions of animal health and animal welfare significance. the existing ineffective flow of information from primary production to the slaughterhouses and vice versa reduces the ability to detect animal diseases and animal welfare conditions at the slaughterhouse and, as a result, limits possible improvements in animal health and welfare standards on the farm, as farmers will not be aware of the slaughterhouse findings. the conclusions and recommendations on chemical hazards were reviewed by the ahaw working group, and none of them were considered to have an impact on animal health and welfare surveillance and monitoring.

data collected during clinical and slaughterhouse (ante-mortem and post-mortem inspection) surveillance should be utilised more effectively to improve animal welfare at farm level. slaughterhouse surveillance of tuberculosis in small ruminants should be improved and encouraged, as this is in practice the only surveillance system available; the detection of tuberculosis in small ruminants should be adequately recorded and notified, followed by control measures at the farm level. lack of feedback of post-mortem inspection results to the farmer prevents instigation of a fluke management programme, which could be detrimental to animal health and welfare; an improvement in this feedback of information is recommended. welfare surveillance systems should become an integral part of the food chain information. an integrated system should be developed whereby food chain information for public health and for animal health and welfare can be used in parallel, more effectively. farmers should be provided with background information on the animal diseases and welfare conditions of key concern that may affect their livestock, and on why it is important to provide this information to the slaughterhouse through the use of food chain information.

all cases: the combination of detectable cases (mild and typical) and non-detectable cases.
case-finding capacity: characteristic of a surveillance system for endemic disease, describing the ability of the system to identify infected or affected herds or individuals, so that a control action can (potentially) be taken. the detection fraction is a measure of the case-finding capacity.
case type: includes detectable (mild or typical cases) and non-detectable cases.
clinical surveillance: surveillance based on clinical observations in the field.
combined inspection: taking into account ante-mortem and post-mortem inspection.
component sensitivity: the probability that one or more infected animals will be detected by the surveillance component during a specified time period, given that the disease is present at a level defined by the design prevalence.
a risk assessment of shiga toxin-producing escherichia coli (stec) in the norwegian meat chain with emphasis on dry-cured sausages. panel on biological hazards, norwegian scientific committee for food safety
norwegian scientific committee for food safety: konsekvenser for dyr og mennesker (consequences for animals and humans)
norwegian scientific committee for food safety: great hygienic effects from more strict requirements for the sheep slaughter
campylobacter excreted into the environment by animal sources: prevalence, concentration shed, and host association
comparative studies on the survival of verocytotoxigenic escherichia coli and salmonella in different farm environments
epidemiological investigations on campylobacter jejuni in households with a primary infection
escherichia coli o :h and non-o shiga toxin-producing e. coli in healthy cattle, sheep and swine herds in northern spain
genetic diversity among campylobacter jejuni isolates from healthy livestock and their links to human isolates in spain
direct detection and genotyping of toxoplasma gondii in meat samples using magnetic capture and pcr
a quantitative microbial risk assessment for meatborne toxoplasma gondii infection in the netherlands
evaluation of elisa test characteristics and estimation of toxoplasma gondii seroprevalence in dutch sheep using mixture models
a longitudinal study of verotoxin-producing escherichia coli in two dairy goat herds
prevalence and characterization of vero cytotoxin-producing escherichia coli isolated from diarrhoeic and healthy sheep and goats
liver distomatosis in cattle, sheep and goats of northeastern iran
subtilase cytotoxin encoding genes are present in human, sheep and deer intimin-negative, shiga toxin-producing escherichia coli o :h
studies on the reservoir status of leptospirosis in gujarat
updates on morphobiology, epidemiology and molecular characterization of coenurosis in sheep
distinct host species correlate with anaplasma phagocytophilum anka gene clusters
sheep and goat medicine
zoonotic agents in small ruminants kept on city farms in southern germany
outbreak of haemolytic uraemic syndrome in norway caused by stx-positive escherichia coli o :h traced to cured mutton sausages
minimising the bacterial count on lamb carcases by means of slaughtering methods
scvmph (scientific committee on veterinary measures relating to public health): opinion on revision of meat inspection in veal calves. european commission, health and consumer protection directorate-general, directorate c - scientific opinions
main aspects of leptospira sp. infection in sheep
an outbreak of escherichia coli o :h - bacteriological investigations and genotyping of isolates from food
potentially human-pathogenic escherichia coli o in norwegian sheep flocks
tuberculosis in goats on a farm in ireland: epidemiological investigation and control
study on prevalence of klebsiella pneumoniae in meat and meat products and its enterotoxigenicity
a study of the sero-prevalence of influenza a viruses in sheep and goats
risk factors for the presence of antibodies to toxoplasma gondii in norwegian slaughter lambs
enteritis in sheep and goats due to yersinia enterocolitica infection
epidemiology of yersinia pseudotuberculosis and y. enterocolitica infections in sheep in australia
investigation of the presence of esbl-producing escherichia coli in the north wales and west midlands areas of the uk using scanning surveillance
yersinia enterocolitica in sheep - a high frequency of biotype a
non-acid meat decontamination technologies: model studies and commercial applications
temporal variation and host association in the campylobacter population in a longitudinal ruminant farm study
babesia microti and anaplasma phagocytophilum: two emerging zoonotic pathogens in europe and hungary
seasonal variation of thermophilic campylobacters in lambs at slaughter
attribution of campylobacter infections in northeast scotland to specific sources by use of multilocus sequence typing
microbial contamination on beef and sheep carcases in south australia
current concepts - recognition and management of anthrax - an update
an outbreak of salmonella mikawasima associated with doner kebabs
diagnosis of human visceral pentastomiasis
pathogenicity of yersinia enterocolitica biotype a
procedures in the current meat inspection of domestic sheep and goats
criteria used for the evaluation of the likelihood of the occurrence of residues or contaminants in sheep and goats
weaknesses of the current meat inspection methodology for chemical hazards
tor: adaptation of inspection methods
use of a β-adrenergic agonist to alter muscle and fat deposition in lambs
food poisoning by clenbuterol in portugal
semicarbazide is non-specific as a marker metabolite to reveal nitrofurazone abuse as it can form under hofmann conditions
special issue: plenary papers of the international conference on goats
recent developments in the use and abuse of growth promoters
regulation (ec) no / for dapsone as an impurity in veterinary medicinal products containing sulphamethoxazole or other sulphonamides
scientific opinion of the panel on contaminants in the food chain (contam panel) on a request from the european commission related to zearalenone as undesirable substance in animal feed
scientific opinion of the contam panel related to aflatoxin b as undesirable substance in animal feed
scientific opinion of the contam panel related to deoxynivalenol (don) as undesirable substance in animal feed
scientific opinion of the contam panel related to the presence of non dioxin-like polychlorinated biphenyls (pcb) in feed and food
scientific opinion of the contam panel related to fumonisins as undesirable substances in animal feed
opinion of the scientific panel on food additives, flavourings, processing aids and materials in contact with food (afc) on a request related to a list of substances for food contact materials
scientific opinion of the contam panel related to pyrrolizidine alkaloids as undesirable substances in animal feed
scientific opinion of the contam panel related to cyanogenic compounds as undesirable substances in animal feed
mercury as undesirable substance in animal feed - scientific opinion of the panel on contaminants in the food chain
glucosinolates as undesirable substances in animal feed - scientific opinion of the panel on contaminants in the food chain
scientific opinion of the contam panel on tropane alkaloids (from datura spp.) as undesirable substances in animal feed
scientific opinion of the contam panel on theobromine as undesirable substance in animal feed
scientific opinion of the contam panel on gossypol as undesirable substance in animal feed
scientific opinion of the contam panel on ricin (from ricinus communis) as undesirable substance in animal feed
opinion of the contam panel on perfluorooctane sulfonate (pfos), perfluorooctanoic acid (pfoa) and their salts
cadmium in food - scientific opinion of the panel on contaminants in the food chain (contam panel)
scientific opinion of the contam panel on saponins in madhuca longifolia l. as undesirable substances in animal feed
results of the monitoring of dioxin levels in food and feed
update of the monitoring of dioxins and pcbs levels in food and feed
scientific opinion on lead in food
efsa panel on contaminants in the food chain (contam): statement on tolerable weekly intake for cadmium
scientific opinion on the risk to public health related to the presence of high levels of dioxins and dioxin-like pcbs in liver from sheep and deer
scientific opinion on the risk for animal and public health related to the presence of t-2 and ht-2 toxin in food and feed
scientific opinion on pyrrolizidine alkaloids in food and feed
scientific opinion on polybrominated diphenyl ethers (pbdes) in food
scientific opinion on hexabromocyclododecanes (hbcdds) in food
scientific opinion on the risk for public health related to the presence of mercury and methylmercury in food
scientific opinion on ergot alkaloids in food and feed
emea (the european agency for the evaluation of medicinal products), a: committee for veterinary medicinal products
emea, b: committee for veterinary medicinal products
emea: committee for veterinary medicinal products - dapsone
expert committee on food additives: chloramphenicol - toxicological evaluation of certain veterinary drug residues in food
evaluation of certain veterinary drug residues in food
fao/who (food and agriculture organization of the united nations/world health organization) expert committee on food additives (jecfa): safety evaluation of certain contaminants in food. who food additives series, who/jecfa monographs, who
determination of colchicine residues in sheep serum and milk using high-performance liquid chromatography combined with electrospray ionization ion trap tandem mass spectrometry
depletion of protein-bound furazolidone metabolites containing the 3-amino-2-oxazolidinone side-chain from liver, kidney and muscle tissues from pigs
formation of semicarbazide (sem) in food by hypochlorite treatment: is sem a specific marker for nitrofurazone abuse?
iarc monographs on the evaluation of carcinogenic risks to humans: sex hormones (ii)
iarc monographs on the evaluation of carcinogenic risks to humans: pharmaceutical drugs
iarc monographs on the evaluation of carcinogenic risks to humans: some thyrotropic agents
iarc monographs on the evaluation of carcinogenic risks to humans
transfer of perfluorooctanoic acid (pfoa) and perfluorooctane sulfonate (pfos) from contaminated feed into milk and meat of sheep: pilot study
towards a criterion for suspect thiouracil administration in animal husbandry
italian babyfood containing diethylstilbestrol - years later
aplasia cutis congenita in surviving co-twin after propylthiouracil exposure in utero
evidence that urinary excretion of thiouracil in adult bovine submitted to a cruciferous diet can give erroneous indications of the possible illegal use of thyreostats in meat production
diseases associated with inorganic and farm chemicals
aplasia cutis congenita and other anomalies associated with methimazole exposure during pregnancy
estrogen and its metabolites are carcinogenic agents in human breast epithelial cells
the identification of potential alternative biomarkers of nitrofurazone abuse in animal derived food products
opinion on the risk assessment of dioxins and dioxin-like pcbs in food. cs/cntm/dioxin/ final
opinion of the scientific committee on veterinary measures relating to public health on assessment of potential risks to human health from hormone residues in bovine meat and meat products
review of specific documents relating to the scvph opinion of april on the potential risks to human health from hormone residues in bovine meat and meat products
opinion of the scientific committee on veterinary measures relating to public health on review of previous scvph opinions
the mineral nutrition of livestock
analysis of thyreostats: a history of years
the world health organization reevaluation of human and mammalian toxic equivalency factors for dioxins and dioxin-like compounds
weight increase of the thyroid gland as a tentative screening parameter to detect the illegal use of thyreostatic compounds in slaughter cattle
residues of hormonal substances in foods of animal origin: a risk assessment
pesticide, veterinary and other residues in food
quantitative assessment of the impact of changes to meat inspection on the effectiveness of the detection of animal diseases and welfare conditions (comisurv report)
implications for surveillance and monitoring of small ruminant health and welfare of changes to meat inspection as proposed by the contam panel

tuberculosis in small ruminants also has zoonotic implications. the pathological and histological findings in sheep and goats are similar to those seen in cattle. these reports highlight the possible role of domestic goats and sheep as reservoirs of tb, and the need to re-evaluate the evidence for m. bovis (and m. caprae) transmission among cattle and small ruminants. the main pathological lesions described in goats are in the lungs, in the form of abscesses ( - cm in size) with yellowish-white, caseous or caseocalcareous contents; lesions are also seen in the retropharyngeal, mediastinal or mesenteric lymph nodes and in the liver, spleen and udder.
lesions in sheep are very similar to those in goats; encapsulated soft, caseous lesions and encapsulated, calcified tubercles are present in the lungs and liver. in general, small ruminants are not subjected to official tb eradication campaigns; however, sheep and goats may undergo a bovine tuberculin skin test for detecting tb infection if located on premises where btb has been confirmed in cattle (subject to the findings of a veterinary risk assessment), or if m. bovis infection has been confirmed in the sheep/goat flock/herd itself. if goats are kept together with cows, such goats must be inspected and tested for tb. the current inspection involves a visual inspection of the lungs, trachea and oesophagus, with palpation of the lungs and of the bronchial and mediastinal lymph nodes. both m. bovis and m. caprae cause tuberculosis in bovines and other species, including humans; further in the text, only m. bovis is mentioned.

regulation (ec) no 853/2004 of the european parliament and of the council of 29 april 2004 laying down specific hygiene rules for food of animal origin. oj l 139
interference of paratuberculosis with the diagnosis of tuberculosis in a goat flock with a natural mixed infection
elevation of mycobacterium tuberculosis subsp. caprae aranaz et al. to species rank as mycobacterium caprae comb. nov., sp. nov.
liver fluke
morphopathology of caprine tuberculosis. i. pulmonary tuberculosis. anales de veterinaria de murcia
prevalence and regional distribution of scab, lice and blowfly strike in great britain
control and management of sheep mange and pediculosis in great britain
liver fluke (fasciola hepatica) in slaughtered sheep and cattle in new zealand
foot lameness in dairy goats
contribution of meat inspection to animal health surveillance in sheep and goats
characterisation of white line degeneration in sheep and evidence for genetic influences on its occurrence
foot health in sheep - prevalence of hoof lesions in uk and irish sheep
tb in goats caused by mycobacterium bovis
fasciolosis in cattle and sheep
outbreak of tuberculosis caused by mycobacterium bovis in golden guernsey goats in great britain
scientific opinion of the panel on biological hazards (biohaz) on a request from the commission on tuberculosis in bovine animals: risks for human health and control strategies
scientific opinion on the public health hazards to be covered by inspection of meat (swine)
scientific opinion on the public health hazards to be covered by inspection of meat (poultry)
reducing the future threat from (liver) fluke: realistic prospect or quixotic fantasy? veterinary parasitology
seroprevalence of fasciola hepatica infection in sheep in northwestern spain
a qualitative risk and benefit assessment for visual-only post-mortem inspection of cattle, sheep, goats and farmed/wild deer
an evaluation of food chain information (fci) and collection and communication of inspection results (ccir) for all species
the use of data mining techniques to discover knowledge from animal and food data: examples related to the cattle industry
back to the future: the animal health monitoring system - a political necessity being addressed in the united states
differentiation by molecular typing of mycobacterium bovis strains causing tuberculosis in cattle and goats
lameness and foot lesions in adult british dairy goats
the importance of feet examination in sheep health management
naming and recognition of six foot lesions of sheep using written and pictorial information: a study of english sheep farmers
comparison of new zealand and european community ovine liver inspection procedures
a review of the road transport of slaughter sheep
effects of stocking density on lambs being transported by road
evaluation of the gamma-interferon assay for eradication of tuberculosis in a goat herd
cost-effective meat inspection: the scientific basis. surveillance
kinetics of fasciola hepatica egg passage in the faeces of sheep in the porma basin
a case of generalized bovine tuberculosis in a sheep
tuberculosis due to mycobacterium bovis and mycobacterium caprae in sheep
guide to good farming practices for animal production food safety
the ecology of the liver fluke (fasciola hepatica)
tuberculosis in goats
aranaz a and the spanish network on surveillance and monitoring of animal tuberculosis: a database for animal tuberculosis (mycodb.es) within the context of the spanish national programme for eradication of bovine tuberculosis
update on trematode infections in sheep
tuberculosis in goats on a farm in ireland: epidemiological investigation and control
concurrent outbreak of tuberculosis and caseous lymphadenitis in a goat herd
emerging parasitic diseases in sheep
fasciolidae. in: veterinary parasitology
mycobacterium bovis causing clinical disease in adult sheep

meat inspection, comprising both ante-mortem and post-mortem inspection, is recognised as a valuable tool for surveillance and monitoring of animal diseases and welfare conditions, and helps in the recognition of outbreaks of existing or new disorders or disease syndromes in situations where clinical signs are not detected on-farm. meat inspection represents a practical way to evaluate the welfare of small ruminants on-farm, and the only way to evaluate their welfare during transport and associated handling. changes to the meat inspection system may negatively affect the efficiency of the surveillance and monitoring of animal diseases and welfare conditions. the focus of the animal health and welfare (ahaw) panel was to assess the implications for surveillance of animal health and welfare of the changes proposed to the current small ruminant meat inspection system by the biological hazards (biohaz) and contaminants in the food chain (contam) panels. briefly, the recommendations of the biohaz panel were related to (i) shorter transport and lairaging, (ii) improved collection of food chain information to provide information for categorisation of farms, which can be used for, e.g.,
risk-based ante-mortem inspection, logistic slaughter and/or decontamination, and (iii) omission of palpation and incision in animals subjected to routine slaughter at post-mortem inspection (if necessary, detailed inspection with potential use of palpation and incision should be carried out separately). the contam panel recommendations included (i) the ranking system for chemical substances of potential concern and its updating, (ii) the use of fci to help facilitate risk-based sampling strategies, and (iii) the inclusion of new hazards in control programmes for chemical residues and contaminants.

to assess the impact of the proposed changes to current meat inspection on the overall sensitivity of surveillance and control of animal diseases and welfare conditions, the results and conclusions of a quantitative assessment, carried out by an external consortium (comisurv) under an efsa procurement, were analysed. this report assessed the impact of a change from the current small ruminant meat inspection to a visual only system in terms of detection efficiency for a list of twenty selected diseases and welfare conditions of sheep and goats. additional information from the scientific literature and other recent assessments was also taken into account by experts to assess the impact of the proposed changes on the detection probability and overall surveillance of animal diseases and welfare conditions.

a change to visual only inspection caused a significant reduction in the probability of detection (i.e. non-overlapping % probability intervals) of detectable cases of fasciolosis and tuberculosis in goats (stage 1). with regard to exotic diseases, clinical surveillance (stage 2) had a greater sensitivity for detecting foot and mouth disease than slaughterhouse surveillance, and the sensitivity increased with an increase in population size. this indicates that for those countries in europe with a large sheep population, clinical surveillance is highly effective for detecting at least one case of foot and mouth disease in an infected sheep population. for countries with high slaughter numbers of sheep, slaughterhouse surveillance would be almost equally efficient in detecting the disease. a change in post-mortem protocol to a visual only system did not significantly reduce the detection of any welfare conditions.

in recent years tuberculosis has been reported in small ruminants in several eu countries, and most information derives from recognition of tuberculous lesions at the slaughterhouse and from laboratory reports. according to regulation (ec) no 854/2004, current inspection in small ruminants aimed at detecting tuberculosis includes visual inspection and palpation of the lungs and respiratory lymph nodes; a change to visual inspection would imply abandoning palpation, which is the reason for the reduced detection. surveillance of tuberculosis at the slaughterhouse for small ruminants should be improved and encouraged, as this is in practice the only surveillance system available. the detection of tuberculosis in small ruminants should be adequately recorded and followed up at the farm level.

detectable cases: cases that are detectable by routine meat inspection procedures. they will express a range of combinations of clinical and pathological signs. a proportion of detectable cases will fit the definition of the typical case and a proportion will be milder cases.
detection effectiveness: the proportion of animals with lesions (i.e. detectable by visual inspection, palpation and/or incision) that are actually detected.
detection fraction: the proportion of infected or affected units that are successfully detected by the surveillance system.
mild cases: the mild case of a disease or condition is the form that could be seen at the early stages of the disease, or at some point between the subclinical and the fully developed (i.e. "typical") form. a mild case is neither typical nor subclinical; the animal will probably present more subtle signs than in a typical case. mild cases fit the mild case definition validated by experts.
monitoring: investigating samples or animals in order to obtain information about the frequency of disease or infection as it varies in time and/or space.
non-detectable cases: cases that are beyond the detection capacity of current meat inspection protocols. these will often be early cases at a stage where distinct clinical signs have not yet developed, but they can also be cases with mild infection that leads to only subclinical conditions, without pathological lesions detectable by meat inspection.
non-overlapping probability intervals: indicates that scenarios differ significantly from each other.
overall surveillance system: includes several components, such as slaughterhouse surveillance and clinical surveillance.
stage 1: assessment of the probability of detection at meat inspection. the objective of stage 1 modelling was to estimate case type-specific (for typical and mild cases) as well as overall probabilities of detection at meat inspection.
stage 2: assessment of the relative effectiveness of meat inspection within the overall surveillance system by comparing meat inspection with other available surveillance methods.
typical cases: cases that are, by definition, detectable cases and express more developed clinical signs than mild cases. they fit the typical case definition provided by the experts, which is defined as signs and/or lesions that are expected to be observed in more than % of affected or infected animals seen at the slaughterhouse.

key: cord- -sltofaox authors: gutiérrez-spillari, lucia; palma m., geovani; aceituno-melgar, jorge title: obesity, cardiovascular disease, and influenza: how are they connected? date: - - journal: curr trop med rep doi: . /s - - - sha: doc_id: cord_uid: sltofaox purpose of review: to better understand the impact of obesity and cardiovascular diseases on influenza a infection. recent findings: this infection could have detrimental outcomes in obese patients with cardiovascular diseases, such as an increased risk and length of hospitalization and greater disease severity, morbidity, and mortality. nevertheless, there might also be cardioprotective benefits associated with influenza vaccination, such as reduced mortality, hospitalization, and acute coronary syndromes, in patients with coronary heart disease and/or heart failure. summary: obesity negatively impacts immune function and host defense. recent studies report obesity to be an independent risk factor for increased morbidity and mortality following infection. obese patients might need special consideration in treatment; however, there is not enough evidence to fully comprehend the mechanisms behind the reduced immunocompetence when influenza a infection occurs. future studies should focus on special consideration treatments for patients who have not been vaccinated and have cardiovascular diseases. this review focuses on how obesity and cardiovascular disease impact the response to influenza.
retrospective studies demonstrate that during the h1n1 pandemic, obesity was identified as a risk factor for hospitalization, mechanical ventilation, and mortality upon infection. these data deserve emphasis, since it is projected that nearly % of the worldwide population will be obese by . several case studies have identified possible effects of obesity on viral replication in the deep lung, progression to viral pneumonia, and prolonged viral shedding [ ]. therefore, management of influenza infection in this at-risk population requires special consideration, given that these patients may not respond optimally to vaccination [ ].

excessive fat accumulation that results in obesity impairs health in adults [ ]. the low-grade chronic inflammatory state it induces negatively impacts immune function and host defense [ ], as shown during the influenza a (h1n1) pandemic, where obesity proved to be an independent risk factor for severe disease, hospitalization, mechanical ventilation, and mortality upon infection [ ]. it is well known that influenza a virus infection is characterized by fever, myalgia, rhinorrhea, sore throat, and sneezing. these symptoms peak - days post-infection, with viral shedding peaking at days - . usually, infection is limited to the upper respiratory tract; however, in severe cases the lower respiratory tract, including the lungs, can be affected, often requiring hospitalization. this progression is more common in obese patients, leading to slower resolution of infection than in non-obese patients [ ]. obesity also plays a role in the outcome of critical complications from influenza a/pdmh1n1 infection and is associated with longer mechanical ventilation for severe acute respiratory distress syndrome and shock [ ]. a higher body mass index (bmi) and metabolic syndrome in patients with influenza have been associated with an increased risk and length of hospitalization [ ] [ ] [ ], and with increased disease severity, morbidity, and mortality during lower respiratory tract infections. this might be explained in part by increased lung permeability during infection, as found in mouse studies: obese mice have increased protein leakage from the lung into the bronchoalveolar lavage fluid when compared to lean mice. additionally, lung edema and oxidative stress are also increased, which emphasizes the multiple etiologies of increased lung pathology in the obese host and the impairment in wound repair [ , ].

the obesogenic state can also affect influenza a virus evolution. obese individuals can be malnourished despite their excess fat; they might also present nutrient deficiencies, such as of vitamins [ ], minerals, and trace elements [ ]. there are a variety of mechanisms by which nutritional imbalances could alter within-host viral evolution [ ]. studies have shown that such imbalances prolong infections, delay clearance, and increase shedding ( % longer than in the non-obese) [ ], all of which potentially increase viral transmission [ ]. in addition to the decreased immunocompetence mechanisms, other potential factors might contribute to the increased susceptibility to infection in the hospital setting: examples include underlying diseases that affect mobility, an increased risk of skin problems, prolonged hospital stays and nosocomial infections, altered pharmacokinetics of some drugs, and an increased susceptibility to post-surgery infections [ ].
thus, it is a complex problem that needs further evidence to develop better treatments for this growing population. obesity causes a chronic, generalized, and constant state of inflammation with negative effects on immunity. obese people have delayed immune responses to influenza virus infection and experience slower recovery from the disease. in addition, the efficacy of treatment and of the vaccine is reduced in this population; this alters the viral life cycle and, coupled with an already weakened and delayed immune response, leads to more serious disease. poor innate and adaptive responses to infection and vaccination create an impaired ability to respond appropriately to infection. the efficacy of the vaccine may be decreased in obese humans; however, more studies are needed to better understand how the obese state affects infection control [ ]. previous studies suggest that the severity of influenza virus infection is multifactorial and may be related to viral spread in the lung and to lung repair, and to the formation of extracellular aggregates of neutrophils at the lung level; however, this mechanism in obese individuals is unknown [ ]. studies of vaccine efficacy in human groups have shown that initial seroconversion rates are high in the obese population, but that over time there is a greater decline in efficacy than that observed in non-obese populations [ ].

influenza vaccine, as a method of prevention, is formulated each year, typically containing both influenza a and b components. a study conducted in - evaluated whether obesity was associated with an increased risk of influenza and influenza-like illness among vaccinated obese and non-obese adults. among the obese, . % had confirmed influenza or influenza-like illness compared with . % of healthy-weight participants; compared with vaccinated healthy-weight participants, obese participants had twice the risk of developing influenza or influenza-like illness (relative risk = . , % ci . , . , p = . ) (a worked example of this kind of calculation follows below). vaccination therefore remains very important in this risk group [ ]. although it appears that in high-risk groups, such as the obese and overweight population, vaccination may not provide optimal protection, and because of the growing trend of obesity worldwide, the efficacy of the vaccine should be improved [ ]. among cardiovascular disease patients, there is compelling evidence of a lower risk of major adverse cardiovascular events and of reduced hospitalization and mortality [ ] [ ] [ ], with the greatest treatment effect seen among the highest-risk patients with more active coronary disease [ ]. a recent recommendation advocates prioritising vaccination against influenza in obese patients; a vaccination program should be fully evaluated in obese adults. high-dose vaccines designed for people over 65 can also be used in the obese population [ , ].

during influenza epidemics in the twentieth century, there was an excess of mortality from cardiovascular disease [ ]. a recent study that included hospitalizations for acute myocardial infarction demonstrated that the risk of acute myocardial infarction within one week after influenza virus infection was six times higher than the risk during the year before or after the onset of infection [ ].
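as a side note on the arithmetic, the kind of relative risk and confidence interval quoted above can be reproduced from a simple two-by-two table. the counts below are hypothetical placeholders (the study's actual numerators and denominators are elided in this copy), so the output will not match the published figures; the sketch only shows the mechanics.

    import math

    # hypothetical counts of influenza / influenza-like illness cases
    a, n1 = 30, 300   # among vaccinated obese participants
    b, n0 = 15, 300   # among vaccinated healthy-weight participants

    rr = (a / n1) / (b / n0)
    # standard log-based 95% confidence interval for a relative risk
    se_log_rr = math.sqrt((1 - a / n1) / a + (1 - b / n0) / b)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    print(f"RR = {rr:.2f} (95% CI {lower:.2f}-{upper:.2f})")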
cardiovascular complications associated with influenza infection include myocarditis, pericardial effusion, myopericarditis, right and left ventricular dysfunction, myocardial infarction, heart failure, stroke, and circulatory failure due to septic shock [ ] [ ] [ ]. the risk of myocardial infarction after a mild respiratory infection returns to baseline within approximately weeks, but in the case of pneumonia complicated by sepsis the risk persists for up to years after the infection [ ] [ ]. infectious agents (including influenza virus) have been implicated in the etiology of atherosclerosis [ ]. several mechanisms by which influenza increases the risk of cardiovascular events have been described; they may be related to pro-inflammatory mediators, sympathetic stimulation, and activation of the coagulation cascade [ ].

according to the fourth universal definition of myocardial infarction, there are five types of myocardial infarction based on clinical, electrocardiographic, and laboratory evaluation [ ]. influenza infection can trigger type 1 and type 2 myocardial infarctions [ ]. type 1 myocardial infarction is defined as myocardial ischemia caused by atherothrombotic coronary artery disease, and it is usually precipitated by atherosclerotic plaque disruption, whether rupture or erosion [ ]. it is important to remember that atherosclerotic plaques contain inflammatory cells, and pro-inflammatory cytokines, such as interleukins and tumor necrosis factor α, are generated in response to infection. these inflammatory cytokines can activate inflammatory cells in atherosclerotic plaques [ ] [ ]. acute influenza infection is associated with a procoagulant state that increases the risk of coronary thrombosis at sites of plaque disruption [ , ]. infection with influenza virus is associated with the expression of genes that have been linked to platelet activation: h1n1 exposure induces a platelet gene expression signature that is associated with myocardial infarction [ ]. type 2 myocardial infarction might be considered when there is a rise and/or fall of cardiac troponin values and evidence of an imbalance between myocardial oxygen supply and demand unrelated to coronary thrombosis [ ]. influenza infection produces a systemic inflammatory response with a resulting increase in heart rate, shortening the filling time during diastole and thereby compromising coronary blood supply. if septic shock occurs, it may have a substantial adverse effect on coronary perfusion. in older patients with chronic coronary plaques, systemic inflammation causes a cardiac metabolic mismatch, increasing the risk of myocardial infarction [ , ].

influenza infection is also associated with increased mortality in patients with heart failure [ ], who are vulnerable to influenza-associated complications [ ] [ ]. these patients have limited cardiac and respiratory reserves and may not tolerate the metabolic demand and hypoxemia, exacerbating underlying cardiac disease, probably through increased sympathetic nervous system activity, hypoxemia, and renal dysfunction that can lead to volume overload [ ].

in a healthy heart, severe acute influenza infection produces pro-inflammatory cytokine elevations that can cause acute myocarditis [ ] [ ], characterized by a broad spectrum of presentations ranging from asymptomatic courses to signs of myocardial infarction to devastating illness with cardiogenic shock [ ] [ ].
myocarditis often results from common viral infections and post-viral immune-mediated responses [ ]. in acute myocarditis, there is a high incidence of wall motion abnormalities: during influenza epidemics, % of the patients admitted to a military hospital with influenza infection had wall motion abnormalities on echocardiogram [ ]. this is important because inflammatory disease of the myocardium is regarded as a precursor of dilated cardiomyopathy [ ] [ ]. the electrocardiographic findings in patients with myocarditis range from nonspecific t-wave and st segment changes to st segment elevation resembling an acute myocardial infarction; supraventricular and ventricular arrhythmias can also be present. electrocardiographic findings related to poor clinical outcome include qtc prolongation, an abnormal qrs axis, and ventricular ectopic beats [ ]. a recent study evaluating the incidence and hemodynamic consequences of right and left ventricular dysfunction in patients with h1n1 infection demonstrated that on admission % had abnormal ventricular function ( % had isolated left ventricular abnormalities and % had isolated right ventricular dysfunction) and % had biventricular dysfunction. on follow-up, right ventricular function tended to worsen during hospitalization, whereas left ventricular function tended to normalize. however, patients with ventricular dysfunction needed more aggressive therapy and rescue ventilatory strategies, such as inhaled nitric oxide, prone positioning, and extracorporeal membrane oxygenation [ ].

during influenza epidemics, hospitalizations for cerebrovascular diseases increase [ ]. an increased incidence of ischemic stroke within weeks after influenza infection has been suggested [ , ]. the protein c pathway and endogenous fibrinolysis are mechanisms associated with both cerebrovascular ischemia and influenza infection [ ]. influenza infection creates a prothrombotic state by increasing tissue factor expression, and decreases fibrinolytic capacity through increased plasminogen-activator inhibitor-1 (pai-1) expression; this results in an imbalance between coagulant and anticoagulant pathways [ ]. as in myocardial infarction, the relationship between systemic inflammation and stroke pathophysiology has shown that stroke often occurs in a pre-existing state of inflammation due to atherosclerosis, obesity, or infection [ ].

the sixth joint task force of the european society of cardiology and other societies on cardiovascular disease prevention in clinical practice recommends that annual influenza vaccination can be considered in patients with established cardiovascular disease (class iib, level c) [ ], based on the fact that the risk of a cardiovascular event (myocardial infarction or stroke) is more than four times higher after a respiratory tract infection, with the highest risk in the first days after infection [ ]. several studies have demonstrated that influenza vaccination reduces mortality, hospitalization, and acute coronary syndromes in patients with coronary heart disease and/or heart failure [ ]. the mechanisms by which acute inflammation affects the risk of vascular events include endothelial dysfunction, a procoagulant state, and inflammatory changes in atherosclerotic plaques [ , ].
it is also known that persistent systemic inflammatory activity is a risk factor for cardiovascular disease, and higher interleukin-6 blood levels increase cardiovascular mortality at one year after pneumonia [ ] . this systemic inflammatory response can be reduced by vaccination [ ] . when estimating the costs and benefits of interventions to prevent pneumonia, the association of pneumonia with cardiovascular disease risk should also be considered [ ] . patients with chronic heart failure are vulnerable to influenza-related complications (including secondary infections, such as pneumonia, and acute heart failure exacerbations). recently, the paradigm-hf trial assessed associations between receiving influenza vaccine and cardiovascular death or heart failure hospitalizations, all-cause hospitalizations, and cardiopulmonary or influenza-related hospitalizations, concluding that vaccination was associated with a reduced risk of death [ ] . two possible mechanisms by which influenza vaccination may reduce cardiovascular events have been described: unspecific and specific effects [ ] . the unspecific mechanism is based on the fact that influenza infection causes a systemic inflammatory response, endothelial dysfunction, and a procoagulant state. these factors have negative effects on patients with pre-existing cardiovascular diseases, such as ischemic heart disease and heart failure, causing acute heart failure, pulmonary edema, or destabilization of chronic ischemic heart disease, leading to myocardial infarction or sudden cardiac death [ ] . influenza vaccination reduces the risk of infection and inflammation by decreasing the secretion of pro-inflammatory mediators, such as cytokines (which reduce myocardial contractility) and metalloproteinases (which cause adverse cardiac remodeling and plaque rupture); influenza vaccination also inhibits platelet activation and clot formation [ ] . the specific mechanism takes into account the immunological properties of the vaccine. the protective effect of the influenza vaccine has been demonstrated in multiple studies. to explain the pleiotropic effect of the influenza vaccine, "antigen mimicry" between the atherothrombotic plaque and the influenza virus has been proposed [ ] . it has also been proposed that there is an autoimmune "cross-reaction" between influenza and atherosclerosis [ ] . the figure summarizes the possible cardioprotective mechanisms of influenza vaccination [ ] . it is well established that obese patients can develop cardiovascular diseases; it is less well known, however, that the chronic low-grade inflammatory state of obesity might impair host defense and immune cell function, so that infections such as influenza a can have detrimental outcomes in these patients, including increased risk, length of hospitalization, disease severity, morbidity, and mortality. cardiovascular diseases, such as ischemic heart disease and heart failure, combined with influenza a infection can trigger acute heart failure exacerbations that increase overall mortality in the hospitalized setting. cardiovascular complications associated with influenza infection include myocarditis, pericardial effusion, myopericarditis, right and left ventricular dysfunction, myocardial infarction, heart failure, stroke, and circulatory failure due to septic shock.
several mechanisms have been described by which influenza increases the risk of cardiovascular events; they might be related to pro-inflammatory mediators, sympathetic stimulation, and activation of the coagulation cascade. while influenza vaccination is associated with a significant reduction in all-cause mortality risk in patients with heart failure, this cardioprotective mechanism may not function as intended in the obese population, since obese patients do not always respond optimally to vaccination. therefore, in an effort to prevent these complications, and in the absence of treatments specifically tailored to this population, we strongly suggest a weight-loss approach. future studies should focus on developing targeted treatments that can combat the reduced immunocompetence that excess adiposity causes.

references (papers of particular interest, published recently, have been highlighted as: • of importance):
• impact of obesity on influenza a virus pathogenesis, immune response, and evolution
• the impact of obesity on the immune response to infection
• immunity to influenza: impact of obesity
• high body mass index as a risk factor for hospitalization due to influenza: a case-control study
• epidemiology of severe influenza outcomes among adult patients with obesity in detroit
• association between vitamin deficiency and metabolic disorders related to obesity
• rna virus evolution, population dynamics, and nutritional status
• obesity increases the duration of influenza a virus shedding in adults
• obesity outweighs protection conferred by adjuvanted influenza vaccination
• obesity is associated with impaired immune response to influenza vaccination in humans
• increased risk of influenza among vaccinated adults who are obese
• cardioprotective effect of influenza and pneumococcal vaccination in patients with cardiovascular diseases
• influenza vaccines for preventing cardiovascular disease
• association between influenza vaccination and cardiovascular outcomes in high-risk patients
• acute infection and myocardial infarction
• acute myocardial infarction after laboratory-confirmed influenza infection
• myocardial dysfunction during h1n1 influenza infection
• beneficial effects of vaccination on cardiovascular events: myocardial infarction, stroke, heart failure (cardiol)
• cardiovascular manifestations associated with influenza virus infection
• severe infections and subsequent delayed cardiovascular disease
• influenza and cardiovascular disease: a new opportunity for prevention and the need for further studies
• fourth universal definition of myocardial infarction
• diffuse and active inflammation occurs in both vulnerable and stable plaques of the entire coronary tree: a histopathologic study of patients dying of acute myocardial infarction
• gene expression profiles link respiratory viral infection, platelet response to aspirin, and acute myocardial infarction
• sepsis, thrombosis and organ dysfunction (schulz c, editor)
• relation of concomitant heart failure to outcomes in patients hospitalized with influenza
• association of influenza-like illness activity with hospitalizations for heart failure: the atherosclerosis risk in communities study
• decreased immune responses to influenza vaccination in patients with heart failure
• update on myocarditis
• current state of knowledge on aetiology, diagnosis, management, and therapy of myocarditis: a position statement of the european society of cardiology working group on myocardial and pericardial diseases
• management of myocarditis-related cardiomyopathy in adults
• influenza vaccination and reduction in hospitalizations for cardiac disease and stroke among the elderly
• temporal relationship between influenza infections and subsequent first-ever stroke incidence
• influenza-like illness as a trigger for ischemic stroke
• influenza and stroke risk: a key target not to be missed?
• influenza virus infection aggravates stroke outcome
• european guidelines on cardiovascular disease prevention in clinical practice: the sixth joint task force of the european society of cardiology and other societies on cardiovascular disease prevention in clinical practice (constituted by representatives of societies and by invited experts, developed with the special contribution of the european association for cardiovascular prevention & rehabilitation). g ital cardiol (rome)
• risk of myocardial infarction and stroke after acute infection or vaccination
• association between hospitalization for pneumonia and subsequent risk of cardiovascular disease
• inflammatory markers at hospital discharge predict subsequent mortality after pneumonia and sepsis
• influenza vaccination in patients with chronic heart failure: the paradigm-hf trial
• from vulnerable plaque to vulnerable patient: a call for new definitions and risk assessment strategies: part ii
• influenza vaccine as prevention for cardiovascular diseases: possible molecular mechanism

key: cord- - hj sev authors: miroudot, sébastien title: reshaping the policy debate on the implications of covid-19 for global supply chains date: - - journal: j int bus policy doi: . /s - - - sha: doc_id: cord_uid: hj sev

disruptions in global supply chains in the context of the covid-19 pandemic have re-opened the debate on the vulnerabilities associated with production in complex international production networks. to build resilience in supply chains, several authors suggest making them shorter, more domestic, and more diversified. this paper argues that before redesigning global supply chains, one needs to identify the concrete issues faced by firms during the crisis and the policies that can solve them. it highlights that the solutions that have been proposed tend to be disconnected from the conclusions of the supply chain literature, where reshoring does not lead to resilience, and that they could further benefit from the insights of international business and global value chain scholars. lastly, the paper discusses the policies that can build resilience at the firm and global levels, and the narrative that could replace the current one to reshape the debate on the policy implications of covid-19 for global supply chains.

with covid-19, the debate has re-emerged on the vulnerabilities of an interconnected world where goods are produced in complex value chains that span borders.
international production and supply chains were criticized for the economic disruptions they allegedly created when a pandemic interrupted trade and the movement of people across countries, adding to existing fears and concerns about globalization (kobrin, ) . reshaping global supply chains, possibly making them shorter, more domestic, or more diversified, was therefore proposed to bring some resilience into production networks (coveri, cozza, nascia, & zanfei, ; javorcik, ; lin & lanng, ; o'leary, ; o'neil, ; shih, ) . this debate builds on several concepts used in supply chain risk management, starting with 'resilience'. however, some of the solutions proposed, such as reshoring or diversifying production away from china, may be motivated by a policy agenda other than risk mitigation (evenett, ) . in this paper, i argue that before reshaping global supply chains, the debate itself needs to be reframed and more solidly grounded in business reality and in lessons from the literature. there is an important corpus of knowledge in the supply chain and risk management literature that tells firms what to do to improve the resilience of their own production networks. however, there are fewer answers on what resilience means at the country or global level and on what global value chain-oriented policies can be adopted to strengthen it. this is where the international business (ib) and global value chain (gvc) literature can provide further insights. for the ib community, covid-19 can be seen as an opportunity to bring to policy circles the knowledge on firms and the organization of multinational enterprises (mnes) that can help to shape the debate on the resilience of supply chains, in line with the ambition of the journal of international business policy (van assche, ; van assche & lundan, ) . as noted by lorenzen, mudambi and schotter ( ) , studying mne risk mitigation strategies in the context of covid-19 can be a fruitful avenue for ib research. strange ( ) already provides some interesting thoughts about how gvcs may be reorganized once the crisis is over. the concept of resilience is not new in the gvc literature. it was used, for example, to highlight the recovery of trade networks after the 2008-2009 great financial crisis (cattaneo, gereffi, & staritz, ) . more recently, gereffi ( ) addresses the issue of the resilience of medical supply gvcs. however, as policymakers now seem to associate resilience with a specific type of organization of gvcs, in which mnes produce mostly through more localized or shorter supply chains, new questions arise about the type of governance that would allow such an organization and about the ways policymakers could influence the design of gvcs. the main risk with the current debate on the economic policy implications of covid-19 is that it can lead to the use of supply chain concepts by policymakers and international organizations in a way that departs from business reality, thus leading to wrong policy choices. the idea that reshoring unambiguously improves the resilience of supply chains, for example, is not supported by academic research. if there is a case for linking reshoring to higher resilience, it should be made on the basis of evidence and of a deeper discussion of the specific circumstances in which it might be a risk-mitigating strategy, and one would also need to disentangle the different policy rationales (e.g., bringing jobs back home versus creating more resilient supply chains).
what is at stake in this debate are three decades of productivity gains and innovation driven by the internationalization of production, as well as higher levels of income in many emerging economies (world bank, ) . building more resilient supply chains should not lead to the dismantlement of gvcs. nor should it replace the risks related to covid-19 with new policy hazards and a higher level of uncertainty for companies. against this backdrop, the paper suggests that the debate on the policy implications of covid-19 for international supply chains can be improved in three ways. first, there is a need to better understand the 'vulnerabilities' of global supply chains during covid-19; that is, the primary step in reshaping the debate is to identify what went wrong during covid-19. second, one needs to compare the current policy proposals to established insights from the business literature. i illustrate this with a discussion of the effects of building redundancy in suppliers, just-in-case management, and domestic supply chains. the literature indicates that these strategies are not the best suited to boost resilience in gvcs. still, their analysis is useful to point to better and more realistic policy options and to see where ib research can help. lastly, as the literature suggests that resilience is built at the firm level (or at the level of mnes or lead firms in gvcs), the question is what resilience means at the country or global level and what governments can do to strengthen it. answering these questions can set the stage for a new narrative on the policy implications of covid-19 for global supply chains.

prescribing the solution

unlike the great financial crisis of 2008-2009, which provoked a collapse of trade financing, covid-19 has prompted an economic crisis that is not specifically a trade crisis. the most affected industries are services that do not rely on long and complex value chains but involve movements of people (benz, gonzales, & mourougane, ) . as china was the first country to put a lockdown into effect in january 2020, there was initially the fear that many manufacturing gvcs would be disrupted because key inputs from china would not be delivered. this immediately triggered a series of papers warning about the vulnerabilities of international supply chains and the risks of producing in china (braw, ; gertz, ; linton & vakil, , among others) . many manufacturing value chains indeed rely on inputs produced in china, and calculations with international input-output tables suggest that losing access to chinese inputs can have a high economic impact on the rest of the world (baldwin & freeman, ) . however, there is a lack of evidence at this stage on how serious the disruptions related to the (partial) lockdown of the chinese economy actually were. the reason is that large parts of the world also implemented lockdowns a few weeks later, and demand for most manufacturing goods started to fall at the same time. the chinese lockdown was also relatively short, and china was the first country to restart its economy. macro calculations can give an indication of how important a country is as a supplier of inputs to others. however, since companies have risk management strategies and inventories, the actual impact of china temporarily shutting down its exports is not known. the figure compares the projected fall in gdp in 2020 according to the latest oecd economic outlook (oecd, 2020a) with the import intensity of production in g20 economies (an indicator of the reliance on imported inputs all along the value chain).
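to make the figure's comparison concrete, the following minimal sketch shows how such a cross-country check can be run. it is illustrative only: the two series below are hypothetical placeholder values for eight unnamed economies, not the oecd economic outlook or tiva figures.

```python
# minimal sketch: correlate a projected gdp fall with an import-intensity
# measure across economies. all values are hypothetical placeholders.
from statistics import correlation  # python 3.10+

gdp_fall_pct = [-1.2, -6.0, -6.6, -11.4, -11.3, -7.3, -2.6, -11.5]
import_intensity = [0.30, 0.15, 0.25, 0.22, 0.21, 0.13, 0.17, 0.23]

r = correlation(gdp_fall_pct, import_intensity)
print(f"pearson correlation: {r:.2f}")  # values near zero suggest no link
```

the original figure is essentially a scatter plot of these two series, one point per economy; the substance lies in whether a relationship appears.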
there is no apparent correlation between the two. the country most dependent on gvcs is korea, which happens to be the economy with the lowest projected fall in gdp for 2020. at the opposite side of the figure are eu economies that depend on their regional supply chains (but not so much on china) and were severely hit by covid-19. the idea that dependence on china or on some other country creates supply chain vulnerabilities, and that covid-19 has somehow materialized this fear, would need to be substantiated by strong quantitative evidence, and this evidence would have to point to the size of economic losses rather than just the existence of disruptions. when analyzing the evidence, it is also important to propose some counterfactuals. using a quantitative model of production and trade, bonadio, huo, levchenko and pandalai-nayar ( ) find, for example, that there is a large drop in world gdp due to the transmission of the covid-19 shock through gvcs. however, the drop in gdp is higher under a scenario of 'renationalization' of global supply chains. also using a quantitative model, oecd (2020b) highlights that in addition to higher costs, the re-localization of supply chains leads to higher volatility in output, as there are fewer channels for economic adjustment. therefore, the right question might not be whether there are vulnerabilities associated with international sourcing, but whether these vulnerabilities are higher than if production were concentrated domestically. while the jury is still out on the actual vulnerabilities of gvcs during covid-19, three types of concrete issues have nonetheless been highlighted that deserve to be addressed from a policy perspective. the chinese supply shock at the end of january and beginning of february 2020 is an example of international supply chain risk. whether it is a pandemic or a natural disaster, production can suddenly halt in one region of the world and induce a contagion effect in other regions through international supply chains. this was observed in 2011 with the tōhoku earthquake and tsunami in japan and the chao phraya floods in thailand, and in 2005 with hurricane katrina in the united states. many companies have learned to deal with such country-specific shocks by reinforcing their risk management strategies to mitigate the impact on their production processes. if building more resilient supply chains simply means improving the capacity of firms to face country- or region-specific supply chain risks, there is already abundant literature that indicates how this can be done (christopher & peck, ; sheffi, 2005a; manuj & mentzer, ; pettit, fiksel, & croxton, ; kamalahmadi & parast, ) . from the experience of firms at the beginning of 2020, it might be possible to revisit this literature and to draw new conclusions from additional case studies, as the field is always evolving (pettit, croxton, & fiksel, ) . such case studies would be particularly useful for gaining insights into what exactly went wrong with global supply chains beyond the supply and demand shocks that affected all firms during covid-19. the second type of disruption that has triggered the debate on gvcs relates to the production of medical supplies, and more particularly personal protective equipment (ppe). the shortage of face masks, a key product in the fight against the coronavirus, was also quickly turned into an international supply chain issue. however, the story behind the face mask shortage was an exceptional surge in demand (oecd, 2020c; gereffi, ) .
the country that concentrated half of the world's production of face masks (china) also faced a shortage, suggesting that, domestic production or not, the way to deal with a surge in demand is not to ask where production takes place but how production capacity can be rapidly ramped up. the shortage was also exacerbated by export restrictions put in place by some countries and by the fierce competition between governments for access to existing stocks of face masks (fiorini, hoekman, & yildirim, ) , highlighting that the problem is not limited to the organization of supply chains. china and gvcs provided the solution to the shortage with massive exports from china to the rest of the world in april-may 2020, and one can wonder whether what seemed to be the issue (international sourcing from china) was not retrospectively the solution. if building more resilient supply chains means preventing future shortages of essential products, the answer might lie in a discussion of stockpiling strategies, contingency plans, and public-private partnerships, as well as in addressing export restrictions put in place by governments (oecd, 2020c). like companies, governments need to assess risks and to have risk management strategies that include plans for the production of essential goods (dasaklis, pappis, & rachaniotis, ) . a closer look at supply chains from an ib perspective could nonetheless bring additional insight into how companies themselves can prepare for a surge in demand and, more generally, for volatility in demand (as a surge in demand is followed by a fall once the crisis is over, leaving many companies with excess production capacity). supply chain risk and the issue of volatility in demand are not new and may not require the 'world after' to be radically different from the world of yesterday. although not necessarily making the headlines when they are unrelated to major natural disasters or a pandemic, disruptions in value chains are frequent (logistics incidents, a fire in a warehouse, the bankruptcy of a supplier, etc.). not all companies are well prepared to face risks (mckinsey global institute, ) , but some are (sheffi, ) , and building on advances in supply chain and risk management, companies should come up with sensible answers to the question of the resilience of their supply chains. new advances in big data analytics and the internet of things (iot) are also likely to provide new answers (birkel & hartmann, ) . the third type of disruption, and maybe the most prevalent during covid-19, relates to the functioning of international trade networks. trade did not come to a halt at the height of the crisis, but it was definitely more complicated (and more costly) for firms to export and to import because of tensions in transport services and issues with border controls (oecd, 2020d; wto, ) . with travel bans, the supply of air cargo services was reduced, as half of air cargo shipments travel on passenger flights. longer delays at the border were observed for customs procedures due to new health regulations and tighter controls, also affecting maritime and land transport. new port procedures and rules on the disembarkation of crews were also responsible for reduced capacity in the shipping industry. while transport companies are the ones that can mitigate the impact of such disruptions, it should be noted that these disruptions are the consequence of measures put in place by governments, and that making border processes faster and safer is the basis of trade facilitation policies.
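the stockpiling strategies mentioned above can be made concrete with a stylized, newsvendor-type sizing sketch. this is only an illustration under explicitly hypothetical assumptions: the surge size, ramp-up time, uncertainty, and cost parameters below are invented, not estimates for any real ppe market.

```python
# stylized stockpile sizing for a demand surge (newsvendor logic).
# all parameters are hypothetical assumptions for illustration only.
from statistics import NormalDist

daily_baseline = 1.0e6      # units/day of normal demand (assumed)
surge_multiplier = 10       # crisis demand relative to baseline (assumed)
ramp_up_days = 60           # days until producers scale up output (assumed)
demand_sigma = 0.25         # relative uncertainty of the surge gap (assumed)
holding_cost = 0.02         # cost of stockpiling one unit (assumed)
shortage_cost = 1.00        # societal cost of one unit short (assumed)

# expected cumulative gap between surge demand and pre-crisis supply
mean_gap = daily_baseline * (surge_multiplier - 1) * ramp_up_days

# critical fractile: optimal service level given the two costs
service_level = shortage_cost / (shortage_cost + holding_cost)
z = NormalDist().inv_cdf(service_level)

stockpile = mean_gap * (1 + z * demand_sigma)
print(f"target stockpile: {stockpile / 1e6:.0f} million units")
```

the point of such a toy model is not the number it prints, but that the 'right' stockpile depends on how fast production can be ramped up and on the relative cost of shortages, which is a risk management discussion rather than a question of where production is located.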
addressing these types of issues does not require a reorganization of gvcs. once the problems faced by firms are clearly identified, it becomes easier to move to an evidence-based policy discussion and to see what types of policies or cooperation across countries could actually bring answers to supply chain risks or volatility in demand.

redundancy, just-in-case, and domestic supply

in addition to not having properly identified the issues to be solved, the debate on covid-19 and global supply chains started with strong statements about solutions, relying on concepts from the business literature. these concepts tend to be used without actually referring to academic work, which can be explained by the fact that what the literature has to say is quite different from the recommendations made. this can be illustrated with three concepts used in the papers telling us about the new normal of supply chains in the post-covid-19 world: redundancy, just-in-case inventories, and domestic supply chains. a new word that has appeared in the supply chain vocabulary of policymakers is 'redundancy'. in order to build more robust value chains, there should be some redundancy in suppliers (or supplier diversification), so that in case of the failure of one, others can step in and provide the required inputs. redundancy is part of the toolkit of risk management strategies and can be applied not only to suppliers but also to inventories or production capacity (kamalahmadi & parast, ) . however, it is generally regarded as a costly solution for mitigating risk. as summarized by yossi sheffi, one of the leading experts in organizational resilience: "companies can develop resilience in three main ways: increasing redundancy, building flexibility, and changing the corporate culture. the first has limited utility; the others are essential." (sheffi, 2005b) . in an empirical study looking at us firms, jain, girotra and netessine ( ) found that supply chains with more diversified sourcing (i.e., where the same products are sourced from different suppliers) recover more slowly after a disruption than supply chains relying on single sourcing. one of the reasons for this is that single sourcing is associated with long-term relationships with suppliers. these long-term relationships ensure faster recovery because suppliers are more committed to mitigating risks, are ready to go beyond their contractual obligations to address disruptions, and are more integrated into the production processes of the firm, with more information sharing. redundancy also means having some extra inventory or additional production capacity to face crises. however, the cost of holding a large inventory or maintaining spare production capacity often outweighs the gains from mitigating risks, particularly in the case of low-probability events. for companies that regularly face hurricanes or adverse climate conditions, for example, redundancy can make sense (sheffi, ) , but one cannot expect companies to invest in extra production capacity and inventories for a once-in-a-century pandemic. the issue of redundancy is clearly one where ib research can help to shape the policy debate. the multinational enterprise has been analyzed as a network of subsidiaries operating in different countries with the objective of managing risks, such as exchange rate volatility or policy risk (kogut & kulatilaka, ) .
even if there are switching costs, covid-19 has illustrated that some companies used their networks to reallocate production, as when samsung temporarily moved the production of its high-end mobile phones from korea to vietnam after its factory was threatened by the coronavirus (financial times, ) . this type of redundancy is more a matter of flexibility and does not imply duplicating capacity or multiplying the number of suppliers on a permanent basis. however, it is one advantage that mnes have over companies operating in a single market. explaining what type of redundancy is useful for building resilience in gvcs, and how this redundancy is related to international production, could improve the terms of the debate. the discussion on inventories is related to 'just-in-time' strategies, which have contributed to reducing the size of buffer stocks. just-in-time (jit) inventory management was introduced by toyota and was quickly adopted by many manufacturing companies around the world as an effective strategy to reduce costs, shorten lead times, and improve the quality of production (keller & kazazi, ) . jit is part of lean manufacturing strategies aimed at reducing all costs and waste in the production process (bhamu & singh sangwan, ) . now the idea would be to switch to 'just-in-case' management, where a loss in economic efficiency would be traded off for increased security in the supply of inputs. but what exactly is the underlying management strategy? 'just-in-case' is an expression used in the literature to describe what came before jit, or in the risk management literature to discuss whether higher inventories are needed (srinidhi & tayi, ) . but it is not a specific management model that could be mainstreamed to make supply chains more resilient, unless the idea is to go back to the management of inventories as it was before the ict revolution and modern logistics. 'just in case' is a very vague proposal that perhaps only suggests adjusting jit to better take risk management into account. however, this is already the case, as risk management strategies and jit generally go together. firms that invest in reducing inventories and in making their production processes as efficient as possible all along the value chain are also the ones investing in the monitoring and management of risks. this can be illustrated with cisco's supply chain risk management, which is often cited as an example of best practice. cisco aims at identifying the right level of inventories to achieve both resilience and efficiency (miklovic & witty, ) . in may 2020, in the middle of the covid-19 crisis, 3m, one of the main manufacturers of face masks, announced that it plans to reduce the cost of its inventories by usd 500 million in the coming years in order to operate its supply chains more efficiently (supply management, ) . this highlights that companies producing essential goods are also looking for lean inventories and do not see this as contradicting their risk management and business continuity objectives. a point made by pisch ( ) is also that jit companies have lower inventory costs. therefore, if there is a need to increase inventories to reduce risks, they are also better placed to do so in a more competitive way. a related consideration is that when there is a fall in demand, as in the current covid-19 crisis, companies with low inventories incur smaller losses than those with high inventories.
the paradox is that if 'just-in-case' were currently the predominant strategy of firms, more of them could go bankrupt with covid-19 (a stylized expected-cost comparison below illustrates this trade-off). finally, it should be noted that the manufacturing paradigm has also recently shifted from 'lean manufacturing' to 'agile manufacturing' (potdar, routroy, & behera, ) . while some firms may still follow jit and lean production, new business models put more emphasis on the capacity of firms to adapt to change and to produce in uncertain environments. some ib scholars suggest that the international business environment is now characterized by volatility, uncertainty, complexity, and ambiguity (vuca) and that, in this new vuca world, firms need to develop dynamic capabilities to remain competitive (bennett & lemoine, ; teece, ; van tulder, verbeke, & jankowska, ) . what the authors reacting to covid-19 are calling for is thus already under way (and efficiency does not have to be sacrificed to achieve resilience). further insights into the new paradigms of firms and into what they do and intend to do as a consequence of the covid-19 pandemic could also help to bring the policy debate closer to the decisions of firms. the idea that domestic production is more resilient than international production is also not found in the risk management literature. the main reason is that there are many risks in the domestic economy as well, and this literature does not try to identify the safest place to produce but rather the strategies companies can put in place to mitigate risks. for example, companies producing in japan will always face a high risk of earthquakes. some countries may be less exposed to natural disasters but will face other risks, such as exchange rate volatility, strikes, social unrest, or a pandemic. there are also risks that are not related to the location of production, such as bankruptcy. if a supplier goes bankrupt (a high risk during the covid-19 recession), it does not matter whether it produces in the domestic economy or not: inputs will no longer be supplied. another type of risk that has recently gained attention is cyber risk (ghadge, weiß, caldwell, & wilding, ) . as supply chains increasingly rely on information and communication technologies, they are more vulnerable than before to cyber-attacks and it failures, a risk that is not lower when production is domestic (and potentially higher if domestic firms all use the same it infrastructure). risk management is about looking at the whole portfolio of risks, which can lead to different decisions about the location of production. domestic production might indeed be the strategy in some cases, but it would be the result of a decision integrating a variety of risks, and risk is only one determinant of the location of production among others. until recently, the concept of reshoring could not be found in the business literature and was mentioned more as a hypothetical case when discussing offshoring. some anecdotal evidence of companies actually reshoring their activities has prompted new research. however, the literature does not regard supply chain risk as one of the main determinants of reshoring (wiesmann, snoei, hilletofth, & eriksson, ) . minimizing disruptions in supply chains and reducing delivery times might be a driver, but studies generally emphasize the limits of reshoring (bailey & de propris, ) .
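the cost logic behind the jit versus 'just-in-case' discussion above can be illustrated with a back-of-the-envelope expected-cost comparison. all probabilities and costs are hypothetical assumptions chosen only to show the structure of the trade-off for low-probability events.

```python
# toy expected-cost comparison: lean (jit) vs 'just-in-case' buffers under a
# rare disruption. all parameters are hypothetical assumptions.
carrying_rate = 0.20        # annual carrying cost per unit of inventory value
p_disruption = 0.01         # yearly probability of a major supply shock
extra_loss_if_lean = 10.0   # additional loss (in millions) with thin buffers

def expected_annual_cost(inventory_value: float, extra_loss: float) -> float:
    """carrying cost plus expected disruption loss, in millions per year."""
    return inventory_value * carrying_rate + p_disruption * extra_loss

lean = expected_annual_cost(5.0, extra_loss_if_lean)   # jit: small buffers
jic = expected_annual_cost(15.0, 0.0)                  # 'just in case'
print(f"jit: {lean:.2f}m/yr vs just-in-case: {jic:.2f}m/yr")
# with these numbers jit wins (1.10 vs 3.00): for low-probability events the
# permanent carrying cost of large buffers exceeds the expected avoided loss.
```

reversing the conclusion requires either a much higher disruption probability or a much larger avoided loss, which is exactly the balancing act firms perform in their risk management.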
some further insights that could be brought from the ib literature concern what firms have to lose (or to change) when they are disconnected from the most efficient suppliers or from international knowledge networks. there are advantages to producing locally, such as not bearing all the additional costs related to cross-border transactions and managing activities abroad. the question is what kind of location advantages from offshore locations are traded for these domestic location advantages. more generally, there are important insights from the ib literature that can help policymakers develop a better understanding of the relationship between the organization of global supply chains and risk. risk is part of the location advantages in dunning's eclectic theory (dunning, ) . the policy or institutional risk in the host country of mnes has always been regarded as an important determinant of fdi, with heterogeneous responses (buckley, chen, clegg, & voss, ) . volatility in real exchange rates is another risk specific to international production that can lead firms to look for options and for the flexibility of switching production across countries (kogut & kulatilaka, ) . real options theory provides a theoretical basis for this type of diversification strategy and can be applied to a variety of other risks beyond exchange rates (chi, trigeorgis, & tsekrekos, ) . internalization theory can also potentially address the issue of risks in supply chains, with an answer not limited to the geographical location of activities but extending to the boundaries of firms and what they decide to outsource or not (strange, ) . the geography of mnes is the result of complex strategic decisions (mudambi et al., ) . a new imperative related to the mitigation of supply chain risks can affect these decisions and change the geography and boundaries of firms, but the idea of reshoring seems simplistic compared to the sophisticated location decisions described in the ib literature and the constraints faced by firms to remain competitive.

help to know what to do

now that it seems accepted that covid-19 has revealed the vulnerabilities of international supply chains, governments are under pressure to show that they are taking action to fix gvcs. this is why it is dangerous to leave an analytical vacuum in which the only solutions proposed would be the ones analyzed in the previous section. reshaping the debate and introducing a different set of answers derived from the business literature and supported by empirical evidence requires addressing three questions. one is a matter of communication; the two others are more fundamental. while there is some convergence in the risk management literature on what can improve the resilience of supply chains, the first question is how to communicate the results of this research to policymakers. one issue is the diversity of the concepts used to describe what firms need to achieve. under the list of 'capabilities' to be developed are the concepts of flexibility, agility, visibility, adaptability, and collaboration (kamalahmadi & parast, ) . each concept casts light on a different aspect of what makes firms able to react quickly to a crisis and mitigate its impact, but there is also some overlap between them. as concepts, they also carry some level of abstraction, and both businesses and policymakers might regard them as somewhat disconnected from their daily work. it would be useful to simplify the message and to synthesize these different aspects.
one reason why the concept of 'resilience' is successful (even if not always properly used) is that it sounds like a reasonable and simple objective. as the pendulum currently seems to be more on the side of limiting international supply, increasing inventories, and diversifying suppliers, there is a need to move it more in the direction of flexibility and agility, where firms do not have to become less efficient to mitigate risks. the role of collaboration (scholten & schilder, ) , which is related to visibility, might also be interesting to emphasize from a policy perspective (having in mind governments as potential actors in this collaboration). the second question is whether solutions lie at the firm level or the gvc level. the risk management literature focuses on making firms resilient, which can be measured by the time they take to recover from a disruption. according to martins de sá et al. ( ) , resilience in the value chain does not depend on the organizational features of the supply chain but rather on efficient risk management strategies in firms that are able to reconfigure the value chain to mitigate disruptions. here it might be useful to refer to gvc analysis and to the different models of governance of supply chains (gereffi, humphrey, & sturgeon, ) . if a lead firm controls the whole value chain (a captive or vertically integrated value chain), ensuring that the lead firm has the capabilities needed for effective risk mitigation might be enough to create resilience all along the supply chain. the same might be achieved through collaboration in some relational value chains. in the case of market linkages and modular value chains, one may have to distinguish the resilience of the supply chain from the resilience of specific firms. moving from policy proposals focusing mainly on the design of supply chains (e.g., making them shorter, more domestic, and more diversified) to proposals enhancing the capabilities of firms (e.g., helping them to develop flexibility and agility, as well as visibility in their supply chains) requires clarifying the intersection between firms and gvcs (pananond, gereffi, & pedersen, ) . on the one hand, the concept of resilience at the gvc level (as policymakers understand it, i.e., for the value chains of a large range of final producers belonging to the same broad industry, such as medical supplies or food) is more difficult to define, as different firms will recover at a different pace after a disruption (and not all of them may be affected in the first place). in theory, the production of final goods can only resume when production all along the value chain starts again, but this can still leave some final producers and input suppliers at different stages of recovery when considering the gvcs of a whole industry. on the other hand, the type of resilience discussed by policymakers (which is more about reducing and diversifying risks than about shortening the time to recover from a disruption) might be easier to achieve at the gvc level. for example, reducing dependence on inputs from a specific partner country can be the result of different firms sourcing from different countries, without asking each firm to diversify its suppliers (ferrarini & hummels, ) . the third question is how governments can influence the sourcing decisions or capabilities of firms, as well as the organization of gvcs. it should be noted that this question is the same whether one is promoting reshoring or agility.
it is a traditional question in the literature in relation to the design of gvc-oriented industrial policy (gereffi & sturgeon, ) , the public/private governance of value chains (bair, ) , the role of the state in gvcs (horner & alford, ) , and the impact of investment and trade regimes on the decisions of mnes (buckley, ; rugman & verbeke, ) . it may receive different answers depending on whether governments want to encourage or constrain firms in adopting specific strategies. leaving aside the option where governments themselves become actors in gvcs (through government ownership), constraints (e.g., tariffs, taxes) or incentives to firms (e.g., subsidies, tax breaks) inevitably lead to economic distortions. it would be a paradox to resort to such mechanisms when the origin of current trade tensions and policy uncertainties for investment lies in market-distorting government support, and when several countries highlight the need to level the playing field (oecd, ) . some governments might still follow this path, but a more coherent policy framework, one that does not increase policy risks and costs for companies in the name of resilience, would have to rely on a two-pronged approach. first, as highlighted before, there is a series of policies, such as trade facilitation or the regulation of transport and infrastructure services, where the government is directly in charge of setting the rules and can create the conditions for firms to mitigate risks and increase their agility (e.g., by eliminating red tape, creating emergency certification procedures, etc.). the reduction of policy uncertainty, including at the global level, is also in the hands of governments, although it depends more on the success of international cooperation (which is not guaranteed in the current geopolitical environment). second, dialogue with the private sector, and possibly the organization of public-private platforms at the level of gvcs (hoekman, ) , can allow governments to encourage firms to put more emphasis on resilience without introducing financial constraints or incentives. different types of incentives can be provided, such as the organization of 'stress tests' that would put companies in the position of proving that they have taken the necessary steps to be resilient, particularly in the context of the production of essential goods (simchi-levi & simchi-levi, ) . such stress tests could also provide information that governments can use to organize their own risk management strategies (e.g., the right level of national stockpiling to meet a surge in demand beyond the capacity of firms to ramp up their production) and to improve their policies (e.g., information on policy-related costs encountered by firms in their operations). a gvc-level dialogue would also allow firms to cooperate among themselves to be better prepared for risks. the basis for developing and promoting such policy proposals would be a new narrative with the following elements: (1) covid-19 has confirmed the interdependencies between economies. there are risks inherent in these interdependencies, but they are also a source of growth and development. at this stage, there is no reason to believe that reducing interdependencies would reduce the exposure of economies to risks. on the contrary, simulations suggest that not only would the income of countries be lower, it would also be more volatile.
(2) there are concrete issues that can be addressed by policymakers at the gvc level for economies to be better prepared for risks in the future. these issues do not require a new paradigm for gvcs, but they may involve some restructuring, in a process driven by companies and tailored to the specific conditions they operate in. three such issues were discussed in the first section of the paper: international supply chain risks and contagion effects, surges in demand for essential goods, and disruptions in trade and transport networks. in these three areas, a gvc perspective makes sense, and a combination of actions by firms and governments can mitigate the impact of the next crises. (3) there is no trade-off between efficiency and lower risk. there are trade-offs between different types of risks, and firms have to balance the costs and benefits of risk management. however, the most efficient firms are also the best at mitigating risks (sheffi, ) . promoting agility and flexibility is an agenda that can serve both the objective of resilience and economic recovery after covid-19. (4) the location of production is a complex issue in which risk is one determinant among others. there is no rationale for prescribing a specific organization of gvcs that would create resilience, but one type of risk that governments can control is policy risk. they can reduce uncertainty within their domestic economy and rely on international cooperation to reduce international policy risks and trade tensions.

conclusion

in its 2020 world investment report, unctad ( ) is already predicting that reshoring, diversification, and regionalization will drive the restructuring of gvcs in the coming years. this might be premature, as these strategies have been proposed in columns and opinion pieces and are not grounded in business experience, research, or analytical work. calls for more resilient supply chains have been heard before, after 2001, when the emphasis was on risks related to terrorism, and after 2011, when the emphasis was on natural disasters. businesses that are nowadays described as focused too much on efficiency and insufficiently prepared for the risks of hyper-globalization have already been through many crises that have prompted them to act. still, covid-19 is an unprecedented crisis, and its global scale might lead more companies to rethink their strategies and to put more emphasis on risk management. companies that have been through this process in the past have not resorted to reshoring or regionalization and have not significantly diversified their supply chains. what could be different this time is that firms also have to adjust to deep changes in their environment, such as the digital transformation, climate change, and rising protectionism and trade tensions. therefore, we will see some structural shifts in the organization of global supply chains, and covid-19 might be an accelerator of these shifts in some cases. it is too early, however, to predict what solutions will lead businesses to thrive in this uncertain environment. still, it is useful to have the current debate on the policy implications of covid-19 for global supply chains. first, this debate can prevent governments from making the wrong policy choices in the future. that is, there is an opportunity for researchers to convey to policymakers relevant knowledge that will improve their policies or prevent them from making mistakes.
second, the overlap between the debate on the resilience of supply chains and the debate on protectionism and economic nationalism can also offer new ways of addressing concerns about globalization. while building more resilient gvcs could be used as a pretext for protectionist policies, it is a two-edged sword. demonstrating that domestic value chains increase certain types of risks, or that international sourcing can improve access to essential goods, would not only reduce the appeal of protectionist policies but would do so on different grounds than just pointing to a welfare loss. third, new research on global supply chains and risk mitigation during covid-19 could provide novel insights, as well as new policy recommendations. different questions could also be examined, without being constrained by the initial emphasis on reshoring and redundancy. for example, the reshoring debate focuses on resilient value chains for developed countries and does not take into account developing and emerging economies. these countries would not only lose some economic activity if reshoring were the new normal, but they would also face more difficult access to essential goods when those goods are produced by mnes from developed countries.

the author is writing in a strictly personal capacity. the views expressed do not reflect those of the oecd secretariat or the member countries of the oecd. the author is grateful to the editor, ari van assche, and to two anonymous referees for their many helpful comments and suggestions.

notes: in the risk management literature, resilience is defined as "the ability of a system to return to its original state or move to a new more desirable state after being disturbed" (christopher & peck, ) . in the supply chain, resilience is about reducing the time it takes for companies to resume normal production once a disruption has occurred. it is different from 'robustness', which is the ability of supply chains to maintain their function despite internal or external disruptions (brandon-jones, squire, autry, & petersen, ) . authors who ask for more resilient supply chains in the context of covid-19 are often mistaking resilience for robustness. they focus on the description of the disruptions but do not report how quickly international supply chains have generally adjusted, which is a sign of their resilience. on the policy implications of robustness versus resilience in gvcs, see miroudot ( ) . the debate on china overlaps with another type of risk that is not related to covid-19 but to trade tensions between the united states and china. there is evidence that an increasing number of companies are moving their production out of china to avoid trade barriers imposed on chinese exports or potential political pressures (baker mckenzie, ) . the fact that reshoring and domestic production are suggested as ways to build more resilient supply chains is likely linked to economic nationalism and anti-globalization sentiment. the issue of the concentration of production, which is relevant for supply chain disruptions, is also generally analyzed only in relation to china. see timmer et al. ( ) for the calculation of the import intensity of production. the ratio indicates, for each dollar of output, the value of all intermediate inputs traded upstream in the value chain. it was calculated for the latest year available with data from the oecd trade in value added (tiva) database. data for the eu are based on the euro area only.
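the ratio described in this note can be sketched with a toy input-output calculation: imported inputs embodied in one dollar of final output are traced through all upstream stages using the leontief inverse. the three-sector coefficients below are invented for illustration; the exact tiva-based indicator of timmer et al. may differ in detail.

```python
# toy import-intensity calculation from an input-output table.
# coefficients are invented for a three-sector economy, not tiva data.
import numpy as np

A_dom = np.array([[0.10, 0.05, 0.02],   # domestic input coefficients:
                  [0.08, 0.12, 0.06],   # inputs from sector i per dollar
                  [0.03, 0.04, 0.10]])  # of output of sector j
m = np.array([0.05, 0.15, 0.08])        # imported inputs per dollar of output

L = np.linalg.inv(np.eye(3) - A_dom)    # leontief inverse (total requirements)
import_intensity = m @ L                # imports embodied per dollar of final output
print(np.round(import_intensity, 3))
```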
business surveys generally indicate high rates of disruptions but without an indication of the consequences of these disruptions (e.g., whether firms have stopped producing or not). see for example the surveys conducted by the institute for supply management (www.instituteforsupply management.org), the data collected on firmlevelrisk.com and mckinsey global institute ( ). the expression 'supplier diversification' is only partially a synonym for redundancy. redundancy suggests that there are (at least) two suppliers for the same input (in different locations). supplier diversification can also be understood as diversifying the sources of supply by working with different suppliers in different countries but each of them providing different inputs (i.e., maintaining single sourcing). it can spread the supply chain risk but does not offer the same level of business continuity when one of these suppliers fails to provide the inputs. the argument is also about shorter supply chains, which can be understood as a regionalization of production and not just relying on domestic suppliers. the relationship between distance and risk is linked to regional integration and economic cooperation among countries, which can reduce the policy and institutional risk. cultural factors could also play a role with lower transaction costs and easier co-operation between firms when there is some cultural proximity. see e.g., shenkar ( ) for a discussion of cultural distance. trade in value-added terms, the relationship between trade and investment, and the trade policy implications of global value chains. he holds a phd in international economics from sciencespo. supply chains reimagined: recovery and renewal in asia pacific and beyond manufacturing reshoring and its limits: the uk automotive case contextualising compliance: hybrid governance in global value chains supply chain contagion waves: thinking ahead on manufacturing 'contagion and reinfection' from the covid concussion what a difference a word makes: understanding threats to performance in a vucaworld the impact of covid- international travel restrictions on services-trade costs: some illustrative scenarios lean manufacturing: literature review and research issues internet of things -the future of managing supply chain risks. supply chain management global supply chains in the pandemic a contingent resource-based perspective of supply chain resilience and robustness blindsided on the supply side towards a theoretically-based global foreign direct investment policy regime risk propensity in the foreign direct investment location decision of emerging multinationals global value chains in a post-crisis world: a development perspective real options theory in international business building the resilient supply chain supply chain contagion and the role of industrial policy toward an eclectic theory of international production: some empirical tests chinese whispers: covid- , supply chains in essential goods, and public policy asia and global production networks: implications for trade, incomes and economic vulnerability covid- : expanding access to essential supplies in a value chain world inside samsung's fight to keep its global supply chain running what does the covid- pandemic teach us about global value chains? 
the case of medical supplies the governance of global value chains global value chain-oriented industrial policy: the role of emerging economies the coronavirus will reveal hidden vulnerabilities in complex global supply chains managing cyber risk in supply chains: a review and research agenda supply chains, mega-regionals and multilateralism: a roadmap for the wto the roles of the state in global value chains recovering from supply interruptions: the role of sourcing strategies global supply chains will not be the same in the post-covid- world a review of the literature on the principles of enterprise and supply chain resilience: major findings and directions for future research just-in-time' manufacturing systems: a literature review how globalization became a thing that goes bump in the night operating flexibility, global manufacturing, and the option value of a multinational network here's how global supply chains will change after covid- coronavirus is proving we need more resilient supply chains international connectedness and local disconnectedness: mne strategy, city-regions and disruption global supply chain risk management supply chain resilience: the whole is not the sum of the parts risk, resilience, and rebalancing in global value chains case study: cisco addresses supply chain risk management resilience versus robustness in global value chains: some policy implications zoom in, zoom out: geographic scale and multinational activity the modern supply chain is snapping. the atlantic how to pandemic-proof globalization: redundancy, not re-shoring, is the key to supply chain security oecd. a. oecd economic outlook oecd. b. shocks, risks and global value chains: insights from the oecd metro model the face mask global value chain in the covid- outbreak: evidence and policy lessons trade facilitation and the covid- pandemic levelling the playing field an integrative typology of global strategy and global value chains: the management and organization of cross-border activities the evolution of resilience in supply chain management: a retrospective on ensuring supply chain resilience ensuring supply chain resilience: development of a conceptual framework managing global production: theory and evidence from just-in-time supply chains agile manufacturing: a systematic review of literature and implications for future research global corporate strategy and trade policy the resilient enterprise: overcoming vulnerability for competitive advantage building a resilient supply chain the power of resilience. how the best companies manage the unexpected cultural distance revisited: towards a more rigorous conceptualization and measurement of cultural differences is it time to rethink globalized supply chains? sloan management review collaboration in supply chain resilience we need a stress test for critical supply chains just in time or just in case? 
the covid-19 pandemic and global value chains
m cuts inventory by $ m
a dynamic capabilities-based entrepreneurial theory of the multinational enterprise
an anatomy of the global trade slowdown based on the wiod release
world investment report: international production beyond the pandemic
from the editor: steering a policy turn in international business - opportunities and challenges
from the editor: covid-19 and international business policy
international business in a vuca world: the changing role of states and firms
drivers and barriers to reshoring: a literature review on offshoring in reverse
world development report - global value chains: trading for development
trade in services in the context of covid-19
about the author: sébastien miroudot is senior trade policy analyst in the trade in services division of the oecd trade and agriculture directorate. he was previously a research assistant at groupe d'economie mondiale and taught in the master's degree programme at sciencespo, paris. during - , he was visiting professor at the graduate school of international studies of seoul national university. at the oecd, his current work is on the measurement of trade in value-added terms, the relationship between trade and investment, and the trade policy implications of global value chains. he holds a phd in international economics from sciencespo.
key: cord- - hu twhp authors: mueller, siguna title: facing the pandemic: what does cyberbiosecurity want us to know to safeguard the future? date: - - journal: biosaf health doi: . /j.bsheal. . . sha: doc_id: cord_uid: hu twhp
as the entire world is under the grip of the coronavirus disease (covid-19), and as many are eagerly trying to explain the origins of the virus and the cause of the pandemic, it is imperative to place more attention on related potential biosafety risks. biology and biotechnology have changed dramatically during the last ten years or so. their reliance on digitization and automation, and their cyber-overlaps, have created new vulnerabilities for unintended consequences and potentials for intended exploitation that are largely under-appreciated. herein, i summarize and elaborate on these new cyberbiosecurity challenges, ( ) in terms of comprehending the evolving threat landscape and determining new risk potentials, ( ) in developing adequate safeguarding measures, their validation and implementation, and ( ) in terms of specific critical dangers and consequences, many of them unique to the life-sciences. drawing upon expertise shared by others as well as my previous work, this article aims to summarize and critically interpret the current situation of our bioeconomy. herein, the goal is not to attribute causative aspects of past biosafety or biosecurity events, but to highlight the fact that the bioeconomy harbors unique features that have to be more critically assessed for their potential to unintentionally cause harm to human health or the environment, or to be re-tasked with an intention to cause harm. i conclude with recommendations that will need to be taken into consideration to help address converging and emerging biorisk challenges, in order to minimize vulnerabilities to the life-science enterprise, public health, and national security. ever since the start of the coronavirus disease (covid-19) pandemic, (laboratory) biosafety and biosecurity concerns have been even more rigorously scrutinized. this article uses the lens of the current pandemic to evaluate biological risks from biological research, particularly those that are amplified by the digitization of biological information and biotechnology automation.
the cyberphysical nature of biotechnology has led to fascinating advances throughout the bioscience field. only recently have concerns been raised regarding new risks that may lead to unintended consequences or unrecognized potentials for misuse. just as the emergence of the internet some decades ago led to a major revolution (which, by necessity, was paralleled by the field of cybersecurity), we are now facing the era of cyberbiosecurity with its own security vulnerabilities. the dna synthesis industry has worked proactively for many years to ensure that synthesis is carried out securely and safely. these efforts have been complemented by the growing desire and capability to resynthesize biological material using digital resources [ , ]. yet, the convergence of technologies at the nexus of the life and medical sciences, cyber, cyberphysical, supply chain and infrastructure systems [ ] has led to new security problems that have remained elusive to the majority of the scientific, agricultural, and health communities. it has only been during the last few years that awareness of these new types of vulnerabilities has been growing, especially in relation to the danger of intended manipulations. as these concerns have spawned the emergence of cyberbiosecurity as a new discipline, it is important to realize that its focus is not merely on traditional cyber-attacks (sect. and fig. below). due to the increased reliance of the bioscience fields on cyberphysical systems (cps, fig. below), potentials for exploitation exist at each point where bioengineered or biomanufactured processes or services interface with the cyber and the physical domains, whereby attackers may exploit unsecured networks and remotely manipulate biological data, exploit biological agents, or affect physical processing involving biological materials, resulting (whether intentionally or unintentionally) in unwanted or dangerous biological outcomes [ , , , ]. great efforts have been put into place to rigorously assess the new risks and threats (see in particular [ ] and the recent national academies of sciences, engineering, and medicine report "safeguarding the bioeconomy" [ , pp. - ]). nonetheless, cyberbiosecurity is still in its infancy. there is still limited expertise to fully characterize and assess the emerging cyberbio risks [ ], and it has been recognized that generic cyber and information security measures are insufficient [ , , , , , , ]. triggered by the covid-19 pandemic, enormous amounts of resources have been devoted to identifying its exact genesis. a goal of this article is to challenge this narrow focus by concentrating on the larger context of cyberbiosecurity, to illuminate serious new concerns for a wide audience. i will highlight distinct challenges and suggest specific steps to help support risk deterrence efforts. most broadly, cyberbiosecurity aims to identify and mitigate security risks fostered by the digitization of biology and biotechnology automation. fig. gives a summary of how this new paradigm evolved. while others, including the author, began to investigate these challenges almost a decade ago [ , , , , , ], the term cyberbiosecurity was first (informally) used in [ ]. these authors warned of security issues resulting from the cyberphysical interface of the bioeconomy, as it was recognized that all biomanufacturing processes are in fact cps (see also fig. below). incomplete awareness: during the last few years, the biotechnology industry has fallen prey to serious attacks (see e.g.
[ , table - ]), although there is no broad awareness of this. this important observation and the compelling need to question the "naive trust" throughout the life-science arena were key drivers to establish cyberbiosecurity as a new discipline [ ]. additional sobering criminal cases that have affected the bioscience field are now emerging, even during the current pandemic (e.g. [ , , , , , ]). as noted in [ ], these encompass three critical areas of attack: sabotage, corporate espionage, and crime/extortion. yet, people in the life-sciences are largely ignorant of the dangers, as they are barely trained in security issues, or not at all. research and healthcare industries are vulnerable to cyberbiosecurity attacks because they have not kept up with threats [ , ]. capitalizing on a common misconception: generally, it is widely accepted that cybersecurity attacks and data breaches are a matter of when, not if. very recently, ransomware attacks have been recognized as "the primary threat" to healthcare organizations [ ]. statements like these seem to support the understanding that cyberbio concerns in the bioeconomy could be dealt with by using it solutions alone (possibly optimized for life-science demands). unfortunately, the reliance on cps generates unrecognized convergence issues. it is important to understand that, due to cross-over effects, neither cyber nor physical security concepts alone are sufficient to protect a cps. "separate sets of vulnerabilities on the cyber and physical sides do not simply add up, they multiply" [ ]. notably, cyber-attacks on critical automated (computer-based) processes (e.g., workflow or process controls) may lead to dire real-world consequences, similar to direct physical attacks. for instance, an explosion in the highly secure , -mile baku-tbilisi-ceyhan pipeline was caused by computer sabotage. the main weapon for this cyberphysical act of terrorism was "a keyboard" [ , ]. in general, the term "physical" in cps (fig. , central box) is applied to the "engineering, physical and biological" [ ] components of the system or, more generally, to any components of the physical world that are connected through cyber elements. despite existing frameworks and assessment tools (e.g., the hazard analysis critical control point system for the fd+ag sector or, more generally, the infrastructure survey tool [ ] or nist guidelines [ ]), it is recognized that fully scoping all the cyberbio risks, not to mention their relative likelihood and impact, is rather challenging [ , , ]. although some of the cyberbio vulnerabilities share compelling similarities with the early days of the internet [ ], there are critical differences [ , , , , ]. while most responders to the above-mentioned survey of international experts [ ] agreed that their organizations had "considered" cyberbio issues, some noted "insufficient time" or "no idea" how to address them, and all pinpointed the lack of available resources. this section describes some of the difficulties.
• the problem of identifying what needs to be protected:
- many of the novel cyberbio risks and threats (table ) have not been fully scoped. they are difficult to characterize, and envisioning the complete risk landscape continues to be a challenge [ , , , , ].
- identifying and hierarchizing the extent, impact and severity of various (including hypothetical) new vulnerabilities is difficult.
- there is no comprehensive model to effectively capture, assess, and address the motivations, capabilities, and approaches of those who may cause harm (see also sect. . ).
• how protection is achieved and enforced:
- existing solutions from the cyber domain are only geared at specific aspects of biosecurity and cybersecurity but do not address the overlap and the issues arising from this convergence [ , , ].
- due to variations in the types of threats, targets and potential impacts, it is not straightforward to determine the applicability and effectiveness of a possible solution.
- as "there is no one model" to secure the use of information systems across the bioeconomy [ ], weak or premature solutions may only help address a distinct problem but be misapplied in a different context, or even become a source for exploitation (sect. . and fig. below).
standards and guidelines [ , , ] remain a serious issue for achieving comprehensive and international protection. very recent publications and programs [ , , , , , , , , ] have undoubtedly increased cyberbiosecurity awareness, and large corporations will have been able to enhance their infrastructure. yet, the pandemic has shifted r&d priorities and budgets and has hampered many efforts to better comprehend the new risks and to develop solutions. pharma and medtech professionals and companies are overwhelmed with covid-19 mitigation and crisis resolution while the industry sprints to develop new therapeutics and vaccines. on the other hand, the pandemic has led to a huge rise in cyber-attacks, with some reporting an % increase compared to pre-coronavirus levels [ ]. as cybersecurity professionals struggle to target this surge in cyber-crime, wfh (work from home) has impacted the ability of many cybersecurity professionals to support new business applications or initiatives [ ]. as companies and organizations struggle to maintain stability and security, new research areas such as cyberbiosecurity have received inadequate attention and support. in addition to the known cyberbio challenges described above, the context of the bioscience fields leads to distinct problems that are not well understood. the context of the life-sciences involves unique concerns and unknowns. cyber-based attacks targeting the biological and medical sciences involve living entities, with networks of connections, combinatorial interactions and a dynamic range of outcomes. future and timed effects can be achieved by various technologies (e.g., non-volatile memory devices and electronic circuits). yet, with biotechnology products there is a decreased ability to control exposure [ ]: they are often designed to be easily dispersed (e.g., with agricultural technologies directly in the field [ ]), reach high scalability [ ], can be delivered in different states (including water [ ]), and can be activated by simple environmental agents (temperature, light, wind [ , , ]). a critical issue with active biologicals is that they can be transferred by contact, ingestion, or inhalation [ ]. while concerns about unintended consequences and ill-intended applications of these and related technologies have been raised recently (see e.g., [ , , , , , , ]), types of biotechnologies that not merely have a cyber-overlap, but which constitute artificial systems themselves, have been even less assessed. these include artificially generated self-replicating systems [ ], artificial cells that mimic the ability of natural cells to communicate with bacteria [ ], and artificially generated processes that interact with one another and initiate various signaling cascades [ ].
the consequences of an ill-intended or accidental release of such systems into the environment are not understood. one of the most complex issues may be that "information" in the biological context is of a different kind than what is meant in the information sciences. identifying "biological information" is not always straightforward and may evade available technology from time to time: consider, for instance, the situation of recessive alleles of a gene. these can be phenotypically invisible across a huge proportion of a population, with their frequency estimated using tools such as the hardy-weinberg equilibrium equation; only as dna sequencing and synthesizing technologies developed over decades could they be detected and linked to individuals. while such invisibility features are of potential benefit in the area of steganography, [ ] describes critical concerns that analogously apply to cyberbiosecurity. for instance, biological information can be stored and transmitted in a virtually undetectable way: "no x-ray, infra-red scanner, chemical assay or body search will provide any immediate evidence" of it [ ]. further, biological media can survive much longer than anticipated [ ], which in this context leads to the worrisome situation that data (or biological "information") can "literally run off on its own" [ ]. notably, critical vulnerabilities also arise in the context of devices and mechanisms. among others, the above-mentioned survey [ ] identified "elevated or severe risk" potentials for an unauthorized actor to ( ) take control of infrastructure (e.g., lab equipment, lab control systems, or even a fully automated robot lab), ( ) interrupt the functioning of lab systems, or ( ) circumvent security controls. the cyberphysical nature of biotechnology is one of the key concerns in cyberbiosecurity (fig. and table ). with increased automation, dangers arise, for example, in the context of sterilization methods used in healthcare and laboratory settings. for some methods, a very recent study [ ] demonstrates that the "integrity of released dna is not completely compromised," leading to the "danger of dissemination of dna and xenogenic elements across waterways." these findings were linked to temperature and time (e.g., short microwave exposure times or short exposure times to glutaraldehyde treatment were least effective). parameters like these are both highly malleable and susceptible to manipulation, which will become an even bigger concern with the "smart labs" of the future [ ]. in the context of food and agricultural systems, cyberphysical interconnections lead to the danger of "[m]anipulation of critical automated (computer-based) processes (e.g., thermal processing time and temperature for food safety)" and a "[l]ack of ability to perform vulnerability assessment" [ ]. traditionally, the reliance on tacit knowledge and direct hands-on processes and applications has shielded the bioscience field from many forms of attack. beyond doubt, the digitization of biology and biotechnology automation are key drivers that enable the bioeconomy. nonetheless, they are creating yet a different type of risk than described above. the internet makes it easier to bypass our existing controls (be they personal intuitions, company procedures or even laws) [ ]. we have evolved social and psychological tools over millions of years to help us deal with deception in face-to-face contexts.
but when we lose both physical and human context (as in online communication), forgery and intrusion become more of a risk. it is now known that in the cyber fields "deception, of various kinds, is now the principal mechanism used to defeat online security" [ ]. online frauds are often easier to commit, and harder to stop, than similar real-world frauds. and according to [ ], "more and more crimes involve deception; as security engineering gets better, it's easier to mislead people than to hack computers or hack through walls." while only recently recognized as one of the most important factors in security engineering [ ], the entire life-science enterprise is not adequately prepared for attacks that exploit psychology (social engineering attacks, table ). at the same time, hackers are getting better at technology: "designers learn how to forestall the easier technical attacks..." [ ]. thus, through various forms of fraud and deception, attackers may be able to circumvent many of the existing cyber-based safeguarding mechanisms and get direct access to their victim's system. once they have entry to a target system, this may allow them to exploit not only the data and cyber side; it could also facilitate attacks on the controls and processes underlying various cyberphysical applications (fig. ), with consequences that directly affect biophysical components (fig. ). cyberbiosecurity is highly cross-disciplinary and will benefit from integrating existing capabilities and proven methodologies from a wide range of fields (e.g. security engineering, physical security and privacy, infrastructure resilience, and security psychology) with requirements from the life-science realm. as cyberbiosecurity may profit the most from lessons learned in the information security domains, this section focuses on this arena. several suggestions have been made to secure specific new cyberbio challenges via various cyber applications (e.g. [ , , , , , , ]). nonetheless, their practical realization is not always straightforward, as even the most basic information security notions still need to be better adapted to the bioscience framework (see e.g. [ , table ]). similarly, it will be necessary to refine and extend the classic cia triad (which has long been the heart of information security) and to extend the suggestions made previously (e.g. [ , fig. ]), to optimally align them with the new demands. as argued (sect. . ), not all of the new problems can be linked to traditional cyber issues. thus, it will be important to distinguish which challenges could, or could not, be identified/safeguarded by existing cyber-approaches (or slight modifications thereof). to aid this distinction and develop a hierarchy of risk severity, it will be helpful to pinpoint the following. identify challenges to assure authenticity and integrity: the cyber-based interface used to measure and assess a bioengineered product or service creates a gap, potentially allowing a range of vulnerabilities, from falsifiable entries in biological databases and sequence errors [ , ] (which in a context like pathogens could lead to entry errors with rather disturbing effects), to the intentional tampering of data related to forensics [ ], cyber-enabled attacks on systems monitoring water security [ ], and the actual exchange of the purported (cps-produced) entity itself. the latter may enable the distribution of accidentally exchanged/counterfeit products such as plasmids [ ], which gives rise to unique concerns where, e.g.,
some undeclared and "invisible" protein or nucleic acid in a suspended formulation contacts the stated product on release from the packaging or in the retail chain (see [ ] ). "information" in the biological sciences [ ] , the information life-cycle at large, logically-based game strategies, mechanisms for dual-use appropriation, end-to-end assessments, "routes to harm," context, and multiple exposure pathways [ , , , , , , ] . identify the possibility of future and off-target effects. these are situations where clear predictions as required for various "if-then" paradigms employed in the cyber domains are inapplicable. deterrence measures will need to consider emerging actors and their pathways of action, including interactions between synthetic and natural entities, as well as mechanisms, vesicles and actions that can be activated by various physical and mechanical forces or combinations thereof [ , ] . cyberbio efforts will benefit from the cps arena as these provide unique insights relative to "hardware" (incl. devices and systems) and "software" interdependencies. the cyber-interactions and the interconnectedness of such systems necessitate a drastic modification of previous security principles (see e.g., [ , ] ). analogously, for cyberbio systems and mechanisms, it will be necessary to refine a list of security principles and goals, by incorporating cps lessons, to optimally align them with the bioscience fields. cyberbiosecurity is an evolving paradigm that points to new gaps and risks, fostered by the cyber-overlaps of modern biotechnologies. the enormous increase in computational capabilities, artificial intelligence, automation and use of engineering principles in the bioscience field have created a realm with a glaring gap of adequate controls. vulnerabilities exist within biomanufacturing, cyber-enabled laboratory instrumentation and patient-focused systems, "big data" generated from "omics" studies, and throughout the farm-to-table enterprise..." [ ] . numerous security risks in the biological sciences and attack potentials based on psychology have not been adequately assessed, let alone captured. they will require completely new approaches towards their protection to avoid emergencies at the scale of covid- or more. yet, the current situation regarding cyberbiosecurity is sobering (fig. ) . the private sector, small and moderate-sized companies, and the larger diy community itself are particularly vulnerable [ , , ] . rather than spending enormous amounts of resources in looking back to identify the exact j o u r n a l p r e -p r o o f journal pre-proof genesis of sars-cov- , cause of the pandemic, and the emphasized singularity of our current global situation, a concerted effort to better understand and mitigate the emerging cyberbio challenges faced by the entire bioeconomy sector should be a top priority. this paper summarizes existing critical issues that must be considered. it also suggests steps that can be leveraged to help assess and ensure that the many bioscience capabilities remain dependable in the face of malice, error, or mischance. the author confirms sole responsibility for the following: conceptualization, investigation, methodology, validation, visualization, writing -original draft, writing-reviewing and editing. the author declares there is no conflict of interest. i would like to thank the reviewers who provided expertise and comments that greatly improved this paper. 
• the attacker was able to view personal information including email addresses and phone numbers, which are displayed to some users of twitter's internal support tools [ ].
• these credentials can give them access to internal network tools and enable them to sabotage cyber-based controls of cps (figs. and ).
exploratory fact-finding scoping study on "digital sequence information" on genetic resources for food and agriculture
report on the exploratory fact-finding scoping study on "digital sequence information"
comments of third world network on digital sequence information
editorial: mapping the cyberbiosecurity enterprise
cyberbiosecurity: an emerging new discipline to help safeguard the bioeconomy
cyber-biosecurity risk perceptions in the biotech sector
researchers are sounding the alarm on cyberbiosecurity
national and transnational security implications of asymmetric access to and use of biological data
the national security implications of cyberbiosecurity
cyberbiosecurity challenges of pathogen genome databases
the digitization of biology: understanding the new risks and implications for governance
on dna signatures, their dual-use potential for gmo counterfeiting, and a (second) dissertation, biomedical sciences
a covert authentication and security solution for gmos
point of view: a transatlantic perspective on emerging issues in biological engineering
the intelligent and connected bio-labs of the future
cyberbiosecurity: from naive trust to risk awareness
cyberbiosecurity implications for the laboratory of the future
building capacity for cyberbiosecurity training
cyberbiosecurity, in advanced cyber safety
us hospitals turn away patients as ransomware strikes
bloomberg: hackers "without conscience" demand ransom from dozens of hospitals and labs working on coronavirus
cybersecurity in healthcare: a systematic review of modern threats and trends
institute for critical infrastructure technology, the cybersecurity think tank (nd)
overview of security and privacy in cyber-physical systems
mysterious turkey pipeline blast opened new cyberwar era
adaptations of avian flu virus are a cause for concern
cyberbiosecurity: a new perspective on protecting u.s. food and agricultural system
cyberbiosecurity for biopharmaceutical products
defending our public biological databases as a global critical infrastructure
cyberbiosecurity: a call for cooperation in a new threat landscape
are market gm plants an unrecognized platform for bioterrorism and biocrime?
the australia group (nd)
the nuclear threat initiative, biosecurity: reducing biological risk and enhancing global biosecurity (nd)
vbc launches biosecurity codes section
national institutes of health, national science advisory board for biosecurity (nd)
blue ribbon study panel on biodefense (nd)
top cyber security experts report: , cyber attacks a day since covid-19 pandemic
the covid-19 pandemic and its impact on cybersecurity
environmentally applied nucleic acids and proteins for purposes of engineering changes to genes and other genetic material
agricultural research, or a new bioweapon system?
plant-protecting rnai compositions comprising plant-protecting double-stranded rna adsorbed onto layered double hydroxide particles
systems and methods for delivering nucleic acids to a plant
methods and compositions for introducing nucleic acids into plants
the next generation of insecticides: dsrna is stable as a foliar-applied insecticide
the new alchemists: the risks of genetic modification
why gene editors like crispr/cas may be a game-changer for neuroweapons
development of an artificial cell, from self-organization to computation and self-reproduction
vesicle-based artificial cells as chemical microreactors with spatially segregated reaction pathways
aims and methods of biosteganography
anticipating xenogenic pollution at the source: impact of sterilizations on dna release from microbial cultures
psychology and security resource page (nd)
is confidence in the monitoring of ge foods justified?
next steps for access to safe, secure dna synthesis
identifying personal microbiomes using metagenomic codes
perspectives on harmful algal blooms (habs) and the cyberbiosecurity of freshwater systems
genetically modified seeds and plant propagating material in europe: potential routes of entrance and current status
methods for data encoding in dna and genetically modified organism authentication, united states patent
a reference model of information assurance & security
an update on our security incident
key: cord- - csl md authors: li, shuai; xu, yifang; cai, jiannan; hu, da; he, qiang title: integrated environment-occupant-pathogen information modeling to assess and communicate room-level outbreak risks of infectious diseases date: - - journal: build environ doi: . /j.buildenv. . sha: doc_id: cord_uid: csl md
microbial pathogen transmission within built environments is a main public health concern. the pandemic of coronavirus disease (covid-19) adds to the urgency of developing effective means to reduce pathogen transmission in mass-gathering public buildings such as schools, hospitals, and airports. to inform occupants and guide facility managers in preventing and responding to infectious disease outbreaks, this study proposes a framework to assess room-level outbreak risks in buildings by modeling built environment characteristics, occupancy information, and pathogen transmission. building information modeling (bim) is exploited to automatically retrieve building parameters and possible occupant interactions that are relevant to pathogen transmission. the extracted information is fed into an environment-pathogen transmission model to derive the basic reproduction numbers for different pathogens, which serve as proxies of outbreak potential in rooms. a web-based system is developed to provide timely information regarding outbreak risks to occupants and facility managers. the efficacy of the proposed method was demonstrated in a case study, in which building characteristics, occupancy schedules, pathogen parameters, as well as hygiene and cleaning practices are considered for outbreak risk assessment. this study contributes to the body of knowledge by computationally integrating building, occupant, and pathogen information modeling for infectious disease outbreak assessment, and by communicating actionable information for built environment management. this study aims to develop a framework for room-level outbreak risk assessment based on integrated building-occupancy-pathogen modeling to mitigate the spread of infectious disease in buildings.
the rationale is twofold. first, buildings are highly heterogeneous, with a variety of compartments of distinctive functionalities and characteristics, providing diverse habitats for humans and various pathogens [ , ]. modeling pathogen transmission and exposure within a building at the room level will provide useful information at an unprecedented resolution for implementing appropriate disease control strategies. second, the spread of infectious diseases can be mitigated if occupants and facility managers have adequate and timely information regarding the outbreak risks within their buildings. communicating actionable information to occupants and facility managers through an easily accessible interface will help occupants follow hygiene and social distancing practices, and help facility managers schedule disinfection for rooms with high outbreak risks. to address these knowledge gaps, a novel environment-occupant-pathogen modeling framework and a web-based information visualization system are developed to assess outbreak risks and mitigate the spread of infectious diseases in buildings (fig. ). first, to assess the outbreak risks, the fomite-based pathogen transmission model proposed in [ ] is adopted in this study. the limitation of that model is that environmental parameters and occupant characteristics are not automatically extracted and incorporated, hindering the computation of spatially-varying environmental infection risks in buildings. to overcome this limitation, bim is exploited to automatically retrieve venue-specific parameters, including building characteristics and occupancy information, that are relevant to pathogen transmission and exposure. then, the extracted building and occupant parameters are used with pathogen-specific parameters in a human-building-pathogen transmission model to compute the basic reproduction number r0 for each room in a building. r0 is used as a proxy to assess the outbreak risks of different infectious diseases. second, a web-based system is developed to enable information visualization and communication in an interactive manner, providing guidance for occupants and facility managers. this study innovatively establishes the computational links among building, occupant, and pathogen modeling to predict outbreak risks. the risk prediction for spatially and functionally distributed rooms in a building provides useful information for end-users to combat and respond to the spread of infectious diseases, including seasonal flu and covid-19. the developed method and system add a health dimension to transform current building management to a user-centric and bio-informed paradigm. in this study, a computational tool is developed based on dynamo [ ] to extract the geometry and properties of each room in a building, and to compute the corresponding venue-specific parameters. fig. shows the workflow of the information retrieval process. the steps for extracting room parameters begin with retrieving each room's geometry, properties, and the furniture it contains; thereafter, the total furniture area in each room is calculated by summing up the surface area of all furniture inside the room.
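the extraction itself runs as a dynamo script against the bim and is not reproduced in the text. purely as an illustration, the following python sketch assumes rooms and furniture have already been exported into plain records (all field names, room names and numbers here are hypothetical, not the authors' schema) and computes the kind of per-room venue-specific parameters described above:

# hypothetical bim export: one record per furniture item, keyed by room id.
# field and room names are illustrative only, not the authors' data model.
furniture = [
    {"room_id": "CL-101", "surface_area_m2": 1.8},
    {"room_id": "CL-101", "surface_area_m2": 0.6},
    {"room_id": "OF-210", "surface_area_m2": 1.1},
]
rooms = {
    "CL-101": {"floor_area_m2": 80.0, "occupancy": 40},
    "OF-210": {"floor_area_m2": 15.0, "occupancy": 2},
}

# venue-specific parameters: total furniture (fomite) surface per room, plus
# simple per-occupant and per-floor-area densities a transmission model can use
venue_params = {}
for rid, room in rooms.items():
    total_furniture = sum(f["surface_area_m2"] for f in furniture
                          if f["room_id"] == rid)
    venue_params[rid] = {
        "occupancy": room["occupancy"],
        "furniture_area_m2": total_furniture,
        "furniture_per_occupant_m2": total_furniture / room["occupancy"],
        "furniture_per_floor_area": total_furniture / room["floor_area_m2"],
    }

in the actual tool these records would come out of the dynamo workflow over the building model rather than hard-coded dictionaries.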
in the epidemiology literature, r0 is one of the most widely used indicators of transmission intensity to demonstrate the outbreak potential of an infectious disease in a population. commonly, r0 > means the epidemic begins to spread in the population, r0 < means the disease will gradually disappear, and r0 = means the disease will stay alive and reach a balance in the population. as r0 increases, the outbreak risk increases and more severe control measures and policies are needed [ ]. in this study, we categorize the level of outbreak risk into low, mild, moderate, and severe based on the range of r0. specifically, the risk is low when r0 < ; the risk is mild when ≤ r0 < . , because there is a fair chance that the transmission will fade out when r0 is not much larger than [ ]; the risk is moderate when . ≤ r0 < , indicating an epidemic can occur and is likely to do so [ , ]; and the risk is severe when r0 > , in which case immediate actions should be taken by facility managers, such as cleaning the surfaces, to reduce the risk. to better communicate the infection risk to occupants and facility managers, a web-based system was developed to visualize the outbreak risk of different pathogens in each room within a building. fig. illustrates the architecture of the web-based system, which consists of four modules, i.e., data management, model derivative, web application, and user. three add-in functions were developed to help users visualize the interior layout of the building and rooms color-coded with their corresponding risk levels, as well as search for room-specific disease outbreak risk information. the first add-in function is "vertical explode", which is used to view each level of the building. this function helps the user visualize the interior and room layout. facility users can also use this function to visualize the outbreak risk of rooms on each floor and take appropriate precautions. for facility managers, the "vertical explode" function enables them to obtain a holistic view of the risk distribution at each level and take informed actions, such as limiting the number of occupants and implementing cleaning and disinfection protocols, to control the spread of the disease. this function is integrated with the web-based system, and clickable buttons were created to activate and deactivate it. the second function is "room filtering", which is used to highlight rooms at different risk levels for a specific pathogen. the user needs to first select one of the three pathogens from the dropdown menu: sars-cov-2, influenza, and norovirus. thereafter, the user can set a risk threshold to highlight rooms with r0 greater than a specific value. in addition, different highlighting colors are used to represent different infection risk levels: low, mild, moderate, and severe risks are represented by green, blue, celery, and red, respectively. the third function is "room query", which enables the user to search for a specific room and retrieve the infection risk for the three pathogens. the "room query" function is displayed as a search box on the web-based system; users can easily find the potential risk of a specific room using this function. finally, end users can access the web-based information communication system and obtain information about the outbreak risk in each room of the building through various channels, including laptops, smartphones, and tablets. a hypothetical case study is used as an example to demonstrate the efficacy of the proposed framework and the newly developed web-based system. the building information model of a six-floor school building with , square feet is used. the building contains classrooms and faculty and graduate assistant offices.
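to make the banding and filtering logic concrete, a small python sketch of the risk categorization and the "room filtering" add-in is given below. the numeric cut-points between the risk bands are elided in the text above, so the thresholds (and the room names and r0 values) are placeholders rather than the authors' figures:

# assumed cut-points: the source elides the exact band boundaries
MILD_MIN, MODERATE_MIN, SEVERE_MIN = 1.0, 1.5, 2.0

def risk_level(r0):
    # maps a basic reproduction number to the paper's four risk bands
    if r0 < MILD_MIN:
        return "low"
    if r0 < MODERATE_MIN:
        return "mild"
    if r0 < SEVERE_MIN:
        return "moderate"
    return "severe"

# colour coding as described for the web front end
COLOURS = {"low": "green", "mild": "blue", "moderate": "celery", "severe": "red"}

# illustrative r0 values per room and pathogen
r0_by_room = {
    "CL-101": {"sars-cov-2": 1.7, "influenza": 0.4, "norovirus": 2.3},
    "OF-210": {"sars-cov-2": 0.3, "influenza": 0.1, "norovirus": 0.6},
}

def filter_rooms(pathogen, threshold):
    # "room filtering" add-in: highlight rooms whose r0 exceeds the threshold
    return {room: (vals[pathogen], COLOURS[risk_level(vals[pathogen])])
            for room, vals in r0_by_room.items() if vals[pathogen] > threshold}

print(filter_rooms("norovirus", 1.0))  # {'CL-101': (2.3, 'red')}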
the room types considered in the case study include offices and classrooms. five offices and five classrooms were selected. the venue-specific parameters of the rooms are extracted and listed in table , and the computed r0 values of the three diseases are listed in table . as shown in table , the values of r0 vary across different rooms and different diseases. r0 values in offices are smaller than those in classrooms, which stems from the smaller occupancy and the lower rate of fomite touching in offices compared to classrooms. for influenza, the r0 values in all rooms are less than , indicating that influenza is unlikely to break out in the building through fomite-mediated transmission. this could be partially explained by the relatively short infectious period, high inactivation rate on hands, low hand-to-fomite pathogen transmission efficiency, and relatively low infectiousness for the same amount of pathogen. for covid-19, the r0 values in all rooms are higher than those of influenza, and the risk in classroom reaches a moderate level, indicating that covid-19 has the potential to break out in the classroom. covid-19 has a relatively high outbreak risk in most cases because it has a high shedding rate, small surface inactivation rate, and high transfer efficiency from fomites to hands. for norovirus, the r0 values are high in most classrooms, which might be because of its high infectivity, long infection period, and high hand-to-fomite transmission efficiency compared to the other two diseases. this finding also aligns with the trend obtained in [ ]. the above results show that the outbreak risk of an infectious disease is influenced by both venue-specific and pathogen-specific parameters, which highlights the significance of integrating bim and the pathogen transmission model in assessing spatially-varying disease outbreak risk. sensitivity analysis was further conducted to evaluate the influence of the rate of fomite touching and the shedding rate of sars-cov-2 on r0, based on the estimated ranges of the two parameters (listed in table ). fig. illustrates the changes in r0 with the increase of the fomite-touching rate for all three diseases in both classrooms and offices. from fig. , the disease outbreak risk increases as the fomite-touching rate increases. the values of r0 for norovirus and covid-19 in classrooms , , and may exceed with the increase of the touching rate. on the other hand, the infection risk in offices, and that for influenza in classrooms, will remain low even if occupants touch objects in the rooms more frequently. therefore, it is particularly important to educate students in classrooms with relatively high occupancy not to touch common areas frequently. fig. illustrates the changes in r0 of covid-19 with varying shedding rates. from the figure, the shedding rate has a significant impact on the outbreak risk of covid-19 in classrooms , , and . therefore, for classrooms with relatively large occupancy, control strategies should be taken to reduce pathogen shedding from the occupants, such as using face masks and covering the mouth when coughing.
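the fomite-transmission model of [ ] is not reproduced in this excerpt, so the sketch below uses a deliberately simplified stand-in for r0 (shedding, touching and transfer terms in the numerator; inactivation plus cleaning in the denominator; all parameter values invented) purely to show how one-at-a-time sensitivity sweeps of this kind can be scripted:

def toy_r0(touch_rate, shed_rate, transfer_eff=0.2, infectious_days=5.0,
           inactivation=0.5, cleans_per_day=0.0, clean_efficacy=0.9):
    # NOT the model used in the study: a toy proportionality in which r0 rises
    # with shedding, touching and transfer, and falls as surfaces are cleared
    removal = inactivation + cleans_per_day * clean_efficacy
    return (shed_rate * touch_rate * transfer_eff * infectious_days) / removal

# sweep the rate of fomite touching, holding everything else fixed
for touch_rate in (0.5, 1.0, 2.0, 4.0):
    print("touch rate", touch_rate, "->",
          round(toy_r0(touch_rate, shed_rate=0.1), 2))

# sweep surface-cleaning frequency to mimic the cleaning analysis
for cleans in (0, 1, 2, 5):
    print("cleanings/day", cleans, "->",
          round(toy_r0(2.0, 0.1, cleans_per_day=cleans), 2))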
for norovirus, classrooms with relatively large occupancy (e.g., classroom , , and ) will require more frequent surface cleaning to reduce the outbreak risk to the low level. other complementary strategies, such as increasing hand washing and limiting occupancy, should be adopted to maintain a low level of outbreak risks. as shown in fig. , room filtering and room query functions can help the user easily locate rooms with high risk and query risk information for a specific room. specifically, fig. (a) shows an exemplary output of the room filtering function that highlights the rooms with r value greater than for covid- . fig. (b) displays an example of the room query function in the web system. the pathogen risk information for influenza, norovirus, and covid- is retrieved with corresponding recommendations. with the web-based information communication system, facility managers can take important measures to control the spread of diseases, such as designing appropriate cleaning and disinfection strategies, promoting hand hygiene, reducing maximum occupancy, and accommodating facility usage schedule based on risk distribution across rooms within the building. for instance, deep cleaning and disinfection are required for rooms with severe outbreak risk. in addition, facility managers can post signs at these high-risk areas to remind occupants to take essential practices such as social distancing and hand hygiene. the web-based system will also keep facility users, including teachers, students, and other staff, aware of up-to-date outbreak risk information within the building, and thus taking informed actions to avoid further spread of diseases. for example, facility users can avoid entering rooms with high outbreak risk. . discussion the results and insights derived from the analysis have important implications on adaptive built environment management to prevent infectious disease outbreak and respond to on-going pandemic. due to varying building characteristics, occupancy levels, and pathogen parameters, the microbial burdens and outbreak risks differ significantly even in the same building, highlighting the need for spatially-adaptive management of the built environment. the proposed method automates the batch process for simulation and prediction of outbreak risks for different pathogens at the room level, and visualizes the risks for adaptive management. the results on outbreak risks at room level enables the paradigm for spatially-adaptive management of the built environment. with the new streams of risk information, customizable interventions can be designed. for instance, in consistent with the practice during the covid- pandemic, reducing the accessible surfaces in rooms and restricting the occupancy in the room are some of the effective strategies to reduce the outbreak risks. the spatially-varying risk information can also guide the facility managers to pay close attention to high-risk areas by adopting more frequent disinfection practices. a bim-based information system is developed to extract the necessary information for modeling infection within buildings, and to visualize the derived information in an easy-to-understand and convenient way through web pages. as such, the information-driven interventions could alleviate the pathogenic burdens in the buildings to prevent the spread of infectious diseases. providing information to end-users is critically important for them to change behaviors. 
human behavior plays an important role in the transmission of pathogens such as sars-cov-2, and changing behaviors is critical to preventing transmission. providing timely and contextual information can be a promising option to motivate changes in human behavior. with the room-level outbreak risk information, users could be motivated or persuaded by the visualized risks to practice appropriate behaviors such as wearing a mask, social distancing, and hand-washing. facility managers can use the information to conduct knowledge-based management, such as limiting room occupancy, managing crowd traffic, and rearranging room layouts. this study has some limitations that deserve future research. first, the model does not consider factors such as sunlight exposure, humidity, and airflow that may impact the persistence and transmission of pathogens in built environments. this is mainly because the quantitative impacts of these factors on pathogen persistence and transmission are largely ambiguous, if not unknown. if these impacts can be quantified, and the environmental parameters can be monitored and modeled in bim, our proposed framework can be extended to incorporate these factors. second, the computation of r0 only considers fomite-mediated transmission, and does not consider airborne and close-contact transmission. microbial pathogens may have different transmission routes, including airborne, close-contact, and fomite-based transmission. this study focused on fomite-based transmission to illustrate the modeling approach for assessing outbreak risks, and to demonstrate the efficacy of the developed information system in guiding infection control practices and building operations. to fully assess exposure risks and outbreak potentials, all important routes need to be considered. in addition, the outbreak potentials of a variety of pathogens could be considered together to develop an aggregate index, which could be more intuitive for occupants and facility managers who are not public health experts. third, the system mainly relies on static models and does not make full use of dynamic and real-time data regarding built environments and occupant behaviors, such as presence and interactions with objects. in future studies, internet of things sensors can be installed in buildings and algorithms can be developed to retrieve dynamic data for integration with the models for accurate and robust risk estimation. fourth, the web-based system can be further improved by connecting it with smart devices, such as robots for automated cleaning and disinfection and smartphones for precision notifications. this study creates and tests a computational framework and tools to explore the connections among the built environment, occupant behavior, and pathogen transmission. using bim-based simulations, building-occupant characteristics, such as occupancy and accessible surface, are extracted as venue-specific parameters. the fomite-mediated transmission model is used to predict contamination risks in the built environment by calculating a room-by-room basic reproduction number r0, based on which the level of infection risk in each room is characterized as low, mild, moderate, or severe. a web-based system is then created to communicate the infection risk and outbreak potential information within buildings to occupants and facility managers. the case study demonstrated the efficacy of the proposed methods and developed systems.
practically, the method and system can be used in a variety of built environments, especially schools, hospitals, and airports, where the transmission of infectious pathogens is of particular concern. the outbreak risks predicted at room resolution can inform facility managers in determining room disinfection and cleaning frequency, schedules, and standards. in addition, appropriate operational interventions, including access control, occupancy limits, social distancing, and room arrangement (e.g. reducing the number of tables and chairs), can be designed based on the derived information. occupants can access the useful information via the webpage to plan their visits and staying time in the facilities, and practice appropriate personal hygiene and cleaning based on the information.
microbial exchange via fomites and implications for human health
how quickly viruses can contaminate buildings - from just a single doorknob
the occurrence of influenza a virus on household and day care center fomites
an interactive web-based dashboard to track covid-19 in real time
prolonged infectivity of sars-cov-2 in fomites
exaggerated risk of transmission of covid-19 by fomites
role of fomite contamination during an outbreak of norovirus on houseboats
epidemiologic and molecular trends of "norwalk-like viruses" associated with outbreaks of gastroenteritis in the united states
microbiology of the built environment
model analysis of fomite mediated influenza transmission
informing optimal environmental influenza interventions: how the host, agent, and environment alter dominant routes of transmission
dynamics and control of infections transmitted from person to person through the environment
bacterial transfer to fingertips during sequential surface contacts with and without gloves, indoor air
evaluating a transfer gradient assumption in a fomite-mediated microbial transmission model using an experimental and bayesian approach
physical factors that affect microbial transfer during surface touch
architectural design drives the biogeography of indoor bacterial communities
architectural design influences the diversity and structure of the built environment microbiome
the diversity and distribution of fungi on residential surfaces
microbiota of the indoor environment: a meta-analysis
bacterial communities on classroom surfaces vary with human contact
what have we learned about the microbiomes of indoor environments?
bim handbook: a guide to building information modeling for owners, managers, designers, engineers and contractors
a conceptual framework for integrating building information modeling with augmented reality
fomite-mediated transmission as a sufficient pathway: a comparative analysis across three viral pathogens
building information modeling (bim) for existing buildings - literature review and future needs
determining the level of development for bim implementation in large-scale projects: a multi-case study
transmission of influenza a in a student office based on realistic person-to-person contact and surface touch behaviour
risk of fomite-mediated transmission of sars-cov-2 in child daycares, schools, and offices: a modeling study
visual scripting environment for designers - dynamo
predicting infectious sars-cov-2 from diagnostic samples
deducing the dose-response relation for coronaviruses from covid-19, sars and mers meta-analysis results
estimated surface decay of sars-cov-2 (virus that causes covid-19)
a study of the probable transmission routes of mers-cov during the first hospital outbreak in the republic of korea, indoor air
cov- : clinical presentation, infectivity, and immune responses
pandemic potential of a strain of influenza a (h1n1): early findings, science
on the definition and the computation of the basic reproduction ratio r0 in models for infectious diseases in heterogeneous populations
unraveling r0: considerations for public health applications
assessing the pandemic potential of mers-cov
nuanced risk assessment for emerging infectious diseases
interventions to mitigate early spread of sars-cov-2 in singapore: a modelling study
web development with mongodb and nodejs
hand hygiene and surface cleaning should be paired for prevention of fomite transmission, indoor air
key: cord- -f ac m authors: campbell, c.; wang, t.; mcnaughton, a. l.; barnes, e.; matthews, p. c. title: risk factors for the development of hepatocellular carcinoma (hcc) in chronic hepatitis b virus (hbv) infection: a systematic review and meta-analysis date: - - journal: nan doi: . / . . . sha: doc_id: cord_uid: f ac m
background: hepatocellular carcinoma (hcc) is one of the leading contributors to cancer mortality worldwide and is the largest cause of death in individuals with chronic hepatitis b virus (hbv) infection. it is not certain how the presence of other metabolic factors and comorbidities influences hcc risk in hbv. therefore, we performed a systematic review and meta-analysis to seek evidence for significant associations. methods: medline, embase and web of science databases were searched from st january to th june for english-language studies investigating associations of metabolic factors and comorbidities with hcc risk in individuals with chronic hbv infection. we extracted data for meta-analysis and report pooled effect estimates from a fixed-effects model. pooled estimates from a random-effects model were also generated if significant heterogeneity was present. results: we identified observational studies reporting on associations of diabetes mellitus, hypertension, dyslipidaemia and obesity with hcc risk. meta-analysis was possible only for diabetes mellitus, due to the limited number of studies. diabetes mellitus was associated with a > % increase in hazards of hcc (fixed-effects hazard ratio [hr] . , % ci . - . ; random-effects hr . , % ci . - . ).
this association was attenuated towards the null in a sensitivity analysis restricted to studies adjusted for metformin use. conclusions: in adults with chronic hbv infection, diabetes mellitus is a significant risk factor for hcc, but further investigation of how antidiabetic drug use and glycaemic control influence this association is needed. enhanced screening of individuals with hbv and diabetes may be warranted. hepatitis b virus (hbv) is a hepatotropic virus responsible for substantial morbidity and mortality worldwide. infection can be acute or chronic, with most of the hbv disease burden attributable to chronic disease. the world health organisation (who) estimated a chronic hbv (chb) global prevalence of million for , with , hbv-attributable deaths reported in the same year ( ), making hbv the second highest viral cause of daily deaths (the first being the agent of the global covid-19 pandemic, sars-cov-2) ( , ), and this burden has increased in recent decades ( ). most chb deaths are due to primary liver cancer and cirrhosis; these conditions were responsible for over % of all viral hepatitis-attributable deaths. a global burden of disease (gbd) study on the global hcc burden reported a % increase in incident cases of hcc attributable to chronic infection between and ( ), among which chb infection was the largest contributor, responsible for more than % of incident cases in ( ). multiple risk factors for hcc in chb-infected individuals have been established, including sex, age, cirrhosis, and co-infection with human immunodeficiency virus (hiv) or other hepatitis viruses (including hepatitis c and d). previous studies have investigated associations of comorbidities, such as diabetes mellitus (dm) ( - ) and hypertension ( , ), with risk of hcc in the general population, and the european association for the study of the liver (easl) recognises dm as a risk factor for hcc in chb ( ). as the global prevalence of comorbidities such as dm ( ), renal disease ( ), hypertension ( ) and coronary heart disease (chd) ( ) continues to rise, these conditions are increasingly relevant to the development of hcc. various risk scores have been developed to predict hcc risk: for example, the page-b risk score was developed to predict hcc risk in caucasian patients on antiviral treatment ( ); however, it remains uncertain how comorbidities and metabolic factors influence hcc risk. therefore, we undertook a systematic review, aiming to summarise and critically appraise studies investigating associations of relevant comorbidities and metabolic factors with risk of hcc in chb-infected individuals. in june we systematically searched three databases (web of science, embase and medline) in accordance with prisma guidelines ( ); search terms are listed in table s . we searched all databases from st january until th june , without applying any restrictions on study design to search terms or results, but including only full-text human studies published in english. we excluded articles not investigating associations of comorbidities with risk of hcc and/or not restricted to chb-infected participants. we also searched the reference lists of relevant systematic reviews/meta-analyses and of studies identified for inclusion, to identify additional studies. search terms were constructed and agreed on by three authors (pm, tw and cc), and articles were screened and selected by one author (cc).
for each study we extracted the exposure(s) investigated, number of participants, number of hcc cases, sex, age at baseline, risk ratio and covariates adjusted for. we carried out meta-analysis in r (version . . ) using the "meta" package (version . - ) ( ), including only hazard ratios (hrs) minimally adjusting for age and sex. we calculated pooled summary effect estimates using inverse-variance weighting of hrs on the natural logarithmic scale, and quantified between-study heterogeneity using the i² statistic; significance of heterogeneity was investigated using cochran's q test (p threshold= . ). where i² was > and heterogeneity was significant, we present both fixed- and random-effects summary estimates. we undertook multiple sensitivity analyses whereby analyses were restricted to studies adjusting for various additional confounders and for dm treatment, and stratified by dm type, in order to investigate the robustness of observed associations. for diabetes, we considered diagnoses of type and type diabetes, as well as unspecified diabetes mellitus, for pooling the effect, followed by further stratification by subtypes of diabetes if enough studies were eligible. hypertension (ht) was defined by either a diagnosis of ht recorded as part of the medical history or current health assessment, or a measurement with mean arterial pressure (map) above a specified threshold. obesity was defined using bmi values, based on the cut-offs reported in the included studies, where , and kg/m² were the common threshold values used. cvd was defined broadly as an umbrella term including any of the following disease subtypes: ischaemic heart disease (ihd)/coronary heart disease (chd) and cerebrovascular disease. dyslipidaemia was defined according to serum lipid concentrations above a certain threshold (thresholds may vary depending on healthcare setting). study quality was assessed across the domains of selection, comparability, and exposure/outcome ascertainment; studies with scores of < , - and > points were considered to be of low, sufficient and high quality, respectively. in total our search identified , articles ( from medline, from embase and from web of science) (figure ). after deduplication, we screened , individual articles by title/abstract, from which full texts were identified for full-text assessment. after exclusion of ineligible articles and reference list searching of relevant articles, we identified articles for inclusion in this review. summary characteristics of included studies are reported in table s . all studies were observational in design, with cohort and case-control studies included (table s ). thirty-two studies were conducted in asian countries. four studies were restricted to male cohorts and were undertaken in mixed-sex cohorts. all studies recruited participants from health centres, healthcare or prescription databases, or pre-existing cohorts or cancer screening programmes. all studies were undertaken in adults, with mean/median ages of cohorts ranging between and years in studies. thirty-three studies investigated dm/insulin resistance/fasting serum glucose, studies investigated hypertension/blood pressure, investigated dyslipidaemia, investigated obesity and cardiovascular disease. fewer than studies investigated other factors including renal disease, statin use and use of antidiabetic drugs. in the studies including , adults, > , hcc events occurred (we are unable to report an exact number, because one study did not report a precise number of hcc cases ( )). sample sizes of cohort studies varied widely, ranging from to . among studies, had quality scores ≥ (tables s and s ).
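the pooling procedure described in the methods (inverse-variance weighting of log hazard ratios, cochran's q, the i² statistic, and fixed- and random-effects summaries) can be sketched as follows. this is an illustrative python reimplementation rather than the authors' r/"meta" code; the dersimonian-laird estimator for the between-study variance is an assumption, as the paper does not name its random-effects estimator, and the input values are made up.

```python
import numpy as np

def pool_hazard_ratios(hr, se_log_hr):
    """Inverse-variance pooling of hazard ratios on the natural log scale,
    with Cochran's Q, the I-squared statistic, and a DerSimonian-Laird
    random-effects estimate."""
    y = np.log(np.asarray(hr, dtype=float))      # log hazard ratios
    v = np.asarray(se_log_hr, dtype=float) ** 2  # within-study variances
    w = 1.0 / v                                  # fixed-effect weights

    mu_fe = np.sum(w * y) / np.sum(w)            # fixed-effect pooled log HR

    # Heterogeneity: Cochran's Q and I-squared
    q = np.sum(w * (y - mu_fe) ** 2)
    k = len(y)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0

    # DerSimonian-Laird between-study variance tau^2
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects pooled estimate
    w_re = 1.0 / (v + tau2)
    mu_re = np.sum(w_re * y) / np.sum(w_re)

    def ci(mu, weights):  # ~95% confidence interval on the HR scale
        se = np.sqrt(1.0 / np.sum(weights))
        return np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se)

    return {
        "HR_fixed": np.exp(mu_fe), "CI_fixed": ci(mu_fe, w),
        "HR_random": np.exp(mu_re), "CI_random": ci(mu_re, w_re),
        "Q": q, "I2": i2, "tau2": tau2,
    }

# Invented study estimates, for illustration only
print(pool_hazard_ratios(hr=[1.5, 2.1, 1.2, 1.8], se_log_hr=[0.2, 0.3, 0.25, 0.15]))
```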
all cohort studies were of sufficient quality, with of these being scored as high quality. six case-control studies were of sufficient quality and one of poor quality. inclusion criteria varied widely and therefore study populations were heterogeneous. in most studies, exposures and outcomes were ascertained using health assessment, imaging or record linkage. twenty-three cohort studies and case-control studies accounted for age and sex. hcc typically arises after long durations of infection, and therefore prolonged follow-up allows for detection of more hcc events; among cohort studies identified, only cohort studies had lengths of follow-up ≥ years. thirty-six studies investigated the association of dm with risk of chb progression to hcc, comprising case-control studies (table a) and cohort studies (table b). four studies were restricted to males and the others included both sexes (table s ). mean ages at baseline in all studies were ≥ years. study populations were heterogeneous with variable inclusion criteria, and definitions of dm were not consistent between studies. four case-control and four cohort studies investigated type dm/insulin resistance, three case-control and seven cohort studies investigated unspecified dm, and one case-control and three cohort studies investigated both type and dm as a composite potential risk factor. there was directional inconsistency between the effect estimates reported in case-control studies, with studies reporting an increased risk of hcc in those with dm as compared to those without, studies reporting a decreased risk of hcc in those with dm, and one study failing to provide an effect estimate. risk ratios (rrs) > ranged from . to . , and all were statistically significant. rrs < ranged from . to . , of which two were statistically significant. among cohort studies providing effect estimates ( hrs and or), there was directional consistency, with of the reported rrs > . effect sizes > ranged from . to . , with rrs being statistically significant. the single rr that was < was nonsignificant. most case-control studies adjusted for age, sex, hcv coinfection, hiv coinfection and cirrhosis. twenty cohort studies minimally adjusted for age and sex. of these, adjusted for hcv coinfection, for cirrhosis, for antiviral treatment, for hiv coinfection, for alcohol consumption, each for hbv viral dna load and cigarette smoking and for other liver disease (including alcoholic liver disease). eight studies excluded participants who developed hcc within the first to months of follow-up in their main analyses. one study did so in sensitivity analysis and found this did not modify the associations observed. dm was associated with an increased risk of progression to hcc by meta-analysis restricted to hrs minimally adjusted for age and sex (figure ). as there was significant heterogeneity (i² = %, p < . ), results from both fixed- and random-effects analyses are presented. in the random-effects analysis, risk of hcc was % higher (summary rr . ; % ci . - . ) in dm compared to non-dm.
we performed sensitivity analyses in order to investigate the robustness of pooled estimates to additional adjustment for hcv or hiv coinfection, cirrhosis, and dm treatment. after restricting meta-analysis to studies adjusting for hcv coinfection in addition to age and sex (figure s ), pooled hrs did not change materially. considering studies adjusting for hiv and antiviral treatment (figure s ), the pooled hr from the fixed-effects analysis was attenuated slightly towards the null but still remained significant. to investigate the robustness of the association of dm with hcc to adjustment for cirrhosis, a potential mediator, we restricted meta-analysis to studies adjusting for cirrhosis (figure s ). this did not change pooled hrs materially. to investigate heterogeneity between type dm and unspecified dm, sensitivity analysis was performed whereby studies were stratified by dm type. amongst studies investigating type dm, heterogeneity was % (p= . ) (figure s ). however, the association of dm with hcc risk was attenuated towards the null in analyses restricted to studies that adjusted for metformin use, with risk of hcc % higher in dm participants as compared to non-dm (random effects hr . , % ci . - . ) (figure s ). after restricting to studies adjusting for dm treatment, pooled hrs remained statistically significant. eleven studies investigated the association of ht with risk of chb progression to hcc: one case-control study and cohort studies (table ). all studies were mixed-sex samples in which mean/median age at baseline was ≥ years (table s ). definitions of ht were heterogeneous; most studies ascertained hypertension via record linkage, but others used health assessment or interview. few studies defined clinical thresholds for hypertension classification. "higher" map was the primary exposure of interest in the case-control study, for which a threshold was not defined. among studies reporting hazards of hcc associated with ht, only three identified significantly increased risks, with two unadjusted and one adjusted for age. another five studies reported an effect in the same direction, but effect sizes were not statistically significant. adjusted hrs > ranged from . to . and < from . to . . adjustment for confounders was poor, with only four hrs minimally adjusted for age and sex. seven studies investigated the association of dyslipidaemia with hcc risk in chb patients (table ). all studies reported reduced risks of hcc in participants with dyslipidaemia as compared to those without; however, only one hr was statistically significant. clinical definitions of dyslipidaemia were often not reported, and only four studies minimally adjusted for age and sex. six studies investigated the association of obesity with hcc risk. clinical definitions of obesity varied greatly, and out of four studies reporting increased risks of hcc with obesity, only one hr was statistically significant. three studies investigated the association of statin use with hcc risk in chb. all studies reported hrs < , and two of these hrs were statistically significant.
hrs reported in studies for hcc risk associated with cvd varied, likely due to the variable definitions of cvd used across studies. associations for other variables, including respiratory disease and renal disease, were reported by < studies each. our meta-analysis suggests that dm is a risk factor for hcc in chb-infected individuals, with hazards of hcc > % higher in the presence of dm; however, we report significant between-study heterogeneity. this association did not materially change after restriction to studies adjusting for relevant confounders, but did suggest a favourable impact of dm treatment with metformin. pooled effect estimates remained significant in sensitivity analyses. few studies investigated other comorbidities, and some comorbidity search terms included in our systematic literature search returned few or no results. this highlights the need for future investigation of these comorbidities, as antiviral treatment cannot eliminate the risk of hcc entirely and therefore novel risk factors must be identified in order to inform interventions. although easl ( ) and apasl ( ) guidelines recognise this association, it is not currently consistently described in other recommendations (e.g. aasld guidelines ( ) do not list dm as a risk factor for hcc). some studies investigating comorbidities and their metabolic risk factors reported significantly reduced hazards of hcc in participants with these conditions as compared to those without. this association may be confounded by the requirement for treatment in secondary care, whereby chb-infected individuals may be more likely to receive screening and antiviral treatment. findings from case-control and cohort studies were not consistent; whilst the majority of cohort studies reported increased risks of hcc associated with dm, case-control findings were inconsistent, and indeed three studies reported a significant reduction of hcc risks in association with dm. explanations for such findings include confounding, selection bias associated with the study of hospital control groups that enrich for dm ( , ), and chance, especially in small studies ( - ). our findings are consistent with a previous meta-analysis ( ); we provide a comprehensive review of all cohort studies and include a larger number of studies. we restricted to studies reporting hrs minimally adjusted for age and sex. however, adjustment for covariates and inclusion criteria varied considerably between studies, and this may explain some of the between-study heterogeneity. substantial heterogeneity remained in sensitivity analyses restricted to studies adjusting for additional key confounders, as adjustment for confounders was variable within these studies and populations may not have been comparable. although baseline age and sex characteristics were comparable across studies, there was variability regarding exclusion of those with additional comorbidities and those on antiviral treatment. we noted variable definitions of dm, with some studies restricting investigation to type dm whereas others included participants with unspecified dm. risk factors for types and diabetes mellitus vary, and heterogeneity in dm definitions could therefore contribute to variable study populations and outcomes.
global prevalence and incidence estimates for specific dm types do not exist, as distinguishing between types often requires expensive laboratory resources that are not available in many settings. however, most cases of type diabetes are found in europe and north america, and the large majority of studies included in this systematic review and meta-analysis were conducted in asian countries ( ). it is possible that varied lengths of follow-up also contributed to between-study heterogeneity, although hrs did not significantly vary with length of follow-up in sensitivity analysis. cancer is a chronic disease with slow development, and preclinical disease can be present for many years before clinical manifestation; follow-up times < years may therefore be insufficient to detect hcc outcomes. we were unable to provide effect estimates across most potential patient subgroups because the subgroups contained small numbers of studies, putting subgroup analyses at greater risk of chance findings as well as being subject to the influence of multiple testing. the association we report in this meta-analysis is weaker than those observed in patients with chronic hcv infection. in previous studies of individuals with chronic hcv infection, risk of hcc was elevated ~ -fold in the presence of dm ( , ). previous studies also report increased risks of dm in hcv-infected individuals as compared to non-infected individuals ( ) ( ) ( ) ( ). however, this is likely due to the various extrahepatic manifestations of hcv which are not present in hbv infection. in sensitivity analysis restricted to studies adjusting for cirrhosis, the observed association of dm with hcc was attenuated towards the null. this may be explained by confounding of the association by cirrhosis, accounted for by an independent association of cirrhosis with both dm and hcc, and the absence of cirrhosis from the causal pathway that associates dm with hcc. however, if cirrhosis is located along this causal pathway, then it can be characterised as a mediator rather than a confounder. if cirrhosis is a mediator, then adjusting for it would be incorrect. past studies support a positive association of dm with hcc risk in non-chb patients. three studies adjusted for metformin use ( - ), and in sensitivity analysis restricted to these studies, the association between dm and hcc remained significant but was attenuated towards the null. it is not known to what extent this is a result of glucoregulation by metformin, accomplished by inhibition of hepatic gluconeogenesis and improvement of insulin sensitivity in tissues leading to reduced oxidative stress in the liver ( ), and/or a direct impact of metformin in reducing cancer risk via regulation of cellular signalling.
evidence from observational studies ( - ) and randomised controlled trials (rcts) ( ) supports a protective effect of metformin against the development and progression of cancer in diabetic individuals. there is also some rct evidence for protective effects of metformin against progression of certain cancer types in non-diabetic individuals ( ), although this is not consistent. multiple large-scale phase iii rcts are currently underway ( - ) and will provide further information regarding the roles of dm and metformin in cancer development. we included all studies investigating the association of comorbidities with risk of chb progression to hcc that minimally adjusted for age and sex, in order to provide a comprehensive review of available evidence. however, few studies investigated non-dm comorbidities, preventing meta-analysis for these comorbidities. additionally, we were unable to restrict our meta-analysis of dm and hcc to studies adjusting for confounders other than age and sex, as few studies minimally adjusted for all relevant factors. publication bias may influence the outcome, as we restricted our search to peer-reviewed literature, and studies that do not report an association of dm with hcc may be less likely to be published. our results may not be generalisable to the global chb population, as there were a limited number of studies from non-asian countries. the lack of studies from any african countries is of concern, given that the region carries both the highest hbv prevalence ( ) and the largest mortality burdens for cirrhosis and hcc ( , ). our finding that dm is a risk factor for hcc in chb-infected individuals suggests that enhanced cancer surveillance may be justified in patients with chb and dm to enable early detection and treatment. improvements in guidelines could help to inform more consistent approaches to risk reduction. after adjustment for metformin use, this association remained significant but was attenuated, suggesting a potential benefit of metformin that warrants further study. ongoing investigation is required in order to identify and characterise risk factors for hcc, to extend these analyses to diverse global populations, and to elucidate disease mechanisms in order to inform prevention, screening and therapeutic intervention.
stanaway
assessing the quality of nonrandomised studies in meta-analyses [internet]. [cited jun]. available from: http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp
tables: table a. effect estimates for case-control studies investigating the association of diabetes mellitus with hepatocellular carcinoma risk. ahr, adjusted hazards ratio; uhr, unadjusted hazards ratio; bmi, body mass index; cvd, cardiovascular disease; ihd, ischaemic heart disease; acs, acute coronary syndrome; nafld, non-alcoholic fatty liver disease; copd, chronic obstructive pulmonary disease. † adjusted risk ratios are minimally adjusted for age and sex. ‡ defined specifically as hyperlipidaemia. § defined specifically as hypertriglyceridaemia. ¶ defined specifically as hypercholesterolaemia. †† adjusted for age but not sex. ††† metabolic risk factors (obesity, diabetes, hypertriglyceridaemia and ht), with exposure groups split into groups of , , and ≥ risk factors. hr, hazard ratio; ci, confidence interval; dm, diabetes mellitus.
association of diabetes duration and diabetes treatment with the risk of hepatocellular carcinoma
diabetes increases the risk of hepatocellular carcinoma in the united states: a population based case control study
metabolic syndrome and hepatocellular carcinoma risk
metabolic risk factors and primary liver cancer in a prospective study of , adults
global, regional, and national burden of chronic kidney disease, - : a systematic analysis for the global burden of disease study
global burden of hypertension and systolic blood pressure of at least to mmhg
global burden of cvd: focus on secondary prevention of cardiovascular disease
the effects of metformin on the survival of colorectal cancer patients with diabetes mellitus
metformin and reduced risk of cancer in diabetic patients
metformin associated with lower cancer mortality in type diabetes: zodiac- . diabetes care
metformin for chemoprevention of metachronous colorectal adenoma or polyps in post-polypectomy patients without diabetes: a multicentre double-blind, placebo-controlled, randomised phase trial
neoadjuvant chemotherapy with or without metformin in early breast cancer -full text view -clinicaltrials.gov [internet]
the metformin active surveillance trial (mast) study -full text view -clinicaltrials.gov [internet]
a phase iii randomized trial of metformin vs placebo in early stage breast
view of hepatocellular carcinoma: trends, risk, prevention and management
the global, regional, and national burden of cirrhosis by cause in countries and territories, - : a systematic analysis for the global burden of disease study
diabetes poses a higher risk of hepatocellular carcinoma and mortality in patients with chronic hepatitis b: a population-based cohort study
diabetes mellitus is a risk factor for hepatocellular carcinoma in patients with chronic hepatitis b virus infection in china
association between hepatocellular carcinoma and type diabetes mellitus in chinese hepatitis b virus cirrhosis patients: a case-control study
statin use and the risk of hepatocellular carcinoma in patients with chronic hepatitis b
real-world effectiveness from the asia pacific rim liver consortium for hbv risk score for the prediction of hepatocellular carcinoma in chronic hepatitis b patients treated with oral antiviral therapy -pubmed
radiologic nonalcoholic fatty liver disease increases the risk of hepatocellular carcinoma in patients with suppressed chronic hepatitis b
the influence of metabolic syndrome on the risk of hepatocellular carcinoma in patients with chronic hepatitis b infection in mainland china
stratification of hepatocellular carcinoma risk through modified fib- index in chronic hepatitis b patients on entecavir therapy
effects of diabetes and glycemic control on risk of hepatocellular carcinoma after seroclearance of hepatitis b surface antigen
hepatocellular carcinoma in the absence of cirrhosis in patients with chronic hepatitis b virus infection
insulin resistance and the risk of hepatocellular carcinoma in chronic hepatitis b patients
prognosis of patients with chronic hepatitis b in france ( - ): a nationwide, observational and hospital-based study
liver cirrhosis stages and the incidence of hepatocellular carcinoma in chronic hepatitis b patients receiving antiviral therapy
the impact of pnpla (rs c>g) polymorphisms on liver histology and long-term clinical outcome in chronic hepatitis b patients. liver int
increased risk of hepatocellular carcinoma in chronic hepatitis b patients with new onset diabetes: a nationwide cohort study
type diabetes: a risk factor for liver mortality and complications in hepatitis b cirrhosis patients
determinants virological response to entecavir on the development of hepatocellular carcinoma in hepatitis b viral cirrhotic patients: comparison between compensated and decompensated cirrhosis
diabetes mellitus, metabolic syndrome and obesity are not significant risk factors for hepatocellular carcinoma in an hbv- and hcv-endemic area of southern taiwan
risk factors for hepatocellular carcinoma in a cohort infected with hepatitis b or c
the impact of type diabetes on the development of hepatocellular carcinoma in different viral hepatitis statuses
metabolic factors and risk of hepatocellular carcinoma by chronic hepatitis b/c infection: a follow-up study in taiwan
body-mass index and progression of hepatitis b: a population-based cohort study in men
type diabetes and hepatocellular carcinoma: a cohort study in high prevalence area of hepatitis virus infection
obesity and hepatocellular carcinoma in patients receiving entecavir for chronic hepatitis b
thiazolidinediones reduce the risk of hepatocellular carcinoma and hepatic events in diabetic patients with chronic hepatitis b
influence of metabolic risk factors on risk of hepatocellular carcinoma and liver-related death in men with chronic hepatitis b: a large cohort study
adapting a clinical comorbidity index for use with icd- -cm administrative databases
key: cord- - s x v authors: hawkins, devan title: differential occupational risk for covid- and other infection exposure according to race and ethnicity date: - - journal: am j ind med doi: . /ajim. sha: doc_id: cord_uid: s x v background: there are racial and ethnic disparities in the risk of contracting covid- . this study sought to assess how occupational segregation according to race and ethnicity may contribute to the risk of covid- . methods: data about employment in by industry and occupation and race and ethnicity were obtained from the bureau of labor statistics current population survey. this data was combined with information about industries according to whether they were likely or possibly essential during the covid- pandemic and the frequency of exposure to infections and close proximity to others by occupation. the percentage of workers employed in essential industries and occupations with a high risk of infection and close proximity to others by race and ethnicity was calculated. results: people of color were more likely to be employed in essential industries and in occupations with more exposure to infections and close proximity to others. black workers in particular faced an elevated risk for all of these factors. conclusion: occupational segregation into high-risk industries and occupations likely contributes to differential risk with respect to covid- . providing adequate protection to workers may help to reduce these disparities.
americans, are at an elevated risk both for contracting the disease and for being hospitalized and dying from it. , different explanations have been provided to account for these disparities, including people of color being more likely to live in densely populated areas and, due to structural factors like discrimination and racism, being more likely to be socioeconomically disadvantaged and to have comorbid health conditions that contribute to the risk of covid- . , discrimination within the healthcare system may also contribute to worse outcomes when covid- is contracted. occupational exposures are an important factor to consider in explaining these racial and ethnic health disparities. it has already been established that some occupations and industries are at an elevated risk for covid- , especially those employed in healthcare and other essential industries. , some of these differences may be related to different characteristics of the occupation worked, including exposures to infections and close proximity to others. , due to occupational segregation, people of color are often employed in occupations that tend to be at higher risk for occupational injuries, illnesses and fatalities. to assess how this occupational segregation may contribute to racial and ethnic disparities for covid- , this study sought to determine whether there were racial and ethnic disparities in workers employed in essential industries and in occupations with a higher risk of exposure to infections and close proximity to others. data showing employment by industry and occupation according to race and ethnicity were obtained from the bureau of labor statistics (bls) current population survey (cps) for the year . the only races and ethnicities included in this data were white, black, asian, and hispanic. the hispanic category included all those indicating that they were hispanic regardless of race, and the individual race categories included those who indicated that they were hispanic. the brookings institute performed an analysis in which they characterized industries according to whether they were either likely or possibly part of the essential workforce according to guidelines published by the department of homeland security. we matched this data with the employment data from the bls cps and calculated the percentage of workers likely or possibly employed in essential industries according to race and ethnicity. we also provide data about employment in select essential industries. based on previous analyses, , we obtained data about the occupational risk for infections and close proximity to others from the occupational information network (o*net). the data for exposure to infections is based on a survey that is sent to workers that asks, "how often does this job require exposure to disease/infections?"
the data for proximity to others is based on another question that asks, "to what extent does this job require the worker to perform job tasks in close physical proximity to other people?" based on the responses to this question, occupations are given a score between and that corresponds to their frequency of exposure to infections/proximity to others. for this analysis, high-risk occupations for infections were categorized as those with a score of or higher, and higher risk for proximity to others was categorized as a score of or higher. we combined these occupational scores with the employment data from the bls cps and calculated the percentage of workers with a high risk of exposure to infections and proximity to others according to race and ethnicity. we also provide data about select occupations with high exposure to infections and close proximity to others. finally, we categorized some occupations as high risk for both exposure to infections and proximity to others if they were in the high-risk group for both variables. again, we calculated the percentage of workers who fell into this high-risk category by race and ethnicity and provide data about select high-risk occupations. this project was considered exempt from review by the mcphs university institutional review board because it was conducted with previously collected, deidentified data. employment in likely and possibly essential industries can be seen in table s . black and asian workers were also more likely to be employed in occupations with a high risk of infections. both black and asian workers were more likely to be employed as respiratory therapists. asian workers were more likely to be employed as registered nurses and black workers were more likely to be employed as licensed practical and vocational nurses. black workers were most likely to be employed in occupations frequently requiring close proximity to others. with respect to some of the occupations that require the most frequent proximity to others, white and asian workers were most likely to be employed as physical therapists. black, hispanic, and asian workers were most likely to be employed as personal care aides, and black and hispanic workers were most likely to be employed as medical assistants. black and asian workers were most likely to be employed in occupations with both frequent exposure to infections and proximity to others. black workers were more likely to be employed in two occupations that fell into this category: bus drivers and flight attendants. employment in all occupations with data available according to the risk of infection and proximity to others can be seen in table s . protecting frontline workers is essential in the current crisis because these workers are particularly vulnerable to the disease. such protection may also help to reduce racial and ethnic disparities in the burden of covid- . these protections should include personal protective equipment to limit exposure to the virus, as well as protections if a worker becomes sick, including paid sick leave and workers' compensation benefits. the author declares that there is no conflict of interest. john d. meyer declares that he has no conflict of interest in the review and publication decision regarding this article. devan hawkins conceived of this study, acquired data, and drafted the paper. he approves this version of the manuscript and agrees to be accountable for all aspects of the work.
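the categorisation and aggregation described in the methods above (merging cps employment counts with o*net scores, flagging high-risk occupations against a cut-off, and computing percentages by race and ethnicity) reduces to a few lines of table manipulation. the sketch below uses invented occupations, counts, scores and thresholds, because the paper's actual cut-offs and field names are elided in this text; it is an illustration of the approach, not the author's code.

```python
import pandas as pd

# Invented example rows standing in for BLS CPS employment counts
employment = pd.DataFrame({
    "occupation": ["registered nurses", "bus drivers", "software developers"],
    "race": ["black", "black", "white"],
    "workers": [300_000, 150_000, 900_000],
})
# Invented O*NET-style scores (0-100) for the two survey items
onet = pd.DataFrame({
    "occupation": ["registered nurses", "bus drivers", "software developers"],
    "infection_score": [95, 60, 5],
    "proximity_score": [90, 80, 40],
})
INFECTION_CUTOFF, PROXIMITY_CUTOFF = 50, 75  # placeholder thresholds

merged = employment.merge(onet, on="occupation")
merged["high_both"] = (merged["infection_score"] >= INFECTION_CUTOFF) & (
    merged["proximity_score"] >= PROXIMITY_CUTOFF
)

# Percentage of each group's workers in occupations that are high risk
# for both exposure to infections and proximity to others
merged["high_both_workers"] = merged["workers"] * merged["high_both"]
by_race = merged.groupby("race")[["workers", "high_both_workers"]].sum()
by_race["pct_high_both"] = 100 * by_race["high_both_workers"] / by_race["workers"]
print(by_race["pct_high_both"])
```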
this project was considered exempt from review by the mcphs university institutional review board because it was conducted with previously collected, deidentified data. devan hawkins http://orcid.org/ - - -
hospitalization rates and characteristics of patients hospitalized with laboratory-confirmed coronavirus disease -covid-net, states
age-adjusted rates of lab-confirmed covid- non hospitalized cases, estimated non-fatal hospitalized cases, and total persons known to have died (lab-confirmed and probable) per , by race/ethnicity group
covid- and racial disparities
disparities in the population at risk of severe illness from covid- by race/ethnicity and income
risk factors of healthcare workers with corona virus disease : a retrospective cohort study in a designated hospital of wuhan in china
italian workers at risk during the covid- epidemic
estimating the burden of united states workers exposed to infection or disease: a key factor in containing risk of covid- infection
the workers who face the greatest coronavirus risk. the new york times
workers are people too: societal aspects of occupational health disparities -an ecosocial perspective
labor force statistics from the current population survey
how to protect essential workers during covid-
supporting information: additional supporting information may be found online in the supporting information section
how to cite this article: hawkins d. differential occupational risk for covid- and other infection exposure according to race and ethnicity
key: cord- - tun fjk authors: robin, charlotte; bettridge, judy; mcmaster, fiona title: zoonotic disease risk perceptions in the british veterinary profession date: - - journal: prev vet med doi: . /j.prevetmed. . . sha: doc_id: cord_uid: tun fjk in human and veterinary medicine, reducing the risk of occupationally-acquired infections relies on effective infection prevention and control practices (ipcs). in veterinary medicine, zoonoses present a risk to practitioners, yet little is known about how these risks are understood and how this translates into health protective behaviour. this study aimed to explore risk perceptions within the british veterinary profession and identify motivators and barriers to compliance with ipcs. a cross-sectional study was conducted using veterinary practices registered with the royal college of veterinary surgeons. here we demonstrate that compliance with ipcs is influenced by more than just knowledge and experience, and understanding of risk is complex and multifactorial. out of respondents, the majority were not concerned about the risk of zoonoses ( . %); however, a considerable proportion ( . %) was. overall, . % of respondents reported contracting a confirmed or suspected zoonosis, most frequently dermatophytosis ( . %). in veterinary professionals who had previous experience of managing zoonotic cases, time or financial constraints and a concern for adverse animal reactions were not perceived as barriers to use of personal protective equipment (ppe). for those working in large animal practice, the most significant motivator for using ppe was concerns over liability. when assessing responses to a range of different "infection control attitudes", veterinary nurses tended to have a more positive perspective, compared with veterinary surgeons. our results demonstrate that ipcs are not always adhered to, and factors influencing motivators and barriers to compliance are not simply based on knowledge and experience.
educating veterinary professionals may help improve compliance to a certain extent; however, increased knowledge does not necessarily equate to an increase in risk-mitigating behaviour. this highlights that the construction of risk is complex and circumstance-specific, and to get a real grasp on compliance with ipcs, this construction needs to be explored in more depth. veterinary professionals can encounter a variety of occupational health risks. a high prevalence of injury has been reported, predominantly in relation to large animal work (beva, ; fritschi et al., ; lucas et al., ), dog and cat bites and/or scratches and scalpel or needle stick injuries (nienhaus et al., ; phillips et al., ; van soest and fritschi, ). in addition to the risk of injury, the profession is also at risk of other occupational hazards including exposure to chemicals, car accidents (phillips et al., ) and infectious diseases from zoonotic pathogens (constable and harrington, ; dowd et al., ; epp and waldner, ; gummow, ; jackson and villarroel, ; lipton et al., ; weese et al., ). work days lost because of zoonotic infections are less frequent than days lost to injury (phillips et al., ); however, because of the potential seriousness of some zoonotic infections and increasing reports of occupationally-acquired antimicrobial resistant bacteria in veterinary professionals (cuny and witte, ; groves et al., ; hanselman et al., ; jordan et al., ; weese et al., ), zoonotic risk in the veterinary profession deserves attention. there are no recent data on the risk of zoonotic infections in the british veterinary profession. one study published over years ago estimated . % of veterinary surgeons working for government agencies reported one or more zoonotic infections during their career (constable and harrington, ). research from veterinary populations overseas indicates a substantial risk of
for veterinary practices in the royal college of veterinary surgeons (rcvs) accreditation scheme, guidelines are available and specific standards have to be met to retain accreditation status. only % of practices are members of the accreditation scheme (rcvs, ) and although guidelines and recommendations are available for non-members, they tend to be practice-specific. additionally, the emphasis is on patient, rather than practitioner health. other countries have developed national standards for ipc in veterinary medicine, specifically related to occupationallyacquired zoonotic infections. these include the australian veterinary association guidelines for veterinary personal biosecurity and the compendium of veterinary standard precautions for zoonotic disease prevention in veterinary personnel, developed by the national association of state public health veterinarians in the united states (nasphv). even when national guidelines exist, not all practices have ipc programmes (lipton et al., ; murphy et al., ) . where effective procedures and resources are available, their effectiveness is dependent on uptake (dowd et al., ) . decision-making surrounding ipc practices will depend on a number of different factors. there are few data available focussing on awareness and perceptions of zoonotic diseases within the veterinary profession in the uk, however from studies that have been conducted overseas it appears that awareness is poor and compliance with ipc guidelines is low (dowd et al., ; lipton et al., ; nakamura et al., ; wright et al., ) . in a survey of american veterinary medicine associationregistered veterinary surgeons, under half ( . %) of small animal vets washed or sanitised their hands between patients and this proportion was even lower in large and equine vets ( . % for both). in addition, only a small proportion of large and equine vets washed their hands before eating, drinking or smoking at work ( . % and . %, respectively), compared with . % in small animal vets. veterinary surgeons who worked in a practice that had no formal infection control policy had lower awareness, as did male veterinary surgeons (wright et al., ) . in a smaller survey of american veterinary professionals, although % of respondents agreed it was important for veterinary surgeons to inform clients about the risk of zoonotic disease transmission, only % reported they initiated these discussions with clients (lipton et al., ) . in a study of veterinary technicians and support staff, only . % reported washing their hands regularly between patients (nakamura et al., ) . in a sample of australian veterinary surgeons, . % wore no personal protective equipment (ppe) for handling clinically sick animals and the majority ( . %) wore inadequate ppe for handling animal faeces and urine (dowd et al., ) . in the veterinary profession, the dichotomy between a professional status and increased risk of infection has been viewed as counterintuitive (baker and gray, ) , as it could be expected a comprehensive understanding of zoonotic disease risks would manifest in more risk-averse behaviour. in both medical and veterinary medicine, education has been identified as a key intervention to increase compliance (dowd et al., ; ward, ) ; however good knowledge does not necessarily lead to good practice (jackson et al., ) . compliance is influenced by many factors, including motivation, intention, social pressure and how individuals understand or 'construct' risk (jackson et al., ) . 
understanding of risk and why people engage in risk-mitigating behaviour (or not) is complex and perceived knowledge of the disease is only one factor that should be considered. a better understanding of how veterinary professionals in britain understand the risks surrounding zoonotic diseases will aid in the development of effective and sustainable ipc practices, reducing the risk of zoonotic infections within the profession. this paper examines how the veterinary profession in britain understand zoonotic risk and motivators and barriers for using ppe. a cross-sectional study was conducted october to december ; the sampling frame was all veterinary practices in great britain registered in the rcvs database. the rcvs database holds information on registered veterinary businesses, including private practice, referral hospitals, veterinary teaching hospitals and veterinary individuals. sample size calculations indicated that information from veterinary practices was required for an expected prevalence of %, with a precision of %. assuming a % response rate, practices were selected from the rcvs database by systematically selecting every third practice. the principle veterinary surgeon and head nurse were identified at each practice using the rcvs register and sent a postal questionnaire. a total of questionnaires were posted to veterinary practices. for non-responders, reminder emails were sent out from four weeks after the initial posting and a second reminder, including an electronic copy of the questionnaire was sent out a further four weeks after the first reminder, to any remaining non-responders. the questionnaire was developed based on a similar study in australian veterinary professionals (dowd et al., ) and a larger, multi-country risk perception study on severe acute respiratory syndrome (de zwart et al., ) . the questionnaire was an a page booklet (available in supplementary information), containing four sections including veterinary qualifications and experience, disease risk perceptions, infection control practices and management of zoonotic diseases. the questionnaire included both closed and open-ended questions and was piloted on a small convenience sample of veterinary surgeons, but not veterinary nurses, prior to being finalised. questionnaires were designed in automatic data capture software (cardiff teleform v . ), which allowed completed questionnaires to be scanned and verified and the data imported directly into a custom-designed spreadsheet (microsoft excel, redmond, wa, usa). the clinical scenarios respondents were asked to assess the risk from included contact with animal faeces/urine; contact with animal blood; contact with animal saliva or other bodily fluid; performing post mortem examinations, assisting conception and parturition for animals, contact with healthy animals; contact with clinically sick animals and accidental injury. * post mortem examination. descriptive statistics were performed using commercial software (ibm spss version , armonk, ny, usa). proportions were calculated for categorical data; median and interquartile ranges (iqr) for continuous data. a "risk perception score" was calculated as the mean value of the scores (high risk = ; medium risk = ; low risk = ), based on the participant's opinion of the risk (high, medium or low) of contracting a zoonosis from eight different clinical scenarios detailed in fig. . 
scores for ppe use in five clinical scenarios were calculated using pearson's correlation coefficient to compare reported use of gloves, masks and gowns/overalls to the recommendations in the nasphv guidelines. these guidelines were chosen because no uk equivalent that applies across all veterinary species could be found, but the nasphv standards are likely to be considered as reasonable levels of protection in the uk situation. the clinical scenarios included handling healthy animals (no specific protection advised: possible scores - ); handling excreta and managing dermatology cases (gloves and protective outerwear advised: possible scores − to ); performing post mortems and performing dental procedures (gloves, coveralls and masks advised: possible scores − to ). a score of indicated compliance, < indicated less ppe than recommended was used and > more ppe than recommended was used. redundancy analysis (rda) was used to determine if demographic or other factors accounted for any observed clustering of the motivators or barriers to use of ppe, or for the reported ppe use in different scenarios. redundancy analysis is a form of multivariate analysis that combines principal component analysis with regression, to identify significant explanatory variables. this was performed using the r package "vegan" (oksanen et al., ) , based on the methods described by (borcard et al., ) . the adjusted r value was used to test whether the inclusion of explanatory variables was a significantly better fit than the null model and a forward selection process was used to select the significant variables that explained the greatest proportion of the variance in the response data (borcard et al., ) . permutation tests were used to test how many rda axes explained a significant proportion of the variation. barriers and motivators to use of ppe were assessed by asking respondents to grade the influence of certain factors on their use of ppe (see fig. for a full description of the barriers and motivators). the response options "not at all", "a little" and "extremely" were ranked as , and , respectively. redundancy analyses, as described above, were used to determine if demographic or other factors accounted for any observed clustering of a) barriers or b) motivators to use of ppe. explanatory variables investigated were gender, age, length of time in practice, position (veterinary surgeon or nurse; owner or employee); type(s) of veterinary work undertaken (small, large/equine or exotics/wildlife); previous experience of treating a zoonotic case; level of concern over risk (for themselves or clients). additional explanatory variables investigated in the redundancy analysis for reported ppe use were the barrier and motivator scores and the attitude and belief scores (described below). participants were also asked about their level of agreement with certain statements describing their attitudes and beliefs around zoonotic disease risk and ppe use (see fig. for a full description of the statements); the responses "disagree", "agree" and "strongly agree" were scored as − , and , respectively. principal component analysis was used to investigate clustering of these "attitude" statements. as only two axes contributed variation of interest (according to the kaiser-guttman criterion, which compares each axis to the mean of all eigenvalues), the attitude statements were grouped into two subsets; those that contributed principally to pca (seven statements) and those that contributed to pca (three statements). 
cronbach's alpha was calculated on these subsets of the attitude statements, using the "psy" package in r (falissard, ) , to test whether any of these variables may indicate an underlying latent construct. where correlation was judged to be acceptable or better (cronbach's alpha coefficient > . ), the principal component scores were used as a proxy measure for this latent construct. potential explanatory variables, including the same demographic variables used for the redundancy analyses, and responses to motivators and barriers, were tested using linear regression modelling. multivariable regression models were fitted using the base and stats packages in r software (r core team, ). a manual stepwise selection of variables was performed based on knowledge of expected potential associations and confounders that made biological sense. variables were added one by one to the null model. two-way interactions were tested and variables or interactions were retained if likelihood ratio tests showed a significant improvement in model fit (p < . ). non-significant variables were removed, including variables that later became non-significant when additional variables were added. over the -week study period, a total of useable questionnaires were returned from the invited individuals, giving an overall response rate of . %. for a number of questions, there were some missing data; therefore the denominator for all results was unless otherwise stated. a summary of demographic characteristics of the respondents is presented in table . the majority of respondents had managed a zoonotic case within the months prior to completing the questionnaire ( . %; n = / ). the most commonly reported infections treated were campylobacter (n = ), dermatophytosis (n = ) and sarcoptes scabeii (n = ). overall, . % (n = / ) of respondents reported they had previously contracted at least one confirmed occupationallyacquired episode of zoonotic disease. when including suspected zoonotic diseases, this increased to . % (n = / ). the most common zoonotic disease experienced by respondents who reported confirmed or suspected zoonotic infection was dermatophytosis ( . %; n = / ). the relative frequency of reported zoonotic infections (confirmed and suspected) is reported in fig. , showing the reported frequency in respondents who had qualified or practised outside of britain, compared with veterinary professionals with exclusively british experience. overall, the majority ( . %; n = / ) of respondents were not concerned that they or their colleagues would contract an occupationally-acquired zoonotic disease, however a considerable proportion were ( . %; n = / ). only a small proportion ( . %; n = / ; . - . ) stated they had not thought about the risk of infection. in total, . % (n = / ) of respondents agreed or strongly agreed they had a high level of knowledge regarding zoonotic diseases. based on the eight different clinical scenarios respondents were asked to assess, the highest risk situation for zoonotic disease transmission was considered to be accidental injury, such as a needle stick injury, bite or scratch. coming into contact with animal faeces/urine was also considered high risk for zoonotic disease transmission. these scenarios were classified as high risk by . % (n = / ) and . % (n = / ) of respondents, respectively. the aspect of the job considered to represent the lowest risk of exposure to zoonoses was contact with healthy animals, with . 
over the -week study period, a total of useable questionnaires were returned from the invited individuals, giving an overall response rate of . %. for a number of questions there were some missing data; the denominator for all results was therefore unless otherwise stated. a summary of demographic characteristics of the respondents is presented in table . the majority of respondents had managed a zoonotic case within the months prior to completing the questionnaire ( . %; n = / ). the most commonly reported infections treated were campylobacter (n = ), dermatophytosis (n = ) and sarcoptes scabiei (n = ). overall, . % (n = / ) of respondents reported they had previously contracted at least one confirmed occupationally-acquired episode of zoonotic disease. when including suspected zoonotic diseases, this increased to . % (n = / ). the most common zoonotic disease experienced by respondents who reported confirmed or suspected zoonotic infection was dermatophytosis ( . %; n = / ). the relative frequency of reported zoonotic infections (confirmed and suspected) is shown in fig. , comparing the reported frequency in respondents who had qualified or practised outside of britain with that in veterinary professionals with exclusively british experience. overall, the majority ( . %; n = / ) of respondents were not concerned that they or their colleagues would contract an occupationally-acquired zoonotic disease; however, a considerable proportion were concerned ( . %; n = / ). only a small proportion ( . %; n = / ; . - . ) stated they had not thought about the risk of infection. in total, . % (n = / ) of respondents agreed or strongly agreed that they had a high level of knowledge regarding zoonotic diseases. based on the eight different clinical scenarios respondents were asked to assess, the highest risk situation for zoonotic disease transmission was considered to be accidental injury, such as a needle stick injury, bite or scratch. coming into contact with animal faeces/urine was also considered high risk for zoonotic disease transmission. these scenarios were classified as high risk by . % (n = / ) and . % (n = / ) of respondents, respectively. the aspect of the job considered to represent the lowest risk of exposure to zoonoses was contact with healthy animals, with . % (n = / ) of respondents considering this to involve a low risk of exposure to disease (fig. ). the amalgamated risk perception scores ranged from (all scenarios considered low risk) to (all scenarios considered high risk), with a median of . (iqr . - . ). the majority of respondents reported they were aware of their practice having standard operating procedures (sops) related to infection control practices ( . %; n = / ). all workplaces provided ppe for members of staff, although . % did not provide training on how to use it. the majority provided separate eating areas ( . %; n = / ) and restricted access for staff and visitors to patients in isolation ( . %; n = / ). when asked about the level of ppe used in five different clinical settings, . % (n = / ) reported they would not use any specific ppe for handling healthy animals, in line with the nasphv guidelines. when handling dermatology cases, % (n = / ) reported using no ppe. only . % (n = / ) reported not using any ppe for handling urine or faeces; one respondent did not use any ppe for post mortem examination (n = ; . %), and % (n = / ) did not use any for performing dentistry work. correlation between the ppe scores for the different scenarios was low; the greatest correlation (r = . ) was between the scores for handling excreta and for handling dermatology cases. there was no evidence that respondents who wore more ppe than required in the guidelines (i.e. gloves and/or masks) for handling healthy animals would correctly select the appropriate level of ppe (i.e. gloves, masks and a protective coverall) for post mortem or dentistry. a redundancy analysis indicated that greater ppe use (a higher ppe score) was negatively correlated with a fatalistic attitude for the two higher risk scenarios. belief that sops acted as a motivating factor to use ppe, and agreement that "i consciously consider using ppe in every case i deal with", were positively correlated with greater ppe use for dermatological cases, handling healthy animals and handling excreta (fig. ). all respondents indicated that perceived risk would have some effect on their motivation to use ppe, either a little (n = / ; . %) or extremely (n = / ; . %). respondents were also strongly motivated by previous experience with similar cases (n = / ; . %) and by a high profile or recent disease outbreak (n = / ; . %). few respondents indicated that any of the suggested barriers would have a strong influence as a deterrent to using ppe; safety concerns were most frequently cited, with . % (n = ) of respondents stating this would be an extreme deterrent to using ppe. when combining both positive responses (extreme and a little influence), time constraints and safety concerns were the most frequently cited barriers, with . % (n = / ) and . % (n = / ) of respondents, respectively, indicating these barriers would affect their decision not to use ppe. potential barriers that most respondents considered had no influence on their decision to use ppe were negative client perceptions and ppe availability, with . % (n = / ) and . % (n = / ) of respondents stating this, respectively. demographic variables that had significant associations with responses regarding motivators and barriers towards the use of ppe are illustrated in fig. . the explanatory variables in the model were statistically significant; however, they only explained a small amount of the variation in the respondents' perceptions of barriers (adjusted r-square . %) and motivators (adjusted r-square . %).
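for reference, adjusted r-squared figures of this kind correspond to vegan's RsquareAdj() applied to the fitted ordination; continuing the hypothetical objects from the earlier sketch:

RsquareAdj(rda_full)$adj.r.squared   # share of variance explained by the model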
respondents with previous experience of treating a case of zoonotic disease were less likely to regard time or financial constraints, or concern about adverse animal reactions, as a deterrent to using ppe (fig. a). veterinary surgeons were more likely than nurses to be deterred from using ppe because of concerns about negative client perceptions (fig. a), although positive client perceptions were marginally more likely to act as encouragement in both vets and nurses who reported themselves concerned about zoonotic risk in relation to clients (fig. b). those working in large animal practice were more likely to be motivated to use ppe by concerns over liability, and nurses tended to be more motivated than veterinary surgeons by sops and by concern over the perceived risk to themselves. respondents were asked to state their level of agreement with "attitude" statements (see fig. for a description of the statements) reflecting different aspects of zoonotic disease risk control in the workplace. all respondents agreed that using ppe and practising good equipment hygiene was an effective way of reducing the risk of zoonotic disease transmission. the majority thought they had a high level of knowledge regarding zoonoses (n = / ; . %) and that they were expected to demonstrate rigorous infection control practices (n = / ; . %). however, respondents ( . %) stated they just hoped for the best when trying to avoid contracting a zoonotic disease, and ( . %) were concerned their colleagues would think they were unnecessarily cautious if they used ppe in their workplace. responses to seven of these "attitude" statements tended to cluster together along the first pca axis (fig. , statements a to g). cronbach's alpha coefficient for these statements was . , suggesting an acceptable level of internal consistency and a potential underlying latent construct (interpreted here as a "positive attitude" towards ipcs) for these responses. statements h to k, whilst all contributing greater weight to pca axis , had an alpha coefficient below . and were therefore evaluated individually. respondents' scores from the first principal component axis (fig. ) were used as a proxy to represent this potential underlying "positive attitude" towards zoonotic disease risk reduction, and a multivariable linear regression model was used to investigate potential explanatory factors. the only demographic variable that significantly altered model fit was profession, with veterinary surgeons tending to score lower than nurses on this "positive attitude". some of the factors identified as motivators and barriers also had a statistically significant association with the outcome. those who agreed that sops, positive client perceptions and risk to themselves motivated them to use ppe scored more highly, whereas those who regarded time constraints as a barrier to ppe use tended to have lower positive attitude scores (table ). in total, . % (n = / ) of respondents agreed or strongly agreed with the statement, "i just hope for the best when it comes to trying to avoid contracting a zoonotic disease". a multivariable model suggested that respondents who had spent less time in practice tended to agree more with this "fatalistic" attitude, as did those who held the opinion that negative client perceptions deterred them from using ppe. furthermore, individuals with higher risk perception scores (i.e. those who believed they tended to have a medium to high risk of exposure to zoonoses from clinical work) were more likely to agree that they "just hope for the best" (table ).
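the proxy construction and regression steps described in the last two paragraphs can be sketched with base r's prcomp() and lm(); the data and all names below are synthetic placeholders, not the study's variables.

set.seed(3)
attitudes <- as.data.frame(matrix(sample(-1:1, 60 * 10, TRUE), nrow = 60))

# principal components of the scored attitude statements
pca <- prcomp(attitudes)

# kaiser-guttman criterion: retain axes whose eigenvalue exceeds the mean
eig <- pca$sdev^2
which(eig > mean(eig))    # in the study, two axes were retained

# pc1 scores as the latent "positive attitude" measure, then a
# multivariable linear model on candidate explanatory factors
dat <- data.frame(positive_attitude = pca$x[, 1],
                  profession = factor(sample(c("vet", "nurse"), 60, TRUE)),
                  sop_motivator = sample(0:2, 60, TRUE),
                  time_barrier = sample(0:2, 60, TRUE))
summary(lm(positive_attitude ~ profession + sop_motivator + time_barrier,
           data = dat))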
a regression model was also constructed for the statement, "if i use ppe, others in my workplace think that i am being unnecessarily cautious". explanatory variables included an interaction between gender and profession; nurses, particularly male nurses, were more likely to agree, whereas there was no significant gender difference among veterinary surgeons. the aim of this research was to explore zoonotic disease risk perceptions within a cross-section of the veterinary profession in britain, and to identify barriers and motivators towards infection control practices and the use of ppe to minimise the risk of disease transmission. the large proportion of respondents ( . %) who had contracted either a confirmed or suspected occupationally-acquired zoonotic infection highlights the level of occupational risk encountered by veterinary surgeons and veterinary nurses. a substantial proportion of respondents stated they were concerned about the risk of zoonoses ( %), and the majority thought the highest risk of transmission was through accidental injury, despite few reported zoonoses in the study being transmitted this way. this dissonance may reflect other occupational risks encountered by veterinary professionals, of which zoonotic diseases represent only a small proportion. data from studies conducted overseas suggest veterinary medicine is a high risk profession. in one survey of australian veterinary professionals, % reported at least one physical injury over a year period (phillips et al., ). in addition to practice-acquired injuries, such as dog and cat bites, scalpel blade cuts and lifting of heavy dogs, the risk of car accidents was also noted (phillips et al., ). further research in the german veterinary profession highlighted workplace accidents as the most prevalent occupational hazard ( . %), followed by commuting accidents ( . %); occupationally-acquired zoonoses represented only . % of the total hazards in the study (nienhaus et al., ). practitioners are clearly working in a risky environment, particularly large animal vets, as farm environments are known to be inherently dangerous. a total of fatal injuries and major injuries were reported in british farmers or farmworkers in - (hse, ), and a recent survey by the british equine veterinary association revealed that, on average, equine vets sustain seven to eight work-related injuries over a year period (beva, ), highlighting just how hazardous these environments can be. few data are available on occupational injuries in the british veterinary profession; however, for those living or working in what could be interpreted as a high-risk environment, constant exposure may lead to habituation to, or normalisation of, risk (clouser et al., ). individuals in this study who tended to grade common clinical scenarios as posing a moderate to high risk of zoonosis exposure were also more likely to "just hope for the best", perhaps suggesting they have normalised these situations and do not perceive them as requiring additional precautions. within the veterinary environment, it is also possible that risks are rationalised; when faced with a very tangible risk of accident or injury, the more imperceptible risk of zoonotic infection becomes less important.
this rationalisation of risk is also noted in the healthcare profession, where healthcare workers are more careful when handling sharps than in complying with ipc practices for infectious diseases (nicol et al., ). the invisibility of the disease also plays a role here; the pathogens are not visible, so the perception of the risk they pose is more abstract. in addition, there is often a time lapse between exposure to the pathogen and onset of clinical signs, making an association between suboptimal ipc behaviour and outcome difficult (cioffi and cioffi, ). in the uk, personal risk receives little attention in the veterinary profession's media, especially when compared with issues such as mental health, with reports of high levels of psychological distress and suicide in the profession (bartram et al., ) and the inclusion of issues around stress and mental wellbeing in surveys (vet futures, ) and veterinary curricula. this makes zoonotic disease risk less visible and may subject it to an availability heuristic, where the likelihood of an event is judged by how easily an instance comes to mind (tversky and kahneman, ). the absence of diseases such as rabies from the uk may also mean that veterinary professionals underestimate the risk of zoonoses because they consider the impacts to be relatively minor, short-term and treatable. this affect heuristic may be especially pronounced when decisions are made under time pressure (finucane et al., ), perhaps reflected in this study's finding that those who viewed time constraints as a barrier to their use of ppe had less positive attitudes towards it. the disconnect between risk perception and health protective behaviour in the present study could be explained by perceived vulnerability. a risk might be acknowledged, yet if an individual does not feel vulnerable to this risk, there is no motivation or intention to change their behaviour. this perceived vulnerability is one of the factors considered in protection motivation theory, where concern about a potential threat influences perception of the risk, i.e. the more concerned an individual is about a disease, the higher the risk they perceive it to pose. if an individual feels vulnerable, this acts as a motivator for behaviour change (schemann et al., ). this behavioural model has been applied to horse owners following the equine influenza outbreak in australia, where different levels of perceived vulnerability were identified in a cross-section of the equine sector (schemann et al., , ). perceived vulnerability may be influencing health protective behaviour in the present study. it is possible that veterinary professionals, because they feel knowledgeable about zoonotic diseases, feel less vulnerable to the risks they pose. this lack of perceived vulnerability may account for the substantial proportion of respondents who stated they would not use ppe when handling clinically sick animals, perhaps because they are confident in their ability to identify those cases with potentially zoonotic or infectious aetiologies. identification of risk to self as a motivating factor was associated with a more "positive attitude" towards ppe use, but being a nurse was independently correlated with both of these variables. possibly because nurses often have less influence in decisions over diagnostics or the handling of cases, they may feel more vulnerable.
the protection motivation theory is only one of numerous health behaviour models that have been applied to both medical and veterinary research. these models are useful for explaining behaviour change in relation to infection control or biosecurity; however, they have had limited success in practice (pittet, ). the main criticism of these models is that they assume behaviour is rational, controllable and therefore modifiable (cioffi and cioffi, ). in reality, behaviour is affected by many external influences such as culture and society. society and culture are fluid, constantly changing concepts, which makes incorporating them into behavioural models problematic. so, while these models are useful in explaining behaviour change to a certain extent, to gain a full understanding of what drives or inhibits behaviour change, social psychology and qualitative research are essential for making real impacts on practice. in the current study, individuals motivated by sops were found to have more positive attitudes towards ppe and also to report better compliance with ppe guidelines for medium-risk scenarios, such as dermatology cases and handling excreta. the "positive attitude" construct, related to self-efficacy, knowledge and confidence in equipment and practices, also clustered with a feeling that there is an expectation to demonstrate good practice. this could be a reflection of the influence of practice culture on behaviour. in human healthcare, organisational factors have been identified as one of the main drivers behind poor compliance with ipc practices (cumbler et al., ; de bono et al., ). as compliance with infection control intersects individual behaviour and the cultural norms of the practice, the culture of veterinary practice will also be influencing behaviour surrounding infection control. it appears from the present study that when veterinary practices promote a culture of positive health behaviour and have high expectations of employees, this acts as a motivator for compliance with ipc practices. this highlights that behaviour change should also be implemented at an organisational level, rather than focussing only on individual behaviour. veterinary surgeons were more concerned than nurses that using ppe would be perceived negatively by clients. this attitude could reflect the importance of the vet-client relationship in veterinary practice. this is particularly relevant in farm animal practice, where vet-farmer relationships are often cultivated over extended periods and each individual agricultural client represents a significant proportion of a practice's income. respondents working in large animal practice were more likely to be motivated to use ppe by liability concerns, again potentially a reflection of the pressure veterinary professionals feel from their clients. this is an interesting dichotomy, as the use of ppe protects not only the practitioner but also the animal from zoonotic disease transmission. educating farm clients as to what infection control practices they should expect during clinical work on the farm may help mitigate concerns about negative client perceptions. choices around ppe use appear to be specific both to individuals and to contexts, as demonstrated by the low correlation between ppe scores in different clinical scenarios. this finding that protocols are often adapted to a specific situation has been observed previously in veterinary professionals (enticott, ).
the models that people construct to inform their behavioural decision making are highly individual and influenced by their biology and environment, but also by their past experiences (kinderman, ). in the present study, previous experience of treating zoonotic cases was correlated with lower concern about potential barriers to ppe use. this may suggest that practical experience of dealing with zoonoses is more influential than theoretical knowledge in negating negative attitudes to ppe use. a limitation of this study, as with any questionnaire-based study, is that self-reported behaviours may not necessarily reflect actual practice. this discrepancy between reporting behaviours and actually performing them has been observed previously, particularly in relation to infection control practices and hand hygiene. one uk-based study highlighted no association between self-reported and observed hand-hygiene practices in a sample of healthcare professionals (jenner et al., ), showing that self-reported behaviour should be interpreted with caution in any context. observation is considered the gold standard method of assessing behavioural practices; however, it remains subject to observer bias (racicot et al., ), and video recording has been used recently to monitor hand hygiene practices (boudjema et al., ). these methods could also be applied effectively in a veterinary context, and qualitative research methods, such as ethnography, would also provide valuable insights into the culture and practices of infection control and health protective behaviours in veterinary practice. the veterinary practices invited to take part in this study were randomly selected, using systematic random sampling, from the rcvs database. this system of using the rcvs database to sample the veterinary profession has been used previously for other research studies and is an established method of sampling this target population (nielsen et al., ). the selection of practices was random; however, the selection of participants at each practice may have been subject to selection bias. to facilitate a greater response rate, where data were available, individual respondents at each practice were selected from the rcvs register. to ensure this was consistent, the principal veterinary surgeon and head nurse were selected for each practice. using individual names may have increased the likelihood of the participant responding; however, this may have introduced some selection bias, as the selected participants are likely to be more experienced professionals. our results suggested that some workplace factors, such as sops and the expectations of colleagues, influenced respondents' perceptions of and attitudes to ppe use. these might be expected to cluster within practices; the responses from a veterinary surgeon and a nurse from the same practice might not be completely independent. however, it was not feasible to introduce practice as a random effect, as not enough practices returned two responses ( . % returned responses from a veterinary nurse and veterinary surgeon from the same practice). as with any questionnaire-based research, this study will be subject to an element of responder bias, and the relatively low response rate of this study may accentuate this bias. this is particularly evident with male nurses, who are few in number, making them difficult to target using random selection methods.
according to the latest rcvs annual report, male nurses represented just . % of the total veterinary nurse population in the uk (rcvs, ); in the present study, % ( % ci . - . ) of respondents were male nurses. the rcvs database used to sample the veterinary population for this study does not contain information on specialism or type of practice, so it is not possible to assess whether this sample is representative of the wider veterinary profession. however, the demographic data on respondents are similar to data from the rcvs annual report; the mean age in our study was years, compared with years in the annual report. in addition, the gender split was similar: in our study, . % ( % ci . - . ) of respondents were female, and the rcvs reported . % were female (rcvs, ). despite similarities between the respondents and the veterinary population in the uk, the low response rate means the results from this sample may not necessarily be generalisable to the wider veterinary population; however, this study is the first to provide these baseline data on attitudes and beliefs regarding zoonoses in the british veterinary population, which can be built on in future studies. the majority of respondents worked in small animal practice, which partly reflects the distribution of british practice types; but as the questionnaire was posted to the practice, it may have been easier for small animal practitioners to respond, as the majority of their time is spent on the practice premises. this means the study may be more representative of small animal veterinary professionals than of large animal and equine practice. to address this in future studies, stratified sampling would be a useful method to ensure representative samples from each sector of the veterinary profession. this study aimed to investigate risk perceptions of zoonotic disease transmission in the veterinary profession in britain. the high infection rate within the profession suggests that transmission of zoonotic infections from patient to clinician should be of concern. this study identified several concepts that were reported to influence the use of ppe, including a fatalistic attitude, the social environment and an individual's position within the practice. improving the education provided to veterinary professionals may help improve compliance with sops and infection control practices to a certain extent; however, this study has highlighted that increased knowledge does not necessarily equate to exhibiting risk-mitigating behaviour. this suggests that the construction of risk is complex, circumstance-specific and influenced by a number of different internal and external factors. a qualitative study, using mixed qualitative methods including in-depth interviews and focus group discussions, to explore the construction of risk in the veterinary profession is currently being developed to understand these concepts in more depth.
references
survey reveals high risk of injury to equine vets
a review of published reports regarding zoonotic pathogen infection in veterinarians
interventions with potential to improve the mental health and wellbeing of uk veterinary surgeons
numerical ecology with r
journal of nursing & care
hand hygiene analyzed by video recording
challenging suboptimal infection control
keeping workers safe: does provision of personal protective equipment match supervisor risk perceptions?
risks of zoonoses in a veterinary service
culture change in infection control
mrsa in equine hospitals and its significance for infections in humans
organizational culture and its implications for infection prevention and control in healthcare institutions
zoonotic disease risk perceptions and infection control practices of australian veterinarians: call for change in work culture
the local universality of veterinary expertise and the geography of animal disease
occupational health hazards in veterinary medicine: zoonoses and other biological hazards
psy: various procedures used in psychometry
the affect heuristic in judgments of risks and benefits
injury in australian veterinarians
molecular epidemiology of methicillin-resistant staphylococcus aureus isolated from australian veterinarians
a survey of zoonotic diseases contracted by south african veterinarians
health and safety in agriculture in great britain
methicillin-resistant staphylococcus aureus colonization in veterinary personnel
a survey of the risk of zoonoses for veterinarians
infection prevention as a show: a qualitative study of nurses' infection prevention behaviours
discrepancy between self-reported and observed hand hygiene behaviour in healthcare professionals
carriage of methicillin-resistant staphylococcus aureus by veterinarians in australia
new laws of psychology: why nature and nurture alone can't explain human behaviour
hand-hygiene practices and observed barriers in pediatric long-term care facilities in the new york metropolitan area
adherence to surgical hand rubbing directives in a
a survey of veterinarian involvement in zoonotic disease prevention practices
significant injuries in australian veterinarians and use of safety precautions
evaluation of specific infection control practices used by companion animal veterinarians in community veterinary practices in southern ontario
hand hygiene practices of veterinary support staff in small animal private practice
the power of vivid experience in hand hygiene compliance
survey of the uk veterinary profession: common species and conditions nominated by veterinarians in practice
work-related accidents and occupational diseases in veterinarians and their staff
disease and injury among veterinarians
the lowbury lecture: behaviour in infection control
rcvs facts
evaluation of the relationship between personality traits, experience, education and biosecurity compliance on poultry farms in québec
can horse owners' biosecurity practices following the first equine influenza outbreak in australia
perceptions of vulnerability to a future outbreak: a study of horse managers affected by the first australian equine influenza outbreak
occupational health risks in veterinary nursing: an exploratory study
judgment under uncertainty: heuristics and biases
report of the survey of the bva voice of the profession panel
the role of education in the prevention and control of infection: a review of the literature
occupational health and safety in small animal veterinary practice: part i - nonparasitic zoonotic diseases
suspected transmission of methicillin-resistant staphylococcus aureus between domestic pets and humans in veterinary clinics and in the household
infection control practices and zoonotic disease risks among veterinarians in the united states
perceived threat, risk perception, and efficacy beliefs related to sars and other (emerging) infectious diseases: results of an international survey

the authors gratefully acknowledge all participating veterinary nurses and veterinary surgeons, and dr j.l. ireland for her guidance and advice. this work was supported by the national institute for health research health protection research unit (nihr hpru) in emerging and zoonotic infections at the university of liverpool, in partnership with public health england (phe) and in collaboration with the liverpool school of tropical medicine. charlotte robin is based at the university of liverpool. the views expressed are those of the author(s) and not necessarily those of the nhs, the nihr, the department of health or public health england. no competing interests were declared. approval for this study was granted by the anglia ruskin university faculty of health, social care and education research ethics panel.

key: cord- -v cveegl authors: déportes, isabelle; benoit-guyod, jean-louis; zmirou, denis title: hazard to man and the environment posed by the use of urban waste compost: a review date: - - journal: science of the total environment doi: . / - ( ) - sha: doc_id: cord_uid: v cveegl

abstract: this review presents the current state of knowledge on the relationship between the environment and the use of municipal waste compost in terms of health risk assessment. the hazards stem from chemical and microbiological agents whose nature and magnitude depend heavily on the degree of sorting and on the composting methods. three main routes of exposure can be identified and are quantified in the literature: (i) the ingestion of soil/compost mixtures by children, mostly in cases of pica, can be a threat because of the amounts of lead, chromium, cadmium, pcdd/f and fecal streptococci that can be absorbed. (ii) though concern about contamination through the food chain is weak when compost is used in agriculture, some authors anticipate accumulation of pollutants after several years of disposal, which might lead to future hazards. (iii) exposure is also associated with atmospheric dispersion of compost organic dust that conveys microorganisms and toxicants. data on the hazard posed by organic dust from municipal composts to the farmer or the private user are scarce. to date, microorganisms are only measured at composting plants, which raises the issue of extrapolation to environmental situations. lung damage and allergies may occur because of organic dust, gram negative bacteria, actinomycetes and fungi. further research is needed on the risk related to inhalation of chemical compounds.
in the management of household wastes, the sorting-composting approach presents many advantages: (i) sorting not only provides for the selection of recyclable and compostable materials, it also reduces the volume of waste to be treated by incineration; the putrescible part represents - % of the weight of the entire municipal solid waste (msw) [ , ]. (ii) the volume of the putrescible portion is further reduced during the composting process. (iii) compost is widely utilized in agriculture, especially in europe [ ], and its use is also strongly encouraged in the usa [ - ]. depending on its degree of maturity and quality, it can be used in vineyards, for mushroom farming (fresh compost), in horticulture (hot-beds with fresh compost), sylviculture, country-side planning (flower beds), preparation of sports fields or golf courses, maintenance of public or private parks, maintenance of motor-way embankments, covering of waste discharge systems, or in the rehabilitation of sites such as mines and sand pits [ - ]. the chemical components play important roles in the physical and chemical properties of the soil [ , ]. amendment is more interesting for the improvement of soil characteristics than for the fertilizer value of the compost [ ]. indeed, the use of municipal solid waste (msw) compost influences the water retention capacity, resistance to erosion, density, ph, conductivity and nutrient content of the soil [ , ]. there are three main methods of composting: gathering of waste in windrows that are turned at regular intervals; static piles of waste that are aerated by deliberate passage of air within the mass (aerated static piles); and gathering of waste materials in a totally enclosed and controlled environment, that is, in a reactor [ , - ]. many organic waste products are used for compost: yard wastes (yw), sewage sludge (ss), municipal solid wastes (msw), and industrial and agricultural wastes (wood, animal droppings, etc.) [ ]. in the present work, we have only studied composts of domestic waste, alone or combined with ss or yw. compost is not a harmless product; msw may contain a number of contaminants posing health or environmental risks. these chemical or biological contaminants may expose different populations to health hazards, ranging from composting plant workers to consumers of vegetable products grown with compost fertilizers. for example, the humus part of composts consists of numerous ligands, some of which are more or less irreversibly bound to metallic elements [ ]. the metals may be released into the soil when a change in environmental conditions, such as ph, occurs during application of the compost [ ]; released metals then become bioavailable for plants. among the toxic elements found in composts, arsenic, asbestos, hexavalent chromium, nickel and pcb have been classified as carcinogens or potential carcinogens [ ]. finally, composts could be potentially hazardous through the presence of microorganisms [ ]. in france, out of tons of msw per year, % ( t) are treated and transformed into tons of compost [ ]. due to the diversity of the populations exposed to the use of compost, the huge mass of products involved, and the potential risk of contamination, it is of interest to evaluate the public health and environmental risks arising from the utilization of compost originating from msw.
the different populations at risk through various routes of contamination are also discussed. in the preparation of this review, comparable literature data on msw from europe, north america and japan were collected [ , ]. all the articles selected dealt with composts of msw origin. on account of the similarity of the problems encountered, our literature review also covered compost made from sediment wastes from water treatment stations and green wastes (gardens, parks, forests, etc.). for the same reasons, some articles treating non-composted ss were considered. the quality of a compost determines both its marketability [ ] and its hazard-free utilization, and may be viewed from two standpoints: agronomic value and absence of contaminants. many authors have written on the agronomic quality of composts [ , - ]. contamination of the finished product may come from the primary material, that is, essentially from the content of our garbage bins as well as from other composted wastes (yw or ss). metals, for example, are contributed by plastics (pigments and stabilizers), batteries (torch and radio), car batteries, electronic components, electric bulbs and their sockets, leather materials, glassware and ceramics [ , , ]. asbestos is found in household refuse because of insulation materials [ ]. a number of organic compounds (solvents, grease, pesticides, etc.) find their way into our garbage bins as residues. these are also found in yw (lindane, pcb, pcdd, pcdf) and in ss (pah, pesticides, halogenated hydrocarbons, phthalates, esters, pcb) [ - ]. sorting of msw is increasingly used and has proved very effective in reducing contamination of the finished product. mercury, lead, chromium, cadmium, zinc and copper are mostly derived from batteries, glassware, plastics and ferrous materials. elimination of these recyclable components before making the compost leaves not more than % of the lead and copper and - % of the zinc and nickel, which persist in papers/cardboards and are more or less strongly bound to organic materials [ ]. sorting may be carried out at the source by the producer or at the waste disposal plant, implemented manually or automatically by special machines, especially in the case of small amounts of plastics and metals, or after composting [ , , ]. early sorting ensures lower contamination with organic and inorganic pollutants [ , , ]. biological contamination is also encountered in msw. pathogenic microorganisms are likely to come from dirty discarded cloth, faeces of domestic animals, sanitary tissue papers or putrefying foods [ , ]. as a result of their origin, contamination exists in ss, where one can easily find many strains of bacteria, viruses, fungi and other parasites (table ). the composting procedure is very important. the parameters that must be controlled during this process are: the composition of the mass, aeration, temperature, humidity, the carbon/nitrogen (c/n) ratio and ph. some components such as keratinous wastes, papers and cardboards are not easily composted [ ]. depending on the method of compost formation that is chosen, these parameters may be controlled to a reasonable extent [ - ]. anaerobic conditions may provoke an increase in the duration of composting and, more importantly, an adverse sanitary condition if the temperature conditions are not fulfilled. a hygienic compost is free of the pathogens that it might have contained.

table . pathogens that may be found in sewage sludge and in municipal solid wastes ( , , , , ).
there is an inherent risk in the use of unhygienic composts [ , - ]. it is possible to disinfect them by monitoring the temperature during composting. as can be seen in table , the destruction of the pathogens depends on the temperature reached and the duration of the oxidation process. temperatures of about - °c for at least days are recommended [ , , ]. this has proved effective against salmonella spp. and other parasites [ , ].

table . level of temperature and length of time necessary to destroy some pathogens present in primary products of composts ( ).

unfortunately, these conditions are not always fulfilled, because temperature depends upon the degree of oxidation of the entire mass of compost. control of temperature and aeration is not possible for the entire mass of wastes in windrow composting. the compost is therefore not always disinfected by this method. the use of windrows presents, in addition, the risk of contaminating clean areas with unclean wastes from other parts of the heap infected during turning [ , , , ]. it should be noted, however, that these pathogens are not in their natural environment, that is, in their hosts. this does not favor their survival. while fungal and bacterial strains have the possibility to multiply, viruses and other parasites can at best acquire a form of resistance. the pathogens are forced to compete with other microorganisms which are in their natural environment. these are mostly bacterial strains at the beginning of the composting process, with fungi and actinomycetes dominating by the end, as a result of the selection pressure exerted by increased temperature [ , ]. the method of composting permits both improvement of the hygienic condition of the compost and chemical decontamination. during the degradation of the primary wastes, the chemical contaminants are often implicated in the process of transformation. this is supported by laboratory results which show that chlorpyrifos, isofenphos, diazinon (insecticides) and pendimethalin (herbicide) found in plant wastes are totally degraded by composting [ ]. other studies have reported a degradation of % of chlorinated pesticides and % of pcb [ ]. the phenomenon is widespread, and it has been tried for the rehabilitation of soils by composting [ ]. only pollution by organic wastes can be treated by this approach; by contrast, mineral contaminants tend to concentrate during composting through the reduction in volume. the process used must encourage good maturity of the compost. maturity refers to the degree of biological, chemical and physical stability of the compost. it can be measured in several ways [ - ]. the terms maturity and stability overlap: while maturity describes such aspects of the compost as color, friability and odor, stability is based on the whole set of physical, chemical and biological evaluations [ ]. the use of an immature compost may present an agronomic problem, since it may become toxic to plants [ , ]. the degradation of the wastes continues in the soil after application of the compost, with several toxic intermediate metabolites being present, such as phenols, ammonia, and acetic, propionic, butyric and isobutyric acids [ , ]. on the other hand, when immature composts are used, the risks inherent in increasing soil temperature, competition between plants and microorganisms for available soil nitrogen, and a reduction in the level of soil oxygen are equally important unfavorable factors for plant growth.
the maturity of compost is also important because of its potential to create nuisance when used. the decomposition of immature compost is completed in the soil under anaerobic conditions, resulting in odor [ ]. furthermore, the bioavailability of heavy metals is a function of the degree of maturity of the compost, since the humic material is capable of binding them. experiments have shown that metals become less bioavailable with increasing maturity. this condition, in turn, limits the risk of spreading and hence contamination of the food chain (via plants) and the entire environment [ , - ]. the parameters discussed above are controlled and described differently in laboratory experiments and in field measurements, making a summary of such diverse data difficult. the quantitative data considered in this review are the highest and lowest values encountered in composts, not averages, since there is wide variability between the data of individual articles. in addition, the values of the parameters that determine the level of contamination of each compost are not always described in each article. man and his environment are exposed to contaminants from composts during processing, storage and utilization. fig. summarises the different means of contamination and their risk implications. there are limited literature data on the storage of compost. oral contamination through contaminated hands is possible for biological and chemical contaminants. this risk is particularly pronounced for children. studies have shown that a child may ingest as much as mg/day of dust from the soil, and when the child suffers from geophagy (or pica, a pathologic exaggeration of hand-to-mouth behaviour) this may increase up to g [ , ]. environmental contamination may result from open air storage of compost or from inadequate protection which exposes it to rain. pollutants are then washed out by rain and carried along by water run-off or spread by percolation into the soil. wind may also disperse inadequately stored composts. the storage of immature compost also provokes the emanation of a nauseating odor. application: the applicator may be wounded if sharp objects are left in the compost. compost generates dusts, and many particles are suspended in the air during spreading. their inhalation may be dangerous to health, since they adsorb both biological and chemical contaminants. dispersion of these dusts and their components in the ecosystem may also constitute an environmental risk. spread compost: composts add chemical or biological pollutants to the soil. these pollutants exist either in a free state or bound to the humus components of the soil. the free chemical contaminants are said to be bioavailable and may be assimilated by plants. there is therefore a potential risk of contamination of the food chain through plants cultivated on such soils, and of animals fed with these plants, as well as of their predators. the external parts of plants are in contact with treated soil; a consumer who does not wash or peel edible materials is thus exposed to the risk of either chemical or biological contamination. the consumption of animals fed with such plants might also lead to hazard; under this condition, meat transmits essentially chemical contaminants and parasites. from a different point of view, the free chemical and biological contaminants may be washed along by flowing water or rain and may percolate down to groundwaters.
this is a means of dispersion in the environment and a threat to man if these infiltrations reach groundwater used for drinking water. the concentration of contaminants in composts, their bioavailability, and their recovery rate in plants cultivated on compost-treated soils and in compost leachates will now be described. bioavailability of contaminants may be evaluated in different ways, using different agents for extraction or binders. bioavailability is calculated as the ratio of the weight of the extract to the quantity of the compound in the compost or treated soil. this knowledge helps in the estimation of the free fraction of metals that may be assimilated by plants or capable of contaminating water bodies or soils. odor resulting from soil treatment with immature compost will not be dealt with here because of the very limited amount of research done in this area [ , ]. only a certain number of compounds, selected according to their known toxicity, will be discussed in the following section. their general characteristics, as well as those of other contaminants, are presented in table . cadmium: the amount in compost ranges from . ppm to . ppm (fig. , graph a) (unless otherwise stated, all data are in dry weights), although the minimum level is much lower, since about observations did not reach the detection level of the analytical method used [ , ]. although there is no statistical difference in the quantity of cadmium in composts of diverse origins (p = . ), uncomposted sewage sludge contains higher levels ( - ppm) [ - ]. bioavailability varies from about to % (fig. ). for the record, the acceptable maximum concentration of cd in potable water in france is µg/l. it is not detected in leachates originating from compost constituted from plant materials [ ]. several authors have considered the assimilation of cd by plants as a result of treatment of soils with compost [ , - ]; a cd content of . ppm was thus found in beetroot [ ]. a cd range from . ppm to . ppm was found in beetroot cultivated on soils treated with ss composts or yw. table indicates that assimilation depends on the plant and on the level of metals in the soil. similar results have been obtained in other studies carried out in fields treated with uncomposted ss. one month after planting, tobacco plants assimilated - % of the cd present in the soil, while maize took in only % [ ]. the effects are long lasting. for example, on a soil treated for years, . - ppm cd was observed in beetroot several years after treatment was stopped [ ]. a concentration of - ppm was found in the leaves of maize years after a treatment with ss was stopped [ ]. lead: the levels of lead in composts of msw range from . ppm to ppm (fig. , graph b), with a statistically significant difference between composts made partially or completely of msw and those without msw (p = . ); the latter contain much lower levels of pb. levels may reach - ppm in non-composted sewage sludge [ , - ]. bioavailability ranges from . % to . % (fig. , graph b). these two extreme values were measured in two soil/compost mixtures of urban wastes. the methods of extraction were similar, but the ph of the two soils was very different: the lowest bioavailability was measured in a soil of ph . [ ], while that of ph . [ ] gave a very high bioavailability value. the release of pb from ss is known to be low [ ]. experiments carried out with lysimeters gave con-

table . literature dealing with compounds present in urban compost.
nickel: the usually observed level of nickel in composts, . - ppm, is only slightly dependent on the original primary materials from which the composts were formed (p = . ), though the highest values are obtained in composts containing ss (fig. , graph c). the extreme values of bioavailability range from . % to . % (fig. ). mercury: the only available information deals with the mercury content of composts. no author, to the best of our knowledge, has been interested in its bioavailability, in its presence in plants as a result of the application of composts, or in leachates. the quantity found in a compost depends on the origin of the compost, and there is a significant difference between composts of different origins (p = . ). composts of msw have the highest levels of mercury, which range from . ppm to . ppm (fig. , graph e). selenium: the selenium content of composts varies from . ppm to . ppm, with a difference as a function of their origins (p = . ) which may be explained by high contamination of ss (fig. , graph f). yw and msw composts have similar concentrations of selenium, but data are scarce. beetroot grown on soils treated with a msw compost may contain about . ppm to . ppm of selenium, depending on the origin of the compost [ ]. we have no information on selenium in the leachates of composts. arsenic: very little information on measurements of arsenic in compost was found in the course of this research. the values range from ppm to ppm. the application of msw compost for more than years did not increase the arsenic content of plants cultivated in sites [ , ]. asbestos: the presence of asbestos has been studied in different experiments [ , ]. in eight samples of msw composts, fibres were found in all, while observations were positive out of carried out with non-msw composts. in contrast to metals, organic contaminants may be metabolized by microorganisms during the process of composting [ ]. only very stable compounds persist in composts, because of the length of the composting process, the presence of microorganisms and the high temperature. hence, pesticides such as diazinon, which has a life span of weeks, were not found in compost [ ], while pentachlorophenol, which may persist for years in the soil, is found in compost [ ]. few studies have been made on pollution with organic contaminants, but many compounds have been measured (table ). for the sake of summary, these substances will be examined in groups, with specific examples added. the major families treated here have been chosen based on their relative resistance to biological degradation, their toxicity to man, animals and plants, and their tendency to accumulate in the food chain [ ]. unlike the inorganic compounds, organic compounds may be transmitted to plants in two ways: by the classical route through the roots, and through the air into the leaves during vaporization of some of the compounds [ ]. pah: their half-life in the soil is long, and they are stable even when metabolized in plants. total pah content in composts ranges from ppm to ppm. for each element, the values range from . ppm to . ppm (table ). pesticides: though many organochlorinated pesticides have been banned, they are often found in the environment. composts made of wastes that are sorted early contain very few organochlorinated pesticides [ ]. the pesticides detected are in the range of . ppm to . ppm (table ). chlorinated hydrocarbons: the different substances gathered in this family of compounds range from . to . ppm.
volatile solvents are among the most important, since they are found at concentrations as high as . ppm. pcdd/f: these are lower in composts of msw that have been sorted early, and range from . to ppm. very small quantities are found in ss (in the order of ppb) [ ]. pcb: the quantity of pcb found in composts is about . - ppm (table ); in ss, this may vary from . to ppm. natural substances: in contrast to pollutants (exogenous compounds), natural substances (endogenous compounds) are derived from the compost itself. for example, microorganisms produce fatty acids and methylated esters, but in quantities that do not add more than . - . ppm to the soils during application. these quantities are considered not to be dangerous [ ]. the excess soluble organic materials from composts may constitute a nuisance, particularly for flowing water. successive leachates might introduce organic matter into water and may pose a risk of ecological disequilibrium. in a study over years [ ], a decrease of cod (chemical oxygen demand) from mg/l to mg/l (corresponding to mm of leachates) was observed after years, and bod (biological oxygen demand) represented % of cod. given the diversity of the references as to the type of microorganisms and the method of composting, it is difficult to summarize the literature. the aim of this work, therefore, is simply to provide a base from which the microbiological risks associated with composting can be evaluated; readers who wish more detail may consult the original papers. two types of microorganisms are considered. first, the pathogens present in raw materials meant for composting and liable to disappear during the composting process (tables , ); these agents are representative of the microorganisms present in the digestive tract. secondly, there are microorganisms which develop during the process of compost formation and which play a role in the degradation of organic matter; these are fungi and mesophilic and thermophilic bacteria. there are several obligatory and facultative pathogens [ ]. the organism, as well as its spores or toxins (endotoxins of gram negative bacteria), may be implicated in pathogenicity. hazard essentially occurs through the respiratory system, and these germs constitute a potential risk for workers at the composting site and for users of composts, whether professional or private. table summarises the concentrations of these germs found in the literature, according to their route of penetration (ingestion or inhalation). air measurements have been carried out in the tunnels of a mushroom culture house and at composting sites, near mounds of mature compost. the latter will be used, in this review, as an index of the risk resulting from the use of composts, because no other data on atmospheric measurements during compost disposal are available. microorganisms which constitute a potential respiratory hazard are more numerous than those which follow the digestive route. these are gram positive bacteria (including actinomycetes), gram negative bacteria and fungi. the microorganisms are essentially bound to the dusts produced by composts, especially during turning [ ]. about - % of the particles in suspension in the atmosphere around composts can be inhaled because of their small diameter (< µm), and can therefore reach the pulmonary alveoli [ , , ]. in parallel with the microbiological hazard through inhalation, there is also a physical risk due to the deposition of dust in the lungs.
in a study among workers, concentrations of . - mg/m³ of dust (n = ) were measured in the atmosphere at a composting site [ , , ]. these values are higher than the occupational standard of mg/m³ [ ]. very few studies have been done on organic dusts from composts.

table . concentrations of indicator organisms reported in composts:
total coliforms: . x 10^ to . x 10^ cfu/g (n = ); from to organisms/g (n = ) ( , )
fecal coliforms: . x 10^ to x 10^ cfu/g (n = ); . x 10^ to . x 10^ organisms/g (n = ) ( , )
fecal streptococci: 10^ to . x 10^ organisms/g (n = ); 10^ to . x 10^ cfu/g (n = ) ( , , , )
note: on two occasions, the authors suggested a sampling error; the two assays, depending on the method used (most probable number), only give a qualitative identification, i.e., < . organism/g.

this review deals with a limited number of contaminants, because not all have been studied in compost. the risk associated with ingestion of dust from an amended soil depends on its use. for farming, especially for vegetables and potted farm crops, t/ha of compost is used, while t/ha is spread in public places (gardens and green playgrounds). composts are usually mixed into a depth of soil of approximately cm [ ]. if the densities of soil and compost are comparable, the dilution of the compost in the soil is, depending on the quantities disposed, a fraction of one-hundredth to one-tenth. the level of exposure of an individual depends on the quantity of soil ingested. the best estimate of ingestion of telluric dust is about mg for a normal child and about mg for an adult, while a child suffering from geophagy may absorb up to g/day [ , ]. this route of exposure is, however, modest, because the use of compost for soil treatment by private owners or for public gardens and parks currently represents only - % of total compost production [ ]. the hypothesis that an individual is in permanent contact with an amended soil is theoretically possible, but not very likely. the maximum quantity of contaminants absorbed daily may thus be estimated as: q absorbed (µg) = maximum concentration of toxicant observed in composts (c, ppm) x dilution of compost in the ground x amount of soil ingested (g) + concentration of compost dust in the air x c x volume of air inspired ( ). as a first approach, aerial contamination may be ignored owing to the relatively low exposure. compost is dispersed in the air essentially during manipulation; this is a minor route of exposure for the general population, though not negligible for workers at compost production and manipulation sites. the ingested fraction (eda: estimated daily absorption) will hereafter be the main route of exposure considered in this review. its intake will be compared with the acceptable daily intake (adi) for each substance. exposure by ingestion may also occur through contamination of the food chain from plants and animals raised on soils treated with composts (animals bred on open fields may ingest soil up to % of their daily food ration) [ ]. the contamination of underground water by products from amended soils may equally present a human health hazard, or an environmental nuisance when the groundwater is not used for potable water. chemical hazard: table summarizes results which allow comparison of inorganic compounds' adis with estimates of the maximum potential exposure associated with compost. the contribution of the ingestion of a mixture of soil and composts to the adis of an adult is shown to range from . x 10^- % to %. it is therefore negligible for all metals.
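as a worked illustration of the ingestion term of this estimate (the inhalation term is dropped, as suggested above), the following r sketch uses placeholder input values chosen purely for arithmetic convenience, not figures from the review:

# estimated daily absorption by ingestion, in micrograms:
# ppm (mg/kg) equals ug/g, so concentration x grams ingested gives ug
eda_ug <- function(c_compost_ppm, dilution, soil_ingested_g) {
  c_compost_ppm * dilution * soil_ingested_g
}

# hypothetical pica scenario: 10 g/day of a park soil amended at a
# one-tenth compost fraction, with 500 ppm of a toxicant in the compost
intake <- eda_ug(c_compost_ppm = 500, dilution = 1 / 10,
                 soil_ingested_g = 10)
intake                  # 500 ug/day
adi <- 200              # hypothetical adi in ug/day, for scale only
100 * intake / adi      # intake as a percentage of the adi: 250%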
for a normal child, the risk is higher for chromium and lead in the case of direct contact with treated public gardens and parks. the contribution of this type of soil may rise up to 1- %. in the case of pica, four contaminants may be dangerous. for cd, the eda may amount to % of the adi, % for cr and % for pb. these are extreme values estimated from the most contaminated composts, and are unusual (fig. ). the us-epa assessed the risk associated with the use of ss and considered all the possible routes of contamination. it estimated a corresponding noael (no observed adverse effect level) for some pollutants [ ]. according to the data in the literature, only pb occurs in composts at quantities that are frequently higher than the noael ( % of samples described in the literature) (fig. , graph b). in view of the possible dilution in the environment, the concentration of metals encountered in leachates does not suggest a significant risk of contamination of the environment or of drinking water. the nuisances that concern soil, fauna and flora after treatment with composts are not well known, but they are unlikely to yield important risks. only pb is suggested to have an impact on the invertebrate fauna of the soil, through which the food chain of wildlife may be contaminated [ , ]. though the risks indicated in this review seem to be of little importance, it must be remembered that composts persist in the soil for several years [ , ] and that repeated applications may lead to an accumulation of pollutants [ , ].
(table: potential contribution of composts to the contamination of the soil, based on the estimation of 1/ for the compost/soil volume ratio. na: information not available. *maximum quantity: concentrations in the most polluted soil after treatment with the most contaminated compost. excess relative to the highest %: percentage increase in the concentration of contaminants after soil amendment = (concentration in the least contaminated soil + the most contaminated compost)/(concentration in the least contaminated soil). **the risk associated with the absorption of agricultural soil is not considered. ***acceptable daily intake. ****no observed adverse effect level.)
the fate of pollutants in treated agricultural soil was studied bi-annually in holland for years, showing an increase in the quantity of cd ( . times the baseline level), of as and of cr (× . ), ni (× . ), hg and pb (× ). in spite of this, no increases were observed in the levels of cd and ni in cultivated crops. rather, a decrease in the quantity of as was observed, while cr and pb increased only by % in carrots, beetroot, turnip, pears and beans. however, in another -year study of the fate of cd and ni in soils amended annually with ss and of their transfer to the leaves of maize plants, a strong increase in the quantities of both metals was observed ( - % for cd and % for ni) in comparison with the levels measured after the first application [ ]. the food chain is a potential route of human exposure because most of the composts produced are used in agriculture [ ], but it is difficult to make a global assessment of the risk to the general population since the overall agricultural land area where composts are applied is not well known. some hypotheses will be made in order to set the risk scenario. it will be estimated that the dilution of compost in forage and vegetable farm soils is about one-hundredth. under this assumption, each inorganic pollutant will be reviewed.
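one plausible reading of the "excess" defined in the table footnote above, under an assumed compost/soil volume ratio, is sketched below; the mixing formula and all values are our own illustrative assumptions, not the study's.

```python
# illustrative sketch of the percentage increase in soil contaminant
# concentration after amendment, assuming ideal mixing of compost into soil
# at a volume ratio r (1/10 here); values are hypothetical placeholders.

def excess_after_amendment_pct(c_soil_ppm, c_compost_ppm, r=0.1):
    # concentration of a mixture of 1 part soil and r parts compost
    c_amended = (c_soil_ppm + r * c_compost_ppm) / (1.0 + r)
    return 100.0 * (c_amended - c_soil_ppm) / c_soil_ppm

# e.g., a soil at 0.2 ppm cd amended with a compost at 5 ppm cd
print(f"excess = {excess_after_amendment_pct(0.2, 5.0):.0f}%")
```

cadmium.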
the body burden of cd comes essentially from food, except among smokers [ , ]. cd is very persistent in the human body, since its half-life is about - years [ , ]. the consequences of an intoxication with cd are primarily renal and hepatic; the bones are also a target after very high contamination (the 'itai-itai' episode in japan) [ , , , ]. cd found in compost may strongly contaminate food because the relationship between the quantity of cd in the soil and that in plants is linear without threshold [ , ]. a steeper gradient is found in lettuce ( %) than in irish potatoes ( %) [ , ], showing that cd accumulates differently according to the organ of the plant (leaves > roots > fruits and grains) [ , ]. the quantity of cd in food also depends upon the use of the food. for instance, when wheat is transformed into flour, the quantity of cd is reduced by about % due to elimination with the chaff [ ]. contamination of the food chain may also come through animals. grass grown on soils with high concentrations of metals may contaminate animal feed. this hazard is apparently minimal for cd because the quantities present in composts are not sufficient to yield high amounts in plants that might accumulate in the muscles of animals. with increasing daily intake, accumulation in the body is minor. a -fold increase in cd daily intake in pigs does not yield a significant increase in the quantity in the muscle [ , ]. cd normally accumulates in the offal, which represents only a small fraction of our diet ( . - . %). only populations that abundantly eat liver or kidney are seriously exposed [ ]. however, there is a direct potential risk for animals grazing on grounds treated with composts. during grazing, animals ingest - % of soil with grass. an increase has been observed in the muscular tissues of animals grazing in fields amended with ss containing . ppm of cd [ ]. the concentration observed in compost is much lower (maximum ppm) and the risk is therefore less than when ss is used, even after application for several decades [ ]. the case of lettuce is interesting because its cd content depends very strongly on the quantity in the soil [ ]. the normal level of cd in lettuce grown on normal agricultural land ranges between and µg/kg (fresh wt.). hence, lettuces cultivated on pure compost (a very unfavorable situation and very unlikely, except in experiments) may contain only µg/kg (fresh wt.) [ ], which is very close to the normal value. risks associated with food also depend on the frequency at which that food is consumed. a low contamination of a food item that is consumed frequently may be more deleterious than a high contamination of a food that is rarely eaten. unfortunately, though there are data on average food intakes and on the natural content of metals in food items [ ], data from cd-contaminated zones are scarce [ ]. lead. pb quantities in composts are apparently not high enough to represent a risk to animals through grazing. furthermore, the risk of contamination to man through the consumption of such animals is low since, as in the case of cd, pb does not accumulate in meat but in offal. pb is assimilated in small fractions by plants. its penetration in grapes is poor but higher in maize and wheat. the risk of contamination of products made from these crops is reduced by the biological barriers in plants which prevent the penetration of pb into the grains [ , ]. nickel. ni is found mainly in green vegetables.
there are few data on the transfer of ni to man through plants, and the existing data do not suggest any risk. some studies seem to indicate that, at the levels recovered, ni may not pose any phytotoxicity problem [ ]. however, this metal is toxic to crops before attaining the toxic dose for man [ ]. chromium. data on the risk of cr via food are also scarce; % of the cr in the food chain is found in plants, and an increase in consumption might be dangerous [ ]. cr accumulates mostly in the roots of some vegetables [ , ]. mercury. hg is only slightly assimilated from the soil by higher plants (mineral mercury even less than methylated mercury) [ , ]; on the other hand, mushrooms accumulate hg very easily [ , ]. in france, the use of urban compost in mushroom farms has decreased from to % in years, and an afnor standard has targeted a reduction from to %, which will further reduce the risk associated with such practice [ ]. it has not been proven that hg accumulates in animals fed on plants grown on compost-amended soils [ ]. selenium. fruits and vegetables contain on average . - . ppm se [ ]. one study on the assimilation of se by plants cultivated on msw compost-amended soil did not indicate an increase compared to the normal level. paradoxically, a study on amendment with ss has shown, in several species of fruits, vegetables and cereals, that the levels obtained are well below those measured in normal food ( . - ppb) [ ]. the maximum concentration observed in soil does not exceed the standard values for agricultural or residential areas (table ) [ ]. arsenic. there is only a limited number of studies on the absorption of as by plants. however, available results suggest that assimilation takes place through the leaves and not through the roots [ ]. as to the risk from arsenic, there is no adverse indication regarding the use of compost in agriculture. the potential risks associated with organic compounds have been assessed to a lesser degree. pah. the international reference values for total pahs range from 1 to ppm for gardens or residential areas [ ]. quantities found in compost may reach ppm which, for a one-tenth dilution, gives a non-negligible surplus; this increase is lower in the case of a one-hundredth dilution. these results are due to the high content of certain contaminants (phenanthrene, naphthalene, anthracene, pyrene, acenaphthylene and fluorene), but these compounds are not very toxic. of the seven pahs that are well-known carcinogens (table ), none was found above the standard in the soil after compost was applied. agricultural use of ss (at levels comparable to those of composts) gives only low concentrations of pah in plants, even after a very long trial [ ]. while pahs penetrate into plants, they concentrate mainly in the underground teguments and only very little in the aerial parts. peeling and washing before cooking help to avoid contamination through the food chain [ ]. however, the risks might increase with repeated applications. no accumulation of pahs in agricultural soil was noticed after treatment with msw composts for years, but the authors of the study admit the inadequacies of the data and the necessity of carrying out experiments of longer duration [ ]. as regards the risks associated with leachates of composts, % of pahs are free in ss (and may thus percolate) [ ]. if this fraction holds true in compost, and taking into account future dilution, the chance of contamination of water is minute [ ]. pcdd/f.
though who has recommended an adi of pg/day/kg for , , , -tcdd [ ], the adi for complex mixtures of these compounds is not known. the us-epa has calculated an acceptable exposure dose to pcdd/f for the general population of ×10⁻ pg/kg/day [ ]. this exposure should not provoke the occurrence of more than one cancer in a population of one million persons. if one considers the least contaminated compost ( . ppm) and depending on what the soil is used for, the daily doses absorbed by an adult, a normal child and a child with pica are - times the recommended dose. however, this result should be considered with care because there are only a very small number of measurements on dioxins and furans in composts, and it is not possible to assess whether the samples were representative. these compounds are sparingly soluble in water and are strongly adsorbed onto dust and soils. therefore, they accumulate but have low availability. yet, a literature review shows that several surveys reported leaf assimilation of pcdd/f by plants due to their dispersion in air [ ]. the transfer of pcdd/f by animals has been reported when grazing animals ingest dispersed contaminated ss [ ]. finally, pcdd/fs are not easily bioavailable and, therefore, are not likely to concentrate in leachates. pcb. the maximum quantities recommended in the soil range from . ppm to . ppm [ ]. the concentrations observed in composts ( . - ppm), with a dilution of one-tenth, might lead to an increase of the amounts in the soil above standard values and represent a potential risk in the case of ingestion of soil, especially for children. risks through crops are weaker. after dispersion of ss with a pcb level of ppm in a trial including several crop species, it was recovered only from carrots [ ]. on the other hand, fields amended with ss containing an average of ppm pcb might represent a risk to farm animals by accumulation of the chemical [ ]. it should be noted that pcbs are very persistent products and measures should be taken to control long-term uses. pesticides. the maximum acceptable quantities for pesticides such as aldrin, dieldrin or ddt for agricultural purposes without risk are . ppm for the first two and . ppm for ddt [ ]. these values are higher than those found in composts. the contaminants often concentrate in the external teguments, and peeling might reduce the risk. plants possess metabolic pathways which, over a certain length of time, eliminate pesticides from the tissues [ ]. diazinon, isofenphos, chlorpyrifos and pendimethalin (non-organochlorinated) are less persistent and disappear during composting [ ]. chlorinated hydrocarbons (ch). the non-volatile ch accumulate only slightly in crops grown on compost-amended soil, which eliminates the danger inherent in the consumption of these crops [ , ]. by contrast, the volatile ones (trichloromethane, chloroethylene) may pose a risk, since labeled cl indicated a non-negligible foliar assimilation. unfortunately, these works concern ss, and there is no information on the volatilization of volatile organic compounds (voc) in msw compost during use [ ]. it should be lower than during the application of ss since there are many occasions for loss during the process of composting (heat, turning of the windrow, duration of composting). authors have shown that, at the composting site, the highest levels of vocs are found near fresh wastes [ ].
there are two routes of exposure for pathogens or toxins: ingestion of a mixture of soil/compost and inhalation of microorganisms dispersed in the air during manipulation of composts. microorganisms present in composts do not seem to compete with those in the soil. therefore, spreading of composts is apparently without biological risk to the environment [ ]. . . risks associated with ingestion of microorganisms. different authors and institutions have proposed standards for the microbial quality of composts using indicators of contamination as an index. this issue still provokes scientific controversy in terms of relevance [ , ]. the following have been proposed as limiting values: ×10 faecal streptococci/g, ×10 enterobacteria/g, absence of salmonella in g, and absence of eggs of parasites [ ]. salmonella strains are rarely present in msw compost but more often in compost of ss [ ], while eggs of ascaris are absent. with regard to faecal streptococcus (table ), seven of the studies screened in the literature showed concentrations higher than the recommended values. however, these results are inconsistent, and these observations were made under different processes of composting. the slow methods of composting (such as turning of compost in windrows) are less efficient in sanitizing composts [ , ]. during storage or after application of composts, pathogens may disappear more or less rapidly. the survival of viruses depends on humidity, temperature and the type of strain [ , ]. they do not seem to concentrate in leachates [ ]. some authors reported that it is possible for viruses to penetrate into the plant through the roots and to migrate into the stem [ ]. the survival of enteric bacteria is also influenced by humidity and temperature. in the soil, they survive longer in the saturated zone. in an experiment, though no bacterial indicator was detected in the soil before amendment with ss, it took months to eliminate the effect of ×10 streptococcus/g, . ×10 total coliforms/g and . ×10 faecal coliforms/g [ , ]. during this time span, the paradoxical phenomenon of recolonization of mixtures of soil and compost may take place, especially after rain. hence, a compost with a normal rate of indicators may undergo a sudden increase in their concentration (and reach values above the proposed norms) following changes in environmental conditions [ ]. the survival of parasites is the longest. eggs of ascaris may persist years during storage of sludge and up to - days in the soil [ , ]. however, the us-epa has shown that after years of soil amendment with ss that was contaminated by parasites, toxocara were isolated in % of the samples but no ascaris was found [ ]. pathogens may be ingested by children through hand-mouth contact, which is an important route of exposure in cases of geophagy. the composts observed in this review did not contain salmonella at levels sufficient to represent a risk, and parasites were rarely found. the infective dose of e. coli is about 10 [ ], which should not be reached after ingesting a mixture of soil and compost. the infective dose of streptococcus (10 ), however, may be attained in cases of pica. aflatoxin is produced mainly by aspergillus flavus and a. parasiticus. this carcinogenic toxin is hazardous when ingested [ , ]. the impact of aflatoxin is difficult to evaluate because there is only qualitative information on its presence in composts.
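the ingestion reasoning above (concentration in the compost, dilution in the soil, quantity of soil ingested, comparison with an infective dose) can be sketched as follows; every number is a hypothetical placeholder, since the review's actual figures are elided in this copy.

```python
# minimal sketch of the ingested-dose versus infective-dose comparison made
# above; all values are hypothetical placeholders, not the review's figures.

def organisms_ingested(cfu_per_g_compost, dilution, soil_ingested_g):
    # organisms per gram of the soil/compost mixture, times grams ingested
    return cfu_per_g_compost * dilution * soil_ingested_g

dose = organisms_ingested(cfu_per_g_compost=1e4, dilution=0.1, soil_ingested_g=10.0)
infective_dose = 1e5  # hypothetical infective dose
print(f"ingested: {dose:.0f} organisms; infective dose reached: {dose >= infective_dose}")
```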
this route of contamination may occur during the utilization of composts, which causes air dispersal of the microorganisms responsible for the process of composting. these are different from the faecal microorganisms, which come from a contamination related to the nature of the composted material. apparently, respiratory infection caused by enteric pathogens during air dispersion of composts is negligible. studies on workers at sewage treatment plants (a highly exposed setting) have never shown evidence of disease due to faecal pathogens through the inhalation route [ , ]. among the most frequently studied organisms, aspergillus fumigatus remains a controversial subject. the amounts measured in the air are often high, but serological studies carried out on exposed workers did not indicate the presence of circulating antigens [ , ]. the infective dose of this fungus has not been assessed, but it was shown to be hazardous only to susceptible persons who are hypersensitive or immunodepressed [ , , , , ]. by contrast, evidence of higher risks has been shown after exposure to dust containing gram negative bacteria. epidemiological studies carried out among workers in an msw sorting factory, in wastewater treatment plants, on farms, in mushroom farms or in a composting plant have shown that symptoms of headache, diarrhea or eye problems are more frequent during massive exposure to factory dust [ , - ]. gram negative bacteria are pathogenic because of the endotoxin they produce. the activity of the endotoxin does not depend on the integrity of the bacterial cell because it relates to the lipopolysaccharides present in the cell wall. a fragment of the wall is as dangerous as the whole bacterial cell [ , ]. a safety level of /m has been proposed for gram negative bacteria [ ], though some authors claimed that this concentration could provoke an allergic reaction [ ]. where endotoxins were measured, several air measurements around composts (10- ng/m ) were comparable to the limit proposed by several authors as a level without effect [ , ]. the risk associated with actinomycetes is well known to workers in mushroom farms and is referred to as 'mushroom farmer's lung' [ , ]. those germs also cause 'farmer's lung' disease [ ]. a massive ( /m ) and sudden exposure to these bacteria during utilization of composts may initiate a sensitization and an allergic reaction [ , ], with circulating antibodies being measurable. it is therefore plausible that sensitized individuals develop allergic reactions to actinomycetes following exposure to mature compost. some atmospheric measurements during mature compost handling at a composting plant gave results similar to those where allergic reactions were observed among exposed workers [ ]. as an example, a -year-old urban planner was reported to have developed a pulmonary problem h after he had handled farmyard waste composts. retrospective reconstitution of his exposure provided the following data in the air: . ×10 - . ×10 cfu/m of fungi, . ×10 - . ×10 cfu/m of bacteria (with gram negative bacteria in the majority) and . ×10 - . ×10 spores/m [ ]. no hazardous yeast strain was found in this review. one of the most studied yeasts, candida albicans, was never encountered. extrapolation of these results, obtained in the context of occupational exposure, to the general population using compost should be done with caution. measurements taken at the composting sites are the only data available.
hence, the concentrations of the organisms are not necessarily representative of those obtained under natural conditions of use. the amounts measured at the plant site represent a mixture of emanations from the different stages of composting. at the beginning of composting, one essentially observes bacterial populations [ ], which are subsequently replaced by fungal populations [ ]. a study comparing mature compost with composts at the starting point has shown a large decline in gram negative bacteria between the two stages. by contrast, the populations of actinomycetes remained constant [ ]. to our knowledge, air measurements during the application of composts have not been made. therefore, further studies should be carried out in order to assess this risk more precisely. the hazard associated with chemical contamination of the food chain during agricultural use of composts seems very low. however, application of compost by individuals or during the amendment of public fields (parks, playgrounds) might pose a risk to health, and these applications are likely to develop in the future. the risks discussed in this review have been assessed using extreme concentrations of contaminants in composts - levels that were found only in rare circumstances. the most prominent risks are associated with hand-mouth contact and ingestion by children. a child with geophagy might ingest, under such hypotheses, % of the total acceptable daily intake (adi) of lead, and % and % of those for chromium and cadmium, respectively. for a normal child, the intake through ingestion of compost poses a risk for lead and, to a lesser extent, for chromium. the same route of exposure might incur a significant hazard with pcdd/f and pcb, but the data are too scarce to draw conclusions. repeated application of composts may cause accumulation of contaminants in the soil. this can be prevented by appropriate msw management policies and by the extension of selective collection of msw, which should contribute to a reasonable reduction in the contamination of composts, hence reducing the risks. the microbiological hazard arising from fecal contamination is apparently modest, although direct intake of soil contaminated by the faecal streptococcus present in the compost might represent a potential danger. exposure to the organisms responsible for composting is difficult to control. some of the fungi and bacteria are direct pathogens or can act through their toxins. the manipulation of composts triggers their aerial dispersion, which can lead to their inhalation, as shown among workers in composting plants or in mushroom farms (fresh compost). it is presently difficult to extrapolate these results to populations that do not produce but use msw composts. future studies aimed at assessing the risk associated with inhalation of compost dust should also take into account the chemical hazard caused by molecules adsorbed on dusts. such a hazard has never been described to date. the high dilution of composts and of their pollutants throughout the environment (water, air, soil) and the discontinuous exposure of the population create a low risk for the use of msw composts by the public. the health risk associated with composting seems to be occupational. this work was supported by the ademe (agence de l'environnement et de la maitrise de l'energie), the french environment agency, and has been developed by the gridec (groupe de recherche interdisciplinaire sur les déchets, grenoble, france).
eur en model for humus in soil and sediments
chemical properties of soils amended with compost of urban waste
physical and chemical properties of soil as affected by municipal solid waste compost application
plant quality and soil residual fertility six years after a compost treatment
understanding the process
municipal solid waste composting: physical and biological processing
composting sewage sludge: basic principles and opportunities in the uk
windrow composting of agricultural and municipal wastes
source separation and collection in germany
proton binding to humic substances. . electrostatic effects
chemistry of metal retention by soil. environ
iarc monographs on the evaluation of the carcinogenic risk of chemicals to humans
objectives for the development of composting in france: a strategic approach
transfer of inorganic pollution by composts, in composting and compost quality assurance criteria. commission of the european communities publisher, eur en
the impact of separation on heavy metal contaminants in municipal solid waste composts
cost consideration of municipal solid waste compost: production versus market price
factors influencing the agronomic value of city refuse composts
quality of urban waste compost related to the various composting processes
criteria of quality of city refuse compost based on the stability of its organic fraction
assuring compost quality: suggestions for facility managers, regulators and researchers
some environmental problems connected with the use of town refuse compost
sources and fates of lead and cadmium in municipal solid waste
plastics from household waste as a source of heavy metal pollution. an inventory study using inaa as the analytical technique
asbestos in yard or sludge composts from the same community as a function of time-of-waste-collection
organic chemicals in compost: how relevant are they for the use of it? in composting and compost quality assurance criteria. commission of the european communities publisher, eur en
fate of organic contaminants during sewage sludge composting
clean compost production
organische schadstoffe in siedlungsabfallen: herkunft, gehalt und umsetzung in boden und pflanzen (toxic organic compounds in municipal waste material: origin, contents and turnover in soils and plants)
level and source of pcdds, pcdfs, cps, and cbzs in compost from a municipal yard waste composting facility
potential emissions of synthetic vocs from msw composting
recycling at msw composting
parameters for sorting/composting of municipal solid wastes, in composting and compost quality assurance criteria. commission of the european communities publisher, eur en
sorting/composting of domestic waste
technical brochure on administration of water resources pollution and risk prevention
separate collection of compostables
diaper industry workshop identifies research needs to minimize environmental impacts
the composting process: susceptible feedstock, temperature, microbiology, sanitisation and decomposing
technological aspects of composting including modeling and microbiology
composting process design criteria. part i: feed conditioning
definition of compost-quality: a need of environmental protection
compost processes in waste management, commission of the european communities publisher
composting process design criteria
kinetik der inaktivierung von salmonellen bei der thermischen desinfektion von flüssigmist
survival of plant pathogens and weed seeds during anaerobic digestion
principle of composting leading to maximization of decomposition rate, odor control, and cost effectiveness, in composting of agricultural and other wastes
relationship amongst organic matter content, heavy metal concentrations, earthworm activity, and soil microfabric on a sewage sludge disposal site
microbiological specification of disinfected compost
comparative survival of pathogenic indicators in windrow and static pile
phytotoxins during the stabilization of organic matter
degradation of diazinon, chlorpyrifos, isofenphos and pendimethalin in grass and compost
humidification index (hi) as an evaluation of the stabilization degree during composting
organic fertilizer and humification in soil
characterization of humified substances in organic fertilizers by means of analytical electrofocusing (ef): a first approach
change in organic matter during stabilization of compost from municipal solid waste
experimentation of three curing and maturing processes of fine urban fresh compost on open air areas. a study carried out and financed on the initiative of the county council of cotes du nord - france
evaluating garbage compost. part ii
chemical properties of municipal solid waste composts
parasitological study of waste-water sludge
compost stability
phytotoxicity suppression in urban organic wastes
evaluation of heavy metals during stabilization of organic matter in compost produced with municipal solid wastes
the influence of composting and maturation processes on the heavy metal extractability from some organic wastes
how composting affects heavy metal content
hazards from pathogenic microorganisms in land-disposed sewage sludge
a methodology for establishing phytotoxicity criteria for chromium, copper, nickel and zinc in agricultural land application of municipal sewage sludge
how much soil do young children ingest: an epidemiologic study
the development of assessment and remediation guidelines for contaminated soils, a review of the science
factors affecting ammonia volatilization from sewage sludge applied to soil in a laboratory study
survey of toxicants and nutrients in composted waste materials
environmental impact of yard waste compost
mobility and extraction of heavy metals from sewage sludge
effet de l'utilisation de boues urbaines en essai de longue durée: accumulation des métaux par les végétaux supérieurs
incidence de l'épandage des boues urbaines sur l'apport de chrome alimentaire
speciation of heavy metals in sewage sludge and sludge-amended soil
chemical characteristics of leachate from refuse-sludge compost
leaching of heavy metals from composted sewage sludge as a function of ph
cadmium and selenium absorption by swiss chard grown in potted composted materials
fate of trace metals in sewage sludge compost
cd and zn phytoavailability of a field-stabilized sludge-treated soil
study of the organic matter and leaching process from municipal treatment sludge
compost: brown gold or toxic trouble?
trace elements in municipal solid waste composts: a review of potential detrimental effects on plants, soil biota and water quality
chemical fractionation and plant uptake of heavy metals in soils amended with co-composted sewage sludge
evaluation of heavy metals bioavailability in compost-treated soils
effect of using urban composts as manure on soil contents of some nutrients and heavy metals
results of municipal waste compost research over more than fifty years at the institute for soil fertility at haren/groningen, the netherlands
guide for identifying cleanup alternatives at hazardous waste sites and spills: biological treatment
bioavailability to plants of sludge-borne toxic organics
identification of free organic chemicals in composted municipal refuse
leaching from land disposed municipal compost: . organic matter
bacterial and fungal atmospheric contamination at refuse composting plants: a preliminary study
health and safety aspects of compost preparation and use
occurrence, growth and suppression of salmonellae in composted sewage sludge
hygienic quality of sewage sludge compost
survival of fecal indicator micro-organisms in refuse/sludge composting using the aerated static pile system
the aspergillus fumigatus debate: potential human health concerns
levels of aspergillus fumigatus in air and in compost at a sewage sludge composting site
mushroom worker's lung: serologic reactions to thermophilic actinomycetes present in the air of compost tunnels
airborne endotoxins: an association with occupational lung disease
levels of gram-negative bacteria, aspergillus fumigatus, dust and endotoxin at compost plants
dispersal of aspergillus fumigatus from sewage sludge compost piles subjected to mechanical agitation in open air
airborne microorganisms associated with domestic waste composting
microbiological characterization of four composted urban refuses
yeast microflora evolution during anaerobic digestion and composting of urban waste
quantitative assessment of factors affecting the recovery of indigenous and released thermophilic bacteria from compost
survival of pathogenic micro-organisms and parasites in excreta, manure and sewage sludge
identification of thermophilic bacteria in solid-waste composting
umweltrelevante schadstoffe in müllkomposten
determination of pathogen levels in sludge products
clinical and immunological findings in workers exposed to sewage dust
gestion de la matière organique, f. dubosc
cadmium: a complex environmental problem. part ii: cadmium in sludge used as fertilizer
effect of cadmium on the biota: influence of environmental factors
long-term effects of quality-compost treatment on soil
controlling cadmium in the human food chain: a review and rationale based on health effects
toxicologie et sécurité des aliments. technique et documentation
occupational and community exposure to toxic metals: lead, cadmium, mercury and arsenic
cadmium in the environment and human health: an overview
plomb, cadmium et mercure. rapport du conseil supérieur d'hygiène publique de france, section alimentation
cadmium uptake and distribution in three cultivars of lactuca sp
translocation of lead and cadmium from feed to edible tissues of swine
table de composition des aliments
biochemistry and measurement of environmental lead intoxication
toxicologie et hygiène industrielles, ii: les dérivés minéraux. techniques et documentation
mercury and selenium content and chemical form in vegetable crops grown on sludge-amended soil
bioaccumulation of hg in the mushroom pleurotus ostreatus
selenium in the environment
première approche pour l'évaluation de la pollution d'un site d'ancienne usine à gaz: utilisation de valeurs guides de différents pays
atmospheric deposition of trace elements around point sources and human health risk assessment ii: uptake of arsenic and chromium by vegetables grown near a wood preservation factory
determination of polynuclear aromatic compounds in composted municipal refuse and compost-amended soils by a simple clean-up procedure
estimation of the environmental hazard of organochlorines in pulp mill biosludge used as soil fertilizer
assessment of health hazards associated with exposure to dioxins. chemosphere
environmental toxicology of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans
the influence of sewage sludge applications to agricultural land on human exposure to polychlorinated dibenzo-p-dioxins (pcdds) and -furans (pcdfs)
polychlorinated biphenyls in digested uk sewage sludge
sorption and degradation of pentachlorophenol in sludge-amended soils
plant uptake of pentachlorophenol from sludge-amended soils
f-specific coliphages in disposable diapers and landfill leachates
survival of indicator organisms in sonoran desert soil amended with sewage sludge
biological health risks associated with the composting of wastewater treatment plant sludge
health risks of composting: a critique of the article 'biological health risks associated with the composting of wastewater treatment plant sludge'
coliforms in aerosols generated by a municipal solid waste recovery system
circulating antibodies against thermophilic actinomycetes in farmers and mushroom workers
occupational symptoms among compost workers
respiratory impairment among workers in a garbage-handling plant
biological health risk associated with resource recovery, sorting of recycled waste and composting
un risque respiratoire nouveau: les stations d'épuration et les installations de compostage
organic dust exposures from compost handling: case presentation and respiratory exposure assessment
actinomycetes as agents of biodegradation in the environment - a review
produção de fertilizante orgânico por compostagem do lodo gerado por estações de tratamento de esgotos
chemical toxicity of metals and metalloids
composition of toxicants and other constituents in yard or sludge composts from the same community as a function of time of waste collection
leaching from land disposed municipal compost: inorganic ions
fertilizing value and heavy metal load of some composts from urban refuse
heavy metal levels and their toxicity in composts from athens household refuse
chemical and physico-chemical characterization of vermicomposts and their humic acid fractions
effect of the application of municipal refuse compost on the physical and chemical properties of a soil
the flip side of compost: what's in it, where to use it and why
changes in atp content, enzyme activity and inorganic nitrogen species during composting of organic wastes
hygienische untersuchungen an einzelbetrieblichen anlagen sowie einer grosstechnischen anlage zur entseuchung von flüssigmist durch aerob-thermophile behandlung. forum städte-hygiene
anaerobic composting saves waste from landfill
sludge composting maintains growth
comment on: 'acid digestion for sediments, sludge, soils and solid wastes.
a proposed alternative to epa sw method'
when is compost 'safe'?
évaluation de certains additifs alimentaires et contaminants, rapport du comité mixte fao/oms d'experts des additifs alimentaires
copper and zinc concentrations in edible vegetables grown in tarragona province
key: cord- -jtx ntil authors: gratz, kim l.; tull, matthew t.; richmond, julia r.; edmonds, keith a.; scamaldo, kayla m.; rose, jason p. title: thwarted belongingness and perceived burdensomeness explain the associations of covid‐ social and economic consequences to suicide risk date: - - journal: suicide life threat behav doi: . /sltb. sha: doc_id: cord_uid: jtx ntil objective: the social and economic consequences of covid‐ and related public health interventions aimed at slowing the spread of the virus have been proposed to increase suicide risk. however, no research has examined these relations. this study examined the relations of two covid‐ consequences (i.e., stay‐at‐home orders and job loss) to suicide risk through thwarted belongingness, perceived burdensomeness, and loneliness. method: online data from a nationwide community sample of adults (mean age = ) from states were collected between march and april , . participants completed measures assessing thwarted belongingness, perceived burdensomeness, loneliness, and suicide risk, as well as whether they (a) were currently under a stay‐at‐home order and (b) had experienced a recent job loss due to the pandemic. results: results revealed a significant indirect relation of stay‐at‐home order status to suicide risk through thwarted belongingness. further, whereas recent job loss was significantly correlated with suicide risk, neither the direct relation of job loss to suicide risk (when accounting for their shared relations to perceived burdensomeness) nor the indirect relation through perceived burdensomeness was significant. conclusions: results highlight the potential benefits of interventions targeting thwarted belongingness and perceived burdensomeness to offset suicide risk during this pandemic. extraordinary social distancing interventions have been implemented in many states to slow the spread of the virus, including relatively restrictive shelter-in-place or stay-at-home orders issued in states, the district of columbia, and puerto rico (mervosh, lu, & swales, ). these orders, which have shuttered schools, universities, and nonessential businesses, urge individuals to stay at home unless it is absolutely necessary to leave, and promote strict physical distancing to slow the spread of the virus (cdc, ). from a public health perspective, the reasoning behind such interventions is clear: physically separating people is an effective strategy for preventing infectious diseases from spreading (ahmed, zviedrite, & uzicanin, ; jackson, mangtani, hawker, olowokure, & vynnycky, ; qualls et al., ), including covid- (flaxman et al., ; thakkar, burstein, hu, selvajar, & klein, ). yet, despite the necessity of stay-at-home orders and other social distancing interventions from a disease prevention perspective, these measures are likely to have numerous unintended social and economic consequences that may adversely affect psychological outcomes during this time (galea, merchant, & lurie, ; reger, stanley, & joiner, ; thunström, newbold, finnoff, ashworth, & shogren, ).
indeed, pandemics of this nature have well-documented economic and social consequences (chen, huang, chuang, chiu, & kuo, ; reger et al., ; thunström et al., ) - some of which have been linked to psychological difficulties (montemurro, ; wang et al., ), including suicide risk (see reger et al., ). currently, in the united states, beyond the immediate physical health consequences of covid- (and related fear and distress associated with these consequences), two consequences of the covid- pandemic that stand out as particularly relevant to suicide risk are the social isolation related to stay-at-home orders and the widespread job loss related to the current economic crisis - both of which have been theoretically and/or empirically linked to suicide risk (e.g., classen & dunn, ; oyesanya, lopez-morinigo, & dutta, ; reger et al., ). for example, with regard to the economic consequences of this pandemic, both theory and research support an association between involuntary job loss and suicide risk (classen & dunn, ; milner et al., ), with recent job loss from mass-layoffs in particular (comparable to what is occurring currently in the united states) associated with increased suicide risk (classen & dunn, ). likewise, the widespread social distancing interventions implemented to slow the spread of the virus (of which stay-at-home orders are the most restrictive) have been proposed to increase suicide risk by increasing social isolation and loneliness (reger et al., ). specifically, although stay-at-home orders are designed to increase physical distancing in particular (and need not negatively impact social connections and connectedness through remote or virtual means), researchers have suggested that an unintended consequence of social distancing interventions may be an increase in social isolation and related feelings of loneliness (reger et al., ). loneliness, in turn, is a well-documented suicide risk factor (e.g., calati et al., ; joiner, ribeiro, & silva, ; li, dorstyn, & jarmon, ) that evidences strong associations with suicidal ideation, suicide attempts, and suicide risk (e.g., calati et al., ; chang et al., ; li et al., ; stickley & koyanagi, ; stravynski & boyer, ). beyond just examining the relations of pandemic-related stay-at-home orders and job loss to suicide risk, research is needed to clarify the factors that may account for these relations. the interpersonal psychological theory of suicide (its; van orden et al., ) provides a particularly useful framework in this regard. according to this theory, the desire for suicide is driven by perceived burdensomeness (i.e., perceptions of being a burden to others) and thwarted belongingness (i.e., feeling disconnected from and lacking meaningful relationships with others). notably, although thwarted belongingness overlaps with loneliness, it is a broader construct that also captures the nature and extent of supportive and reciprocal interpersonal relationships. a recent meta-analysis provides empirical support for this theory and the proposed relations of perceived burdensomeness and thwarted belongingness to suicidal desire (chu et al., ). with regard to the relevance of these factors to the relations of interest in this study, thwarted belongingness would be expected to play a particularly important role in the relation of stay-at-home orders to suicide risk, capturing the proposed unintended negative consequences of social distancing interventions on social connectedness (reger et al., ).
conversely, although a recent job loss could also contribute to thwarted belongingness (particularly if that job was a primary source of social connection), theory suggests the particular relevance of perceived burdensomeness to the relation between job loss and suicide risk. specifically, the inability to provide for loved ones or support oneself financially could increase the experience of being a burden on others, which, in turn, would increase the desire for suicide and suicide risk (cukrowicz, cheavens, van orden, ragain, & cook, ; van orden et al., ). the present study examined the relations of covid- -related stay-at-home orders and job loss to suicide risk, both directly and indirectly through thwarted belongingness and perceived burdensomeness. given that social distancing and related social isolation have been proposed to increase suicide risk through loneliness, we also examined the indirect relation of stay-at-home orders in particular to suicide risk through loneliness. we hypothesized that both recent job loss and stay-at-home order status would be associated with increased suicide risk. we also hypothesized the differential relevance of thwarted belongingness and perceived burdensomeness to the relations of stay-at-home order status and pandemic-related job loss, respectively, to suicide risk. specifically, we hypothesized that stay-at-home order status would be indirectly related to suicide risk through thwarted belongingness and loneliness, whereas recent job loss would be indirectly related to suicide risk through perceived burdensomeness. participants included a nationwide community sample of adults from states in the united states who completed online measures through an internet-based platform (amazon's mechanical turk; mturk) from march , , through april , . the study was posted to mturk via cloudresearch (cloudresearch.com), an online crowdsourcing platform linked to mturk that provides additional data collection features (e.g., creating selection criteria; chandler, rosenzweig, moss, robinson, & litman, ). mturk is an online labor market that provides "workers" with the opportunity to complete different tasks in exchange for monetary compensation, such as completing questionnaires for research. data provided by mturk-recruited participants have been found to be as reliable as data collected through more traditional methods (buhrmester, kwang, & gosling, ). likewise, mturk-recruited participants have been found to perform better on attention check items than college student samples (hauser & schwarz, ) and comparably to participants completing the same tasks in a laboratory setting (casler, bickel, & hackett, ). studies also show that mturk samples have the advantage of being more diverse than other internet-recruited or college student samples (buhrmester et al., ; casler et al., ). participants ( % women; . % men; . % transgender; . % nonbinary; . % other) ranged in age from to years (m age = . ± . ). all states in the united states were represented, with the exception of delaware, new hampshire, north dakota, vermont, and west virginia (see table for the distribution of participants across states). most participants identified as white ( %), followed by black/african-american ( . %), asian/asian-american ( . %), latinx ( . %), and native american ( . %). regarding educational attainment, . % had completed high school or received a ged, . % had attended some college or technical school, % had graduated from college, and .
% had advanced graduate/professional degrees. most participants were employed full-time ( . %), followed by employed part-time ( . %) and unemployed ( . %). annual household income varied, with . % of participants reporting an income of <$ , , . % reporting an income of $ , to $ , , and . % reporting an income of ≥$ , . with regard to participants' household composition, . % reported living alone and the remaining . % reported living with at least one other person (ranging from - ; mean = . ± . ). very few participants reported having sought out testing for covid- ( %) or having been infected with covid- ( . %). all procedures received approval from the university's institutional review board. to ensure the study was not being completed by a bot (i.e., an automated computer program used to complete simple tasks), participants first responded to a completely automated public turing test to tell computers and humans apart (captcha) prior to providing informed consent. on the consent form, participants were also informed that "…we have put in place a number of safeguards to ensure that participants provide valid and accurate data for this study. if we have strong reason to believe your data are invalid, your responses will not be approved or paid and your data will be discarded." data were collected in blocks of nine participants at a time, and all data, including attention check items and geolocations (i.e., geographical coordinates used to identify participants outside of the united states and/or in locations determined to be "bot farms" within the mturk community; see kennedy, clifford, burleigh, jewell, & waggoner, ), were examined by researchers before compensation was provided. attention check items included three explicit requests embedded within the questionnaires (e.g., "if you are paying attention, choose ' ' for this question"), two multiple-choice questions (e.g., "how many words are in this sentence?"), a math problem (e.g., "what is plus "), and a free-response item (e.g., "please briefly describe in a few sentences what you did in this study"). participants who failed one or more attention check items were removed from the study (n = of completers). workers who completed the study and whose data were considered valid (based on attention check items and geolocations; n = ) were compensated $ . for their participation. the ucla loneliness scale-version (uls- ; russell, ; russell, peplau, & cutrona, ) is a -item self-report measure of perceptions of loneliness and social isolation. participants rate items (e.g., "no one really knows me well;" "i lack companionship;" "there are people i feel close to [reverse scored]") based on how often they apply to themselves on a -point likert-type scale ranging from (never) to (often). higher scores are indicative of greater loneliness. the uls- has demonstrated adequate test-retest reliability and good construct validity (russell, ). internal consistency in the present sample was acceptable (α = . ). the depression symptom index-suicide subscale (dsi-ss; metalsky & joiner, ) was used to measure current suicide risk. the dsi-ss is a -item screening measure that assesses the frequency and intensity of suicidal thoughts, plans, and impulses over the past weeks. scores on this measure have been found to be positively associated with depression symptoms (cukrowicz et al., ; joiner, pfaff, & acres, ) and to be higher among individuals with (vs. without) a history of suicide attempts (capron et al., ).
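as an illustration of how such scales are scored (some items reverse-scored before summing, higher totals indicating greater loneliness, and a screening cutoff applied to the summed score), the sketch below uses invented item positions, responses, and cutoff; it is not the actual uls- or dsi-ss scoring key.

```python
# toy sketch of likert-scale scoring with reverse-keyed items and a screening
# cutoff; item positions, responses, and the cutoff are invented placeholders.

SCALE_MIN, SCALE_MAX = 1, 4   # "never" .. "often"
REVERSED = {2, 5}             # hypothetical reverse-keyed item positions (0-based)

def total_score(responses):
    total = 0
    for i, r in enumerate(responses):
        # reverse-keyed items are flipped: 1 -> 4, 2 -> 3, 3 -> 2, 4 -> 1
        total += (SCALE_MIN + SCALE_MAX - r) if i in REVERSED else r
    return total

score = total_score([3, 2, 4, 1, 2, 3])   # toy 6-item response vector
print(score, "high risk" if score >= 12 else "lower risk")  # invented cutoff
```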
for the present study, a continuous variable assessing the severity of current suicide risk was calculated by summing all four items (α = . in this sample). at the time of data collection, . % (n = ) of participants were under a stay-at-home order and % (n = ) reported a recent job loss related to the pandemic. on the dsi-ss, . % (n = ) of participants were classified as having high suicide risk (operationalized as a score of ≥ on this measure; joiner et al., ). descriptive data for and correlations among the primary variables of interest are presented in table (table : descriptive data for and correlations among the primary variables of interest (n = )). results revealed significant positive zero-order associations between recent job loss ( = no; = yes) and both suicide risk and perceived burdensomeness. stay-at-home order status ( = no; = yes) was not significantly correlated with suicide risk; however, it was significantly positively correlated with thwarted belongingness and loneliness. the process (version . ) macro for spss (model ; hayes, ) was used to examine the indirect relations of (a) recent job loss to suicide risk through perceived burdensomeness (thwarted belongingness and loneliness were not examined in this model due to their lack of significant associations with recent job loss); and (b) stay-at-home order status to suicide risk through thwarted belongingness and loneliness (perceived burdensomeness was not examined in this model due to its lack of significant association with stay-at-home order status). in both models, age, sex, racial/ethnic background, income, and household composition (lives alone vs. lives with other people) were included as covariates, given their relevance to suicide risk and/or pandemic-related outcomes. all indirect relations were evaluated using bias-corrected % confidence intervals based on , bootstrap samples. with regard to the analysis examining the indirect relation of recent job loss to suicide risk through perceived burdensomeness, the overall model was significant, accounting for % of the variance in suicide risk, f ( , ) = . , p < . . although the total relation of recent job loss to suicide risk (including both the direct relation and the indirect relation through perceived burdensomeness, represented in figure as path c) was significant, the direct relation of recent job loss to suicide risk (i.e., the remainder of the relation not accounted for by the indirect relation through perceived burdensomeness, represented in figure as c'; preacher & hayes, ) was not significant. further, although recent job loss was significantly associated with perceived burdensomeness and perceived burdensomeness was significantly associated with suicide risk in the model, the indirect relation of recent job loss to suicide risk through perceived burdensomeness was not significant (see figure ). as for the analysis examining the indirect relation of stay-at-home order status to suicide risk through thwarted belongingness and loneliness, the overall model was significant, accounting for % of the variance in suicide risk, f ( , ) = . , p < . . of note, although stay-at-home order status was significantly uniquely associated with both thwarted belongingness and loneliness, only thwarted belongingness (and not loneliness) was significantly uniquely associated with suicide risk. in addition, results revealed a significant indirect relation of stay-at-home order status to suicide risk through thwarted belongingness, but not loneliness (see figure ).
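the indirect-effect logic reported above can be illustrated with a simplified bootstrap on simulated data. the sketch below is not the authors' analysis: they used the process macro for spss with bias-corrected intervals and covariates, whereas this uses a plain percentile bootstrap of the a*b product with no covariates, and all data are simulated.

```python
# simplified analogue of a bootstrapped indirect-effect (mediation) test on
# simulated data; not the authors' process/spss analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.integers(0, 2, n).astype(float)        # e.g., stay-at-home order (0/1)
m = 0.5 * x + rng.normal(size=n)               # mediator, e.g., thwarted belongingness
y = 0.4 * m + 0.1 * x + rng.normal(size=n)     # outcome, e.g., suicide risk

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                           # path a: x -> m (slope)
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]     # path b: m -> y, given x
    return a * b

boot = np.empty(5000)
for i in range(5000):
    idx = rng.integers(0, n, n)                # resample cases with replacement
    boot[i] = indirect_effect(x[idx], m[idx], y[idx])
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% ci: [{ci_lo:.3f}, {ci_hi:.3f}]")  # significant if 0 excluded
```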
the results of this study provide preliminary empirical support for the theorized relations of covid- -related social and economic consequences to increased suicide risk (reger et al., ). specifically, the results of this study highlight the differential relevance of thwarted belongingness and perceived burdensomeness to the relations of stay-at-home orders and pandemic-related job loss, respectively, to suicide risk.
(figure : indirect relation of recent job loss to suicide risk through perceived burdensomeness.)
providing partial support for study hypotheses, although the presence of a stay-at-home order was not significantly associated with greater suicide risk at a zero-order level, it was indirectly related to suicide risk through greater thwarted belongingness. these findings suggest that any association of stay-at-home orders (at least in the short-term) to suicide risk is due to the association these orders have with increased social disconnection (reger et al., ). interestingly, although the presence of a stay-at-home order was significantly uniquely associated with both loneliness and thwarted belongingness, only thwarted belongingness was uniquely associated with suicide risk and explained the relation of stay-at-home order status to suicide risk in this sample. together, these results suggest that although stay-at-home orders may very well increase the potential for loneliness among adults in the united states, it is not loneliness specifically but a broader sense of disconnection and absence of meaningful relationships that accounts for the relation of stay-at-home orders to greater suicide risk. results of this study also provide partial support for study hypotheses pertaining to the relation of pandemic-related job loss to suicide risk. specifically, although recent job loss evidenced a significant zero-order correlation with suicide risk, it was not uniquely associated with suicide risk when perceived burdensomeness was included in the model. likewise, results provided no support for an indirect relation of job loss to suicide risk through perceived burdensomeness. these findings are most consistent with a proxy risk factor model (see kraemer, stice, kazdin, offord, & kupfer, ), suggesting that the total relation of recent job loss to suicide risk is due to their shared association with perceived burdensomeness. although inconsistent with our hypotheses, these results are not without support in the literature, as there is some evidence to suggest that involuntary job loss in general is not associated with increased suicide risk in the short-term, outside of mass-layoff events (see classen & dunn, ). instead, evidence suggests that the duration of time spent unemployed following a job loss may be more strongly associated with suicide risk (classen & dunn, ). several limitations warrant consideration. first, the use of cross-sectional data precludes any conclusions about the precise nature or direction of the associations examined here. in particular, although theory and research suggest that both job loss and social isolation may increase suicide risk (classen & dunn, ; oyesanya et al., ; reger et al., ), our data cannot rule out the possibility that elevations in suicide risk reported in this study preceded or occurred concurrently with (but unrelated to) these factors.
prospective, longitudinal studies are needed to clarify the extent to which the social and economic consequences of covid- and related stay-at-home orders increase suicide risk, as well as the mechanisms underlying these relations. another limitation is the exclusive reliance on self-report questionnaire data, which may be influenced by social desirability biases or recall difficulties. future research should incorporate structured clinical interviews and/or timeline follow-back procedures to assess suicide risk and its temporal relation to social distancing and economic difficulties. likewise, although the use of a diverse nationwide community sample is a strength of this study, the generalizability of our findings to particular at-risk groups (e.g., hospitalized patients, individuals with chronic medical conditions, health care workers) remains unclear. future research is needed to examine the relations of covid- and related social and economic consequences to suicide risk within these vulnerable groups in particular. finally, it is important to note that the results of this study speak to only the early associations of stay-at-home orders and covid- -related job loss to suicide risk among individuals in the united states. however, it is likely that the consequences and psychological impact of these factors may change over time. for example, and consistent with the proposed mechanisms through which stay-at-home orders and other social distancing interventions are thought to increase suicide risk (reger et al., ) , the psychological impact and negative consequences of these orders may intensify over time, with suicide risk increasing as the duration of these orders increases. likewise, research suggests that the duration of unemployment following an involuntary job loss is more strongly associated with suicide risk than the initial job loss (classen & dunn, ) ; thus, it is likely that the relation between pandemic-related job loss and suicide risk may increase over time, particularly in the context of the current economic crisis and ongoing stay-at-home and shelter-in-place orders (which decrease the likelihood of obtaining a new job in the near future). although research examining the early impact of this pandemic and associated factors on suicide risk is important, it is imperative that research continues to track these relations as the pandemic and related public health interventions persist over time. despite these limitations, the results of this study highlight the potential impact of covid- social and economic consequences on suicide risk among adults in the united states, as well as the relevance of thwarted belongingness and perceived burdensomeness to these associations. these results are consistent with theory and research highlighting the relevance of thwarted belongingness and perceived burdensomeness to suicide risk (e.g., chu et al., ; van orden et al., ) , and suggest that these may be important factors to target in the context of focused interventions aimed at decreasing suicide risk during this time. in the absence of effective covid- infection prevention efforts and/or pharmacological interventions (e.g., vaccines), large-scale public health interventions such as social distancing or stay-at-home orders are necessary to reduce the spread of the virus and infection-related mortality.
however, in the context of these necessary public health interventions, our results speak to the need to also implement interventions aimed at mitigating the negative psychological consequences of both the social isolation and economic problems that can arise from or be exacerbated by stay-at-home orders. specifically, our results provide further support for recent suggestions to focus on increasing social connection and connectedness in the context of stay-at-home orders and other social distancing interventions, in an effort to offset the isolation, loneliness, and disconnection that may inadvertently accompany these orders (see reger et al., ) . likewise, among individuals who have experienced a job loss during this time, our findings suggest that interventions aimed at decreasing perceived burdensomeness and increasing individuals' awareness of and connection to their contributions to the lives of others may help to decrease suicide risk among this vulnerable population. finally, given both theoretical and emerging empirical literature suggesting increased suicide risk during this pandemic, it is important that crisis call centers continue to be funded and staffed to ensure that individuals who may have limited social contacts are able to seek help in emergency situations. likewise, it is imperative that evidence-based tele-mental health services are made available and accessible to vulnerable individuals throughout the duration of stay-at-home orders and other social distancing interventions (reger et al., ) .
effectiveness of workplace social distancing measures in reducing influenza transmission: a systematic review
amazon's mechanical turk: a new source of inexpensive, yet high-quality, data?
suicidal thoughts and behaviors and social isolation: a narrative review of the literature
role of anxiety sensitivity subfactors in suicidal ideation and suicide attempt history
separate but equal? a comparison of participants and data gathered via amazon's mturk, social media, and face-to-face behavioral testing
coronavirus (covid- )
online panels in social science research: expanding sampling methods beyond mechanical turk
loneliness and suicidal risk in young adults: does believing in a changeable future help minimize suicidal risk among the lonely?
social and economic impact of school closure resulting from pandemic influenza a/h n
the interpersonal theory of suicide: a systematic review and meta-analysis of a decade of cross-national research
the effect of job loss and unemployment duration on suicide risk in the united states: a new look using mass-layoffs and unemployment duration
perceived burdensomeness and suicide ideation in older adults
the social impact of hiv/aids in developing countries
an interactive web-based dashboard to track covid- in real time
estimating the number of infections and impact of non-pharmaceutical interventions on covid- in european countries. imperial college covid- response team
the mental health consequences of covid- and physical distancing: the need for prevention and early intervention
clinical and socioeconomic impact of seasonal and pandemic influenza in adults and the elderly
the german version of the interpersonal needs questionnaire (inq)-dimensionality, psychometric properties and population-based norms
attentive turkers: mturk participants perform better on online attention checks than do subject pool participants
introduction to mediation, moderation, and conditional process analysis: a regression-based approach
evaluating the interpersonal needs questionnaire: comparison of the reliability, factor structure, and predictive validity across five versions. suicide and life-threatening behavior
the effects of school closures on influenza outbreaks and pandemics: systematic review of simulation studies
a brief screening tool for suicidal symptoms in adolescents and young adults in general health settings: reliability and validity data from the australian national general practice youth suicide prevention project
nonsuicidal self-injury, suicidal behavior, and their co-occurrence as viewed through the lens of the interpersonal theory of suicide
the shape of and solutions to the mturk quality crisis
how do risk factors work together? mediators, moderators, and independent, overlapping, and proxy risk factors
identifying suicide risk among college students: a systematic review
incubation period and other epidemiological characteristics of novel coronavirus infections with right truncation: a statistical analysis of publicly available case data
genomic characterisation and epidemiology of novel coronavirus: implications for virus origins and receptor binding
analysis of the psychometric properties of the interpersonal needs questionnaire (inq) among community-dwelling older adults
the economic impact of pandemic influenza in the united states: priorities for intervention
see which states and cities have told residents to stay at home. the new york times
the hopelessness depression symptom questionnaire
the effects of involuntary job loss on suicide and suicide attempts among young adults: evidence from a matched case-control study
the emotional impact of covid- : from medical staff to common people
systematic review of suicide in economic recession
asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models
community mitigation guidelines to prevent pandemic influenza -united states
suicide mortality and coronavirus disease -a perfect storm? jama psychiatry
ucla loneliness scale (version ): reliability, validity, and factor structure
the revised ucla loneliness scale: concurrent and discriminant validity evidence
covid- infection: origin, transmission, and characteristics of human coronaviruses
estimating the burden of pandemic influenza a (h n ) in the united states
loneliness, common mental disorders and suicidal behavior: findings from a general population survey
loneliness in relation to suicide ideation and parasuicide: a population-wide study. suicide and life-threatening behavior
the socio-economic burden of influenza
social distancing and mobility reductions have reduced covid- transmission in king county, wa. report prepared by institute for disease modeling
the benefits and costs of using social distancing to flatten the curve for covid-
thwarted belongingness and perceived burdensomeness: construct validity and psychometric properties of the interpersonal needs questionnaire
the interpersonal theory of suicide
immediate psychological responses and associated factors during the initial stage of the coronavirus disease (covid- ) epidemic among the general population in china
rolling updates on coronavirus disease (covid- )
a pneumonia outbreak associated with a new coronavirus of probable bat origin
key: cord- - xwcb j authors: bachman, thomas e.; iyer, narayan p.; newth, christopher j. l.; ross, patrick a.; khemani, robinder g. title: thresholds for oximetry alarms and target range in the nicu: an observational assessment based on likely oxygen tension and maturity date: - - journal: bmc pediatr doi: . /s - - - sha: doc_id: cord_uid: xwcb j background: continuous monitoring of spo( ) in the neonatal icu is the standard of care. changes in spo( ) exposure have been shown to markedly impact outcome, but limiting extreme episodes is an arduous task. much more complicated than setting alarm policy, it is fraught with balancing alarm fatigue and compliance. information on optimum strategies is limited. methods: this is a retrospective observational study intended to describe the relative chance of normoxemia, and risks of hypoxemia and hyperoxemia at relevant spo( ) levels in the neonatal icu. the data, paired spo( )-pao( ) and post-menstrual age, are from a single tertiary care unit. they reflect all infants receiving supplemental oxygen and mechanical ventilation during a -year period. the primary measures were the chance of normoxemia (pao( ) – mmhg), risks of severe hypoxemia (pao( ) ≤ mmhg), and of severe hyperoxemia (pao( ) ≥ mmhg) at relevant spo( ) levels. results: neonates were categorized by postmenstrual age: < (n = ), – (n = ) and > (n = ) weeks. from these infants, , spo( )-pao( ) pairs were evaluated. the post-menstrual weeks (median and iqr) of the three groups were: ( – ) n = ; ( – ) n = ; and ( – ) n = , . the chance of normoxemia ( , %-ci – %) was similar across the spo( ) range of – %, and independent of pma. the increasing risk of severe hypoxemia became marked at a spo( ) of % ( , %-ci – %), and was independent of pma. the risk of severe hyperoxemia was dependent on pma. for infants < weeks it was marked at % spo( ) ( , %-ci – %), for infants – weeks at % spo( ) ( , %-ci – %) and for those > weeks at % spo( ) ( , %-ci – %). conclusions: the risk of hyperoxemia and hypoxemia increases exponentially as spo( ) moves towards extremes. postmenstrual age influences the threshold at which the risk of hyperoxemia became pronounced, but not the thresholds of hypoxemia or normoxemia. the thresholds at which a marked change in the risk of hyperoxemia and hypoxemia occur can be used to guide the setting of alarm thresholds. optimal management of neonatal oxygen saturation must take into account concerns of alarm fatigue, staffing levels, and fio( ) titration practices. shifts in spo exposure have a profound impact on neonatal outcomes. control of exposure is associated with the selection of a desired target range, selection of alarm limits as well as nursing compliance with good practices. manual titration of fio to address unstable spo is an arduous task.
infants in the nicu typically spend only about half the time in the desired range, and there is significant variation among centers [ ] . nursing intervention is driven by high and low spo alarms, probably more than the prescribed target range. oximeter alarms are notorious for false positives and are associated with alarm fatigue [ ] [ ] [ ] . a persistent low alarm necessitates increased supplemental oxygen to minimize the impact of transient hypoxemia, usually a result of respiratory instability. in contrast, high alarms usually signal the need to titrate the oxygen down following recovery from a marked desaturation. if the alarm limits are too narrow or the response too aggressive, troublesome swings between hypoxemia and hyperoxemia can occur. further, there is little evidence supporting guidelines and general practice with regard to selection of spo alarm limits. even consensus international guidelines for extremely preterm infants are not consistent. european guidelines report there is weak evidence to support setting the alarms close to the desired target range [ ] . clearly doing so increases the frequency of false alarms and the potential for alarm fatigue [ , ] . the most recent guidelines from the american academy of pediatrics, in contrast, suggest looser low alarms are more appropriate [ ] . they further suggest that spo alarm limits and target range should not only be decoupled, but also take into account the infant's maturity. neither guideline integrates the possible impact of differences in averaging period, alarm delay or differences in devices. in the last two decades studies have focused on the intended spo target ranges for the extremely premature with a resulting evolution of the standard of practice [ , ] . the most recent very large studies suggest a higher, narrower target range might be preferred for extremely preterm infants [ , ] . this perspective is, however, far from a consensus [ , [ ] [ ] [ ] [ ] . evaluations of the optimal spo exposure for more mature infants are lacking. the risks associated with hypoxemia in near term infants are appreciated; however concerns about hyperoxemia have until recently been limited, at least compared to the extremely preterm. we have developed an extensive spo -pao database from our nicu and previously reported on the magnitude of the change of risk of severe hypoxemia and hyperoxemia across different spo ranges [ ] . the aim of this analysis was to see if specific spo levels for selection of high and low alarms and target ranges could be identified based on the difference in the risk of hypoxemia and hyperoxemia and further to determine to what degree these thresholds might change depending on infant maturity. this is a prospectively defined analysis with the aim of describing arterial oxygenation levels (pao ) associated with various possible spo alarm limits and target ranges. the study is based on the paradigm that high and low spo alarm limits should consider the risk of hypoxemia and hyperoxemia independent of the desired spo target range and further consider infant maturity [ ] . this study reflects infants in the neonatal and infant critical care unit (niccu) of children's hospital los angeles. it is a tertiary care referral center affiliated with the keck school of medicine of the university of southern california. the -bed niccu receives transfers from the greater southern california area.
the bioethics review organization at children's hospital los angeles (chla- - ) has waived the need for informed consent for aggregate data analysis studies and specifically approved this project. in a previous publication we described the development of a spo -pao database of infants receiving mechanical ventilator support with supplemental oxygen between august and july [ ] . the database links arterial blood gas measurements in laboratory records with simultaneous spo data from the patient monitor system. the spo level is the mean of four -s readings coincident with the arterial sample. the gestational age from medical records for each infant, along with the date of measurement, permitted calculation of post-menstrual age for each sample. the oximeter in the patient monitoring system used masimo set technology (masimo corporation irvine, california), with s averaging. continuous monitoring of spo is by practice post-ductal; pre-ductal assessments are conducted with another oximeter. arterial samples were collected when clinically indicated. umbilical catheters are used in most infants in their first week of life. as a matter of practice, after that, right radial lines are preferred, but when not possible left radial or posterior tibial lines are placed. these study parameters were prospectively defined. normoxemia was defined as pao between and mmhg. other oxemic levels were defined as severe hypoxemia (pao ≤ mmhg) and severe hyperoxemia (pao ≥ mmhg). we also evaluated levels below and above normoxemia (pao < , > mmhg). the selection of the severe thresholds was consistent with our previous publication. also by consensus of the investigators, the potential ranges of spo alarm limits were - % and - % and spo target ranges within the envelope of - %. the endpoints were the chance of normoxemia, and the risk of the oxemic levels. based on our previous work, we hypothesized that infant maturity would significantly impact the chance of normoxemia and risk of severe hyperoxemia but not of severe hypoxemia. we used post-menstrual age (pma) as the metric of maturity. pma values were categorized into three groups. these were < weeks, - weeks and > weeks pma. we felt that categories would be of more use clinically than a continuous effect. on a post hoc basis we also explored the impact of postnatal age. our primary measure was the risk or chance of each of these oxemic categories within the relevant spo range. for the power analysis we assumed a baseline of relevant risk or chance of %, and considered sample sizes of pao values for both and in adjacent spo bins. the range of - was selected as this was consistent with the numbers of observations in the smaller maturity categories at the spo extremes. based on this, we determined that there would be an % chance, at the p < . level, that we could detect a reduction to % with observations and to % with observations. we treated each spo -pao pair as an independent observation. we deemed consideration of within-patient effects as not only impractical because of the large number of patients, but also inappropriate because of intrapatient sample variability of temperature, ph, paco and transfusion timing. descriptive presentations of continuous data are shown as median and iqr, and of proportions as percent. the primary variables are presented as percentage along with their % confidence intervals of the proportion. comparison of continuous variables used the kruskal-wallis test with dunn's procedure for pairwise comparisons.
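a rough reconstruction of this type of power calculation can be done by simulation. the sketch below is illustrative only: the baseline risk of 20%, the candidate reduced risks and the per-bin sample sizes are placeholder values standing in for the study's numbers, and a two-sided two-proportion z-test is assumed.

import numpy as np

rng = np.random.default_rng(1)

def power_two_proportions(p1, p2, n_per_bin, n_sim=20000):
    # empirical power of a two-sided two-proportion z-test at the 5% level
    z_crit = 1.959964
    x1 = rng.binomial(n_per_bin, p1, n_sim)
    x2 = rng.binomial(n_per_bin, p2, n_sim)
    pooled = (x1 + x2) / (2 * n_per_bin)
    se = np.sqrt(pooled * (1 - pooled) * (2 / n_per_bin))
    with np.errstate(divide="ignore", invalid="ignore"):
        z = np.abs(x1 / n_per_bin - x2 / n_per_bin) / se
    return float(np.mean(z > z_crit))

# placeholder scenario: 20% baseline risk in one spo2 bin versus a
# reduced risk in the adjacent bin, for two candidate bin sizes
for n in (100, 200):
    for p2 in (0.10, 0.05):
        print(n, p2, round(power_two_proportions(0.20, p2, n), 2))

the statement in the text about an 80% chance of detecting a given reduction corresponds to the sample size at which the printed power reaches 0.8.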
comparisons of proportions were evaluated using the chi-square test, with marascuilo's procedure for pairwise comparisons. the impact of maturity on each of the three oxemic category parameters was tested by including maturity-category with spo , as independent variables, in a logistic regression equation with oxemic risk or chance as the dependent variable. for the exploratory analysis of the effect of postnatal age, we added age to this logistic regression model. a two-tailed p < . was considered statistically significant for all comparisons. statistical tests were conducted with xlstat v . (addinsoft, paris, france). our data included , spo -pao observations of infants receiving supplemental oxygen and respiratory support over a -year period. figure provides a graphic overview of the risk of hypoxemia and hyperoxemia across spo levels between and %. the risk of each rises dramatically as spo moves from a nominal target range. even when moving within the latter, the trade-off between hypoxemia and hyperoxemia is obvious. it is also of note that the difference in risk between severe hypoxemia and a pao < mmhg is much larger than the difference between severe hyperoxemia and a pao > mmhg. for analysis these observations were divided into three groups according to post-menstrual age (pma). details characterizing the groups are shown in table . there were observations from infants less than weeks pma, observations from infants between and weeks pma and , observations from infants greater than weeks pma. the number of observations per infant was similar among the three groups. the gestational age and postmenstrual age were consistent with the maturity categories. the median spo and pao levels were lower in the group less than weeks pma. this group also included a higher share of measurements in normoxemia and fewer in severe hyperoxemia. the chance of normoxemia was dependent on spo (p < . ) but not pma. the chance of normoxemia across the range of - % spo was % ( - % ci). the actual chances of normoxemia for different overlapping spo target ranges are shown in table , and were different, specifically slightly lower in the lower ranges (p < . ). the pao levels for each are also shown in the table and the differences between them are statistically significant (p < . ). higher target ranges increase the possibility of higher pao levels. the risk of hypoxemia (pao < and < mmhg) was independent of pma but not spo (p < . ). the risks at different potential alarm levels are shown in table . the risks are not different at settings of , , and % spo for either pao < mmhg or < mmhg. they were both markedly higher at and % spo (p < . ). at these levels the risk of severe hypoxemia (< mmhg) was marked: at % spo (risk: % ( - , % ci)) and at % spo (risk: % ( - , % ci)). the changes in risks are consistent with the changes in the pao also shown in the table. the variation (interquartile range) of pao levels is similar. the risk of hyperoxemia (pao > and > mmhg) was significantly different among the pma categories (p < . ) and within each category among the spo levels (p < . ). the actual risks at different potential alarm levels are shown in table for each maturity category. the potential points of marked increase in the risk of a pao > and > mmhg were different for the three maturity categories. with regard to severe hyperoxemia, for those < weeks it was a reading of % spo (risk: % ( - , % ci)), which was significantly higher than at and % spo (p < . ).
it was a spo reading of % for those - weeks (risk: % ( - %, % ci)), which was not significantly higher than and %. it was a reading of % for those > weeks (risk: % ( - , % ci)), and the difference between all pairs was statistically significant (p < . ). a point of demarcation for the risks of pao > mmhg is one spo level lower for each of the pma categories. the changes in risks are consistent with the changes in the pao levels also shown in the table. the variation (interquartile range) of pao levels is similar except at % spo , which is wider. our exploratory analysis determined that postnatal age was an independent predictor of chance of normoxemia (p < . ) and risk of severe hyperoxemia (p < . ), but not severe hypoxemia. with increasing age the chance of normoxemia increased while the risk of hyperoxemia decreased. however the size of the effect predicted by the regression equation was quite small; that is, changes of + . % (normoxemia) and − . % (severe hyperoxemia) for each week of age. we evaluated a large database of neonatal spo -pao observations paired with infant postmenstrual age. our aim was to provide additional guidance to support the selection of spo alarm levels and target ranges for neonates receiving supplemental oxygen. we identified a spo range consistent with normoxemia, and showed how a target range could shift depending on a preference for avoiding higher or lower levels of pao . we showed that the risk of hyperoxemia and hypoxemia increases exponentially as spo moves toward extremes. we found that the risk of severe hypoxemia does not become marked until a level well below common low alarm settings. finally we found that the risk of severe hyperoxemia becomes marked at different levels depending on postmenstrual age and importantly at thresholds not consistent with standard practices. this report is, to our knowledge, the first to document these perspectives. we evaluated four overlapping target ranges, each wide, with midpoints of , , , and % spo . our data showed that there was a similar chance of normoxemia across these potential target ranges, but slightly favoring the higher target ranges. this consistency also suggests that a wider target range, even - % spo , would maintain a similar chance of normoxemia, but could be easier to maintain. a wider range at the low end has been suggested for extremely preterm infants [ , ] , in contrast to the european guidelines that recommend a higher target range [ ] . two recent reports of practices in europe and the us reported that most target ranges were within this wider envelope, though more often narrower than seven but rarely or less [ , ] . our analysis did not identify an effect related to maturity associated with normoxemia as we had expected. however our hypothesis was based on risk data of extreme pao levels (< and > mmhg) at spo levels between and %, which is different from our normoxemia criteria (pao - mmhg). further, the information about likely pao values, consideration of which might align with maturity, ought to be useful in selecting a target range within these boundaries [ ] . a clinical aversion to higher or lower pao levels is reasonable. the consideration of a trade-off of high and low oxygen exposure is supported by a landmark evaluation comparing the long term outcomes of nearly extremely preterm infants randomized to one of two spo target ranges ( - % or - %) [ ] .
it found the high range was associated with increases in severe retinopathy of prematurity and more likely need for supplemental oxygen at weeks pma, but lower levels of necrotizing enterocolitis and death. alarm fatigue in the nicu is a serious problem. pulse oximetry, while an essential tool, generates the most false alarms and is the alarm least likely to be associated with an actionable nursing intervention [ , , ] . it is not uncommon with unstable infants to experience a spo alarm every few minutes, while an intervention is often only warranted every - min. faced with this dilemma, nurses have been shown to disregard alarm policy [ ] . attention to selection of reasonable alarm settings (delay and level) as well as sensor/probe integrity can impact the frequency of alarms not needing intervention [ , ] . however setting alarms, whether by policy or practice, to avoid excessive frequency must also consider the risk of missing or delaying response to important events. policy and practice must find an acceptable medium that balances the risks associated with each. our data provide spo thresholds that are associated with marked hyperoxemia and hypoxemia. it is reasonable to consider a buffer zone between the alarm setting and the level of spo concern. in addition, many events are short and it is standard practice to set the alarm delay to avoid these transient events not needing intervention. correspondingly, it seems appropriate to set a longer alarm delay when the buffer zone is wider. our data indicate that the risk of hypoxemia is not related to maturity and is not marked until the spo is at % or %, at which point the risk is increasing exponentially. in contrast we found no relevant difference in risk at levels between and %. setting the low alarm between and % spo would create a buffer but at the expense of increased false alarms and alarm fatigue, without a compensating longer alarm delay. a recent analysis has determined that episodes that are significantly lower (< % spo ) and prolonged (> s) are related to bad outcomes [ ] . however, we speculate that episodes of spo with a nadir between and %, even if prolonged, would not have a clinical impact, because of the low risk of severe hypoxemia. finally, based on an audit of extremely preterm infants in nicus, hagadorn et al. reported good compliance with low spo alarm unit guidelines, but provided no related details on the actual settings [ ] . in preterm infants we found the risk of hyperoxemia did not become marked until spo reached - % in those < weeks pma and those - weeks pma. this is higher than the most recent recommendations for setting the high spo alarm around % in extremely preterm infants [ , , ] . such a lower setting could be appropriate with two different rationales. it could be considered an appropriate buffer zone. but it certainly would increase false positive alarms, without a compensating longer alarm delay. it might also be appropriate if the goal was to avoid pao levels approaching mmhg, in alignment with a lower target range. consistent with this likely excessive false positive rate from tighter high alarms, hagadorn reported only % compliance with high spo alarm unit guidelines [ ] . in contrast to preterm infants, we found that the risk of hyperoxemia, pao > and > mmhg, in infants > weeks pma was marked at a spo of %. while reports of guidelines are sparse [ , ] , it is our impression that upper alarms for near term populations are often set much higher than %.
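the interaction between alarm limit and alarm delay discussed here can be made concrete with a toy simulation; the trace below, its noise level, the number and depth of desaturation events, and the candidate limits and delays are all invented for illustration and do not model any particular oximeter.

import numpy as np

rng = np.random.default_rng(3)

# toy spo2 trace: one hour at 1-second resolution, baseline near 93%
t = np.arange(3600)
spo2 = 93 + rng.normal(0, 1, t.size)
for start in rng.integers(0, 3500, 20):   # 20 short desaturation events
    depth = rng.uniform(5, 15)
    dur = rng.integers(10, 60)
    spo2[start:start + dur] -= depth

def alarm_count(trace, low_limit, delay_s):
    # an alarm fires once per episode in which the trace stays below
    # the limit for at least delay_s consecutive seconds
    count, run = 0, 0
    for below in trace < low_limit:
        run = run + 1 if below else 0
        if run == delay_s:
            count += 1
    return count

# lower limits and longer delays both suppress alarms from transient dips
for low in (88, 85):
    for delay in (5, 15, 30):
        print(f"limit {low}%, delay {delay}s: {alarm_count(spo2, low, delay)} alarms")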
setting upper alarms that high provides no buffer zone and certainly increases false negatives that could increase clinical risk of hyperoxemia. the concern about the risks associated with hyperoxemia in near term infants is less prevalent than in preterms. nevertheless, hyperoxemia in children and adults has been associated with morbidity and mortality [ , ] and it is reasonable to project these risks to near term infants. the shift of the oxy-hemoglobin dissociation curve with increasing maturity that one would anticipate was evident at high levels of spo but not at moderate and low levels. while the predicted shift in the sao -pao relationship is characterized by a shift of p , it is understandable that the smaller predicted shifts in spo at lower levels would be muted. the lack of precision and the bias of the pulse oximeter, especially in these ranges, as well as other factors such as local perfusion, are documented [ ] . the transition from fetal to adult hemoglobin is quite predictable over a couple months of life in healthy neonates, but we did not identify a meaningful impact associated with postnatal age. however the transition from fetal hemoglobin is affected by treatment and disease severity. transfusions have a marked effect [ ] [ ] [ ] . our study population, all transferred for a higher level of care, commonly were transfused. accordingly, transfusion-naive infants would be shifted more to the left [ ] . such a shift would reduce the risk of hyperoxemia. this study's design has several limitations. first, the pao thresholds we used for hypoxemia, normoxemia and hyperoxemia, while generally accepted, have not been validated with regard to outcome risk. it is unlikely they ever will be. there is a need for and a growing body of data correlating spo exposure and outcomes. of particular interest is a pending analysis of the impact of the actual, rather than assigned, spo exposure in the neoprom population [ ] . we speculate that these interpretations will be easier with a better understanding of the relationship between pao and spo . other factors such as small for gestational age and hemoglobin level as well as cerebral and intestinal oxygenation are also relevant. second, the study is observational. the location of the spo sensor and site of arterial sampling were not controlled. it is likely that some of the paired comparisons do not reflect pre-ductal assessment. this could increase the variance, but we do not think this would have a relevant effect on the bias of the risk (median values). third, we categorized the hyperoxemic risk into three pma groups. these are reasonable groupings, but it is probable that the effect is somewhat continuous with increasing maturity, but certainly not strictly categorical. whether using these results to design research or to evaluate unit guidelines, several generalizability issues should be considered. the first is comparability to our study population. our unit is referral based, with all infants transferred in for tertiary care. after intervention and recovery infants are often returned when they only need low levels of inspired oxygen and minimal pressure support. as reported their supplemental oxygen requirements are quite high. also as previously noted, as a result of transfusions, their oxy-hemoglobin relationship is shifted to the right. illustrative of this, in our least mature cohort we identified an incidence of severe hyperoxemia more than times higher than that reported in a more traditional inborn population during the first week of life [ ] .
another important consideration is the averaging and alarm delay settings on the oximeter. one large study confirmed the clinical relevance of these settings [ ] . they documented a marked decrease in the incidence of severe hypoxemic events with increasing averaging time, and also demonstrated that it was associated with increased duration of episodes. they recommended using shorter averaging times and longer delays. finally the oximeter measurement itself must be considered. our data reflect a good bit of scatter in the pao at each spo level. sources of the scatter seen with spo monitoring are well described [ , ] . differences in oximeter brands and models should be considered as well. our group previously reported no difference in bias between the masimo and nellcor devices across the range of saturations in the picu, but did identify a problem with the use of inappropriate sensors [ ] . of more potential relevance, a difference between the masimo and nellcor oximeters has been reported in the spo range of - % [ ] . while this difference is within the device's % accuracy specifications, it might well affect a decision about selecting a lower target range, or the low spo alarm setting. we provide quantification of the rate at which the risk of hyperoxemia and hypoxemia increase exponentially as spo moves towards extremes, and how it is affected by maturity. postmenstrual age influences the threshold at which the risk of hyperoxemia became pronounced, but pma did not alter the threshold for hypoxemia or normoxemia. the thresholds at which a marked change in the risk of hyperoxemia and hypoxemia occur can be used to guide the setting of alarm thresholds. these findings support reconsideration of common alarm threshold practices. in extreme preterm infants, but not in more mature infants, high spo alarms may be set higher than %. likewise low spo alarms may be set lower than %. spo targeting ranges may be selected within the range of - % spo . optimal management of neonatal oxygen saturation must take into account concerns of alarm fatigue, staffing levels, and fio titration practices. integration of these factors should be evaluated in quality improvement programs. fio : fraction of inspired oxygen; spo : arterial oxygen saturation measured noninvasively; nicu: neonatal intensive care unit; pao : arterial partial pressure of oxygen (mmhg); paco : arterial partial pressure of carbon dioxide (mmhg); pma: post-menstrual age (weeks)
alarm safety and oxygen saturation targets in the vermont oxford network inicq collaborative
nurses' reactions to alarms in a neonatal intensive care unit
balancing the tension between hyperoxia prevention and alarm fatigue in the nicu
alarm safety and alarm fatigue
european consensus guidelines on the management of neonatal respiratory distress syndrome in preterm infants- update
retrospective analysis of pulse oximeter alarm settings in an intensive care unit patient population
committee on fetus and newborn. oxygen targeting in extremely low birth weight infants
pulse oximetry saturation target for preterm infants: a survey among european neonatal intensive care units
association between oxygen saturation targeting and death or disability in extremely preterm infants in the neonatal oxygenation prospective meta-analysis collaboration
safe oxygen saturation targeting and monitoring in preterm infants: can we avoid hypoxia and hyperoxia?
graded oxygen saturation targets and retinopathy of prematurity in extremely preterm infants
pulse oximetry targets in extremely premature infants and associated mortality: one-size may not fit all
oxygen saturation targeting by pulse oximetry in the extremely low gestational age neonate: a quixotic quest
hypoxemia and hyperoxemic likelihood in pulse oximetry ranges: nicu observational study
video analysis of factors associated with response time to physiologic monitor alarms in a children's hospital
evaluation of two spo alarm strategies during automated fio control in the nicu: a randomized crossover study
reducing alarm fatigue in two neonatal intensive care units through a quality improvement collaboration
association between intermittent hypoxemia or bradycardia and late death or disability in extremely preterm infants
practical recommendations for oxygen saturation targets for newborns cared for in neonatal units. new zealand: newborn clinical network clinical reference group
monitoring of oxygen saturation levels in the newborn in midwifery setting
admission hyperoxia is a risk factor for mortality in pediatric intensive care
oxygen exposure resulting in arterial oxygen tensions above the protocol goal was associated with worse clinical outcomes in acute respiratory distress syndrome
accuracy of pulse oximetry in children
the reactivation of fetal hemoglobin synthesis during anemia of prematurity
the effect of blood transfusion on the hemoglobin oxygen dissociation curve of very early preterm infants during the first week of life
effects of fetal hemoglobin on accurate measurements of oxygen saturation in neonates
arterial oxygen tension (pao ) values in infants < weeks of gestation at currently targeted saturations
alarms, oxygen saturations, and spo averaging time in the nicu
oxygen targeting in preterm infants: a physiological interpretation
oxygen targeting in preterm infants using the masimo set radical pulse oximeter
none. authors' contributions tb was responsible for the conception of the study, the data analysis and initial draft of the manuscript. cn and ni collected the data. the authors (tb, ni, cn, pr, rk) critically reviewed and approved the manuscript and agree to be accountable for all aspects of the project. there was no funding provided to support the planning, implementation, analysis or manuscript development. the data sets generated and analyzed during this study are not currently publicly available, but are available from the corresponding author on reasonable request. the bioethics review organization at children's hospital los angeles (chla- - ) has waived the need for informed consent for aggregate data analysis studies and specifically approved this project. not applicable. key: cord- -gpnaldjk authors: gomes, m. gabriela m. title: a pragmatic approach to account for individual risks to optimise health policy date: - - journal: nan doi: nan sha: doc_id: cord_uid: gpnaldjk developing feasible strategies and setting realistic targets for disease prevention and control depends on representative models, whether conceptual, experimental, logistical or mathematical. mathematical modelling was established in infectious diseases over a century ago, with the seminal works of ross and others.
propelled by the discovery of etiological agents for infectious diseases, and koch's postulates, models have focused on the complexities of pathogen transmission and evolution to understand and predict disease trends in greater depth. this has led to their adoption by policy makers; however, as model-informed policies are being implemented, the inaccuracies of some predictions are increasingly apparent, most notably their tendency to overestimate the impact of control interventions. here, we discuss how these discrepancies could be explained by methodological limitations in capturing the effects of heterogeneity in real-world systems. we suggest that improvements could derive from theory developed in demography to study variation in life-expectancy and ageing. using simulations, we illustrate the problem and its impact, and formulate a pragmatic way forward. since the detection of aids in the early s, it has been evident that heterogeneity in individual sexual behaviours needed to be considered in mathematical models for the transmission of the causative agent -the human immunodeficiency virus (hiv) . much research has been devoted to measuring contact networks in diverse settings and by different methods, to attempt to reproduce transmission dynamics accurately [ ] [ ] [ ] . meanwhile other equally important sources of inter-individual variation were overlooked. for example, unmodelled heterogeneity in infectiousness and susceptibility led to over-attribution of hiv infectivity to the acute phase and, consequently, to concerns that interventions relying on treatment as prevention might be compromised. the problem of unaccounted heterogeneity in predictive models can be illustrated with the simplest mathematical description of infectious disease transmission in a host population. figure shows the prevalence of infection over time under three alternative scenarios: all individuals are at equal risk of acquiring infection (black trajectories [notice unrealistic time scale]); individual risk is affected by a factor that modifies either their susceptibility to infection (blue) or exposure through connectivity with other individuals (green). risk modifying factors are drawn from a distribution with mean one (blue and green density plots on the left) while the homogeneous scenario is sketched by assigning a factor one to all individuals (black frequency plot). as the virus spreads in the human population, individuals at higher risk are predominantly infected as indicated at endemic equilibrium (figure a, b , c, density plots on the right, coloured red) and after years of control (figure d, e, f). the control strategy applied to endemic equilibrium in the figure is the - - treatment as prevention target advocated by the joint united nations programme on hiv/aids whereby % of infected individuals should be detected, with % of these receiving antiretroviral therapy, and % of these should achieve viral suppression (becoming effectively non-infectious). ; distributed susceptibility to infection with variance (b, e); distributed connectivity with variance (c, f). in disease-free equilibrium, individuals differ in potential risk in scenarios b and c, but not in scenario a (risk panels on the left). the vertical lines mark the mean risk values ( in all cases). 
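the arithmetic implied by that treatment-as-prevention cascade is easy to make explicit; the sketch below assumes the widely cited unaids 90-90-90 figures that define this target.

# unaids 90-90-90 cascade: the fraction of all infected individuals who
# end up virally suppressed (effectively non-infectious) is the product
# of the three conditional coverage targets
diagnosed, on_treatment, suppressed = 0.90, 0.90, 0.90
effective = diagnosed * on_treatment * suppressed
print(f"fraction effectively non-infectious: {effective:.3f}")  # 0.729

even with the target fully met, roughly a quarter of infected individuals remain infectious.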
at endemic equilibrium, individuals with higher risk are predominantly infected (risk panels on the right, where red vertical lines mark mean baseline risk among individuals who eventually became infected), resulting in reduced mean risk among those who remain uninfected (black vertical lines). to compensate for this selection effect, heterogeneous models require a higher ! to attain the same endemic prevalence (a, b, c). interventions that reduce infection also reduce selection pressure, which unintendedly increases mean risk in the uninfected poll and in heterogeneous models, ( ) is a probability density function with mean and variance , and 〈 # 〉 denotes the th -moment of the distribution. gamma distributions were used for concreteness. figure shows that heterogeneous models that account for wide biological and social variation require higher basic reproduction numbers ( ! ) to reach a given endemic level and predict less impact for control efforts when compared with the homogeneous counterpart model. this holds true regardless of whether heterogeneity affects susceptibility or connectivity. at endemic equilibrium, individuals at higher risk are predominantly infected (red distributions have mean greater than one as marked by the red vertical lines), and hence those who remain uninfected are individuals with lower risk (blue and green distributions have mean lower than one as marked by the black vertical lines). thus, the mean risk in the uninfected but susceptible subpopulation decreases, and the epidemic decelerates (thin blue and green curves); higher values of ! are consequently required if the heterogeneous models are to attain the same endemic level as the homogeneous formulation (heavy blue and green curves). finally, interventions are less impactful under heterogeneity because ! is implicitly higher. indeed, these biases could help explain trends in hiv incidence data which lag substantially behind targets informed by model predictions, even in settings that have reached the - - implementation targets , . a novel severe acute respiratory syndrome coronavirus (sars-cov- ) isolated at the end of from a patient in china has spread worldwide causing the covid- pandemic, despite intensive measures to contain the outbreak at the source. countrywide epidemics have been extensively analysed and modelled throughout the world. initial studies projected attack rates of around % if transmission had been left unmitigated , while subsequent reports noted that individual variation in susceptibility or exposure to infection might reduce these estimates substantially risk distributions are simulated in three scenarios: homogeneous (black); distributed susceptibility to infection with variance (blue); distributed connectivity with variance (green). left panels represent distributions of potential individual risk prior to the outbreak, with vertical lines marking mean risk values ( in all cases). as the epidemic progresses, individuals with higher risk are predominantly infected, depleting the susceptible pool in a selective manner and decelerating the epidemic. the inset overlays the three epidemic curves scaled to the same height to facilitate shape comparison. 
right panels show in red the risk distributions among individuals who have been infected over months of epidemic spread (mean greater than one when risk is heterogeneous, as marked by red vertical lines) and the reduced mean risk among those who remain uninfected. in heterogeneous models, the risk factor is drawn from a probability density function with mean one and the stated variance, and 〈x^n〉 denotes the nth moment of the distribution. gamma distributions were used for concreteness. as models inform policies, we cannot but stress the importance of representing individual variation pragmatically. while much is being discovered about sars-cov- and its interaction with human hosts, epidemic curves are widely available from locations where the virus has been circulating. models can be constructed with inbuilt risk distributions whose shape can be inferred by assessing their ability to mould simulated trajectories to observed epidemics while accounting for realistic social distancing interventions . variation in infectiousness was critical to attribute the scarce and explosive outbreaks to superspreaders when the first sars emerged in , but what we are discussing here is different. infectiousness does not respond to selection as susceptibility or connectivity do, i.e. models with and without variation in infectiousness perform equivalently when implemented deterministically and only differ through stochastic processes. the need to account for heterogeneity in risk to acquire infections is not restricted to aids and covid- but is generally applicable across infectious disease epidemiology models. moreover, similar issues arise in methods intended to evaluate the efficacy of interventions from experimental studies as illustrated for vaccines in the sequel. individual variation in susceptibility to infection induces biases in cohort studies and clinical trials. vaccine efficacy trials offer a useful illustration of the problem and give insight into the potential solution. in a vaccine trial, two groups of individuals are randomised to receive a vaccine or placebo and disease occurrences are recorded in each group. as disease affects predominantly higher-risk individuals, the mean risk among those who remain unaffected decreases and disease incidence declines. in the vaccine group the same trend will occur at a slower pace (presuming that the vaccine protects to some degree). as a result, the two randomised groups become different over time with more highly susceptible individuals remaining in the vaccine group. the vaccine efficacy, described as a ratio of cases in the vaccinated compared to the control group, therefore appears to wane (figure ) , . this effect will be stronger in settings where transmission intensity is higher, inducing a trend of seemingly declining efficacy with disease burden . the concept is illustrated in figure by simulating a vaccine trial with heterogeneous and homogeneous models analogous to those utilised in figures and . selection on individual variation in disease susceptibility thus offers an explanation for vaccine efficacy trends that is entirely based on population-level heterogeneity, in contrast with waning vaccine-induced immunity, an individual-level effect . as both processes may occur concurrently in a trial, it is important to disentangle their roles, as they lead to different interpretations of the same incidence trend. for example, vaccine efficacy might wane in all individuals, or it might be constant for each individual but decline at the population level due to selection on individual variation.
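a minimal sketch of the kind of transmission model behind these figures is given below, assuming gamma-distributed relative susceptibility discretised into classes; the basic reproduction number, recovery rate and variance values are illustrative assumptions, and the sketch does not reproduce the paper's exact equations or the connectivity variant.

import numpy as np

def sir_heterogeneous(r0=3.0, var=3.0, n_classes=200, days=365, dt=0.1):
    # sir model in which susceptibles in class x are infected at rate
    # x * beta * i; x is gamma distributed with mean 1 and variance var,
    # so higher-risk classes are depleted first and the epidemic
    # decelerates relative to the homogeneous case (var = 0)
    gamma_rec = 0.1                  # recovery rate (10-day infectious period)
    beta = r0 * gamma_rec
    if var > 0:
        x = np.random.default_rng(2).gamma(1 / var, var, n_classes)
    else:
        x = np.ones(n_classes)
    s = np.full(n_classes, 1 / n_classes)   # susceptibles split over classes
    i = 1e-4
    s *= 1 - i
    prevalence = []
    for _ in range(int(days / dt)):
        new_inf = beta * i * x * s * dt     # class-specific incidence
        s -= new_inf
        i += new_inf.sum() - gamma_rec * i * dt
        prevalence.append(i)
    return np.array(prevalence)

# heterogeneity lowers and delays the epidemic peak
for var in (0.0, 3.0):
    curve = sir_heterogeneous(var=var)
    print(f"susceptibility variance {var}: peak prevalence {curve.max():.3f}")

the same selective depletion of high-susceptibility classes is what makes incidence fall faster in a placebo arm than in a vaccine arm, producing the apparent waning of efficacy described above.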
to disentangle these two explanations in a timely manner requires multicentre trial designs with sites carefully selected over a gradient of transmission intensities (e.g. optimally spaced along the incidence axis in figure c, f), and analyses performed by fitting curves generated by models that incorporate individual heterogeneity. an alternative and more tightly controlled approach would be to use experimental designs in human infection challenge studies where these are available to generate dose-response curves and apply similar models. these approaches have recently been successfully tested in animal systems . heterogeneities in predispositions to infection depend on the mode of transmission but play a role in all high-burden infectious diseases. in respiratory infections, heterogeneity may arise from a variation in exposure of the susceptible host to the pathogen, or the competence of host immune systems to control pathogenic viruses or bacteria. these two processes have multiple components. the mechanisms underpinning single factors for infection and their interactions determine individual propensities to acquire disease. these are potentially so numerous that to attain a full mechanistic description may be unfeasible. even in the unlikely scenario that a list of all putative factors may be available, the measurement of effect sizes would be subject to selection within cohorts resulting in underestimated variances . to contribute constructively to the development of health policies, model building involves compromises between leaving factors out (reductionism) and adopting a broader but coarse description (holism). holistic descriptions of heterogeneity are currently underutilised in infectious diseases. recently, measures of statistical dispersion commonly used in economics have been adapted to describe risk inequality in cancer , tuberculosis and malaria , offering a holistic approach to improve the predictive capacity of disease models. essentially, this involves stratifying the population into groups of individuals with similar risk, which may be as granular as individual level for frequent diseases, such as malaria or influenza. for infectious diseases which cluster by proximity, such as tuberculosis, stratification can use geographical units. familial relatedness pertains when there is a clear genetic contribution to risk, such as cancer. by recording disease events in each group, specific incidence rates can be calculated and ranked. unknown distributions of individual risk are then embedded in dynamic models and estimated by fitting the models to the stratified data. because they incorporate explicit distributions of individual risk, these models automatically adjust average risks in susceptible pools to changes in transmission intensity, occurring naturally or in response to interventions. not subject to the selection biases described above, this model approach inherently enables more accurate impact predictions for use in policy development. there is compelling evidence that epidemiologists could use indicators that account for the whole variation in disease risk. heterogeneity is unlimited in real-world systems and cannot be completely reconstructed mechanistically.
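as an illustration of the kind of dispersion measure borrowed from economics, the sketch below computes a gini coefficient over group-specific incidence rates; the stratified rates are invented for the example, and the full approach (embedding an unknown risk distribution in a dynamic model and fitting it to stratified data) is not reproduced here.

import numpy as np

def gini(rates):
    # gini coefficient of group-specific incidence rates: 0 when every
    # group shares the same rate, approaching 1 when disease is
    # concentrated in a few groups
    r = np.asarray(rates, dtype=float)
    mean_abs_diff = np.abs(r[:, None] - r[None, :]).mean()
    return mean_abs_diff / (2 * r.mean())

# invented example: incidence per 1,000 person-years in five ranked strata
print(gini([5, 5, 5, 5, 5]))             # 0.0, no risk inequality
print(round(gini([1, 1, 2, 4, 17]), 2))  # 0.56, risk concentrated in one stratum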
inspired by established practices in demography and economics and supported by successful applications in both infectious and non-communicable diseases, the use and further development of these approaches offers a powerful route to build disease models that enable more accurate estimates of intervention efficacy and more accurate predictions of the impact of control programmes.
an application of the theory of probabilities to the study of a priori pathometry, part i
modeling infectious disease dynamics in the complex landscape of global health
is the unaids target sufficient for hiv control in botswana?
joint united nations programme on hiv/aids (unaids), global aids update
elimination of lymphatic filariasis in south east asia
herd immunity thresholds for sars-cov- estimated from unfolding epidemics
impact of heterogeneity in individual frailty on the dynamics of mortality
a preliminary study of the transmission dynamics of the human immunodeficiency virus (hiv), the causative agent of aids
heterogeneities in the transmission of infectious agents: implications for the design of control programs
networks and epidemic models
transmission network parameters estimated from hiv sequences for a nationwide epidemic
reassessment of hiv- acute phase infectivity: accounting for heterogeneity and study design with simulated cohorts
impact of non-pharmaceutical interventions (npis) to reduce covid- mortality and healthcare demand (imperial college covid- response team)
individual variation in susceptibility or exposure to sars-cov- lowers the herd immunity threshold
a mathematical model reveals the influence of population heterogeneity on herd immunity to sars-cov-
superspreading and the effect of individual variation on disease emergence
estimability and interpretability of vaccine efficacy using frailty mixing models
apparent declining efficacy in randomized trials: examples of the thai rv hiv vaccine and caprisa microbicide trials
clinical trials: the mathematics of falling vaccine efficacy with rising disease incidence
seven-year efficacy of rts,s/as malaria vaccine among young african children
design, recruitment, and microbiological considerations in human challenge studies
vaccine effects on heterogeneity in susceptibility and implications for population health management
understanding variation in disease risk: the elusive concept of frailty
inequality in genetic cancer risk suggests bad genes rather than bad luck
introducing risk inequality metrics in tuberculosis policy development
modelling the epidemiology of residual plasmodium vivax malaria in a heterogeneous host population: a case study in the amazon basin
key: cord- -ny lj authors: vese, donato title: managing the pandemic: the italian strategy for fighting covid- and the challenge of sharing administrative powers date: - - journal: nan doi: . /err. . sha: doc_id: cord_uid: ny lj this article analyses the administrative measures and, more specifically, the administrative strategy implemented in the immediacy of the emergency by the italian government in order to determine whether it was effective in managing the covid- pandemic throughout the country. in analysing the administrative strategy, the article emphasises the role that the current system of constitutional separation of powers plays in emergency management and how this system can impact health risk assessment.
an explanation of the risk management system in italian and european union (eu) law is provided and the following key legal issues are addressed: (1) the notion and features of emergency risk regulation from a pandemic perspective, distinguishing between risk and emergency; (2) the potential and limits of the precautionary principle in eu law; and (3) the italian constitutional scenario with respect to the main provisions regulating central government, regional and local powers. specifically, this article argues that the administrative strategy for effectively implementing emergency risk regulation based on an adequate and correct risk assessment requires "power sharing" across the different levels of government with the participation of all of the institutional actors involved in the decision-making process: government, regions and local authorities.
"and the flames of the tripods expired. and darkness and decay and the red death held illimitable dominion over all". edgar allan poe, the masque of the red death, complete tales and poems (new york, vintage books).
in january , the who declared the outbreak of the novel coronavirus a "public health emergency of international concern" (pheic) (who, "statement on the second meeting of the international health regulations ( ) emergency committee regarding the outbreak of novel coronavirus ( -ncov)", geneva, switzerland, january ). a pheic has been defined in the international health regulations (ihr) of as an extraordinary event which can: (1) constitute a public health risk to other states through the international spread of disease; and (2) potentially require a coordinated international response. furthermore, this definition implies a situation that is: (1) serious, unusual or unexpected; (2) carries implications for public health beyond the affected state's national borders; and (3) may require immediate international action. in the light of its later levels of spread and severity worldwide, the who then assessed covid- as a "pandemic" (who, "director-general's opening remarks at the media briefing on covid- ", march ). the pandemic has spread rapidly in several european union (eu) member states. italy, however, is a special case: here, the covid- outbreak spiralled upwards earlier and more severely than elsewhere in europe, reaching a high mortality rate and creating the conditions for the public healthcare system's collapse. in this scenario, the italian government (from now on the government) declared a nationwide state of emergency (resolution of the council of ministers of january , adopted pursuant to legislative decree / (civil protection code); on the declaration of emergency rule, see european commission for democracy through law (venice commission)), followed by increasingly restrictive measures aimed at slowing and containing the spread of the virus and mitigating the pandemic's effects under the by now well-known "flatten the curve" imperative. the last of these measures (dpcm of march ) established the national lockdown, extending the emergency rules to the entire country for six months and, more generally, providing what has been called the "italian model to fight covid- ", namely to "diminish viral contagions through quarantine; increase the capacity of medical facilities; and adopt social and financial recovery packages to address the pandemic-induced economic crisis" (fg nicola, "exporting the italian model to fight covid- " (the regulatory review, april )). in this article, starting from the main regulatory acts and considering recent scientific knowledge and epidemiological data on covid- , we will examine the administrative measures the government has taken and the strategy it has implemented to deal with the pandemic in the immediacy of the emergency. after this initial analysis, we might legitimately wonder whether those measures and that strategy have proven effective in containing the pandemic. more generally, by analysing the administrative strategy, the article emphasises the role that the current system of constitutional separation of powers plays in emergency management and how this system can impact health risk assessment. an explanation of the risk-management system in italian and eu law will be provided and the following key legal issues will be analysed: (1) the notion and features of emergency risk regulation from a pandemic perspective, distinguishing between risk and emergency; (2) the potential and limits of the precautionary principle in eu law; and (3) the italian constitutional scenario with respect to the main provisions regulating central government, regional and local powers. specifically, this article argues that the administrative strategy for effectively implementing emergency risk regulation based on an adequate and correct risk assessment requires "power sharing" across the different levels of government with the participation of all of the institutional actors involved in the decision-making process: government, regions and local authorities. following the declaration of the state of emergency, the government approved decree-law no. of february vesting the president of the council of ministers with wide ordinance powers to handle the emergency by issuing his own administrative decrees. in particular, decree-law / gave the prime minister the power to issue typical emergency administrative measures in order to ensure social distancing, impose lockdown areas, close offices and public services and suspend economic activities. in addition, it allowed him to adopt atypical administrative powers whereby "further containment and emergency management measures" could be established. in a matter of days, the government approved three important regulatory acts implementing decree-law / : the decree of the president of the council of ministers (dpcm) of march , the dpcm of march and the dpcm of march, through which it established stringent emergency administrative measures to curb the pandemic's spread throughout the country. in the first instance, these measures were gradual and concerned specific municipalities, provinces or regions (especially in northern italy) that were hardest hit by the virus and therefore classified as "red zones" subject to government-imposed local lockdowns. later on, the government established the national lockdown, and emergency measures were extended to the entire country for six months. in particular, pursuant to article ( ) of the dpcm of march , the government imposed a lockdown in lombardy and another fourteen provinces of northern italy. in doing so, the government introduced several legal prohibitions, such as the ban on people travelling to and from places in the red zones.
with the subsequent national lockdown, the government imposed a travel ban across the entire country pursuant to article ( ) of the dpcm of march , and prevented all forms of social gathering in public places or places open to the public across the country pursuant to article ( ) of the dpcm of march . furthermore, pursuant to articles ( ), ( ) and ( ) of the dpcm of march , retail businesses and personal services were suspended. as a consequence of the national lockdown, the ministry of health's order of march provided several stringent measures prohibiting many activities, such as accessing all public places, exercising in public places and going to holiday homes. in addition, with its order of march , the ministry of health, in agreement with the ministry of transport, established that people entering italy by plane, boat, rail or road must declare their reason for travel, the address where they plan to self-isolate, how they intend to travel there and their phone number, so that the authorities can contact them throughout an obligatory fourteen-day quarantine. moreover, several administrative sanctions were gradually established in the various regulatory acts, the last of which introduced rigorous sanctions for people who leave home without valid reasons and for undertakings that do not comply with the order to close. in the meantime, the regions and local authorities also adopted several ordinances establishing emergency administrative measures for the pandemic in their areas. lastly, the government issued decree-law no. of march , with the aim of rationalising and coordinating emergency powers among the different levels of government (in particular, art ( ) of decree-law / did not affect the effects produced and the acts adopted on the basis of decrees and ordinances issued pursuant to decree-law / or art of law / , and established that the measures previously adopted by the dpcms of march , march , march and march , as still in force on the date of entry into force of the said decree-law, would continue to apply within their original terms).
***
in the following pages, emphasising the role that the current structure of constitutional separation of powers plays in risk assessment, i will argue that the main problems of the italian administrative strategy for the covid- pandemic are due to the lack of effective "sharing of powers", and more specifically to the failure to share administrative
indeed, adopting different administrative strategies at different levels of government might increase the effectiveness of the response to a pandemic, but these measures must be shared among all of the actors involved in emergency management. sharing powers, measures and local strategies will be useful for an effective policy for containing the virus's nationwide spread based on an overall risk assessment. hence, the idea of shared powers emphasises the role of cooperation in specific institutional contexts, such as italy's, where competences are allocated across the different levels of government. the sense, more generally, is that sharing powers in multi-level systems enables states to perform better in terms of democracy, as powers are balanced between state and local levels. as we will see, however, the absence of effective power sharing at all levels of government in a pandemic can produce serious problems in correctly assessing risk and consequently in the emergency management strategy. in particular, i will discuss the problem of the lack of effective power sharing in italian policies from two key points of view: the government's administrative strategy in addressing the virus's spread by means of an "incremental approach" (section iv. .a); and the government's administrative strategy in implementing a national pandemic health plan (section iv. .b). before doing so, i will outline some key legal issues for the topics examined in this article. in particular, to put the administrative strategy devised by the government in the covid- emergency into context, i will analyse: ( ) the notion and features of emergency risk regulation from a pandemic perspective, distinguishing between risk and emergency; ( ) the potential and limits of the precautionary principle in eu law; and ( ) the italian constitutional scenario with respect to the main provisions governing government's, regions' and local authorities' powers. this preliminary analysis of key legal issues is useful for understanding why the administrative strategy has proven ineffective in managing the pandemic (sections iv. .a and iv. .b). placing the notion and its main features in the context of a pandemic, we could define emergency risk regulation as the action undertaken in the immediacy of a pandemic in order to mitigate its impact. from this perspective, we should bear in mind the distinction between risk and emergency. generally speaking, the traditional approach of administrative law refers to the notion of emergency and not also to the notion of risk, which legal doctrine touches on only marginally. with regards to the emergency, as a safeguard clause to deal flexibly with pandemic risks, governments and other public authorities may invoke the use of extraordinary powers to restore the normal course of legal relations. what is more, regulators have used emergency tools to act in the expectation of a risk for many years, although there is no denying that a risk is a potential danger, whereas an emergency is an actual danger. indeed, it should be sufficiently clear that emergency power is ineffective when applied in a situation that is only potentially dangerous. in this connection, it has been argued that the methods of exercising administrative powers can be better regulated by putting the administrative regulation in the category of risk rather than that of emergency. 
we might observe that if the notion of "risk" characterises a peculiar, intermediate state between security and destruction, in "emergency risk" the balance between these two clearly tilts towards the latter. in fact, as it is triggered by a pandemic, emergency risk regulation presupposes the existence, or the mere threat, of a pandemic. the pandemic, as a possible cause of disaster for humans, is an event of substantial extent causing significant physical damage or destruction, loss of life or drastic change to the natural environment (a alemanno (ed.), governing disasters: the challenges of emergency risk regulation (cheltenham, edward elgar) pp xix and xxii; the notion of risk in italian administrative law is analysed by m simoncini, la regolazione del rischio e il sistema degli standard. elementi per una teoria dell'azione amministrativa attraverso i casi del terrorismo e dell'ambiente [risk regulation and the standards system. elements for a theory of administrative action through the cases of terrorism and the environment] (napoli, editoriale scientifica), where the author, postulating the notion of risk, argues for and suggests, in an innovative approach, the transition from the "emergency" perspective to the "risk regulation" perspective; beck is responsible for analysing the sociopolitical dimension of risk management, and in particular the problem of the relationship between science and society, through his criticism of the monopoly that scientific rationality currently holds). typically, one speaks of a pandemic when a threat to people's health is perceived that calls for urgent remedial action under conditions of uncertainty. fundamentally, emergency risk regulation in a pandemic event, as in other disasters, finds its natural regulatory space in two stages: mitigation and emergency response. in principle, mitigation efforts attempt to reduce the potential impact of a pandemic before it strikes, while a pandemic response tends to do so after the event. however, the distinction between emergency mitigation and emergency response is not always very sharp. when called upon to act under the menace of a pandemic, governments must both mitigate and respond to the threat in a situation characterised by suddenness (emergency) and significance. in a pandemic, emergency risk regulation is clearly called on to operate in the initial phase of the disease's spread, when the mere threat overshadows the regulatory context by virtue of its status as an emergency. accordingly, the most cost-effective strategies for increasing pandemic preparedness with administrative regulation, especially in resource-constrained settings, may consist of: (1) investing to reinforce the main public health infrastructure; (2) increasing situational awareness; and (3) quickly containing further outbreaks that could extend the pandemic. in addition, especially once the pandemic has begun, a coordinated response should be implemented in which the public regulator focuses on: (1) maintaining situational awareness; (2) public health messaging; (3) reducing disease transmission; and (4) care and treatment of the ill. successful contingency planning and an administrative strategy using the emergency risk regulation approach call for surge capacity, or in other words the ability to scale up the delivery of health interventions in proportion to the severity of the event, the pathogen and the population at risk.
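as a back-of-the-envelope illustration of why surge capacity matters, the sketch below compares exponentially growing demand for critical care with several fixed capacities; every number in it, from the doubling time to the bed counts, is a hypothetical placeholder rather than real data.

    # hypothetical illustration of surge capacity: exponentially growing demand
    # for critical care exhausts a fixed capacity quickly, and scaling capacity
    # up buys limited time unless transmission itself is reduced.
    # all numbers are invented placeholders, not italian data.
    initial_cases = 100      # assumed current symptomatic cases
    doubling_days = 4.0      # assumed epidemic doubling time in days
    icu_fraction = 0.05      # assumed fraction of cases needing critical care

    def days_until_saturation(beds):
        """first day on which critical care demand exceeds the given capacity."""
        day = 0
        while initial_cases * 2 ** (day / doubling_days) * icu_fraction <= beds:
            day += 1
        return day

    for beds in (500, 1000, 2000):
        print(f"{beds} beds saturated on day {days_until_saturation(beds)}")

on these toy numbers, each doubling of capacity buys only one further doubling time of the epidemic (here four days), which is one way of seeing why preparedness guidance pairs surge capacity with measures that reduce transmission.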
the pandemic may produce a significant impact on the regulatory context by justifying the partial or total suspension of the ordinary decision-making process. departures from the rule of law, or simply from established procedures, are generally perceived as necessary if the event has met the significance threshold. however, the use of emergency administrative measures, such as temporary and exceptional measures, should be considered legitimate only for the period in which the pandemic lasts (alemanno, supra, xxii-xxiii; see also dd caron, "addressing catastrophes: conflicting images of solidarity and self interest" in dd caron and ch leben (eds)). by contrast, prolonging the exceptional order beyond the time of the pandemic means that powers and measures designed to be temporary will be made permanent, intensifying the controlling authority's capacity, even though this might limit the enjoyment of individual rights. in addition, if the general need to prevent a pandemic cannot be ignored, it should be thought of as an opportunity for risk regulation to prevent not only the sudden impact of a pandemic situation, but also any distorting effects or mishandling of the necessary recourse to emergency powers. consequently, it might now be inferred that emergency risk regulation in the context of a pandemic is a relevant regulatory methodology that combines the risk approach with the possibility of resorting to extraordinary measures should a pandemic occur. this methodology is essential for an effective administrative strategy for dealing with a pandemic because it permits constant monitoring and management of risks that can have serious consequences for society. by assessing the risks and taking proportionate measures, the negative effects of the emergency can be reduced and the use of emergency powers can be limited. indeed, it should be pointed out that the principle of reasonableness, which is generally invoked in the exercise of emergency powers against immediate danger, does not operate in emergency risk regulation. instead, as i will claim later, it is the precautionary principle that matters (section iii. ). furthermore, it must be said that emergency risk regulation entails an accurate assessment of the factual situation based on scientific evidence. to apply this methodology correctly, a variety of factors must be considered (including the real level of the threat as well as how people perceive it) in a step-by-step analysis based on the available scientific knowledge. in particular, as i will claim in analysing the italian policies (sections iv. .a and iv. .b), the administrative strategy for effectively implementing emergency risk regulation in a pandemic requires power sharing across the different levels of government, with the participation of all of the institutional actors involved in the decision-making process, in order to adopt consistent measures based on the constant monitoring and updating of the nationwide epidemiological risk assessment. hence, effective sharing of administrative powers (and more specifically of the administrative regulatory powers for emergencies) between the government, regions and local authorities would optimise the adoption of proportionate measures for controlling and containing the virus throughout the country, avoiding or at least delaying the application of stringent measures such as the lockdown of municipalities, provinces, regions or entire states (g martinico and m simoncini, "emergency and risk in comparative public law" (verfassungsblog, may )).
according to the authors, it is the facts and not the law that indicate the conclusion of an emergency. thus, the risks posed by the use of extraordinary administrative measures should be considered, especially at the end of the emergency, when the government's powers should be subject to legal control in order to avoid departures from the original objectives (in the same sense, see also simoncini, supra; on the state of exception, see c schmitt, die diktatur: von den anfängen des modernen souveränitätsgedankens bis zum proletarischen klassenkampf [the dictatorship: from the origins of the modern concept of sovereignty to the proletarian class struggle] (berlin, duncker & humblot); schmitt's jurisprudential thinking placed the state of exception at the very centre of analysis, beginning with his work on the roman dictatorship; martinico and simoncini, supra). in managing the pandemic, the government's administrative strategy should take the emergency risk regulation methodology we have just outlined into account. in the eu legal system, the precautionary principle is described in article ( ) tfeu on environmental policy. the jurisprudence of the european court of justice (ecj) has played a prominent role in elevating the precautionary principle to the status of a general principle of eu law, and some ecj judgments in health matters are seminal in this regard. according to the ecj's jurisprudence, the precautionary principle requires that competent authorities adopt appropriate administrative measures to prevent specific potential health risks. the ecj's approach maintains that an appropriate application of the precautionary principle presupposes the identification of hypothetically harmful effects for health flowing from the contested administrative measure, combined with a comprehensive assessment of the risks to health based on the most reliable scientific data available. in like manner, the european commission (ec) has contributed significantly to outlining the features of the precautionary principle in the eu legal system. in its communication of , the ec sought to establish a common understanding of the factors leading to recourse to the precautionary principle and of its place in decision-making. according to the ec communication, the principle covers those circumstances where scientific evidence is insufficient, inconclusive or uncertain, but where preliminary scientific evaluation provides reasonable grounds for concern that the potentially dangerous effects on human health might be inconsistent with the chosen level of protection. various factors can trigger the adoption of precautionary measures. these factors inform the decision on whether or not to act, this being an eminently political decision and a function of the risk level that is "acceptable" to the society on which the risk is imposed. the ec has also established guidelines for those situations where action based on the precautionary principle is deemed necessary in order to manage risk. in these situations, a cost-benefit analysis comparing the likely positive and negative effects of the envisaged action and of inaction is recommended, and it should also include non-economic considerations. however, risk management in accordance with the precautionary principle should be proportionate, meaning that administrative measures should be proportional to the desired level of protection. in some cases, an administrative response that imposes a total ban may not be proportional to a potential risk; in others, it may be the only possible response.
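purely to make the recommended comparison concrete, the sketch below encodes the textbook expected-cost test (act when the expected harm avoided, plus a weight standing in for non-economic considerations, exceeds the cost of the measure); the function and every figure are hypothetical illustrations, not drawn from the ec communication.

    # hypothetical sketch of the cost-benefit comparison recommended above: act
    # when the expected cost of inaction (probability x harm, plus a weight for
    # non-economic considerations) exceeds the cost of the measure itself.
    # the function name and all figures are illustrative assumptions.
    def should_act(p_harm, harm_cost, measure_cost, non_economic_weight=0.0):
        expected_harm = p_harm * harm_cost + non_economic_weight
        return expected_harm > measure_cost

    # under scientific uncertainty the probability is not known, so sweep it:
    # the threshold at which action becomes justified makes proportionality concrete.
    for p in (0.001, 0.01, 0.1):
        print(f"p={p}: act = {should_act(p, harm_cost=1000, measure_cost=5)}")

on this toy reading, a total ban (a high measure cost) is justified only when the assessed probability or the weight given to non-economic harm is large, which is one way of reading the proportionality requirement.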
in any case, such measures should be reassessed in the light of recent scientific data and changed if necessary. in eu law, therefore, the precautionary principle has been widely recognised as a defining principle of risk regulation, alongside the regulatory aim of a high level of protection. nevertheless, this principle might prove ineffective or even harmful if applied in a "strong" form. the strong form of the principle has been authoritatively criticised on the grounds that it suggests that regulation is required whenever there is a potential risk to health, even if the supporting evidence is conjectural and the economic costs of administrative regulation are high. in particular, if governments adopt the strong form of the principle, it would always require regulating activities (consequently imposing a burden of proof each time), even if it cannot be demonstrated that those activities are likely to cause harm. in addition, as the need for selectivity of precautions is not simply an empirical fact but a conceptual inevitability, no society can be highly precautionary with respect to all risks. hence, in this strong form, the precautionary principle proves ineffective and even harmful by requiring stringent administrative measures that can be paralysing, in that they forbid all courses of action, regulation and inaction alike. thus conceived, the principle does not lead in any direction or provide precise guidance for governments and regulators. recently, the limits of the precautionary principle have been analysed in the field of administrative and constitutional law. an interesting recent work proposes precautionary and optimising constitutionalism as a dichotomy. in summary, the theory advances two distinct propositions. the first is that constitutions should be viewed as devices for regulating political risks. those political risks are referred to as "second-order risks", as opposed to "first-order risks" such as wars, diseases and other social ills. many of these risks are described as "fat-tail risks" that are exceedingly unlikely to materialise, but more likely than in a normal distribution, and exceedingly damaging if they do materialise, as in the case of a pandemic. under "maximin constitutional" approaches, it is suggested that precautionary rules can overcompensate for these low-likelihood risks and even cause the very dangers that they seek to prevent. hence, precautionary constitutionalism is myopic in focusing on certain risks, and the notion of unappreciated or unaccommodated risks is central. on the basis of this hypothesis, the best way to regulate risk is thus to avoid obsessive views of risk avoidance or precaution and instead to allow greater flexibility in addressing the full array of risks inherent in government. what vermeule calls "optimising constitutionalism" is an answer to those who frame their understanding of the constitution along more rigid precautionary principles. vermeule's approach has been criticised, and following these criticisms i believe that it also reveals some critical points about the notion of risk. unless one adopts a more fungible notion of risk, i do not believe that "precautionary constitutionalism" is suboptimal for risk: it depends on how one weighs the risks involved in governing, even if one accepts risk analysis as the best measure of the success of a constitutional system.
i claim, more generally, that correctly applying the precautionary principle, although it works better in a context of risk than in one of emergency, is nonetheless important in managing a pandemic because it makes it possible to delay the implementation of stringent emergency measures. we have emphasised that administrative precautionary measures, unlike emergency ones, do not suspend the rule of law, since they activate soft government regulation that does not jeopardise fundamental rights concurrent with those threatened by imminent danger. hence, in my opinion, precautionary measures, where they are effectively shared across the different levels of government through appropriate risk assessment, would serve to avoid, or at least delay, governments' activation of a state of emergency. activating a state of emergency, by contrast, triggers hard government regulation through emergency measures that suspend the rule of law and therefore jeopardise fundamental rights. in a particular context such as the covid- pandemic, the precautionary principle could also be invoked (and the implementation of precautionary administrative measures would be useful) in the presence of an emergency declaration issued by governments. in this sense, i argue that the declaration of a state of emergency for a pandemic is based on a technical risk assessment (ie technical discretion; italian legal doctrine distinguishes between "administrative discretion" and "technical discretion" under the influence of ms giannini, il potere discrezionale della pubblica amministrazione [the discretionary power of the public administration]) by the administration (eg the government). in a pandemic, then, the emergency relates essentially to the capacity of administrations (eg governments, health authorities) to manage cases requiring healthcare (eg intensive care for respiratory support, hospitalisations for advanced pharmacological treatments and so on). thus, the subject of the technical assessment of the fact (the pandemic) is provided by the evaluation of the administration's capacity to fulfil the tasks established by the legal system to protect the right to health enshrined in article of the italian constitution (section iv. ). furthermore, to be effective in emergencies such as a pandemic, the notion of the principle to which i refer should not entail the activation of precautionary measures typical of its strong version (exemplified in the well-known phrase "better safe than sorry"). in its strong version, in fact, the precautionary principle would be both paralysing and uneconomical, since it requires that any and all risks be prevented, even those that are least likely to occur or that have been created artificially for
to be cost effective, governments should take precautionary administrative measures based on scientific knowledge and thus carefully assess the risks they intend to manage. taking the potential and limits of the precautionary principle from the perspective we have outlined above into account might have an impact on governments' ability to deal effectively with pandemic emergencies. this matters in the case of italy, where the current structure of the constitutional separation of powers between the government, regions and autonomous local authorities plays a crucial role in effectively managing the pandemic emergency. analysing the italian constitutional scenario can provide substantial guidance for understanding the legal structure of powers and competences of government, regions and local authorities and explain why assessing pandemic risk can be impacted by a given separation of powers. such an analysis can shed light on the administrative strategy implemented by the government in the pandemic and enable us to evaluate its effectiveness in managing covid- across the country. first of all, we should bear in mind that the italian constitution (from now on the constitution) does not explicitly refer to emergency power, except for a state of war (article ). however, this power has traditionally been included in the typical powers that the constitution assigns to the government. in the constitutional system, the main rules governing the government's powers are established by articles and . indeed, parliament does not have a monopoly on legislative power, and the government may also issue laws by two legal instruments that should be understood as extraordinary: legislative decree and decree-law. in particular, article allows parliament to delegate its legislative power to the government, which in turn is given the power to issue legislative decrees. hence, the legislative decree is a form of delegated law-making power, where parliament may pass an enabling act entrusting the government to adopt one or more acts that have legal force. generally, the legislative decree is a legislative tool that is often deployed in all matters where a strong technical content is present. the second extraordinary instrument, the decree-law, is provided for by article . this is a form of law-making through emergency powers that the government may exercise in "exceptional cases of necessity and urgency" and under "its own responsibility". the government can thus issuewithout an enabling act from parliament as required by the provisions of article administrative measures that have the force of ordinary laws. however, such administrative measures will lose their effects as of the date of issue if parliament does not transpose them into an ordinary law within sixty days of their publication. with the major reform on "administrative federalism" enacted by law no. of october , which amended title v of the constitution, italy rapidly devolved legislative and regulatory powers to the regions. fundamentally, the constitutional amendment provided a new framework for the distribution of powers and competences between the national and local levels. it established a new institutional structure by dividing legislative and administrative competences and powers across the different levels of government. the amended articles of the constitution are the basis for the fundamental reform of administrative federalism. 
article recognises local authorities (municipalities, provinces, metropolitan cities) and regions as autonomous entities of the state with their own statutes, powers and functions in accordance with the principles laid down in the constitution. article establishes the role and legislative powers of the state and the regions, indicating those matters for which the state has exclusive legislative power and those for which concurrent legislation of both the state and the regions is possible. the regions have exclusive power in all matters not expressly covered by state law. municipalities, provinces and metropolitan cities also have regulatory powers for the organisation and implementation of the functions attributed to them. specifically, article ( ) establishes that the state and the regions have concurrent power, and the regions have regulatory powers, in matters of public health. in this connection, at the national level, parliament and the government are called upon to: (1) adopt fundamental health principles by means of framework laws and guidelines; and (2) establish essential levels of healthcare. at the regional level, the regions implement: (1) general legislative and administrative activity; (2) the organisation of health facilities and services; and (3) the provision of healthcare based on specific local needs. article provides for the subsidiarity principle, according to which all functions are exercised by municipalities, while the possibility remains to confer them on higher levels of government in order to guarantee the uniform implementation of spending functions across the country. article guarantees national unity and the unitary nature of the constitutional system by providing for the government's substitution power. according to article ( ), the government can act for the regions and other local authorities if: (1) the latter fail to comply with international rules and treaties or eu legislation; (2) there is grave danger for public safety and security; or (3) such action is necessary to preserve legal or economic unity, and in particular to guarantee the basic level of benefits relating to civil and social entitlements, regardless of the geographical borders of local authorities. to this end, the law shall lay down the procedures to ensure that subsidiary powers (ie the government's substitution power) are exercised in compliance with the principles of "subsidiarity" and "loyal cooperation". lastly, with regard to powers and competences in emergencies, it should be noted that in the italian legal system several authorities can introduce specific regulatory acts establishing the administrative measures needed to deal with emergencies in accordance with the constitution. the power of ordinance has a particular role in managing emergencies, as it can be exercised in situations of necessity and urgency. as we will see, the structure of power just described highlights the problem of risk assessment among the institutional actors involved in the administrative decision-making process.
though the current system of allocation of powers and competences to the regions and other local authorities might be an advantage in terms of correctly assessing and managing risk in their areas, at the national level this system requires an effective sharing of powers and strategies between the centre and the periphery, where the measures of the regions and local authorities must be adopted in accordance with the measures advanced by the government, and vice versa. since a correct risk assessment by an authority must take the characteristics of its area into account (data on the epidemiological situation, for example, or on the average age of the population, and the capacity of the health system, especially as regards the availability of intensive care beds), it might be assumed that in the italian legal system effective risk assessment could be facilitated by the specific competences established by the constitution for the regions and other local authorities in health matters. (the legal nature of the "state's substitution power" has been extensively discussed in italian legal doctrine. in particular, some scholars argue that art provides a form of "administrative" substitution of the state for the regions, and that art ( ) concerns "legislative" substitution. other scholars take the view that art provides the genus of substitution powers, whereas art ( ) refers to one species of that genus, being a mere specification of art . however, the constitution seems clear on this point: as we have seen, the provisions of art speak of the "government", while the provisions of art ( ) speak of the "state".) however, as i will argue, this is a theoretical advantage that works only if power is effectively shared between the different levels of government. in fact, in order to provide an adequate and correct risk assessment at the national level and take effective measures to contain and manage the pandemic, the current system needs powers and strategies to be shared between local authorities, regions and the government. sharing administrative powers at all levels of government is an important part of the task of states. indeed, enhancing multi-level regulatory governance has become a priority in many eu states. for this reason, the eu supports the sharing of administrative regulatory powers by encouraging better regulation at all levels of government, calling on the member states to improve coordination and avoid overlapping responsibilities among regulatory authorities. in italy, until the adoption of constitutional law / , regulatory reform had been promoted, designed and implemented mainly at the national level. with the reform, as we have seen (section iii. ), such a centralised approach lost legal and political ground. at the same time, responsibilities for developing and implementing administrative regulation policies have not been explicitly allocated to the state, the regions or the local authorities. hence, the responsibility for administrative regulation and regulatory reform lies with each level of government in the matters where it exerts legislative powers. in like manner, there is no overall competence at the central level to monitor and control regulatory reform programmes at the local level. accordingly, the new constitutional structure calls for effective sharing of administrative powers across the different levels of government.
on the basis of the analysis carried out so far, i will now argue that the main problems of the italian administrative strategy for the covid- pandemic are due to the lack of effective sharing of administrative powers and, more specifically, to the failure to share regulatory powers across the different levels of government with the participation and cooperation of all institutional actors involved in the emergency decision-making process: the government, regions and local authorities. in particular, this problem has impacted the risk assessment of the various authorities called upon to manage the health emergency (see oecd, "the territorial impact of covid- : managing the crisis across levels of government" (last updated june ); the european committee of the regions (cor), "division of powers between the european union, the member states and regional and local authorities" (december ); see also oecd-puma, "managing across levels of government"). as a result, the problem has impacted nationwide risk assessment and, consequently, the management of the emergency at the national level, leading to the adoption of inconsistent measures by the various institutional actors involved in the administrative decision-making process. in particular, i discuss this problem in italian policies from two key points of view: the government's administrative strategy for managing the virus's spread by means of the "incremental approach" (section iv. .a), and the government's administrative strategy for implementing the nationwide pandemic health plan (section iv. .b). in doing so, i shall take into account the considerations presented above concerning emergency risk regulation (section iii. ), the precautionary principle (section iii. ) and the rules governing powers in the constitutional scenario (section iii. ). one of italy's main problems in relation to the ineffective sharing of administrative powers for managing the pandemic is clearly displayed in what i will call the "incremental approach". this approach is essentially based on the "progressive" application of emergency measures by the government in order to manage the "exponential" spread of the virus. the italian administrative strategy for the pandemic is fundamentally founded on such an approach. in fact, as we have seen (section ii), the government addressed the pandemic by enacting several decrees (dpcms) that "progressively increased" restrictions in lockdown areas (red zones), which were then extended from time to time until they finally applied to the entire country in the national lockdown. in my opinion, although the incremental approach may be a correct application of the principle of proportionality, given the government's proportionate use of emergency powers in dealing with the pandemic, it is the result of an ineffective sharing of administrative regulatory powers between the government, regions and local authorities. indeed, the progressive enforcement of lockdown areas, which from time to time increased the extent and severity of the emergency measures, demonstrates the difficulty of governing the spread of the virus in the red zones rather than the effective implementation of a proportionate administrative strategy. and this is mainly due to the lack of effective cooperation between the government and the regions in exercising their respective emergency powers.
from a general point of view, the incremental approach reveals the limited effectiveness of national and local measures and strategies for managing and containing the pandemic when those measures and strategies are not shared. i argue that even the stringent national lockdown (dpcm of march ) is essentially the result of the ineffective sharing and planning of administrative measures and strategies for managing the pandemic across the different levels of government and especially, in this case, between the government and the regions (on this approach, see g pisano, r sadun and m zanini, "lessons from italy's response to coronavirus" (harvard business review, march )). one can legitimately wonder whether the government can adopt an effective administrative strategy for managing the emergency without sharing and planning its measures with those of the regions. from this perspective, we can say that the government's incremental approach has proven ineffective in coping with the pandemic, for the reasons set out in the following points. (1) regarding risk assessment for pandemics, the science shows that the spread of covid- is rapid and exponential. consequently, the incremental approach does not work if it is not properly implemented with the effective participation of all institutional actors involved in managing the pandemic. scientific data and statistics on the spread of the virus were not predictive of what the situation would be in the short and medium term. hence, a correct risk assessment of the virus's nationwide spread would have suggested that the administrative measures and, more generally, the strategies should have been shared among all players involved in the main strategy. very often, however, the government's strategy has not been in line with those of the regions, revealing an inadequate assessment of the risk that the virus would spread throughout the country, and thus the ineffective sharing of emergency powers. in fact, some important emergency measures implemented by the regions clearly contradicted the government's main strategy. to take a few examples, marche region ordinance no. of february , issued pursuant to decree-law no. of , established measures that were more stringent than the government's, disregarding the latter's strategy. for this reason, the government contested the ordinance before the court. although a judgment in favour of the government was handed down and the challenged ordinance was suspended, the marche region legitimately adopted a new ordinance establishing emergency measures based on the same decree-law no. / , once again disregarding the government's strategy. another paradigmatic case is provided by a series of ordinances of the campania region aimed at imposing a more stringent lockdown at the local level than the lockdown established by the government at the national level. unlike the marche case, the ordinances of the campania region, although contested before the administrative judge, were not suspended, thus making the government's strategy ineffective. consequently, in the absence of effective sharing and planning of the main strategy with the regions, the government had to "increase" the emergency measures from time to time until finally imposing the stringent national lockdown. (2) in the absence of power sharing and strategies based on a correct risk assessment at the national level, the government's incremental approach seems to have played a considerable role in people's behaviour, inducing them to make "bad choices".
as the data show, the government's incremental lockdown of municipalities, provinces and regions in northern italy induced masses of people to move towards the southern regions, spreading the virus to parts of italy that had not yet been affected. an emblematic case of this kind took place immediately after the dpcm of march (see section ii) locked down lombardy and another fourteen provinces in northern italy, spurring thousands of people to flee to the south. such potential negative externalities, as well as other negative spill-overs or distortions, should have suggested that the government share its regulatory acts with those of the "target" regions (ie the northern regions), as well as with the other regions that could be indirectly jeopardised by the lockdown measures (ie the southern regions). alternatively, the government should have undertaken to coordinate the strategies of the regions and local authorities in order to enhance the adoption of effective control measures for people exiting the red zones and entering less affected regions. more generally, in applying lockdown measures, the government should have shared and planned its strategy with the regions on the basis of a common risk assessment that took into account not only the regional territories but the entire country. accordingly, the government should have established effective countermeasures together with all of the regions potentially involved in lockdown decisions to prevent the virus from spreading from high-risk to low-risk areas. an effective emergency response must be coordinated as a consistent system of actions taken simultaneously by the different actors involved in the decision-making process. (3) the government's incremental approach also revealed the problem of effectively sharing and planning precautionary measures (see section iii. ) across the different levels of government. the critical situation that arose because of the epidemic's severity called for effective testing of symptomatic and asymptomatic cases, as well as proactive tracing of potential positives across the country. on this point, these precautionary measures were supported by scientific data on the transmission of covid- by asymptomatic people. the absence of a shared strategy for the adoption and implementation of precautionary measures proved particularly harmful in the regions where the epidemic risk was higher. indeed, it is no coincidence that the outbreak spread so quickly in northern italy and especially in lombardy. in this region, the efficient public rail transport network connecting urban areas, large numbers of commuters and high levels of air pollution are thought to have increased the incidence of infection. from this point of view, it is clear that risk assessment was inadequate and that strategies were thus ineffectively shared between lombardy and the government. the government should have promoted an effective precautionary strategy for health checks by sharing it with the strategies of the regions and ensuring efficient nationwide implementation on the basis of a global risk assessment. conversely, the data on infections and deaths reveal that strategies were not shared effectively with the hardest-hit regions. (4) the incremental approach shows that most of the problems of the administrative strategy were also motivated by political issues between the parties governing the regions and those belonging to the coalition now governing the country.
from the time the virus began to spread, the multi-level management of the emergency triggered competition and institutional division between the government and the regions due to policymakers' political differences. the management of the pandemic, in fact, has thrown light on the deep political division between the government, led by a coalition of left-wing parties such as the democratic party and the five star movement, and the hardest-hit regions (lombardy and veneto), led by traditionally right-wing populist parties such as the league and brothers of italy. in particular, many of the administrative measures taken by the regions were in contrast with the government's strategy, largely for political reasons. from this standpoint, it can be seen that there has been an "institutional clash" between the regional governments and the national government over the political and administrative actions to be taken to effectively manage the emergency. it is no coincidence that the government's minister of health is a member of one of the opposition parties in lombardy and veneto, and that the governors of lombardy and veneto belong to the coalition opposing the government. to give a few specific examples, a bitter dispute occurred between prime minister giuseppe conte and attilio fontana, governor of lombardy and member of the right-wing populist party the league, with regard to the ineffective management of the emergency in the region most affected by the virus. similarly, as we have seen, luca ceriscioli, governor of the marche region and member of the centre-left party in the majority coalition, opposed the government's decision to declare a state of emergency only in the northern regions. in essence, these strong political divisions have impacted effective power sharing among the different levels of government, causing problems for the government's incremental administrative strategy. (5) the incremental approach also shows the important role that scientific competence plays in emergency management. in this regard, one of the main goals of scientific expertise is to inform and legitimise governments' decisions, especially in high-uncertainty situations relating to public health. during the covid- outbreak, scientific and technical experts have assisted central and regional governments by contributing to the content of decisions and, more generally, of administrative emergency management strategies. as scientific evidence is the basis for sound political choices, scientific and technical experts have become part of the rationale of governments' decisions and have been useful in reassuring the public with concrete solutions. indeed, in the immediacy of a pandemic, as is logical to assume, the demand for scientific expertise increases as governments search for certainty in understanding problems and choosing effective measures for managing the emergency. especially in the most delicate phases of an emergency, scientific expertise is useful in informing, legitimising and justifying government evaluations and responses to problems, even as political and administrative considerations continue to govern such choices. the result is an increased reliance on scientific expertise and a politicisation of scientific and technical information.
by invoking scientific expertise, policymakers create the need for what is perceived as evidence-based policymaking, which suggests to the public that political and administrative decisions are based on reasoned and informed judgments aimed at ensuring the public interest and guaranteeing individual rights. however, a major problem is that scientific expertise might obscure the accountability of decisions. as scientific and technical experts serve to inform and legitimise political and administrative decisions, they may also obscure responsibility for policy responses and outcomes. scientific expertise helps to establish the severity of a pandemic in a population, to understand the epidemiological trend over time and to evaluate the effects of political and administrative measures, from mitigation to suppression. nonetheless, undertaking policy actions is the responsibility of government leaders. as scientific expertise becomes more prominent in the policy process, who is accountable for policymaking becomes more obscure. to work better in emergencies, scientific expertise also requires effective sharing of administrative powers based on accurate risk assessment, as i will now explain. in italy, since the beginning of the virus's spread, the various institutional actors, especially the government and the regions, have established their own scientific task forces to support administrative measures and strategies in managing the pandemic. the main problem is that, by doing so, risk assessment at the national level is fragmented. conflicts can also arise between the institutional actors involved in the decision-making process. in this scenario, indeed, the government and the regions have adopted administrative decisions and strategies based on the risk assessments provided by their own central and regional task forces. it should be noted that this situation, like others discussed here, derives from the current constitutional architecture of separation of powers, in which the decision-making process is assigned to the different levels of government. however, managing a pandemic requires a comprehensive risk assessment. the italian policies matter here, as they show how, at the beginning of the pandemic, some regions' task forces underestimated covid- , while others took it seriously. this behaviour on the part of policymakers was not led by the government, which, on the contrary, criticised the regional governments' solutions. the outcome, as i claimed for the incremental approach, is that the government's measures and strategies were not shared with those of the regions and vice versa, and policymakers' accountability was obscured by invoking scientific expertise for pandemic management decisions.
b. implementing the national pandemic health plan
there is no doubt that a pandemic affects the whole of society. no single organisation can effectively prepare for a pandemic in isolation, and the uncoordinated preparedness of interdependent public organisations will reduce the ability of the health sector to respond. a comprehensive, shared, coordinated, whole-of-government approach to pandemic preparedness is required. the government's strategy, as we have seen in the incremental approach to dealing with the emergency, proved particularly ineffective due to the failure to share administrative powers with the other institutional actors involved in the pandemic decision-making process, particularly the regions. but this, as we shall see now, was not the only weak point.
I will argue here that another major problem was the lack of effective implementation of the national pandemic health plan. In particular, we will see how and why the ineffective implementation of the plan by the government, the regions and local authorities posed serious problems for containing the spread of the virus and, more specifically, for avoiding the collapse of the public healthcare system.

On this point, one of the main problems for public health posed by the novel coronavirus is its ability to spread with exceptional ease and speed, threatening to overwhelm the healthcare system. What should be especially clear from the data is the critical situation of the intensive care system in Italy, which the pandemic has put under severe strain. In this respect, the national pandemic health plan called for managing the intensive care system at the national level, cooperating with the regions and local authorities to ensure that critical care bed availability is efficiently managed. In this case, effective actions shared among all institutional actors and based on an adequate and accurate risk assessment at the national level would avoid saturating the intensive care system in the medium and long term, while the government should be able to increase capacity in the short term. Yet the data on the intensive care system show that the situation was inefficiently managed in the regions hardest hit by COVID-19, especially in Lombardy, which paid a high price at the local level for the ineffective implementation of the pandemic health plan at the national level. More generally, it should be emphasised that this point also demonstrates the importance of sharing administrative powers between the government, the regions and local authorities in order to implement the pandemic management plan effectively throughout the country. In this connection, many elements based on scientific and epidemiological data demonstrate that the COVID-19 pandemic called for effective cooperation and coordination across all levels of government.

In addition, it must be borne in mind that fighting a pandemic hinges on many factors, most of which are time-consuming or in any case cannot be accomplished quickly. Preparing a candidate vaccine, for example, takes a long time in terms of both preclinical and clinical development. Likewise, developing and testing an effective drug involves complex multi-stage clinical trials. Such considerations might be sufficient on their own to justify taking effective actions to mitigate the pandemic emergency's impact on the public healthcare system. In this phase, as we have seen, emergency risk regulation requires that regulatory action be taken in the immediacy of an emergency in order to mitigate its impact (Section III. ). To avoid the collapse of the public health system, the government should thus have contained the spread of the virus by effectively implementing the nationwide pandemic management plan with the participation of all institutional actors.

The WHO has recognised the importance of sharing administrative powers through the participation and cooperation of the various institutional actors involved in the strategy against pandemics. In this regard, the WHO has drawn up specific guidelines for implementing a pandemic influenza preparedness plan that states should apply in order to manage the spread of the virus throughout their territories. In particular, the WHO's guidelines encourage states to develop efficient plans, based on national risk assessments, with the effective participation of institutional actors at all levels of government.
In Italy, the most serious problem is that the government, although it had already developed its own national plan, failed to foster its effective adoption by the regions and local authorities, disregarding a crucial point of the WHO's guidelines. Consequently, the failure to implement the national pandemic plan, as we have seen, created the conditions for the collapse of the public health system, with the overcrowding of intensive care units and the consequent loss of life.

***

In conclusion, the Italian policies regarding the COVID-19 outbreak demonstrate the importance of: (1) rethinking the incremental approach; and (2) implementing a national health plan for pandemics by sharing powers, and more specifically the administrative regulatory powers for emergencies, based on an adequate and accurate risk assessment at the national level, among the different levels of government, with the participation, cooperation and coordination of all institutional actors involved in the pandemic decision-making process. As we have seen, sharing administrative powers at the different levels of government plays a particularly important role in managing emergencies in the constitutional scenario, where competences are distributed between the government, the regions and local authorities, and several institutional actors are allowed to adopt regulatory acts (see Section III. ). The major changes that the constitutional amendments have brought to policymaking in the Italian legal system require that constant support be provided to the regions and local authorities, especially in emergencies. Despite significant decentralisation, the government still has a fundamental role to play in sharing and coordinating administrative powers at the different levels of government and in ensuring loyal cooperation among all of the institutional actors involved in emergency decision-making processes. Indeed, the government is tasked with promoting and coordinating "action with the regions" (Article of Law / ), as well as with advancing cooperation "between the State, regions and local authorities" (Article of Legislative Decree / ). Similarly, the government must promote "the necessary actions for the development of relations between the State, regions and local authorities" and ensure the "consistent and coordinated exercise of the powers and remedies provided for cases of inaction and negligence" (Article of Legislative Decree / ).

Looking at the constitutional perspective, some possible solutions might be proposed.

(1) In the Italian constitutional scenario, although concurrent power to legislate on matters of public health is vested in the State (ie the government) and the regions pursuant to Article 117(3), the State (ie the government and the regions together), on the basis of the principle established by Article 32, "safeguards health as a fundamental right of the individual and as a collective interest". I argue, more specifically, that safeguarding health is a task of the State based on the fundamental principle of the Constitution referred to in Article 3(2), under which it is the duty of the State to "remove those obstacles of an economic or social nature" that, by constraining the "freedom and equality of citizens", impede the "full development of the human person and the effective participation of all workers in the political, economic, and social organisation of the country".
Thus, I believe that under the joint interpretation of Article 3(2) and Article 32 of the Constitution, as well as the principle of loyal cooperation, the government and the regions must act by sharing administrative powers (and strategies) among them in order to protect the fundamental right to health. In so doing, the government can play an essential role in promoting institutional balance and cooperation between the national and local levels, maximising loyal cooperation and implementing vertical and horizontal subsidiarity.

(2) Sharing administrative powers for emergencies can also be encouraged and enhanced through the effective implementation of constitutional tools, such as the system of conferences based on the principle of loyal cooperation. (a) The Conference on the relationships between the government, the regions and the self-governing provinces is the key legal tool for multi-level political negotiation and collaboration. It serves in an advisory, normative and planning capacity and acts as a platform facilitating power sharing. (b) The Conference on the relationships between the government and the municipalities coordinates relations between the government and local authorities through studies, information and discussion of issues affecting local authorities. (c) The permanent Conference on the relationships between the government, the regions and the municipalities deals with areas of shared competence. It is worth noting that Italy's national pandemic plan was adopted through the permanent Conference on the relationships between central government, the regions, municipalities and other local authorities.

(3) In order to "safeguard health as a fundamental right of the individual and as a collective interest", Article 120(2) of the Constitution could be applied whenever it is necessary to guarantee "the national unity and the unitary nature of the constitutional system". I claim that this provision, which establishes the government's administrative substitution power, provides for the centralisation of administrative powers in specific cases contemplated by the Constitution. In this sense, Article 120(2) lays down that the government can act for the regions and/or local authorities in cases of "grave danger for public safety and security". In the light of this definition, the government's substitution for the regions and/or local authorities might be invoked as a result of the "grave danger for public safety", as well as in order to preserve "economic unity" and guarantee the "basic level of benefits relating to civil and social entitlements". In my view, however, the government should exercise its power of substitution only as an extrema ratio, whenever effective sharing among all of the institutional actors has not been implemented. Article 120(2) is clear in this regard, requiring that the substitution power be exercised in compliance with the principles of "subsidiarity" and "loyal cooperation".

In this article, I have argued for sharing "administrative powers", and more specifically the administrative regulatory powers for emergencies, based on an adequate and accurate risk assessment, across the different levels of government, with the participation, cooperation and coordination of all institutional actors involved in the emergency decision-making process: the government, the regions and local authorities. Fundamentally, I emphasised that the Italian case reveals the importance of sharing administrative powers from two main points of view.
First, I argued that the "incremental approach" to dealing with the emergency, although based on the proportionate use of powers, is largely ineffective or even harmful in the absence of cooperation among all of the actors (the regions and local authorities) involved in the main strategy implemented by the government (Section IV. .a). Second, I discussed the importance of cooperation between the government, the regions and local authorities for the effective and efficient implementation of a nationwide pandemic health plan (Section IV. .b). I suggested that these points be viewed from a constitutional perspective in order to propose some possible solutions. From this perspective, the problems of effectively sharing administrative powers across the different levels of government could be resolved by systematically interpreting the Constitution and implementing specific constitutional tools provided by the legal system (Section IV. ).

In conclusion, and more generally, I argue (and this is the main thrust of the article) that administrative powers should be shared across the different levels of government, based on an adequate and accurate risk assessment, with the participation and cooperation of all of the institutional actors involved in the emergency decision-making process, in order to safeguard the fundamental rights enshrined in the Constitution as well as in EU and international law. In pandemics, this aim must be pursued not only to guarantee the right to health, but also to safeguard all of the rights that might be jeopardised by the exercise of administrative powers and, more specifically, the exercise of emergency powers in dealing with the pandemic. The strong measure of "lockdown", for example, should be the extrema ratio of administrative powers because it suspends the rule of law and jeopardises rights. Indeed, as I have claimed in analysing the Italian policies, sharing powers with effective cooperation between the government, the regions and local authorities in managing the pandemic would optimise the adoption of nationwide virus containment measures, avoiding or at least delaying the application of stringent emergency measures such as the lockdown of municipalities, provinces, regions or even the entire country. Taking into consideration the correct application of emergency risk regulation (Section III. ) and the precautionary principle (Section III. ), although lockdowns aim to contain specific areas that are most affected by the virus, they must be proportional to the risk that they intend to curtail. When such measures are adopted to protect the right to health, as is the case in a pandemic, this right must be balanced against other rights. Yet, if administrative powers are not shared effectively across the different levels of government, the balancing principle might be disregarded, jeopardising one or more rights without legitimate justification (eg the right to freedom of movement enshrined in Article 16 of the Constitution). This is the problem that the Italian policies bring to light: a problem that I believe the government must take into account in the near future as it strives to manage COVID-19 and other similar pandemics.

Notes

On the precautionary principle generally, see Perspectives on the Precautionary Principle; "Les avatars du principe de précaution en droit public" [The vicissitudes of the precautionary principle in public law]; and "Le principe de précaution en droit communautaire: stratégie de gestion des risques ou risque d'atteinte au marché intérieur?" [The precautionary principle in Community law: a risk management strategy or a risk of harm to the internal market?].
The legal origins of the precautionary principle are to be found in the Vorsorgeprinzip established by German environmental legislation in the mid-1970s. There is a close relationship between the precautionary principle and the prevention principle, which has led some to argue that the two may be used "interchangeably". Other authors, however, contend that the prevention principle applies in situations where the relevant risk is "quantifiable" or "known" and there is certainty that damage will occur. See, in this sense, respectively, WT Douma; "Principio di prevenzione e novità normative in materia di rifiuti" [The prevention principle and new legislation on waste]; and "Dal pericolo al rischio: l'anticipazione dell'intervento pubblico" [From danger to risk: the anticipation of public intervention] ( ) Diritto Amministrativo. On the case law, see ECJ Case T- / Pfizer Animal Health SA v Council; Case C- / National Farmers' Union; Case C- / United Kingdom v Commission [ ] ECR I- ; and Case C- / Monsanto Agricoltura Italia.