Lai, Kenneth; Yanushkevich, Svetlana N. Machine Reasoning to Assess Pandemics Risks: Case of USS Theodore Roosevelt. 2020-08-24. Assessment of the risks of pandemics to communities and workplaces requires an intelligent decision support system (DSS). The core of such a DSS must be based on machine reasoning techniques such as inference and shall be capable of estimating risks and biases in decision making. In this paper, we use a causal network to make Bayesian inference on COVID-19 data, in particular, to assess risks such as the infection rate and other precaution indicators. Unlike other statistical models, a Bayesian causal network combines various sources of data through a joint distribution, and better reflects the uncertainty of the available data. We provide an example using the case of the COVID-19 outbreak that happened on board the USS Theodore Roosevelt in early 2020. Epidemiological Surveillance (ES) uses various models to forecast the spread of infectious disease in real-time. The ES models can predict the pandemic's mortality, but they do not account for uncertainties such as the reliability of testing technology or specific environmental and social factors. In the context of preparedness for future pandemics, they also do not account for "the availability of treatment, clinical support, and vaccines" [1]. Some ES models can be "stratified" by age, gender, or other variables, but they do not provide any causal analysis of those variables and of the risks of interest that must be assessed by the preparedness decision-makers. In other words, they are not presented in a form that enables their use in pandemics analysis, including sensitivity analysis and model explainability.
The COVID-19 outbreak provided valuable lessons and unveiled critical disadvantages of the existing ES models, including the following: 1) The ES model's outcomes need to be further translated to become usable for human decision-makers. There is a technology gap between the existing models and the decision-making process, as illustrated in Fig. 1. 2) This translation can be implemented using computational intelligence (CI) support in the decision-making process. 3) The CI tool needs to be based on causal models that account for uncertainties, as well as perform fusion and forecasting on those uncertainties. The answers to the above challenges lie in the usage of machine reasoning, namely, causal models such as causal Bayesian networks. These causal models operate using probabilities, thus accounting for uncertainties, and enable knowledge inference based on priors and evidence [2]. This approach has been applied to risk assessment in multiple areas of engineering and business [3], risk profiling in identity management [4], [5], medical diagnostics [6], and very recently to the analysis of COVID-19 risks such as fatality and disease prevalence rates [7]. The causal models shall be the core component of the Decision Support System (DSS) that would support human decision-makers in assessing the risk of infectious disease outbreaks. The DSS concept was once known as an "expert system" that provides certain automation of reasoning (though mainly based on deterministic rules rather than the Bayesian approach) and interpretation strategies to extend experts' abilities to apply their strategies efficiently [8]. Examples of contemporary DSS are personal health monitoring systems [9], e-coaching for health [10], security checkpoints [11], [12], and multifactor authentication systems [13]. This paper focuses on developing a DSS for ES, with a CI core based on causal Bayesian networks.
We define a DSS as a crucial bridging component to be integrated into the existing ES systems in order to provide situational awareness and help handle outbreaks better. This paper is organized as follows. Contributions are listed in Section II. Definitions of the relevant concepts are given in Section III. The DSS concept and the fundamental risk assessment operations are described in Section IV. An example of reasoning on a causal network for the case of the USS Theodore Roosevelt is shown in Section V. Section VI concludes the paper. Fig. 2. The overall architecture for the proposed decision support system. We focus on a specific case scenario of COVID-19 on the USS Theodore Roosevelt. Therefore, the team leader, the captain of the ship, is given recommendations by both the medical officer and the support system. II. CONTRIBUTION Fig. 2 illustrates the general framework for the proposed Decision Support System (DSS). In this paper, we illustrate the idea of using machine reasoning to assist team leaders in making decisions by recommending the best course of action given the evidence. Using the USS Theodore Roosevelt as a case study, we explore how different preventive behaviors or measures, such as wearing face masks, impact the chance of subjects being infected with COVID-19. Since all data regarding this case study were obtained after the fact, all reasoning is based on evidence (reactive) and not proactive. It stands to reason that by using these data as well as the fusion of various current and past heterogeneous variables, we can accumulate new knowledge for predicting the impact of future outbreaks, and help prepare for those. In this study, we identify a technological gap in the ES in both the technical and conceptual domains (Fig. 1). Conceptually, the ES users require significant cognitive support using the CI tools. This paper addresses the key research question: How to bridge this gap using the DSS concept?
We follow a well-identified trend in the academic discussion on future-generation DSS [14]. This paper makes further steps and contributes to the practice of bridging the technology gap. The key contribution is twofold: 1) Development of a reasoning and prediction mechanism, the core of a DSS; for this, the concept of a Bayesian causal network [15] is used; in particular, a recent real-world scenario of COVID-19 was described using a Bayesian network [7]. 2) Development of the complete spectrum of risk and bias measures, including ES taxonomy updating. These results are coherent with the solutions to the following related problems: − The technology gap "pillars" in Fig. 1 are the Protocol of the ES model and the Protocol of the DSS. These protocols are different, e.g., virus spread behavior vs. conditions of small business operation. The task is to convert the ES protocol into a DSS specification. The criterion of conversion efficiency is acceptance by a given field expert. The reasoning mechanism based on the causal network intrinsically contains the protocol conversion. We demonstrate this phenomenon in our experiments. − The DSS supports an expert in making decisions under uncertainty in a specific field of expertise. Specifically, intelligent computations help an expert better interpret uncertainty under chosen precautions. Risk and bias are used in this paper as "precaution" measures for different kinds of uncertainty related to ES data [16], testing tools, human factors, ES model tuning parameters, and artificial intelligence. A DSS concept suitable for the ES model is proposed in this paper. In our study, we model the DSS as a complex multi-state dynamic system [12]. A cognitive DSS is a semi-automated system, which deploys CI to process the data sources and to assess risks and other "precaution" measures such as trust in the CI and various biases influencing the decision [5].
The crucial idea of our approach is that the risk assessment should be performed using a reasoning mechanism [2]. This assessment is submitted to a human operator for the final decision. The core of the proposed DSS is a causal network that allows us to perform reasoning. The reasoning operations are defined as follows: 1) Prior data representations and assessments, such as statistics and distributions of data after an outbreak that has already happened, as well as the pre-existing conditions. In causal modeling, the priors are represented by a corresponding probability distribution function [2]. 2) Causal analysis is based on the "cause-effect" paradigm [15]. Another advanced tool is Granger causality analysis, usually used to analyze time series and to determine whether one can forecast the other [17]. 3) Reasoning is the ability to form an intelligent conclusion or judgment using the evidence. Causal reasoning is a judgment under uncertainty performed on a causal network [15]. 4) Prediction. In complex systems, meta-learning and meta-analysis are used to predict the overall success or failure of the predictor. The most valuable information is in the "tails" of the probability distributions [18], [19]. A causal network is a directed acyclic graph where each node denotes a unique random variable. A directed edge from node X_1 to node X_2 indicates that the value of X_1 has a direct causal influence on the value of X_2. Uncertainty in causal networks is represented as Conditional Uncertainty Tables (CUTs). A CUT is assigned to each node in the causal network, and it is a table that is indexed by all possible value assignments to the parents of this node. Thus, each entry of the CUT is a model of a conditional "uncertainty" that varies according to the choice of the uncertainty metric. A recent review [20] describes the various types of causal networks that are deployed in machine reasoning, including imprecise-probability, interval, credal, Dempster-Shafer, fuzzy, and subjective networks [21]-[26].
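To make the CUT structure concrete, the sketch below represents a two-node causal network as a plain Python dictionary in which each node carries a table indexed by value assignments to its parents. The variable names and probabilities are illustrative only and are not taken from the paper's network.

```python
# Minimal sketch of a causal network as a DAG whose nodes carry
# conditional uncertainty tables (CUTs). Names/values are illustrative.
network = {
    "Prevention": {"parents": [],
                   # A root node has a single CUT row, indexed by ().
                   "cut": {(): {"yes": 0.92, "no": 0.08}}},
    "Infected":   {"parents": ["Prevention"],
                   # The CUT is indexed by every value assignment
                   # to the parents of the node.
                   "cut": {("yes",): {"pos": 0.56, "neg": 0.44},
                           ("no",):  {"pos": 0.81, "neg": 0.19}}},
}

def check_cut(node):
    """Each CUT row must be a valid distribution (entries sum to 1)."""
    return all(abs(sum(row.values()) - 1.0) < 1e-9
               for row in network[node]["cut"].values())

print(all(check_cut(n) for n in network))  # -> True
```

Under a probabilistic uncertainty metric each CUT row is a conditional distribution; other metrics (intervals, belief masses, fuzzy memberships) would replace the row entries accordingly.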
The type of a causal network can be chosen given the DSS model and a specific scenario. The choice depends on the CUT as a carrier of primary knowledge and as appropriate to the scenario. Various causal computational platforms for modeling such systems were compared, in particular, in [27] (Dempster-Shafer vs. credal networks), and [4], [5] (Bayesian vs. interval vs. Dempster-Shafer vs. fuzzy networks). In our study, we use Bayesian causal networks, often simply called Bayesian networks. Our motivation for choosing this type of causal network is driven by the fact that the Bayesian (probabilistic) interpretation of uncertainty provides acceptable reliability for decision-making. Bayesian decision-making is based on the evaluation of a posterior probability given a prior probability and a likelihood (the probability of an event given some history of previous events). In a Bayesian network, the nodes of a graph represent random variables X = {X_1, ..., X_m} and the edges between the nodes represent direct causal dependencies. To construct a Bayesian network, factoring techniques are generally applied. Thus, this network is based on a factored representation of the joint probability distribution: P(X_1, ..., X_m) = ∏_{i=1}^{m} P(X_i | Par(X_i)) (1), where Par(X_i) denotes the set of parent nodes of the random variable X_i. The nodes outside Par(X_i) are conditionally independent of X_i. The posterior probability of X_1 is called the belief for X_1, Bel(X_1), and the probability P(X_1|X_2) is called the likelihood of X_1 given X_2 and is denoted L(X_1|X_2). While the Bayesian network structure reflects the causal relationships, its probabilities reflect the strengths of these relationships. Risk and other "precaution" measures such as bias and trust are often used to evaluate cognition-related performance in cognitive decision support systems [14]. The risk and trust measures are used in ES in simple forms such as 'high-risk group', 'risk factor', and 'systematic difference in the enrollment of participants' [16].
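The factored joint distribution and the notion of belief can be illustrated with a minimal two-node chain A → B, where the joint factors as P(A, B) = P(A) P(B | A) and the belief Bel(A) is the posterior of A given evidence on B, computed by enumeration. The numbers are hypothetical.

```python
# Tiny Bayesian network A -> B (both binary), as a sketch of the
# factored joint P(A, B) = P(A) * P(B | A). Numbers are illustrative.
P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True:  {True: 0.9, False: 0.1},
               False: {True: 0.2, False: 0.8}}

def joint(a, b):
    """Factored joint probability P(A=a, B=b)."""
    return P_A[a] * P_B_given_A[a][b]

# Belief (posterior) Bel(A) = P(A=True | B=True) by enumeration:
evidence_mass = sum(joint(a, True) for a in (True, False))  # P(B=True)
bel_A_true = joint(True, True) / evidence_mass

print(round(bel_A_true, 4))  # -> 0.6585
```

Here observing B = True raises the belief in A from the prior 0.3 to about 0.66; this is exactly the prior-to-posterior update that the DSS performs on a larger network.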
However, the cognitive DSS is expected to provide the experts with detailed assessments of epidemic scenarios and make the decision process more transparent and explainable. For example, syndromic surveillance consists of real-time indicators for a disease allowing for early detection [16]. The experts seek DSS support in answering the following questions: • What are the risks given the state of the disease outbreak and the health care resources? • How reliable are the surveyed or collected data? • What kinds of biases are present or expected in data collection, algorithms, and CI decision making? Below, we provide a definition of risk by the US National Institute of Standards and Technology (NIST) [28]. Definition 1: Risk is a measure of the extent to which an entity is threatened by a potential circumstance or event, and typically is a function of: (i) the adverse impact, also called the cost or magnitude of harm, that would arise if the circumstance or event occurs, and (ii) the likelihood of event occurrence. For example, in automated decision making, and in our study, the Risk is defined as a function F of the Impact (also known as Cost), I, of a circumstance or event and its occurrence probability, P: Risk = F(I, P) (2). In other words, the risk of an event represents its impact weighted by the likelihood of the decision error. The other "precaution" measures often used in causal models include bias and trust in the CI recommendations [14]. Definition 2: Bias in the ES refers to the tendency of an assessment process to systematically over- or under-estimate the value of a population parameter. For example, in the context of detecting or testing for an infectious disease, the bias is related to the sampling approaches (e.g., tests are performed on a proportion of cases only), the sampling methodology (systematic/random or ad-hoc), and the chosen testing procedures or devices [29].
The biases are probabilistic in nature because the evidence and information gathered to make a decision are incomplete, inconclusive, ambiguous, conflicting, and have various degrees of believability. Identifying and mitigating bias is essential for assessing decision risks and CI biases [14], [30], [31]. Acceptance of the cognitive DSS technology by human decision-makers is determined by the combination of the bias, trust, and risk factors [32], [33]. Other contributing factors include belief, confidence, experience, certainty, reliability, availability, competence, credibility, completeness, and cooperation [34], [35]. In our approach, the causal inference platform calculates various uncertainty measures [4] in risk and bias assessment scenarios. Probabilistic reasoning on causal (Bayesian) networks enables knowledge inference based on priors and evidence, and has been applied to diagnostics for precision medicine [6]. Recently, a COVID-19 risk analysis was performed in [7]: Bayesian inference was applied to learn the proportion of the population with or without symptoms from observations of those tested, along with observations about testing accuracy. During the deployment of the USS Theodore Roosevelt starting around mid-January 2020, an outbreak of COVID-19 occurred that affected the crew (mostly younger, healthy adults). Approximately 1000 of the 1417 service members were determined to be infected with COVID-19. An investigation during April 20-24, conducted by the US Navy and CDC, included a study of 382 volunteer service members [36]. In our study, we created a fragment of a causal network based on the available data (Figure 3). The risks assessed in the DSS using the causal network include the 'Infection Rate', the False Positive Rate (FPR), and the False Negative Rate (FNR). We define the 1st, 2nd, and 3rd order knowledge as the prior, calculated, and inferred knowledge, respectively.
The causal network for this example is a Bayesian network (BN), with Conditional Probability Tables (CPTs) assigned to the nodes. The CPTs were constructed using the data retrieved from [36]. Given the reported test results of two types, enzyme-linked immunosorbent assay (ELISA) and real-time reverse transcription-polymerase chain reaction (RT-PCR), error rates such as the FPR and FNR can be estimated. The designed BN includes a 'Test' node representing the results of ELISA and the previous RT-PCR test results. Using these results, the FNR and FPR are computed as follows: FNR = FN / (FN + TP), FPR = FP / (FP + TN) (3), where True Negative (TN) represents a healthy subject reported (by testing) as healthy, True Positive (TP) represents an infected subject reported (by testing) as infected, False Positive (FP) represents a healthy subject reported (by testing) as infected, and False Negative (FN) represents an infected subject reported (by testing) as healthy. In this paper, we measure the 'Infection Rate', defined as the ratio between the number of infections and the size of the population at risk: Infection Rate = K × (Number of infections) / (Population at risk) (4), where K is a constant which we set to 100 in order for the Infection Rate value to lie within the interval 0 to 100. Table I shows all the nodes in the BN and their corresponding states and probabilities. The probabilities for the prior nodes in Table I are captured based on the statistics collected in [36]. For example, in [36] there is a total of 382 volunteers, of whom 351 reported washing their hands as a prevention measure. This results in probabilities of 8.12% (31/382) for volunteers who were not washing their hands, and 91.88% (351/382) for those washing their hands. In this paper, we assume a uniform distribution for the node 'Infection Rate' as no value was given in [36]. It should be noted that this value is approximately 70% (1000/1417) for the USS Theodore Roosevelt, based on the reported results [36].
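These definitions translate directly into code. The sketch below reuses the counts reported later in the paper for the ELISA-vs-RT-PCR comparison (FPR = 16/147, so FP = 16 and TN = 131; FNR = 23/235, so FN = 23 and TP = 212) and the ship-level infection count; it is an illustrative reimplementation, not the authors' code.

```python
def fpr(fp, tn):
    """False Positive Rate: healthy subjects reported as infected."""
    return fp / (fp + tn)

def fnr(fn, tp):
    """False Negative Rate: infected subjects reported as healthy."""
    return fn / (fn + tp)

def infection_rate(n_infected, n_population, K=100):
    """Infections per K members of the population at risk."""
    return K * n_infected / n_population

# Counts from the USS Theodore Roosevelt case study [36]:
print(round(100 * fpr(16, 131), 2))          # 16/147  -> 10.88 (%)
print(round(100 * fnr(23, 212), 2))          # 23/235  -> 9.79 (%)
print(round(infection_rate(1000, 1417), 1))  # -> 70.6 per 100
```

The last line reproduces the roughly 70% ship-wide infection rate mentioned above.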
In this paper, we introduce a measure called the Preventive Index (PI) that illustrates how selected actions can indirectly increase/decrease the chance of infection. The PI for a specific action is defined as follows: PI_i = 1 + (β_i − α_i) / β_i (5), where the subscript i represents one of the prevention measures, α_i represents the probability of having COVID-19 given that prevention action i is performed, and β_i represents the probability of having COVID-19 given that prevention measure i is not implemented. In other words, it captures the degree of influence of each preventive measure on the infection rate. The probability values, α and β, are calculated based on the statistics from [36], and are summarized in Table II. For example, it was reported that 283 volunteers used face covers as a preventive measure, of whom 158 reported having COVID-19. In addition, it is known that a total of 238 volunteers had COVID-19. Therefore, combining both pieces of knowledge, we get probabilities of having COVID-19 of 55.83% (158/283) for the subjects who used a face cover and 80.81% (80/99) for those who did not. Based on Equation (5), a PI of 1.3091 (1 + (80.81 − 55.83)/80.81) is obtained for using a face cover. This represents a "positive" PI, which reduces the overall probability of infection. The CPT for the 'Prevention Index' represents a distribution of the cumulative prevention index. It is calculated as the product of the individual preventive indices: PI = PI_0 × ∏_{i=1}^{N} PI_i^{γ_i} (6), where N is the number of prevention measures (7 in this paper), γ_i is a binary value indicating whether prevention measure i is taken, PI_i is the individual prevention index for behavior/action i, and PI_0 represents the default prevention index for no preventive measure taken (assumed to be 1 in this paper). For example, a base case where no preventive action is used will result in a prevention index of 1. Based on Equation (6), if 'Hand Wash' is applied, the index is increased to 1.0373 (1 × 1.0373).
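The preventive index calculation, as we infer it from the worked face-cover example, can be sketched as follows; the counts come from [36], while the function names are ours.

```python
def preventive_index(alpha, beta):
    """PI_i = 1 + (beta_i - alpha_i) / beta_i, where alpha is
    P(COVID | measure i taken) and beta is P(COVID | not taken)."""
    return 1 + (beta - alpha) / beta

def cumulative_pi(pi_values, taken, pi0=1.0):
    """Cumulative index: PI_0 times the product of PI_i over the
    measures actually taken (gamma_i = True)."""
    result = pi0
    for pi_i, gamma_i in zip(pi_values, taken):
        if gamma_i:
            result *= pi_i
    return result

# Face-cover example: 158/283 infected among users, 80/99 among non-users.
alpha, beta = 158 / 283, 80 / 99
print(round(preventive_index(alpha, beta), 4))  # -> 1.3091

# Cumulative index when only 'Hand Wash' (PI = 1.0373) is applied:
print(round(cumulative_pi([1.3091, 1.0373], [False, True]), 4))  # -> 1.0373
```

Both printed values match the worked numbers in the text, which is a useful sanity check on the reconstructed formulas.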
This can be further increased by combining additional preventive measures. Similarly, the computation of the CPTs for 'Vulnerable' (V) uses the normalized values of the product of the probabilities for 'Gender' (G) and 'Age' (A): V_{ij} ∝ A_i × G_j (7), where A_i represents the probability of having COVID-19 for age category i (18-24, 25-29, 30-39, and 40-59), and G_j represents the probability of having COVID-19 for gender category j (male or female). For example, given that the probability of having COVID-19 for the age group 18-24 is 68.1% and the probability of having COVID-19 for the male gender group is 65.7%, the (unnormalized) degree of vulnerability is computed as 0.681 × 0.657 ≈ 0.447 (Equation (7)). As a sample, two CPTs, for the nodes "Symptoms" and "Vulnerable", are shown in Tables III and IV, respectively. The CPT for 'Has COVID' is estimated based on the relationship between the 'Prevention Index' (PI), 'Vulnerable' (V), and 'Infection Rate' (IR), specifically: P(Has COVID) = (IR / PI) × (V + 1) (8). In Equation (8), the IR is reduced based on the PI and then multiplied by the value V + 1. This serves as a multiplier when calculating the chance of having COVID-19. The value V = 0 (False) means that the person is not vulnerable. In this paper, we assume V = 1 (True) when a subject is vulnerable. Thus, a vulnerable subject's chance of having COVID-19 is multiplied by a factor of V + 1 = 2. The remaining nodes 'Symptoms' and 'Test' have CPTs created directly from the data in [36]. For example, the conditional probability for 0 symptoms is defined as P(Symptoms = 0 | COVID = Yes) = 11.52% and P(Symptoms = 0 | COVID = No) = 14.14%. The list of the COVID-19 symptoms reported in [36] included loss of taste, smell, or both, palpitations, fever, chills, myalgia, cough, nausea, fatigue, shortness of breath/difficulty breathing, chest pain, abdominal pain, runny nose, diarrhea, headache, vomiting, and sore throat. Note that in this paper, we are interested in the number of symptoms and not their type.
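The 'Has COVID' relation can be sketched as below. This is our reading of the description (infection rate divided by the prevention index, then scaled by V + 1); the cap at 1 is our own assumption, added only to keep the result a valid probability.

```python
def p_has_covid(infection_rate, prevention_index, vulnerable):
    """Sketch of P(Has COVID) = (IR / PI) * (V + 1), capped at 1.0.
    V is 1 for a vulnerable subject and 0 otherwise. The cap is an
    assumption of this sketch, not stated in the source."""
    return min(1.0, (infection_rate / prevention_index) * (vulnerable + 1))

# No prevention (PI = 1), not vulnerable, 70% infection rate:
print(p_has_covid(0.70, 1.0, 0))            # -> 0.7
# All measures taken (PI = 2.3), vulnerable subject:
print(round(p_has_covid(0.70, 2.3, 1), 4))  # -> 0.6087
```

Within the BN this relation only populates the CPT of the 'Has COVID' node; the posterior reported in the experiments also depends on the distributions over IR, PI, and V.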
One of our key interests in this paper is to evaluate the risk of being infected with COVID-19. Equation (2) defines risk as a function of the impact and probability of the event of interest. In this paper, we define two types of infection risk: the risk of missing a true infection (positive risk, Risk_p) and the risk of declaring an infection when it is not the case (negative risk, Risk_n). A positive risk reflects the impact of the virus spreading undetected, while a negative risk is determined by false positive testing resulting in unnecessary treatment or quarantine. These risk equations are defined as follows: Risk_p = Impact_u × FNR + Impact_k × TPR (9), Risk_n = Impact_q × FPR + Impact_c × TNR (10), where Impact_u represents the cost of undetected virus spreading (very high), Impact_k represents the cost of unnecessary "precaution" such as quarantining (high), Impact_q represents the cost of quarantine (low), Impact_c represents the base cost of testing (very low), the True Positive Rate (TPR) is defined as TPR = 1 − FNR, the True Negative Rate (TNR) is defined as TNR = 1 − FPR, and FPR/FNR are defined by Equation (3). For example, given a specific scenario where the error rates (FPR = 1% and FNR = 20%) and impact values (very high = 4, high = 3, low = 2, very low = 1) are given, the overall risk is estimated as follows: Risk_p = 4 × 0.20 + 3 × 0.80 = 3.20, Risk_n = 2 × 0.01 + 1 × 0.99 = 1.01. The experiments on the BN shown in this section were implemented using the open-source Python library pyAgrum [37]. The following scenarios were considered in our experiments: Scenario 1: Table V shows the probability of a subject having COVID-19 given various values of the 'Prevention Index' and whether or not they are 'Vulnerable'. As the prevention index increases, the chance of getting COVID-19 decreases regardless of vulnerability. For example, in case (a), when the prevention index increases from 0.9 to 1.0, the chance of getting COVID-19 is reduced from 82.23% to 79.41%, which represents a difference of about 2.82% for a 0.1 increase in the prevention index.
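The two risk measures reduce to a few lines of code. The sketch below reproduces both the illustrative scenario (FPR = 1%, FNR = 20%) and the USS Theodore Roosevelt estimates; baking the stated impact values in as default arguments is our packaging choice, not the paper's.

```python
def risk_positive(fnr, impact_undetected=4, impact_precaution=3):
    """Risk_p = Impact_u * FNR + Impact_k * TPR, with TPR = 1 - FNR."""
    return impact_undetected * fnr + impact_precaution * (1 - fnr)

def risk_negative(fpr, impact_quarantine=2, impact_testing=1):
    """Risk_n = Impact_q * FPR + Impact_c * TNR, with TNR = 1 - FPR."""
    return impact_quarantine * fpr + impact_testing * (1 - fpr)

# Illustrative scenario: FPR = 1%, FNR = 20%:
print(round(risk_positive(0.20), 2))    # -> 3.2
print(round(risk_negative(0.01), 2))    # -> 1.01

# USS Theodore Roosevelt estimates: FNR = 9.79%, FPR = 10.88%:
print(round(risk_positive(0.0979), 4))  # -> 3.0979
print(round(risk_negative(0.1088), 4))  # -> 1.1088
```

Because each measure is a convex combination of two impact values, Risk_p is always between 3 and 4 and Risk_n between 1 and 2 for these impacts, matching the extreme cases discussed below.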
Table V (c) assumes a specific case where no evidence for 'Vulnerable' is given. Scenario 2: For scenario 2, we analyze the causal relationship between the number of symptoms and the fact of having COVID-19. Table VI shows the chance of having COVID-19 given the number of symptoms. As the number of symptoms increases, the chance of the individual having COVID-19 increases. For 0 symptoms (an asymptomatic subject), the chance of having COVID-19 is 29.67%. It increases to 59.57% for subjects having 4-5 symptoms. Scenario 3: Scenario 3 considers how to infer the chance of a subject being vulnerable given the evidence of having COVID-19. Table VIII shows the estimated average 'Infection Rate' for a fixed prevention index of 0.9 (a), 1.5 (b), and 2.3 (c). We observed, quite surprisingly, that the infection rate is minimally impacted by the prevention index. In part, this is because we assumed that the 'Prevention Index' and 'Infection Rate' are independent. The number of symptoms, on the other hand, has a direct influence on the estimated infection rate. For the specific scenario of the USS Theodore Roosevelt, the error rates are estimated assuming that the RT-PCR test results are the ground truth and the ELISA test results are the predicted cases. Based on this assumption, the FPR and FNR are calculated to be 10.88% (16/147) and 9.79% (23/235), respectively. With the given impact values (very high = 4, high = 3, low = 2, very low = 1), the overall risk is computed as follows: Risk_p = 4 × 0.0979 + 3 × 0.9021 = 3.0979, Risk_n = 2 × 0.1088 + 1 × 0.8912 = 1.1088. The maximum value of the positive risk is 4 (FNR = 100%) and the minimum value is 3 (FNR = 0%). These represent the extreme cases where either all infected subjects are misdiagnosed or all of them are correctly diagnosed, respectively. Similarly, the maximum value of the negative risk is 2 (FPR = 100%) and the minimum value is 1 (FPR = 0%).
These represent the extreme cases where either all healthy subjects are misdiagnosed or all are correctly cleared, respectively. In this section, we illustrate how a decision-maker, such as a medical officer and/or a captain, can use the proposed causal network to infer the risks in relation to actions/decisions. Specifically, we show that inaction (no prevention behavior, Preventive Index = 1) and an infection rate of 70% result in 71.71% of the crew being infected. If all beneficial prevention behaviors are taken (Preventive Index = 2.3), the chance for the crew to be infected is reduced to 48.92%. Given the proposed causal model, there are several limitations/assumptions, including: • An insufficient amount of data to capture the true causal relationships, • The CPTs of the BN nodes are populated based on simplified equations and approximations/assumptions, and • The assumption that each prevention measure is independent, while in reality they might be related. As indicated earlier, some causal relationships are inferred from the data. Therefore, in the case of insufficient data, some relationships can be misleading and/or missing. For example, data regarding the spatial location and congestion of the crew are currently missing. Selected subjects may be required to travel through the ship to targeted areas due to their duty, and this required movement may result in an increased chance of infection. Also, nodes such as 'Has COVID', 'Vulnerable', and 'Prevention Index' are populated based on the proposed equations. These equations illustrate the general, not necessarily the exact, relationships; they were derived for the given scenario and may require modification when transferred to another study. For example, age, gender, and preventive behaviors can greatly increase or decrease the chance of infection, but this relationship cannot be fully captured by deterministic equations.
In this study, we assume that the seven preventive behaviors reported by the volunteers in [36] are independent. This assumption is a simplification, as subjects can generally be classified as risk-averse or risk-tolerant. Risk-averse individuals are much more likely to take preventive measures; that is, the subjects who use face masks are also the ones who keep a social distance. In addition, bias in the sampling of data can severely impact the causal network model, specifically the creation of the CPTs. In the USS Theodore Roosevelt case, there is a significant bias in the crew composition toward younger males. Based on the collected data [36], the age group 18-24 contains the most members but also contributes the largest percentage of COVID-19 cases (68.1%), whereas the age group 40-59 contains the smallest percentage of infected (55.8%). This contradicts the belief that older people are more vulnerable. Possible reasons for this contradiction may be that younger people are prone to more interactions, while older people take more precautionary measures, and are also more likely to be of higher rank on the ship and have different duties requiring less contact. Lastly, all the data in [36] were collected on a volunteer basis and, therefore, represent only a fraction of the ship's population. This paper contributes to bridging the technology gap (Figure 1) that exists between the contemporary ES models and the human expert's limited ability to handle the uncertainty conveyed by the model while striving to make reliable decisions. It asserts that the solution lies in deploying causal networks that capture an approximation of the joint probability distributions of epidemiological factors. We propose a general DSS model with an embedded reasoning mechanism using a causal Bayesian network. This reasoning results in the probabilities and risk assessments of the outcomes of interest, thus providing recommendations to the human decision-makers.
The DSS's ability to support human experts with or without a technical background should be estimated using various measures, including the generally used "technological" performance measures. The recent emergence of "precaution" measures such as risk, trust, and bias addresses this trend. In this paper, we focus on risk assessment. Other open applied problems to be further addressed include studies of other precautionary measures such as bias and trust. These shall reflect various decision-making dimensions: − Technical, e.g., prediction accuracy and throughput [13], − Social, e.g., trust in CI [30], [38], − Psychological, e.g., efficiency of human-machine interactions [39]-[41], and − Privacy and security, e.g., vulnerability of personal data [42]-[44]. Finally, in the context of epidemic or pandemic preparedness, the human decision-makers may need support as the situation develops (proactive reasoning). Given data from the epidemiological model, the output of such a DSS is a result of dynamic evidential reasoning. This approach shall be further developed for better managing future epidemics and pandemics.
REFERENCES

Novel framework for assessing epidemiologic effects of influenza epidemics and pandemics
The seven tools of causal inference, with reflections on machine learning
Risk assessment and decision analysis with Bayesian networks
Understanding and taxonomy of uncertainty in modeling, simulation, and risk profiling for border control automation
Cognitive identity management: Risks, trust and decisions using heterogeneous sources
A personalized infectious disease risk prediction system
Bayesian network analysis of COVID-19 data reveals higher infection prevalence rates and lower fatality rates than widely reported
Decision support systems: Integrating decision aiding and decision training, in Handbook of Human-Computer Interaction
Wize Mirror: a smart, multisensory cardio-metabolic risk monitoring system
Architecting e-coaching systems: a first step for dealing with their intrinsic design complexity
Biometric recognition in automated border control: a survey
Cognitive checkpoint: Emerging technologies for biometric-enabled watchlist screening
A fuzzy decision support system for multifactor authentication
Assessing risks of biases in cognitive decision support systems
Probabilistic reasoning in intelligent systems: Networks of plausible inference
Principles of epidemiology in public health practice: an introduction to applied epidemiology and biostatistics
Causal discovery and inference: concepts and recent methodological advances
Statistics of extremes
On the favorable estimation for fitting heavy tailed data
Uncertainties in conditional probability tables of discrete Bayesian belief networks: A comprehensive review
Imprecise Probability
Probability intervals: a tool for uncertain reasoning
Credal networks
Bayesian networks inference algorithm to implement Dempster-Shafer theory in reliability analysis
Inference and learning in fuzzy Bayesian networks
Subjective networks: Perspectives and challenges
Tackling uncertainty in security assessment of critical infrastructures: Dempster-Shafer theory vs. credal sets theory
Security and Privacy Controls for Information Systems and Organizations
A manual for estimating disease burden associated with seasonal influenza
Technology-assisted risk of bias assessment in systematic reviews: a prospective cross-sectional evaluation of the RobotReviewer machine learning tool
Pruning trust-distrust network via reliability and risk estimates for quality recommendations
A security risk analysis model for information systems: Causal relationships of risk factors and vulnerability propagation analysis
Trust prediction via belief propagation
A survey on trust modeling
SARS-CoV-2 infections and serologic responses from a sample of US Navy service members: USS Theodore Roosevelt
aGrUM: a graphical universal model framework
Algorithmic bias in autonomous systems
Computational modeling of the dynamics of human trust during human-machine interactions
Cognitive and motivational biases in decision and risk analysis
Optimizing operator-agent interaction in intelligent adaptive interface design: A conceptual framework
Identity vs. Attribute Disclosure Risks for Users with Multiple Social Profiles
Privacy and synthetic datasets
Data quality and artificial intelligence: mitigating bias and error to protect fundamental rights

ACKNOWLEDGMENT

This research was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) through the grant "Biometric-enabled Identity Management and Risk Assessment for Smart Cities". The authors acknowledge Dr. V. Shmerko for valuable ideas and suggestions, and Ivan Hu for helping to collect data on the COVID-19 outbreak case on the USS Theodore Roosevelt.