key: cord-0614379-b666hkaf authors: Paulus, David; Fathi, Ramian; Fiedrich, Frank; Walle, Bartel Van de; Comes, Tina title: On the interplay of data and cognitive bias in crisis information management -- An exploratory study on epidemic response date: 2022-01-10 journal: nan DOI: nan sha: 741422a803f70c20966b2392b955e6c499e0ed33 doc_id: 614379 cord_uid: b666hkaf

Humanitarian crises, such as the 2014 West Africa Ebola epidemic, challenge information management and thereby threaten the digital resilience of the responding organizations. Crisis information management (CIM) is characterised by the urgency to respond despite the uncertainty of the situation. Coupled with high stakes, limited resources and a high cognitive load, crises are prone to induce biases in the data and the cognitive processes of analysts and decision-makers. When biases remain undetected and untreated in CIM, they may lead to decisions based on biased information, increasing the risk of an inefficient response. Literature suggests that crisis response needs to address the initial uncertainty and possible biases by adapting to new and better information as it becomes available. However, we know little about whether adaptive approaches mitigate the interplay of data and cognitive biases. We investigated this question in an exploratory, three-stage experiment on epidemic response. Our participants were experienced practitioners in the fields of crisis decision-making and information analysis. We found that analysts fail to successfully debias data, even when biases are detected, and that this failure can be attributed to undervaluing debiasing efforts in favor of rapid results. This failure leads to the development of biased information products that are conveyed to decision-makers, who consequently make decisions based on biased information. Confirmation bias reinforces the reliance on conclusions reached with biased data, leading to a vicious cycle, in which biased assumptions remain uncorrected. We suggest mindful debiasing as a possible counter-strategy against these bias effects in CIM.

Confirmation bias describes the tendency of people to seek and favor information that confirms their previous assumptions and decisions and to neglect disconfirming information (Nickerson, 1998). Consequently, crisis responders might disregard valid and important information only because it conflicts with or does not confirm their initial assumptions. We argue that the interplay of data bias and confirmation bias threatens the digital resilience of crisis response organizations. The consequences for crisis response can be particularly severe when data bias and cognitive bias reinforce each other in sequential decisions over time. When initial assumptions are made based on biased data, confirmation bias may lead people to further rely on information that confirms their initial biased assumptions. This might lead to a vicious circle that hampers adaptation and prolongs initially wrong decisions rather than correcting them. Conventionally, the literature suggests that decisions in crises need to be adaptive to new information (Turoff et al., 2004). The principle of strengthening the adaptive capacity to manage uncertainty underlies a broad range of literature on adaptive management in crises and (digital) resilience (Tim et al., 2021; Schiffling et al., 2020). However, we know little about the effectiveness of such adaptive approaches against the backdrop of combined data and confirmation bias. A potential counter-strategy to mitigate the negative consequences of biases on CIM is mindful debiasing.
Mindfulness means being more aware of the context and content of the information one is engaging with (Langer, 1992), thereby becoming less prone to confirmation bias (Croskerry et al., 2013). In a mindful state, information managers are more open to new and different information (Thatcher et al., 2018). In contrast, when less mindful, people rely on previously constructed categories and neglect the potential novelty and difference within newly received information (Butler and Gray, 2006). This exploratory study investigates the interplay of data and confirmation bias in a sequential setup. Through a three-stage experiment with experienced practitioners, we studied how our participants dealt with biased data, and to what extent they were able to correct initial decisions, or whether path-dependencies to biased decisions emerged. Based on our findings, we outline how mindful debiasing can support the detection and mitigation of data and confirmation biases in crisis response.

The remainder of this paper is structured as follows: the next section reviews the relevant literature related to CIM, digital resilience and biases, and provides the research gap and research questions this paper is addressing. Section 3 describes the research design and methods, and Section 4 provides the results from our experiment. In Section 5, we discuss our contributions to literature and practice as well as future research avenues. In Section 6, we reflect on the limitations of this exploratory study, and Section 7 concludes the paper.

Crisis information management (CIM) entails the formulation of data needs, identification of data sources, data collection, cleaning and structuring, data analysis, and the design and development of information products (Currion et al., 2007). The objective of CIM is to support decision-making by providing trustworthy, accurate, and actionable information. With the rise of Big Data and Artificial Intelligence, larger humanitarian organizations have invested in analytics capacity (Akter and Wamba, 2019). While the potential for working with unstructured data for predictive analytics has been recognized, many humanitarian organizations active in the Global South do not possess the resources for large investments in information technology and statistical sophistication (Prasad et al., 2018; Baharmand et al., 2021). In these contexts, large parts of CIM are still supported through common office information systems such as Microsoft Excel and Google Spreadsheets (United Nations, 2020). These are used, amongst others, to store survey responses, conduct data integration, and develop information products, e.g., maps, tables, and infographics (Thom et al., 2015). Especially in sudden-onset disasters, organizations frequently surge additional data analyst capacity to rapidly strengthen their CIM and digital resilience. Often, these are remotely working digital volunteers, who have been regarded as cost-effective, additional analyst capacities to support CIM (Poblet et al., 2018; Castillo, 2016). These external analysts contribute to CIM by supporting tasks such as data collection and analysis as well as the development of information products for decision support (Chaudhuri and Bose, 2020; Hughes and Tapia, 2015; Karlsrud and Mühlen-Schulte, 2017). External analysts have also contributed to epidemic CIM, e.g., in the 2014 West Africa Ebola outbreak (Hellmann et al., 2016), or the ongoing Covid-19 response (Fathi and Hugenbusch, 2021).
Figure 1 shows on the left side an information product developed by external analysts during the 2014 Ebola outbreak. The product highlights the major challenges of access to data and shows that the mobile phone network corresponds to the areas of the officially reported cases (WHO map at the right-hand side of Figure 1), clearly an indication of the widespread data biases, whereby access and phone coverage hampered reporting. Other information products created through such joint CIM processes include Excel and Google spreadsheets, graphs, and one-pagers summarizing results of social media data analyses (Hughes and Tapia, 2015). Tasks and responsibilities frequently shift in crises (Nespeca et al., 2020), requiring information managers and decision-makers to interact with data in different ways. While external analysts are primarily turning raw data into information, decision-makers are concerned with interpreting the situation and putting received information into context by using experience, communicating with partners, acting, and reacting. While much work on decision-making in crises focuses on optimizing for isolated decisions, crises are typically characterized by nested and interdependent decisions, driven by cognition and experience. This process is recognized by the literature on sensemaking, whereby decisions are part of a broader collective process of meaning-making (Weick, 1995; Klein and Moon, 2006). Important components of sensemaking are information seeking, processing, creating, and using (Muhren et al., 2008). Data-driven approaches, e.g., predictive analytics, can support sensemaking by revealing internal and external cues. Sensemaking is also influenced by an organization's mandate, strategy and modes of operation (Zamani et al., 2021), and especially describes how people deal with 'gappy' information environments (Muhren et al., 2008). Early studies on the work of external analysts emphasized the added value they bring to CIM by their remote and flexible structures (Meier, 2012; Ziemke, 2012; Bott and Young, 2012). It has been argued that their work contributes to the situational awareness of response organizations (Hughes and Tapia, 2015). To achieve situational awareness successfully, however, it is important to switch between goal-driven and data-driven approaches (Endsley, 1995; Endsley et al., 2003; Fromm et al., 2021). While in goal-driven approaches informational cues are intentionally considered in pursuit of a set goal, data-driven approaches refer to the open exploration of perceived cues that can lead to changes in priorities and readjustments. Situational awareness requires alternating between these two forms because stringent goal-focus will lead to neglect of cues in the data, while stringent data-focus will be perceived as overly taxing (Fromm et al., 2021).

There are diverging perspectives on what constitutes digital resilience and whether it plays out at the level of the physical infrastructure, the people or groups using the infrastructure, or the interplay between the two. Some authors focus on the impact of digital technology on the user, stressing the importance of (access to) information in crises. For instance, according to Wright (2016), "digital resilience means that to the greatest extent possible, data and tools should be freely accessible, interchangeable, operational, of high quality, and up-to-date so that they can help give rise to the resilience of communities or other entities using them."
Others focus on the resilience capabilities of individuals to process digital data and engage with virtual environments (UK Council for Internet Safety (UKCIS), 2019). Here, we take an information systems perspective, understanding digital resilience as a phenomenon that emerges from the interaction of people with data through digital tools and infrastructure. We follow a crisis-related definition that describes digital resilience as a means to cope with disruptions: "[...] digital resilience [...] refer[s] to the phenomena of designing, deploying, and using information systems to quickly recover from or adjust to major disruptions from [...] shocks." (Constantinides et al., 2020). Crisis information management needs to foster digital resilience by supporting flexibility, agility, and adaptability (Turoff et al., 2004). Our definition also covers specific aspects of digital resilience during epidemics (Ma'rifat and Sesar, 2020), namely the collection and analysis of outbreak data, as well as the use of analysis results to inform crisis response. Since CIM incorporates data collection, analysis, and sharing to support crisis decisions, it is directly linked to digital resilience.

Previous literature identified several challenges to CIM that affect different functions (Van de Walle and Comes, 2015; Lauras et al., 2015) at different hierarchical levels (Bharosa et al., 2010). We argue that data and cognitive biases can emerge as consequences of these challenges and affect CIM by posing threats to digital resilience in terms of hampering the rapid recovery from crises. We use the challenges described below to design our experiments, described in Section 3. Information has to feed into the fast crisis decision-making process (Warnier et al., 2020; Lauras et al., 2015; Turoff et al., 2004). The time pressure reinforces the tendency to focus only on information that is immediately available (Higgins and Freedman, 2013), which may induce a range of biases (Maule et al., 2000). Information needs also change rapidly during different crisis stages (Hagar, 2011; Gralla et al., 2015; Nespeca et al., 2020), posing challenges to the agility and flexibility of information management (Lauras et al., 2015). As the destruction of infrastructure or lack of access may affect different regions to different degrees (Altay and Labonte, 2014), datasets are often geographically imbalanced or biased. Demographic biases can influence the data further. Especially in the Global South, the most vulnerable groups might not have access to mobile phones and therefore are not included in mobile phone data to track and trace population movements (IOM, 2021). Underrepresentation of geographic areas or social groups can lead to violations of the humanitarian imperative to 'leave no one behind' (Van de Walle and Comes, 2015). Relevant information about the crisis situation is often uncertain. Uncertainty is an umbrella term for information that is unavailable, incomplete, ambiguous, or conflicting (Comes et al., 2011; Tran et al., 2021). To reduce uncertainty, people likely use the tools and methods they are most familiar with. This behavior could lead to what is known as the law-of-the-instrument, which states that people tend to overly rely on a particular familiar tool (Johnson and Gutzwiller, 2020). The high volume, velocity, and variety of irrelevant data can quickly lead to information overload, particularly when the veracity of data has to be evaluated as well (Schulz et al., 2012).
This issue has become particularly prominent with the ubiquity of social media (Gupta et al., 2019), which makes it virtually impossible to filter and process all available data on time. Information overload has been shown to induce confirmation bias (Goette et al., 2020). Confronted with an overload of information, it is hard to identify any gaps in the available data, leading to exploiting what is known rather than exploring what could be known. In the high-stakes decision contexts of humanitarian crises, tremendous potential losses are combined with the irreversibility of decisions (Kunreuther et al., 2002). High-stakes situations have been shown to induce a large number of biases, ranging from a tendency to focus on short-term perspectives to an over-reliance on social norms and emotional cues (Kunreuther et al., 2002). For example, high-stakes decisions can lead decision-makers to engage in groupthink, which is manifested by overconfidence and a striving for in-group harmony, rather than critical self-reflection (Kouzmin, 2008). As we have shown, the characteristics of crises provide a breeding ground for data biases and cognitive biases (Comes, 2016a). Here, we zoom into two of the most prominent biases that are relevant in the interplay of information and decision-making: data and confirmation bias.

Data can become biased due to historical, social, political, technical, individual, and organizational reasons (Jo and Gebru, 2020). Representational data bias is among the most common forms and a broad category of data bias. It comes from the "divergence between the true distribution and digitized input space" (ibid.). In practice, that often means that a dataset systematically deviates from the real-world phenomenon the data is supposed to represent, for example, leading to the under-representation of geographic areas or social groups. Data bias can be understood as a flaw of a dataset, negatively affecting the quality of the data and potentially causing damage and losses in organizational processes (Storey et al., 2012). Especially in sensitive contexts, data bias has been shown to replicate and reinforce existing inequalities (Jacobsen and Fast, 2019; Bender et al., 2020). Urgency and overload combined with uncertainty are common causes for data bias in crises (Fast, 2017). In epidemic response, the misrepresentation of infection rates has been documented during the 2014-2016 Ebola outbreak in West Africa (Fast, 2017). Similarly, during the COVID-19 pandemic, different testing, tracing, or counting strategies have resulted in incomplete datasets and incomparable statistics (Fenton et al., 2020). We look at representational bias in two key variables for epidemic response: numbers of infections and treatment capacity. Representational bias in those two variables can lead to a flawed understanding of the outbreak's severity and the available capacity, leading to misallocations and delayed or ineffective response.

One of the hopes in using additional analytic capacity is that this additional capacity identifies additional information and thereby helps overcome data bias. To test if additional external capacity actually helps in overcoming data bias, we draw inspiration from traditional hidden profile experiments (Stasser and Titus, 1985; Lightle et al., 2009). These experiments evaluated groups' decision-making performance.
Group members received two sets of information: one set that contained the same information for all group members and another set that differed between group members. Only by pooling their different individual information sets can groups identify the hidden profile, which is crucial to make the optimal decision. Hidden profile experiments have shown that groups generally overly discuss common information and neglect individual information, so that the hidden profile remains hidden and the groups make an inferior decision (Stasser and Titus, 1985; Lightle et al., 2009). This behavior was also found in experiments on crisis decision-making (Muhren et al., 2010). However, previous experiments did not specifically look at representational bias in crises and whether adaptive approaches to surge additional analyst capacities help to improve the identification and mitigation of biases.

A cognitive bias that hampers adequate adaptation to new information is confirmation bias. Research on confirmation bias has shown that people tend to limit their information retrieval efforts to information that is more likely to confirm their assumptions (Nickerson, 1998). Because information that opposes preliminary assumptions increases discomfort (Hart et al., 2009), it may be discarded, and wrong assumptions remain undetected, leading to flawed decision-making (National Research Council, 2015). Confirmation bias, like cognitive biases in general, is often characterized as a byproduct of information processing limitations: because of urgency and overload, people use biases as mental shortcuts to judge and decide quickly. The urgency of crises likely fosters confirmation bias because relying on already formed assumptions accelerates decision-making. Domain experts, however, can show the opposite behavior and deliberately seek disconfirming information (Klein and Moon, 2006). Counterfactual mindsets have been shown to be an effective debiasing strategy (Kray and Galinsky, 2003). However, we know little about the potential influence of confirmation bias on the information search and selection behavior of experienced crisis responders. In this study, we investigate if crisis decision-makers and analysts are susceptible to confirmation bias and if they search for non-confirmatory data as a debiasing strategy. It could be possible that the deliberations between experts induce counterfactual mindsets, which, in turn, lead to a more critical assessment of prior decisions. However, path-dependencies may arise, whereby confirmation bias leads decision-makers and analysts to confirm assumptions in subsequent decisions, even though they were made based on biased data.

Previous research measured confirmation bias through tasks with two parts (Jonas et al., 2001; Fischer et al., 2011). First, participants made a preliminary decision between two options on a certain matter. Then, they were presented with a set of information, typically summaries of articles on the matter they had just decided on. For example, ten summaries of articles are presented, five supporting participants' preliminary choice and five opposing it. Participants are then asked to select the articles they would like to receive in full. The experiment finishes, and participants are told there will be no full articles because it is unnecessary for the experiment. The researcher later counts the numbers of selected supporting and opposing article summaries and conducts a significance test for the difference.
If significantly more supporting summaries were selected, we speak of confirmation bias. In dynamic situations such as crises, information on the best course of action continuously changes. Therefore, the literature advocates for agile and adaptive management in epidemics (Merl et al., 2009; Janssen and van der Voort, 2020) or, more generally, in crises (Charles et al., 2010; Anson et al., 2017; Schiffling et al., 2020; Turoff et al., 2004). Response organizations often lack sufficient capacities to respond. Therefore, remotely working external analysts are added as surge capacity. There is some hope that via this additional capacity, exploratory search strategies may be favored that help overcome the responsive and exploitative strategies of decision-makers. At the same time, the remote nature of the work of analysts may add to the biases they are subject to (Comes, 2016a) and may make especially data interpretation harder. Therefore, it is not yet known how and to what extent the interplay of analysts and decision-makers in sequential decisions reduces or amplifies biases. In this paper, we investigate whether the surge of additional analyst capacity is effective in mitigating bias effects. In sequential decisions, initial biases might limit the ability to effectively adapt, even though adaptation is widely described in the crisis management literature as key to managing the uncertainties and data biases that often prevail at the onset of a crisis (Mendonca et al., 2001; Quarantelli, 1988). Potentially, representational data bias and confirmation bias reinforce each other, leading to amplified biases. This is especially harmful if path-dependencies arise whereby the initial data bias not only influences initial decisions but also leads to flawed decision trajectories through confirmation bias.

Figure 2 depicts the interaction of the identified main challenges within the external analyst-supported CIM process. The response organizations activate external analysts in the first step (1). In steps (2) and (3) external analysts and decision-makers conduct information management and decision-making under the influence of the crisis, which can lead to biases. Information management and decision-making need to identify and mitigate biases to lead to unbiased results (4). Finally, the resulting information and decision are either influenced by biases, or bias mitigation was successful (5). We are interested in (RQ 1) whether the surge of external analysts leads to unbiased information products for decision support, (RQ 2) if the joint CIM process between analysts and decision-makers facilitates debiasing, and (RQ 3) if data bias and confirmation bias reinforce each other leading to path dependencies in sequential decisions. We address the following research questions:
RQ 1: Is surging external analysis capacity effective in identifying and mitigating data bias?
RQ 2: How do external analysts and decision-makers jointly handle data bias in the decision process?
RQ 3: Does confirmation bias create path dependencies whereby biased assumptions persist in sequential decisions?
We used an exploratory, three-stage experiment to examine these research questions, which is described in detail in the next section.

We conducted an exploratory study with three stages to address the three research questions (Figure 3). RQ 1 and RQ 2 were addressed through a scenario-based workshop with experienced practitioners in the fields of crisis decision-making and external analysis for CIM support.
RQ 3 was addressed through an online survey with the same participants. Figure 3 depicts the research questions together with the corresponding experiment stages, data collection, and analysis methods. The experiment was designed to observe the crisis information management and decision-making process in a controlled environment. The controlled environment enables observation without interfering with the real response and allows us to conduct the experiment with three different groups. Yet, by designing realistic information flows, creating time pressure and providing the typical tools, the scenario is sufficiently realistic to inspire the same ways of thinking that external analysts or decision-makers also show in real epidemics. Through this setting, it was possible to observe the practices, communication and interactions within and between the participant groups.

The experiment took place at the TU Delft Campus in The Hague in January 2020. Participants had to have work experience as external analysts or decision-makers in crises to be eligible for participation. The recruitment was done based on the competencies required to fulfill the tasks of our experiment. These competencies included technical skills such as merging tabular data in MS Excel or a similar tool and developing and interpreting crisis information products such as maps and graphs. In addition, participants needed to be affiliated with an established crisis response organization, data analytics organization, or research institute on crisis or epidemic management. The authors had contacts in a network of potential candidates through previous research. This enabled us to recruit participants who had the required skills and experience. The participants were recruited internationally from various countries. Table 1 lists the descriptive information of our participants. Twenty-four participants took part in the experiment, of whom twenty-one were experienced in crisis management (eleven external analysts and ten decision-makers), and three were students. We added three students to add an element of realism to the group compositions, as staff turnover is high in crisis response teams, with new and inexperienced staff needing to be integrated (St. Denis et al., 2012; Fathi et al., 2020). Based on the background and experience of the participants, they were assigned either the role of external analyst or that of decision-maker. Participants within the group of external analysts were part of professional disaster relief organizations as well as organizations representing different fields of expertise such as digital mapping, social media analysis, and data analytics. The group of decision-makers consisted of representatives from different governmental and non-governmental crisis response organizations from numerous countries, including The Netherlands, Germany, the United Kingdom, and the United States. Table 1 gives an overview of all participants, their corresponding organizations, and competencies. Recruiting experienced professionals for a scientific experiment leads to a smaller pool and thereby also lower participant numbers as compared to experiments with students or the general public. As the objective of this exploratory experiment was to gain insights into information management and decision-making approaches by actual practitioners, relying on samples drawn from student populations or the general public would have been inadequate.
Our sample size is in a similar range to that of comparable exploratory studies on information systems and information management (Antunes et al., 2020). Such exploratory studies provide a valid approach to build theory and identify metrics, mechanisms, processes, and concepts that can be investigated further in subsequent empirical research (Antunes et al., 2020). We divided the participants into three groups of seven to nine members. The group sizes match real-world work team sizes of external analyst-supported CIM processes (St. Denis et al., 2012). Further, members of geographically distributed teams of up to nine members have been shown to participate more actively and to be more committed to and more aware of the team's goals than in larger teams (Bradner et al., 2003). Our groups were purposefully mixed with participants having complementary skills and expertise, so that each group included experts on mapping and data analytics on a similar level. Therefore, the number of participants and the group compositions are a good representation of real-world teams.

The fictional scenario of our experiment was an epidemic outbreak happening simultaneously in three countries. The experiment was inspired by the 2014-2016 Ebola outbreak in Guinea, Liberia, and Sierra Leone. The three country groups had to assess the situation in their respective country by analyzing the data provided during the experiment, with the goal of supporting decisions on where (in which districts) to place treatment centers. The experiment reproduced the main challenges of crisis information management, as mentioned in Sections 2.2 and 2.3, by putting participants under time pressure (urgency), providing incomplete and low-quality data (uncertainty), requiring participants to make high-stakes sequential decisions on treatment center placements, and having to do so with a shortage of resources. Before each stage of the experiment, we gave a brief introduction about the scenario and the participants' tasks. Each stage was concluded with a reflection moderated by the researchers.

As our participants were experienced practitioners, the data used in the experiment had to resemble reality closely. We used original data from the 2014-2016 Ebola epidemic. The datasets selected for inclusion were on infection rates, infrastructure capacities, demographics, and geography. We adjusted the original data for three reasons. First, some of the participants had been involved in the 2014-2016 Ebola response and should not have a head-start by already being familiar with the data. Second, our experiment required us to introduce a controlled representational bias into the data. Third, the original datasets were too large for the time frame of the experiment. The original data was downloaded from the Humanitarian Data Exchange platform 1 and we adjusted it as described in the following. The infection rate is the key variable in epidemic response. We adjusted the original data so that infection rates were higher and more cases occurred in a shorter time. We retained columns from the original datasets and removed auxiliary columns to avoid information overload among participants (Table 2). We included infection data for the first four months of the fictional outbreak (Table 3). Inspired by hidden profile experiments, we created one district per country with substantially more total cases than the other districts in the country. The data of this district was split among group members' datasets (Table 4), as illustrated in the sketch below.
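The split can be illustrated with a minimal sketch (Python). The district name Niprusxem is taken from the scenario materials; the case counts, the other district names, and the helper function are hypothetical placeholders, not the actual experiment data.

from copy import deepcopy

# Unbiased monthly case counts per district (hypothetical numbers).
unbiased = {
    "Niprusxem":  {1: 30, 2: 45, 3: 60, 4: 80},   # most affected district overall
    "District B": {1: 20, 2: 25, 3: 30, 4: 35},
    "District C": {1: 15, 2: 20, 3: 25, 4: 30},
}

def biased_copy(data, hidden_district, visible_month):
    # One analyst's version: full data for all districts, but only a single
    # month of case numbers for the hidden-profile district.
    version = deepcopy(data)
    version[hidden_district] = {visible_month: data[hidden_district][visible_month]}
    return version

# One copy per analyst, each seeing a different month of the hidden district.
analyst_datasets = [biased_copy(unbiased, "Niprusxem", month) for month in (1, 2, 3, 4)]

# Seen individually, the hidden district appears to have the fewest cases ...
print([sum(d["Niprusxem"].values()) for d in analyst_datasets])   # [30, 45, 60, 80]

# ... only merging all copies recovers the unbiased total (30+45+60+80 = 215).
merged = {m: c for d in analyst_datasets for m, c in d["Niprusxem"].items()}
print(sum(merged.values()))                                        # 215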
Only by joining their datasets were participants thus able to identify the district with the most cases. If the bias remained undetected and untreated, the resulting information products would also become biased.

Table 4: Step 3: Introduction of representational bias. We created biased versions of the adjusted datasets from step 2. The biased versions were distributed among participants. The bias is here introduced in the district of Niprusxem. The district has the most cases in the unbiased dataset, but the least cases in the biased datasets. One group member only receives data for month 1 (displayed). Each other group member also only receives data for one month (not displayed). Only by joining the datasets can the unbiased case numbers be recovered.

Infrastructure and capacity data: During the 2014-2016 Ebola outbreak, mapping healthcare facilities and their capacities became a crucial task for crisis information management. However, up to 60 % of values in the original data on health infrastructure and capacities were missing, highlighting once more the high uncertainty analysts are confronted with. In addition, values had unclear and ambiguous meanings, making interpretation difficult. We adjusted the original datasets to include a reduced number of key variables. In the original datasets, detailed capacity data, i.e., numbers of beds per treatment center, was incomplete for 58 % of entries. We mimicked this representational bias in our adjusted datasets. Only one participant per group received capacity data on the number of beds per facility. The other group members received the same dataset but with an empty column for capacities.

Demographic and geographic data: Demographic data are part of the common operational datasets in crisis response (Van de Walle, 2010). They are used to understand the overall population distribution in terms of age, gender, and geographic location. By providing a sense of population density and bordering regions, they become very important in predicting trends in epidemic outbreaks. We collected the original data, replaced country and district names with randomly generated names, and slightly adjusted the demographic numbers. We further included randomly generated maps corresponding to the three randomly generated countries and districts. The maps were distributed to the participants in digital and printout versions.

Data volume: Data volume differed slightly between the groups, with no large differences that could have significantly eased or complicated one group's data review and analysis process (Table 5).

Participants' access to the data: We created Google accounts for each participant, and the created datasets were uploaded into the Google Drive folders of each participant. This allowed us to distribute the created datasets to the members of each group while making sure the introduced bias was identifiable. A print-out sheet with login information for the Google folder was created for each participant. Each participant received a laptop to access the files. The laptops had MS Office pre-installed for the information management work on the data. Further tools used by our participants in their professional work, including RStudio Online and Google Spreadsheets, were also available.

To address the first two research questions (Is surging external analysis capacity effective in identifying and mitigating data bias? and How do external analysts and decision-makers jointly handle data bias in the decision process?
), we set up the first two stages of the experiment. To address research question three (Does confirmation bias create path dependencies whereby biased assumptions persist in sequential decisions?), we conducted an online survey with the same participants.

Stage 1 was conducted only with the group of external analysts. They were divided into the three groups we had defined in the planning of the experiment (Table 1). Each group was responsible for the information management for one country affected by the fictional outbreak. Participants were told their group's objective was to review the available data and develop information products that could be used in stage 2 of the experiment for the prioritization of districts that needed the most urgent assistance. As all participants were used to preparing information products for crises, they were free to decide which information products to create (e.g., maps, tables, graphs, etc.). Participants were briefed they could use the MS Office Suite installed on the laptops provided to them, or any other online tools they would use in their professional work. Because of participants' experience, the importance of developing accurate information was clear to them. This included checking for data issues and gaps and comparing information quality among group members. We gave them no indication that they could expect the data they received to be perfect, accurate and unbiased. Rather, we briefed them that the experiment should be seen as a simulation of a real case, with challenges that can be expected from real epidemic crises. Participants were briefed they had 2.5 hours for their task. After the introduction, the three groups convened in three rooms, equipped with laptops and information sheets that contained user-login information for each participant to access the available data. The groups were asked to present the developed information products and suggestions for response decisions at the end of experiment stage 1.

In stage 2, decision-makers joined each of the three groups. Participants were briefed they had to make resource allocation decisions by placing treatment centers in priority districts of their respective countries. External analysts had to brief the decision-makers on the outbreak situation, priority issues, and districts using the information products developed by them in stage 1. Each group received a limited number of treatment centers (in the form of small building blocks) that could be placed in districts of the fictional countries on printout maps. Participants were told that each treatment center, i.e., building block, had a fixed capacity of ten beds. We implemented resource constraints by limiting the number of available treatment centers and beds. Thus, not all districts could be fully equipped to respond to the rising infections, and prioritization decisions had to be made. Participants were briefed that all decisions had to be made within 60 minutes. After the introduction, the three groups convened in three rooms, equipped with laptops and the information products developed in stage 1. The groups were asked to present their final decisions at the end of the experiment.

To address the third research question, after stage 2 was completed, all participants were asked to fill out an online survey on site. The research objective was to assess whether confirmation bias would lead to path-dependencies toward decisions that were made based on biased information.
A significant confirmation bias result would mean that participants preferred to seek information that confirmed their previously formed assumptions, even when they were influenced by biased datasets. The survey referred to participants' previous decision from stage 2, where they selected a priority district to which most treatment centers were allocated. In stage 3, participants were briefed that new information was available after they had made prioritization and allocation decisions. Their task was to select from a list of datasets those they found most important to support further information management and decision-making. The survey item and confirmation bias measure are described in Section 3.5.2.

In stages 1 and 2, one observer per group took notes of the information management processes, communication, and interaction within the groups. Photos were taken to document intermediate results and processes, for example, of post-its on the printout maps. After the session, the group members' files of the information products created on the laptops were saved and analyzed by the researchers. We conducted structured observations of the first two stages of the experiment that included the use of protocol sheets with guiding questions. Data collection through researcher observation is highly suitable in interactive experimental settings with dynamic group discussions. The goal was to capture verbal data, i.e., what was discussed, how, by whom, and when, as well as interactions among group members (Steffen and Doppler, 2019). Since an observer must select which person and interaction is the object of observation (selection problem), a result bias can occur (Steffen and Doppler, 2019). We addressed this potential issue by briefing observers beforehand on the observation protocol and guiding questions. Thus, before beginning an observation, researchers numbered participants in a common format to record activities in a standardized way, quickly and effectively. The protocol guideline included example observation items and was divided into three different sections: (1) description of workshop site, (2) communication and interaction description, (3) general impressions. The complete observation protocol is provided in the Appendix.

The collected data was evaluated through qualitative content analysis (Döring and Bortz, 2016). The main activity was to summarize the collected observational data and reveal content related to our research questions. We further evaluated the information products developed by the participants in addition to conducting the qualitative document analysis. We proceeded in three steps:
1. Paraphrasing: To reduce the volume and complexity of the observational data and of the created information products, the first step was to identify passages that carried content relating to our research questions and delete passages that did not. In this process, the different data forms (text passages of the sheets and information products, e.g., maps) were analyzed separately.
2. Coding: In the second step, all paraphrases representing the main content were summarized in a single document. The separate paraphrases were coded and structured to answer our research questions and find explanations for these answers. We conducted two coding iterations to develop a set of coded categories of the observed discussions and activities.
3. Analyzing: In the final step, we analyzed the structured content with regard to our research questions.
Through this content analysis, we were able to systematically evaluate and analyze all observation sheets and information products and present key results. The first author coded the data in the first iteration. The resulting codes and corresponding observational notes were discussed with the second author. Adjustments were made to some of the coded categories, followed by the second iteration of coding by the first author. After review by the whole author team, the final categories of codes were agreed on. Table 6 presents example observation notes and coded categories.

In stage 3, participants were asked to complete the online survey on site. The survey was implemented in a Google Form and distributed to each participant. The survey prompted the participants with the following text: "Below are the summaries of 10 new datasets that are available. You can request the full version of those datasets but you only have limited time and resources to evaluate them all in detail. Select as many datasets as you want. District X is the district you have identified in the last session as the most critical district." In stage 2, participants had to allocate treatment centers to the districts with the highest priority (referred to as "District X" in the survey). In the survey, ten summaries of ten fictional datasets were given in one-sentence statements. Five dataset summaries supported that District X was indeed a priority district, whereas the other five dataset summaries opposed this. An example of a summary of a supporting dataset is "Dataset 9: District X has a high amount of health care workers infected." An example of a summary of an opposing dataset is "Dataset 10: District X has a low amount of health care workers infected." Participants did not receive any data to review besides those summaries, and after the survey was completed, they did not receive the datasets they selected, as this was not necessary for measuring confirmation bias (Jonas et al., 2001; Fischer et al., 2011). The response data from the survey was imported into SPSS for statistical analysis. Following the measures of confirmation bias in previous studies, we first counted the selected supporting and opposing datasets per participant. Then, we used a paired samples test to identify whether the mean counts of selected confirming and opposing datasets were significantly different.

In the following, we present the results for our three research questions. In the first stage of the experiment, all three groups of external analysts identified differences between group members' datasets and discovered that the data providing the numbers of infections were biased.
Example observation: EA8 is looking up the data for Niprusxem. He says he only has month 2 for this and that this is strange. Asks to see EA12's data. EA9 says she only has month 3. EA12 has month 4. EA9: We have different datasets!
However, the bias within the capacity data remained undetected in all three groups (see Table 7). This led to the development of information products that were overly focused on the outbreak situation and overlooked existing capacities. Figure 4 shows the results of the coding and categorization process of our qualitative content analysis. The figure provides a summary of the sensemaking process within the groups. It shows the share of each coded category (in percent) within the overall activities of the groups during five time intervals of 30 minutes each.
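The computation behind these shares can be made concrete with a minimal sketch (Python). The category labels echo those used in our coding, while the time-stamped example records and the helper names are illustrative assumptions, not the actual protocol data.

from collections import Counter, defaultdict

INTERVAL_MINUTES = 30  # stage 1 was analyzed in five 30-minute intervals

coded_notes = [  # (minute of experiment, coded category) -- illustrative only
    (5, "data work"), (12, "data work"), (18, "debiasing behavior"),
    (41, "data work"), (55, "socializing and experimenting"),
    (70, "data work"), (82, "data work"), (95, "debiasing behavior"),
    (110, "interpretation of data"), (128, "decision-making recommendations"),
    (135, "decision-making recommendations"), (148, "decision-making recommendations"),
]

# Group coded observations into 30-minute intervals and count categories.
per_interval = defaultdict(Counter)
for minute, category in coded_notes:
    per_interval[minute // INTERVAL_MINUTES][category] += 1

# Share of each category (in percent) within each interval's activities.
for interval in sorted(per_interval):
    counts = per_interval[interval]
    total = sum(counts.values())
    shares = {cat: round(100 * n / total) for cat, n in counts.items()}
    print(f"interval {interval + 1}: {shares}")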
In the initial phase, participants rushed into downloading the datasets stored in their individual Google accounts and started the data analysis by importing the data into their preferred information systems (e.g., Excel, RStudio). Participants familiarized themselves with their own data and identified differences in the data of their group members. Figure 4 shows that the share of data work remained constant during the first two time intervals (i.e., the first 60 minutes). It became the dominant category during the third interval and then lost importance by making room for an increased focus on decision-making recommendations. Figure 4 also shows that the groups started with attempts to integrate datasets as debiasing behavior in the first interval.
Example observation: EA10 suggests to the group to upload the data into Google Drive so he can easily merge them.
These attempts were, however, not efficiently followed up on, and the share of debiasing behavior was reduced in the second time interval. After an initial familiarization with the data, a collective sensemaking process started to emerge, characterized by intensive socializing, working, and experimenting with the data. The groups discussed how to define priority districts and what should be the key variables. This led to debiasing behavior gaining some significance and reaching its peak in the second-to-last interval, when groups recognized that datasets remained biased. The sensemaking process did not, however, lead to due attention to biases. When differences between the group members' datasets were recognized, measures taken by the groups were insufficient to debias the data. One reaction was that one group member would upload their biased dataset into a shared folder, and the other group members would from then on use this data folder as the single-point-of-truth. From that time on, all group members accessed the same biased data. This behavior might be explained by groupthink, as the individual members of the groups strived to establish harmonious relationships, characterized by conformity and the minimization of conflict rather than openly articulating the disconfirming information they held. Participants struggled with the non-availability of data they wished to have and perceived the data quality of some datasets to be too low to build accurate situational awareness and determine priorities. With the end of the experiment stage approaching and time pressure increasing, groups tasked individual members with creating information products, i.e., maps, graphs, and tables.
Example observation: EA11: Data quality is questionable, it is not meaningful to go into data analysis in the last 20 minutes, must be quick... I need to think of the report, we should still name projects or tasks that our organizations would work on.
At this point, it became increasingly difficult for the groups to mitigate any data biases because individuals would turn their own data into information for decision support, and no critical data assessments were done. Figure 4 shows that interpretation of data and decision-making recommendations dominated the last time interval and debiasing behavior was again neglected. Even though all groups identified the bias within the infection data, the groups failed to successfully debias the data. Successful debiasing would have required that members of each group merge their datasets for infection rates and infrastructure capacity.
However, even though the bias was recognized, each group relied on the data of only one of its members in the design of information products. Remarkably, one group identified early during the experiment that its members had received biased data and shared their finding with the other groups, but still all groups presented results based on biased data at the end of the experiment. The resulting information products of each group showed numbers of infections in the most affected districts that were lower than the complete and unbiased information they could have acquired by merging their datasets. Figure 5 shows one example of a developed information product. It depicts that the district with the most cases in the unbiased dataset was presented with biased numbers based on only one of the participants' datasets.

Fig. 5: Example information product resulting from stage 1. Country map shows the numbers of cases per district (colored by the participants in red, yellow, and blue). The green box (added by us) shows that the unbiased numbers of cases for the most affected district were much higher than those reported in the information product developed by the participants.

Overall, an explanation for the unsuccessful debiasing is the strong perception of time pressure and the urgency experienced by participants to deliver an information product in time that is presentable and actionable for decision-makers. Even though the additional analyst capacity is meant to alleviate the time pressure, they are subject to the same biases of exploiting, rather than exploring, data. Analysts were not able to develop unbiased information products for decision support, since the data was accepted with its flaws, and information products needed to be developed anyway based on the low-quality data.

In experiment stage 2, all three groups relied on the biased datasets and resulting biased information products from stage 1 in their discussions on treatment center placement decisions. External analysts briefed decision-makers using the biased numbers of infections.
Example observation: They decide to place treatment centers based on the case numbers, and also want to place them along the border. EA12 shows the map of the confirmed cases to the DMs.
As described in Section 4.1, no group was able to identify the data bias on existing bed capacities during information product development (see Table 7). Consequently, no detailed capacity data was communicated to decision-makers, and allocation decisions were made in the absence of detailed data on existing capacities. If the capacity data bias had been discovered, it potentially could have facilitated the groups' allocation decisions. Decision-makers took the role of advocatus diaboli by critically questioning the underlying data of the developed information products. In their role as decision-makers, they pressured external analysts on the data gaps and data quality issues very early in the experiment.
Example observation: DM3: why are some areas empty? EA5: the data is not very clean; possibly underreporting. DM1: is the data trustworthy? EA5: we had different datasets between group members.
Analysts briefed decision-makers on data limitations. This led to the joint understanding that the available data was unreliable to some degree. However, when data limitations were mentioned, decision-makers did not press the issue enough. When analysts explained data gaps, other group members, who had access to that missing data, would not step in to clarify.
Decision-makers would not press the group sufficiently to mitigate the data bias. Instead, they would press for prioritization decisions for treatment center allocation.
Example observation: DM5: Based on my experience, you have to make decisions on very little data. Indecision kills.
Figure 6 shows the results of the coding and categorization process of our qualitative content analysis of experiment stage 2. It shows the share of the coded categories (in percent) within the overall activities of the groups during four time intervals of 15 minutes each. Deliberations on allocation strategies dominated discussions from the second interval onward until the end of the experiment. They reached their peak during the second-to-last interval, where 35 % of discussions were on allocation strategies. Groups showed stronger debiasing behavior at the beginning of the session, where data limitations were communicated and discussed. However, this focus was reduced over time, only increasing slightly in the last time interval. This pattern of debiasing neglect was already observed in stage 1. Requirements for additional data were mainly articulated at the beginning and were then constant throughout the later intervals, even though it was communicated to the participants that there would be no additional data provided during the experiment. This behavior shows a heavy dependency on more data and the conviction that more data will help the decision process, even though the quality of such future data is unknown and questionable given that the currently available data was already of low quality. Interpretation of the situation outweighed the interpretation of the data throughout all intervals, showing the influence of the decision-makers, who relied more on their previous experience to assess the situation than on the available data that was known to have limitations. Overall, the joint information management and decision-making process between analysts and decision-makers did not result in sufficient debiasing, and allocation decisions were made based on biased information.

Fig. 6: Experiment stage 2 results of the coding and categorization process. The graph shows the share (in percentage over time) of the coded categories within the overall activities of the groups. Initial discussions on data limitations were not sufficiently followed up on, and discussions on allocation strategy dominated the group discussions from the second interval onward.

In the final phase, participants were asked to select additional information that supported or conflicted with their allocation decisions. Our analysis of the survey responses shows that the mean count of selected supporting datasets was higher (M = 2.94, SD = 1.56) than the mean count of selected opposing datasets (M = 1.82, SD = 1.88), indicating that participants selected more supporting than opposing datasets (see Table 8). A Wilcoxon signed-rank test was used to test if the discrepancy between means was statistically significant. The result reveals significant confirmation bias in the participants' selection of additional datasets (n = 17, z = -2.537, p = .011). We therefore find that our participants showed significant confirmation bias and that the bias drove their information selection decisions. This is particularly concerning as the participants' preliminary decisions were flawed and based on biased information.
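A minimal sketch (Python) of this analysis step is given below. The study used SPSS; SciPy is used here for illustration, and the per-participant counts are hypothetical placeholders rather than our response data.

from statistics import mean, stdev
from scipy.stats import wilcoxon

# Per-participant counts of selected supporting vs. opposing dataset summaries
# (0-5 each). These values are hypothetical, not the study's survey responses.
supporting = [4, 3, 5, 2, 3, 4, 1, 3, 5, 2, 3, 4, 2, 3, 1, 4, 3]
opposing   = [1, 2, 0, 1, 1, 3, 0, 2, 1, 0, 2, 3, 1, 0, 2, 1, 2]

print(f"supporting: M = {mean(supporting):.2f}, SD = {stdev(supporting):.2f}")
print(f"opposing:   M = {mean(opposing):.2f}, SD = {stdev(opposing):.2f}")

# Paired, non-parametric comparison of the two counts per participant.
statistic, p_value = wilcoxon(supporting, opposing)
print(f"Wilcoxon signed-rank: W = {statistic:.1f}, p = {p_value:.3f}")
# Significantly more supporting than opposing selections is read here as
# confirmation bias in the participants' information selection behavior.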
In stage 3, participants tried to further substantiate their previously biased decisions instead of using the opportunity to counter-check their assumptions. Confirmation bias reinforced their biased assumptions and strengthened their reliance on potentially further biased data. A significant confirmation bias at this stage is in line with our observations in the earlier stages of the experiment, where participants followed an exploitative and satisficing strategy given the time pressure, rather than an exploratory strategy. Although much of the literature on crisis and disaster management suggests an adaptive approach to manage the uncertainties that typically persist at the onset of a crisis (Quarantelli, 1988), we found that over time the initial mental models and decisions became deeply ingrained and persistent. As such, it became increasingly difficult for participants to implement a debiasing strategy that allowed them to correct their decision because the initial data biases were never effectively discussed and mitigated, even though new information became available that could have facilitated corrections. Even though they knew that their information had been incomplete and possibly flawed, the participants' debiasing behavior was diminished, and they were overconfident in their decisions. Had participants placed more focus on discussing data limitations, they might have been more mindful and shown a more balanced or even disconfirming information selection behavior to correct previously flawed decisions.

Our experimental evidence adds to the theoretical understanding of the role of biases and debiasing strategies in crisis information management (Mirbabaie et al., 2020; Ogie et al., 2018; Comes, 2016b). Our experiments show that one reason for the lack of debiasing efforts is that the urgent context of crisis information management and strong group cohesion lead to a neglect of critical data assessments within the initial exploratory step of the analysts. Debiasing behavior is particularly strong during the onset of workgroup collaborations. However, these debiasing efforts are increasingly neglected as time pressure builds and mental models are formed. This implies that rather than using additional capacity to broadly scan the available information, the process follows a satisficing strategy, whereby one data set is 'good enough' to develop information products quickly that are directly actionable and support decision-making. Because biases remain untreated, information products and decisions become affected by them. Even though conventionally there is hope that additional data analysts will mitigate the impact of data bias, our findings show that even though biases are detected, they are not mitigated. Hughes and Tapia (2015) emphasized the expertise of external analysts with specialized software. We find that the preference to start data analysis quickly in participants' preferred tools moves the focus away from debiasing efforts. The law-of-the-instrument was clearly present in our groups, especially in the initial phase of the experiment. This indicates that our participants had strong preferences for familiar information systems. In an effort to understand their own data, participants approached data analysis with tools they were familiar with and knew best. Datasets from other group members, and their potential differences, did not receive due attention. Our findings show the interplay of data and cognitive bias in crisis response.
We find that confirmation bias can exacerbate the reliance on biased assumptions and that data biases and cognitive biases can reinforce each other, leading to amplified bias effects. As proposed by Comes (2016b), and experimentally confirmed in our study, crisis information managers and decision-makers are prone to significant confirmation bias. Our participants significantly more often selected new information that confirmed their previous assumption about priority districts, which was influenced by biased data. This holds true despite the broad experience of our participants, and even though they knew the initial data was biased. We therefore show that awareness of bias does not automatically lead to bias mitigation. The urgent, uncertain, and resource-constrained contexts of crisis response have led to calls for adaptive management (Merl et al., 2009; Janssen and van der Voort, 2020; Charles et al., 2010; Anson et al., 2017; Schiffling et al., 2020; Turoff et al., 2004). Our findings indicate that such adaptive approaches can fail due to the interplay of data and cognitive bias. Future CIM theory needs to further explain the interplay of data bias and cognitive bias, looking into reinforcing and mitigating mechanisms.

Crisis situations are known to cause stress in responders, and this stress is known to increase the susceptibility to cognitive biases such as confirmation bias. Especially in data-critical environments like CIM, where responders have to handle various information systems, techno-stress can further increase stress and susceptibility to biases. Mindfulness has been found to alleviate some of this stress (Ioannou and Papazafeiropoulou, 2017) and is therefore a promising strategy to reduce the susceptibility to cognitive bias in CIM. Mindfulness means being more aware of the context and content of the information one is engaging with (Langer, 1992). When crisis information managers are mindful about the context and content of the information they are engaging with, falling into the trap of ever-confirming information-seeking behavior becomes less likely. In a mindful state, information managers are more open to new and different information and able to develop new categories for information that is received. In contrast, in a less mindful state, people rely on previously constructed categories and neglect the potential novelty and difference within newly received information. Being mindful increases one's metacognition, i.e., being aware of and focusing on one's own thought processes (Croskerry et al., 2013). Boosted metacognition might be effective in mitigating confirmation bias (Rollwage and Fleming, 2021). Future research should investigate the effectiveness of such debiasing efforts empirically.

Like Ogie et al. (2018), we argue that data created in crises, especially from the affected population, can be subject to a multitude of biases, which have to be taken into account when designing systems and algorithms that are supposed to turn those data into objective, neutral decision recommendations. In a similar vein as Weidinger et al. (2018), who called for more research on users' perception of novel information systems and technologies for crisis response, we argue that information management literature needs to account for data biases that systematically over- or under-represent issues, social groups, or geographic areas in the form of representational biases.
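To make the notion of representational bias concrete, the following is a minimal sketch of how the over- or under-representation of districts in incoming crisis reports could be surfaced during analysis; the column names (district, pop_share), the deviation threshold, and the example figures are illustrative assumptions and were not part of our experimental materials.

```python
# Minimal sketch: flag districts whose share of incoming crisis reports
# deviates strongly from their population share. Column names and the
# deviation threshold are assumptions for illustration.
import pandas as pd

def representation_check(reports: pd.DataFrame,
                         population: pd.DataFrame,
                         threshold: float = 0.5) -> pd.DataFrame:
    """Compare each district's share of reports to its population share.

    reports    : one row per report, with a 'district' column
    population : one row per district, with 'district' and 'pop_share' columns
    threshold  : relative deviation above which a district is flagged
    """
    report_share = (reports["district"]
                    .value_counts(normalize=True)
                    .rename("report_share"))
    merged = population.set_index("district").join(report_share).fillna(0.0)
    merged["deviation"] = (merged["report_share"] - merged["pop_share"]) / merged["pop_share"]
    merged["flagged"] = merged["deviation"].abs() > threshold
    return merged.sort_values("deviation")

# Example: district C is heavily under-represented in the incoming reports.
reports = pd.DataFrame({"district": ["A"] * 60 + ["B"] * 35 + ["C"] * 5})
population = pd.DataFrame({"district": ["A", "B", "C"],
                           "pop_share": [0.4, 0.35, 0.25]})
print(representation_check(reports, population))
```

Such a check does not remove bias from the data, but it makes the skew visible before information products are built on top of it.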
If information management does not account for biases, the resulting information products can become flawed and negatively influence decision-making. Previous research proposed new forms of information systems, models, and algorithms to support resource allocation decisions in crises (Avvenuti et al., 2018; Kamyabniya et al., 2018; Schemmer et al., 2021). We argue that such systems need to consider the abilities and limitations of information managers and decision-makers to identify and mitigate biases in the usage of such systems. This includes data biases as well as cognitive biases. We emphasize previously proposed debiasing efforts, e.g., nudging (Mirbabaie et al., 2020), that can be implemented in information systems for crisis response with the objective of mitigating cognitive biases. Previous research provides examples of effective debiasing interventions, which can range from fast and frugal options to intensive training sessions (Sellier et al., 2019). Information managers and decision-makers can be trained to counter-check their assumptions by actively seeking disconfirming information and considering the opposite of their preliminary hypothesis (Satya-Murti and Lockhart, 2015; Lidén et al., 2019). Future research needs to test the effectiveness of such interventions in crisis settings.

We reiterate calls for sensemaking support in crisis response (Muhren et al., 2010). We add to that with our finding that decision-makers can act as advocatus diaboli to their external analyst partners. By trying to make sense of the unfolding situation and posing confrontational questions to external analysts regarding the quality and shortcomings of the data that underpinned developed information products, decision-makers uncovered important data gaps quickly. However, these also have to be effectively followed up on to lead to successful debiasing.

It can be observed that response organizations are building up stronger internal crisis information management structures. Where once there were large skill gaps in data analysis and mapping, digital response concepts are now being observed within established organizations (Fiedrich and Fathi, 2021). External analysts are being integrated into permanent structures. However, our findings suggest that crisis information management needs to invest in detecting and, most importantly, mitigating biases. Even if complete debiasing is not feasible, we give some concrete implications of our findings for crisis information management practice. First, bias-awareness trainings can highlight the potential influence of biases in information management and decision-making and provide guidelines for debiasing. We found that work groups initiated debiasing efforts and became aware of biases. However, debiasing then lost its significance in favor of quick analysis results and decision-making. More awareness of the pitfalls of biases might shift the focus to debiasing first, before final information products are developed and decisions are made. Post-mortem analyses of information management and decision-making processes after crisis response can be implemented in lessons-learnt and debriefing sessions. Further, large-scale crisis response trainings, which are organized annually by major response organizations to jointly train for real crisis events (e.g., SIMEX, TRIPLEX), should incorporate debiasing interventions in their training agendas.
Second, models, algorithms, and information systems developed to support information management and decision-making in crisis response should implement functions that help identify and mitigate biases in (a) the datasets used by these systems and (b) the cognitive processes of system users.

In this paper, we present an initial exploratory study on the interplay of data and confirmation bias in time-critical sequential decisions. Because of the exploratory nature of our study, there are several limitations that can be addressed in future research. First, while, to the best of our knowledge, our study is the first of its kind to bring external analysts together with decision-makers to study their joint CIM process in a realistic scenario-based experiment, and while our participants were all experienced in their roles, the number of participants is a limiting factor of our study. Similar studies have reported larger participant groups, mostly of inexperienced students and other laypersons who are easier to recruit. We suggest expanding on our findings in additional larger-scale experiments and surveys across diverse groups and different professional experiences. Our experimental design was inspired by hidden profile experiments. In traditional hidden profile experiments (Stasser and Titus, 1985; Lightle et al., 2009), participants are asked to study their received information before joining the group conversation. In contrast, we allowed for discussions from the start because crisis information management is characterized by fast, agile communication. Our approach decreased the chances that participants constructed a rigid mental model of the data they initially received. Two characteristics of our research design counter this shortcoming. First, we allowed for perfect recall, i.e., participants kept all materials during the workshop experiment. Second, participants needed to continuously engage with the data by aggregating, analyzing, and visualizing it, so they had to build a deep understanding of the data during the experiment. It is a major challenge to simulate a realistic crisis environment in an experimental setting. This includes a realistic but still unknown scenario, decision-making under urgency, uncertainty, high stakes, and constrained resources, allowing for interactive collaboration with multiple actors, and providing equipment that resembles experts' real work environments. Simplifications have to be made to keep the experiments controllable. In addition, we had to consider that some organizations might implement and pursue different approaches to information management and decision support than required by the tasks we set. In real-world scenarios, external analysts work with a larger group of colleagues. Because of the framework required by our experiment (for example, the discussions on the creation of the information products had to be observed objectively on site), it was not possible to include further external analysts from those remotely working communities. We therefore suggest complementing our findings with ethnographic and field studies in real disasters to observe real-world debiasing and decision-making behaviour.

Crisis response organizations integrate external analysts into the CIM process to strengthen their digital resilience. In this capacity, external analysts collect and analyze data and develop information products (e.g., maps, tables, infographics) for decision support.
While this extended capacity is meant to improve the evidence base for decisions, the CIM process remains challenged by urgency, uncertainty, high stakes, and constrained resources. Consequently, crises are prone to induce biases into the data as well as the cognitive processes of external analysts and decision-makers. We investigated how biases influence the CIM process between experienced external analysts and decision-makers through a three-stage experiment. Our findings show that data biases, even if detected, influence the development of information products for crisis decision support. We show that effective debiasing does not happen because crisis information managers have a strong commitment and urgency to deliver a presentable information product that is actionable enough for decision-makers to act on directly. Efforts for creating information products are prioritized, and debiasing is neglected. In subsequent deliberations and decision-making discussions, decision-makers are influenced by biased information products in their decisions on allocating scarce resources. Confirmation bias amplifies the reliance on problematic assumptions that were made based on biased data. This implies that the biased, misleading information that shapes initial decisions is perpetuated by a vicious circle of biased information search that influences future decisions. Our findings indicate that decisions in crisis response can only be effective if initial data and confirmation bias are identified and mitigated. Mindful debiasing could be a successful strategy to improve broad information search and tackle both biases.

This work was funded through the Special Priority Program "Volunteered Geographic Information: Interpretation, Visualization and Social Computing" (SPP 1894) by the German Research Foundation (DFG, project number 273827070). We thank all participants and especially the student assistants who supported the preparation and organization of the experiments and the observations during the sessions. The authors have no conflicts of interest to declare.

Observation protocol questions:
- How is the need for information expressed and communicated?
- To what extent was available information not shared / retained?
- Which decisions are anticipated to be supported by the V&TCs?
- Additional comments
- Describe how and why specific types of information products are selected and created for the decision-makers.
- Which information is included and why?
- Which technology and other decision aid materials are utilized and how?

Instructions for stage 3: "Below are the summaries of 10 new datasets that are available. You can request the full version of those datasets but you only have limited time and resources to evaluate them all in detail. Select as many datasets as you want. District X is the district you have identified in the last session as the most critical district."
- Dataset 1: District X has less treatment capacity than infection cases.
- Dataset 2: In district X the infection rate is likely to increase.
- Dataset 3: District X has high infrastructural damage.
- Dataset 4: District X has a low percentage of people reached.
- Dataset 5: District X has more treatment capacity than infection cases.
- Dataset 6: In district X the infection rate is likely to decrease.
- Dataset 7: District X has low infrastructural damage.
- Dataset 8: District X has a high percentage of people reached.
- Dataset 9: District X has a high amount of health care workers infected.
- Dataset 10: District X has a low amount of health care workers infected.
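To illustrate how a selection task like the one above could be instrumented with a simple nudge toward disconfirming information (one of the system-side debiasing functions discussed in Section 5), we give a minimal sketch below; the tagging of the listed datasets as supporting or opposing the working hypothesis that District X is the most critical district is our own assumption for this example and was not shown to participants.

```python
# Illustrative sketch of a selection-balance nudge. The mapping of datasets
# to "supports" / "opposes" the hypothesis that District X is most critical
# is an assumption made for this example, not part of the study materials.
SUPPORTS = {1, 2, 3, 4, 9}   # e.g., less capacity than cases, rising infections
OPPOSES = {5, 6, 7, 8, 10}   # e.g., more capacity than cases, falling infections

def selection_balance(selected: set) -> str:
    """Return a prompt if the selection leans heavily toward confirming data."""
    supporting = len(selected & SUPPORTS)
    opposing = len(selected & OPPOSES)
    if supporting > opposing:
        return (f"You selected {supporting} supporting and {opposing} opposing "
                "datasets. Consider requesting data that could contradict your "
                "current prioritization of District X.")
    return f"Selection looks balanced: {supporting} supporting, {opposing} opposing."

# Example: a confirmation-heavy selection triggers the prompt.
print(selection_balance({1, 2, 4, 9}))
```

Such a prompt does not force a different choice; it only makes the skew in the information search visible at the moment of selection, in line with the nudging interventions cited above.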
David Paulus is a PhD researcher at the Faculty of Technology, Policy and Management at Delft University of Technology. He studies data biases and cognitive biases in humanitarian information management and decision-making. He has been a Delft Global Fellow since 2019 and a member of the International Association for Information Systems in Crisis Response and Management (ISCRAM) since 2016. In his research he combines theories and methods from computer science, psychology, and organizational science. As a Research Associate at the United Nations University Institute for Environment and Human Security from 2015 to 2017, he was involved in ICT-supported institutional capacity building projects in North Africa and Southeast Asia.

Ramian Fathi is a research associate and PhD candidate at the Chair for Public Safety and Emergency Management, University of Wuppertal. As part of the DFG-funded Priority Programme (1894) "Volunteered Geographic Information", his research focuses on the analysis of social media by digital volunteers and their participation in disaster management. In addition, he is the team leader of the Virtual Operations Support Team (VOST) of the German Federal Agency for Technical Relief (THW) and vice president of the German Society for the Support of Social Media and Technology in Civil Protection (DGSMTech e.V.).

Dr. Frank Fiedrich has held the Chair for Public Safety and Emergency Management at the University of Wuppertal since 2009. He studied Industrial Engineering and received his Ph.D. from the Karlsruhe Institute of Technology, Germany, where he worked on decision support systems and agent-based simulation for disaster response. From 2005 to 2009, he was Assistant Professor at the Institute for Crisis, Disaster, and Risk Management (ICDRM) at the George Washington University, Washington, DC. His research interests include the use of information and communication technology for disaster and crisis management, societal, organizational and urban resilience, interorganizational decision-making, critical infrastructure protection, and societal aspects of safety and security technologies. Additionally, Professor Fiedrich is an honorary member of the International Association for Information Systems in Crisis Response and Management (ISCRAM).

Dr. Bartel Van de Walle is a UN diplomat and director of the United Nations University Institute UNU-MERIT in Maastricht, the Netherlands. UNU-MERIT carries out research and training on a range of social, political and economic factors that drive economic development in a global perspective. Dr. Van de Walle is also professor of policy analysis for global challenges at Maastricht University. His research focuses on humanitarian response, and specifically on the role of information systems for better coordination and response. He is a member of the steering committee for the Dutch Science Foundation's Science for Global Development initiative.

Dr. Tina Comes is Full Professor of Decision Theory and Information Technology for Resilience at TU Delft, Netherlands, and Full Professor in Decision-Making and Digitalisation at the University of Maastricht. Dr. Comes is a Visiting Professor at the Université Dauphine, France, and a member of the Norwegian Academy of Technological Sciences and the Academia Europaea. She serves as the Scientific Director of the 4TU.Centre for Resilience Engineering, as Principal Investigator on Climate Resilience for AMS, and as Director of the TPM Resilience Lab, and she leads the Disaster Resilience theme for the Delft Global Initiative. Prof. Comes' research focuses on decision-making and information technology for resilience and disaster management. This perspective on decision-making, resilience and humanitarian response is reflected in more than 100 publications.
References

Big data and disaster management: a systematic review and agenda for future research
Analysing social media data for disaster preparedness: Understanding the opportunities and barriers faced by humanitarian actors
Eliciting Process Knowledge Through Process Stories
CrisMap: a Big Data Crisis Mapping System Based on Damage Detection and Geoparsing
Developing a framework for designing humanitarian blockchain projects
On the dangers of stochastic parrots: can language models be too big?
Challenges and obstacles in sharing and coordinating information during multi-agency disaster response: Propositions from field exercises
Effects of team size on participation, awareness, and technology choice in geographically distributed teams
Reliability, Mindfulness, and Information Systems. MIS Quarterly
Big Crisis Data
A model to define and assess the agility of supply chains: Building on humanitarian experience
Exploring the role of deep neural networks for post-disaster decision support
Cognitive biases in humanitarian sensemaking and decision-making lessons from field research
Cognitive biases in humanitarian sensemaking and decision-making lessons from field research
Information systems for humanitarian logistics: concepts and design principles
Decision maps: A framework for multi-criteria decision support under severe uncertainty
The Coordination-Information Bubble in Humanitarian Response: Theoretical Foundations and Empirical Investigations
Call for Papers MISQ Special Issue on Digital Resilience
Cognitive debiasing 2: Impediments to and strategies for change
Open source software for disaster management
Forschungsmethoden und Evaluation
Toward a Theory of Situation Awareness in Dynamic Systems
Designing for Situation Awareness: An Approach to User-Centered Design
Diverging Data: Exploring the Epistemologies of Data Collection and Use among Those Working on and in Conflict
Vost: A case study in voluntary digital participation for collaborative emergency management
COVID-19 infection and death rates: the need to incorporate causal explanations for the data and avoid bias in testing
Humanitäre Hilfe und Konzepte der digitalen Hilfeleistung
The process of selective exposure: Why confirmatory information search weakens over time
Social Media Data in an Augmented Reality System for Situation Awareness Support in Emergency Control Rooms. Information Systems Frontiers
The challenges of data usage for the United States' COVID-19 response
Information overload and confirmation bias. Cambridge Working Papers in Economics
Understanding the information needs of field-based decision-makers in humanitarian response to sudden onset disasters
Big data in humanitarian supply chain management: A review and further research directions
Information needs and seeking during the 2001 UK foot-and-mouth crisis
Feeling Validated Versus Being Correct: A Meta-Analysis of Selective Exposure to Information
Collaborative analytics and brokering in digital humanitarian response
Improving decision making in crisis
Social Media in Crisis: When Professional Responders Meet Digital Volunteers
Using IT mindfulness to mitigate the negative consequences of technostress
Assessing the Use of Call Detail Records (CDR) for Monitoring Mobility and Displacement
Rethinking access: how humanitarian technology governance blurs control and care
Agile and adaptive governance in crisis response: Lessons from the COVID-19 pandemic
Lessons from archives: Strategies for collecting sociocultural data in machine learning
A Cyber-Relevant Table of Decision Making Biases and their Definitions
Confirmation Bias in Sequential Information Search After Preliminary Decisions
Robust Platelet Logistics Planning in Disaster Relief Operations Under Uncertainty: a Coordinated Approach
Quasi-Professionals in the organization of transnational crisis mapping. Professional Networks in Transnational Governance
Making sense of sensemaking 2: A macrocognitive model
Crisis Management in Crisis? Administrative Theory & Praxis
The debiasing effect of counterfactual mindsets: Increasing the search for disconfirmatory information in group decisions
High stakes decision making: Normative, descriptive and prescriptive considerations
Matters of mind: Mindfulness/mindlessness in perspective
Event-cloud platform to support decision-making in emergency management
From devil's advocate to crime fighter: confirmation bias and debiasing techniques in prosecutorial decision-making. Psychology, Crime and Law
Information exchange in group decision making: The hidden profile problem reconsidered
Lessons from Thailand during COVID-19 pandemic: The importance of digital resilience
Protecting_Migrant_Worker_during_Covid-19_Pandemic_Lessons_from_Malaysia_and_Thailand/links/6035b68a92851c4ed59118d0/Protecting-Migrant-Worker-during-Covid-19-Pandemic-Lessons-fr
Effects of time-pressure on decision-making under uncertainty: changes in affective state and information processing strategy
Crisis mapping in action: How open source software and global volunteer networks are changing the world, one map at a time
Decision support for improvisation during emergency response operations
A statistical framework for the adaptive management of epidemiological interventions
Digital Nudging in Social Media Disaster Communication. Information Systems Frontiers
A Confirmation Bias View on Social Media Induced Polarisation During Covid-19
Sensemaking and implications for information systems design: Findings from the Democratic Republic of Congo's ongoing crisis
National Research Council. Measuring Human Capabilities: An Agenda for Basic Research on the Assessment of Individual and Group Performance Potential for Military Accession
Towards coordinated self-organization: An actor-centered framework for the design of disaster management information systems
Confirmation Bias: A Ubiquitous Phenomenon in Many Guises
Participation Patterns and Reliability of Human Sensing in Crowd-Sourced Disaster Management
Cognitive bias, decision styles, and risk attitudes in decision making and DSS
Crowdsourcing roles, methods and tools for data-intensive disaster management. Information Systems Frontiers
Big data in humanitarian supply chain networks: A resource dependence perspective. Annals of Operations Research
Disaster crisis management: A summary of research findings
Confirmation bias is adaptive when coupled with efficient metacognition
Recognizing and reducing cognitive bias in clinical and forensic neurology
Conceptualizing digital resilience for AI-based information systems
The implications of complexity for humanitarian logistics: A complex adaptive systems perspective
Optimizing Decision-Making Processes in Times of COVID-19: Using Reflexivity to Counteract Information-processing Failures
Crisis information management in the Web 3.0 age
Debiasing Training Improves Decision Making in the Field
A Behavioral Model of Rational Choice
Global rise in human infectious disease outbreaks
Trial by fire: The deployment of trusted digital volunteers in the 2011 Shadow Lake fire
Self-organizing by digital volunteers in times of crisis. Conference on Human Factors in Computing Systems Proceedings
Promoting structured data in citizen communications during disaster response: An account of strategies for diffusion of the 'Tweak the Tweet' syntax
Pooling of unshared information in group decision making: Biased information sampling during discussion
Einführung in die Qualitative Marktforschung
Data quality: Setting organizational policies
Mindfulness in information technology use: Definitions, distinctions, and a new measure
Can Twitter really save your life? A case study of visual social media analytics for situation awareness
Digital resilience: How rural communities leapfrogged into sustainable development
An Investigation of Misinformation Harms Related to Social Media during Two Humanitarian Crises
joint-intersectoral-analysis-framework. United Nations.
Historic Economic Decline is Reversing Development Gains
Review of the Operational Guidance Note on Information Management
On the Nature of Information Management in Complex and Natural Disasters
Improving situation awareness in crisis response teams: An experimental analysis of enriched information and centralized coordination
Humanitarian access, interrupted: dynamic near real-time network analytics and mapping for reaching communities in disaster-affected countries
Sensemaking in Organizations. Foundations for Organizational Science
Is the Frontier Shifting into the Right Direction? A Qualitative Analysis of Acceptance Factors for Novel Firefighter Information Technologies
Toward a digital resilience
Making sense of business analytics in project selection and prioritisation: Insights from the start-up trenches
Crisis mapping: The construction of a new interdisciplinary field?