key: cord-0582056-mpgoc7jc authors: Li, Tianshi; Cobb, Camille; Yang, Jackie; Baviskar, Sagar; Agarwal, Yuvraj; Li, Beibei; Bauer, Lujo; Hong, Jason I. title: What Makes People Install a COVID-19 Contact-Tracing App? Understanding the Influence of App Design and Individual Difference on Contact-Tracing App Adoption Intention date: 2020-12-23 journal: nan DOI: nan sha: 2744864b09cf1ad2c1b240e1c259c6e7276f711c doc_id: 582056 cord_uid: mpgoc7jc

Smartphone-based contact-tracing apps are a promising solution to help scale up the conventional contact-tracing process. However, low adoption rates have become a major issue that prevents these apps from achieving their full potential. In this paper, we present a national-scale survey experiment ($N = 1963$) in the U.S. to investigate the effects of app design choices and individual differences on COVID-19 contact-tracing app adoption intentions. We found that individual differences such as prosocialness, COVID-19 risk perceptions, general privacy concerns, technology readiness, and demographic factors played a more important role than app design choices such as decentralized vs. centralized design, location use, app providers, and the presentation of security risks. Certain app designs could exacerbate differences in preferences across sub-populations, which may lead to unequal acceptance of certain app design choices (e.g., an app developed by state health authorities vs. a large tech company) among different groups of people (e.g., people living in rural vs. urban areas). Our mediation analysis showed that one's perception of the public health benefits offered by the app and of other people's willingness to adopt it had a larger effect in explaining the observed effects of app design choices and individual differences than one's perception of the app's security and privacy risks. With these findings, we discuss practical implications for the design, marketing, and deployment of COVID-19 contact-tracing apps in the U.S.

The COVID-19 pandemic started in March 2020 and has killed 316,844 people in the US as of December 21, 2020 1 . Contact tracing is widely known as a key strategy to slow the spread of infectious diseases such as COVID-19. It involves identifying who may have been exposed to an infected person and helping exposed people take protective measures at the right time [29]. The conventional approach to contact tracing relies on manual investigation, which cannot keep up with the rising case counts during the global COVID-19 outbreak [63, 62]. Hence, smartphone-based contact-tracing apps have been proposed as a complementary solution to help scale up the contact-tracing process [67, 18, 56]. The effectiveness of contact-tracing apps is contingent on a critical fraction of the population installing and using the app [35, 28]. However, deployed contact-tracing apps have suffered from low adoption rates (from 21.6% in Australia to 0.2% in the Philippines) [19], with security and privacy concerns blamed as a main culprit [13]. Although recent research has investigated factors that affect people's willingness to install a contact-tracing app in general [3, 55, 37, 71, 70, 49, 9, 4, 69, 34, 6, 59, 61, 42, 66, 76], some important aspects remain unclear. In particular, we focus on three fundamental issues. First, the design of contact-tracing apps lends itself to multiple choices featuring different trade-offs between security/privacy risks and public health benefits [10, 55, 20, 5, 47, 60].
Researchers have conducted choice-based conjoint studies to measure user preferences for different configurations of COVID-19 contact-tracing apps for the UK population [38, 73] and the U.S. population [44, 76]. This method emulates a situation where users have to choose between app designs, but the nature of contact-tracing apps means that users in a given region can typically only choose whether or not to install a single designated app 2 . Second, prior research on contact-tracing apps has focused solely on measuring people's general intentions to install the app [3, 55, 37, 71, 70, 49, 9, 4, 69, 34, 6, 59, 61, 42, 66, 76]. However, the intention to install an app is not sufficient for effective contact tracing, because users must also actively report cases and keep the app installed in the long run [64]. Third, previous research has conducted qualitative studies to identify reasons why people would or would not install a contact-tracing app [61, 44, 74, 3], with perceived risks and benefits emerging as recurring themes. However, there is a lack of quantitative understanding of how perceived risks and benefits vary across app designs and across people, and how these variations in turn affect app adoption intentions. As a result, it remains unclear which app designs best reconcile the risk-benefit trade-offs and what rationales underlie the preferences of different sub-populations.

In this paper, we present a national survey experiment (N = 1963) in the U.S. to complement prior findings on the impact of app design choices on adoption intentions for contact-tracing apps. We focus primarily on three research questions:

RQ1 To what extent do app design choices affect people's adoption intentions about a COVID-19 contact-tracing app?

RQ2 To what extent do individual differences affect people's adoption intentions about a COVID-19 contact-tracing app?

RQ3 How do people's perceived risks and benefits about a contact-tracing app mediate the influence of app design choices and individual differences on the app adoption intention?

In our study, we used a between-subjects factorial design, showing each participant only one solution and asking about their intentions to install and use the app. This is a better approximation of the choice users will actually face and can therefore lead to a more realistic estimate of how app design differences shape adoption intentions than previous studies that used a within-subjects approach. We vary design decisions by controlling four variables: proximity-based contact-tracing architecture (i.e., decentralized vs. centralized architecture), location use, app provider, and security risk presentation. The first three correspond to app design choices that were found to be important in prior research on building privacy-preserving contact-tracing apps. The fourth variable, security risk presentation, allows us to compare participants' adoption intentions when not primed about any security risks and when primed about one of three major security risks of contact-tracing apps: data breach risk, secondary data use risk, and the re-identification risk of COVID-19 positive users. We also asked participants to answer questions about personal characteristics (prosocialness, COVID-19 risk perceptions, general privacy concerns, and technology readiness) and demographic information (e.g., age, gender) so that we could analyze the effects of individual differences on adoption intentions.
Our study resulted in a number of key findings, including:

• 58.9% of participants reported that they at least somewhat agreed to install the app, which is similar to prior estimates such as 55% in Li et al. [44] and 59% in a national poll [1]. However, only 41.7% of participants at least somewhat agreed that most people would install this app, which suggests that people in the U.S. hold an overly pessimistic view of the adoption of contact-tracing apps. 76.2% of participants reported that they at least somewhat agreed to report to the app if they test positive. This suggests that people are more amenable to using contact-tracing apps and contributing their data when they test positive for COVID-19.

• App design choices had very small effects on all five aspects of app adoption intention (e.g., install the app, report positive cases, keep the app installed). People were significantly more inclined to install apps that collect location than apps that do not, due to the additional benefits offered by the location data (e.g., for analyzing infection hotspots). Among the three security risks we tested, all of them increased users' perceived security and privacy risks, while only the secondary data use risk significantly reduced adoption intention.

• Individual differences had large effects on all five aspects of app adoption intention. Older people, females, and essential workers were significantly less inclined to install a COVID-19 contact-tracing app, while Hispanics, people with higher household income, frequent public-transit users during the pandemic, and people living in urban areas were significantly more inclined to install one.

• Certain app design choices could exacerbate the differences in adoption intention due to individual differences, which could lead to potentially unbalanced adoption among certain sub-populations. For example, people living in urban areas showed similar acceptance of state health authorities and a large tech company as the app provider, while people living in rural areas showed much lower acceptance of a large tech company than of state health authorities.

• Perceptions of the app's benefits and of how much adoption the app can achieve played a more important role in determining one's intention to install a contact-tracing app than perceptions of security and privacy risks.

In this section, we present an overview of the contact-tracing app design space that we study in the survey experiment, drawing on both research proposals and industry frameworks (e.g., the Google/Apple Exposure Notification API), and we review findings of prior work to introduce our research questions. A contact-tracing app needs widespread adoption to work [35, 28]. Specifically, the installation rate has been widely used as a success metric for contact-tracing apps [24], and previous research has focused on estimating the percentage of people who will install contact-tracing apps [55, 1] and the factors that affect people's willingness to install them [44, 76, 38, 73, 61]. However, for continued contact tracing, users need to keep the app installed and actively report if they test positive [2]. Some evidence has shown that long-term use of the app and honest reporting of positive cases could be impeded by usability concerns (e.g., shorter battery life [30, 55]) and privacy concerns (e.g., the app could remain a surveillance tool after the pandemic [76]).
Note that the usability and privacy issues vary greatly among different app designs. To provide a more comprehensive understanding of the factors that affect the adoption intentions of contact-tracing apps, we measure five outcome variables covering different aspects of adoption in our survey design and analysis: (1) the general app installation intention, (2) whether to report to the app if the user tests positive for COVID-19, and whether to keep the app installed (3) when the battery drains faster, (4) when COVID-19 cases are steadily decreasing, and (5) when a vaccine becomes available.

Many digital technologies have been proposed and deployed to help combat the pandemic. In this paper, we focus on smartphone contact-tracing apps that users voluntarily install to complement conventional contact tracing [8]. Contact-tracing apps are inherently privacy-sensitive as they rely on users' sensitive data, such as their contact history and location history, to function [20, 10]. On the other hand, collecting more data can improve the accuracy of the automated contact-tracing results [55] and provide more information to health workers [41, 38]. To tackle this risk-benefit trade-off, researchers have proposed technical solutions for privacy-preserving contact tracing. In the following, we introduce two main design dimensions of contact-tracing apps: proximity-based contact tracing and location-based contact tracing. Then we discuss two other factors related to contact-tracing app design: app providers and security risks. Research questions proposed in this subsection are extensions of RQ1: "To what extent do app design choices affect people's adoption intentions about a COVID-19 contact-tracing app?"

Most contact-tracing apps offer Bluetooth Low Energy (BLE)-based proximity tracking to notify people who have recently come into close contact with people who test positive for COVID-19 [55, 5, 47, 60]. In March 2020, Singapore created the first COVID-19 contact-tracing app using a centralized architecture, which completes the contact-tracing process on the server end [11]. This approach can lead to severe security risks because users' identities (e.g., phone numbers) are associated with their COVID-19 exposure status [11, 20]. Therefore, many researchers have proposed decentralized architectures that can fulfill the fundamental need of sending exposure notifications to people who might be infected while sharing minimal data with a central entity [67, 18, 56]. This allows users to remain anonymous to the central server, but there is still a risk that other app users can identify the infected user they were exposed to by installing a modded app that logs additional information, such as location [5, 67], along with the exposure history. Because the contact-tracing process is completed on users' phones, the central server does not know how many exposure notifications were sent to users or how users reacted to them. This makes it difficult to evaluate the efficacy of the system and to integrate it with conventional contact tracing to facilitate further testing and quarantine processes [41, 38]. That being said, Google and Apple used this architecture in their Google-Apple Exposure Notification (GAEN) framework [32, 8], which has become the most prevalent way of building contact-tracing apps in the U.S. [33] Researchers have also proposed privacy-preserving centralized contact-tracing architectures [51, 17].
Like decentralized architectures, these allow users to remain anonymous to the central server. Because the contact-tracing process is completed on the server end, the central server can track when and how many exposure notifications are sent out, which helps measure the performance of the system and integrate it with conventional contact tracing. However, it is still possible for app providers to infer the identities of users from the anonymized contact history shared with the server [67]. This system could also suffer from the re-identification risk under a Sybil attack: users of the app can narrow down which infected users they were exposed to by registering multiple accounts [67]. Li et al. [44] conducted a choice-based conjoint study of similar design choices and found that users preferred the centralized architecture. However, they did not investigate the privacy-preserving centralized architecture, which serves as a middle ground between the two extremes. Moreover, their description highlighted the re-identification risk of the decentralized architecture but did not mention other risks, such as data breaches, to which a centralized architecture is more susceptible, which could bias users' decisions. In our study, we examine users' preferences and feelings about the three mechanisms of proximity-based contact tracing described above.

To what extent do different proximity-based contact-tracing designs (1. decentralized architecture, 2. centralized architecture using anonymized identifiers, 3. centralized architecture using real identities) affect people's intentions to adopt a COVID-19 contact-tracing app?

Infected people's location histories are useful for contact tracing, especially for tracing indirect contact (e.g., spread through shared surfaces or aerosols in public spaces), which cannot be captured by proximity-based contact-tracing apps [22, 54]. However, the use of location data in contact-tracing apps has been controversial, and the Google/Apple exposure notification framework even forbids apps built with it from collecting location data [32, 8] due to the risks of increased surveillance of all app users and of privacy leaks and stigmatization of infected users [75, 76]. Previous research has not reached a consensus on how location use affects users' preferences for contact-tracing apps. Zhang et al. [76] showed that using Bluetooth data for proximity-only contact tracing increases users' acceptance of contact-tracing apps compared to using GPS for location-based tracing, while Li et al. [44] showed that collecting location data in public areas and providing users with infection hotspot information significantly increased willingness to adopt. These findings suggest that location data collection may be more acceptable to users when it provides additional benefits over basic proximity-based contact tracing. Therefore, our study focuses on comparing no location collection (and no additional benefits) with location features that offer additional benefits and can still preserve privacy to some extent. The first feature we study relies on storing the location data on the device to mitigate privacy risks, as in the Care19 Diary app in South Dakota, USA [48]. If a user of the app tests positive for COVID-19, they can refer to the location logs tracked by this app to help them recall their recent whereabouts when interviewed by a human contact tracer.
The second feature we study relies on uploading the location data of infected users so that infection hotspots recently visited by many infected users can be shared with the public. Research has shown that users find knowing about infection hotspots useful and may be more willing to install an app that offers this feature [55, 44]. To protect users' privacy, researchers have proposed technologies such as Safe Paths [54] that enable users to upload anonymized, redacted, and obfuscated location history.

To what extent do different location-based contact-tracing features (1. no location use, 2. storing location on device as a memory aid, 3. sharing locations with health authorities to analyze infection hotspots) affect adoption intention for a COVID-19 contact-tracing app?

In addition to different app designs, the organizations that develop and release the contact-tracing app, and therefore have access to users' data, can also have a significant impact on users' intentions to adopt it [55, 44, 76, 38, 73, 61]. Previous research found that sharing sensitive information such as location and contact history with government agencies in general could lead to low acceptance of contact-tracing apps [61, 38, 73]. In contrast, sharing data with health authorities in particular, such as the CDC in the U.S. and the NHS in the U.K., could improve users' willingness to adopt contact-tracing apps [44, 38, 73]. However, the health-authority-led approach has encountered more challenges in the U.S. than elsewhere. In the US, there is no single national contact-tracing app due to the lack of coordination by the federal government, while the rollout of state-specific apps has been slow due to the lack of technical expertise in state health departments [24]. In fact, scholars have recommended seeking "the piecemeal creation of public trust," and other entities have taken action to help build contact-tracing apps [12]. For example, Google and Apple launched the "Exposure Notifications Express" project, which integrates contact tracing as an opt-in feature built into their operating systems so that users do not need to install a separate contact-tracing app [43]. Similarly, some U.S. universities have built their own contact-tracing apps to protect their faculty, staff, and students on campus [45, 16, 36]. In our study, we examine the impact of the four providers mentioned above on adoption intention: state-level health authorities, federal-level health authorities, a large tech company (such as Google or Apple), and the user's employer or school.

To what extent do different app providers (1. state health authorities, 2. federal health authorities, 3. a large tech company, 4. your employer or school) affect people's adoption intentions of a COVID-19 contact-tracing app?

Despite all the technical approaches to protecting users' privacy, the nature of contact-tracing apps means that some security risks are inevitable regardless of the specific app design, though developers rarely mention them in their app descriptions [10, 20]. However, very few contact-tracing app studies explicitly explained security risks to their participants, and those that did focused on a particular type of security risk that a particular app design protects against less well. For example, Li et al. [44] highlighted the re-identification risk of infected users, to which decentralized apps are more vulnerable, and found that users tended to prefer centralized apps over decentralized ones. Horvath et al.
[38] controlled for whether to prompt users about the data breach risk, to which centralized apps are more vulnerable, and found that the data breach stimulus did not change users' preferences for data storage. In our research, we want to know how users' awareness of security risks affects their decisions about adopting contact-tracing apps. Because different app design choices are more vulnerable to different risks, we are also interested in whether these risks have different levels of impact on adoption intention. Specifically, we test four conditions: a baseline condition that does not directly mention any security risk, and three other conditions that prime users about the data breach risk, the secondary data use risk, or the re-identification risk.

To what extent does priming users about different security risks of a COVID-19 contact-tracing app (1. not priming users about security risks, 2. priming about data breach risks, 3. priming about secondary data use risks, 4. priming about re-identification risks) affect their adoption intentions?

Previous research has demonstrated that individual differences can play an important role in people's willingness to adopt a COVID-19 contact-tracing app. In our survey, we build upon prior findings to examine how different sub-populations and people who hold different general attitudes on certain topics (e.g., privacy, COVID-19 risks) react to contact-tracing apps. Research questions proposed in this subsection are extensions of RQ2: "To what extent do individual differences affect people's adoption intentions about a COVID-19 contact-tracing app?"

Altruism and contributing to the "greater good" were identified as important motivations for contact-tracing app supporters [61, 74, 55]. Furthermore, Trang et al. [66] found that emphasizing the societal benefits of the app led to a higher adoption willingness than emphasizing the benefits to users themselves. Because people who are more prosocial may feel more strongly about contributing to the "greater good," marketing these apps to appeal to this disposition could foster adoption and increase overall usage of contact-tracing apps. Hence, we pose the following research question to formally study the effects of prosocialness on adoption intentions:

To what extent is one's prosocialness associated with COVID-19 contact-tracing app adoption intentions?

In contrast, the fear of increased surveillance and privacy risks was identified as an important reason why people did not want to install contact-tracing apps [61, 74, 55, 34]. As people's perceived privacy risks about contact-tracing apps in particular are likely to be affected by their privacy concerns in general, we have the following research question:

To what extent is one's general privacy concern associated with COVID-19 contact-tracing app adoption intentions?

We learned from past pandemics that public perceptions of the risks of a disease have a significant influence on the success of controlling the spread of a highly infectious disease [26, 27]. However, conspiracy theories about the seriousness of COVID-19 have become barriers to the adoption of measures to control the spread of the disease, such as social distancing [57]. As a result, we have the following research question:

To what extent is one's risk perception about COVID-19 associated with COVID-19 contact-tracing app adoption intentions?
Parasuraman and Colby [50] divided people into five segments based on their attitudes towards technologies (Skeptics, Explorers, Avoiders, Pioneers, and Hesitators) and found that these segments exhibit different intentions and behaviors in adopting new technologies. Because contact-tracing apps are a new technology designed to complement the conventional manual contact-tracing process, people's intrinsic attitudes towards new technologies could have a substantial impact on their adoption of contact-tracing apps. Therefore, we have the following research question:

To what extent is one's attitude towards new technologies associated with COVID-19 contact-tracing app adoption intentions?

A large body of research has studied the influence of demographic factors such as age [37, 71, 34, 70], gender [37, 71, 34, 70], race [7], education [37, 71, 34, 70], income [4], and living area [4] on COVID-19 contact-tracing app adoption intentions in various countries. However, the findings are not consistent. For example, regarding age, some research showed that older people are significantly less willing to adopt contact-tracing apps [37, 69], while some found the opposite trend [34] and some did not find that age had a significant influence [71, 70, 59]. The differences could be due to differences in culture, political climate, and the stage of the pandemic in different countries when the studies were conducted. They could also be related to differences in study design (e.g., within-subjects vs. between-subjects design) and in the app description (e.g., a general description vs. a detailed description of the risks and benefits of a specific design).

To what extent do demographic factors (e.g., age, gender, race, education, income, living area) correlate with a person's willingness to adopt a COVID-19 contact-tracing app in the U.S.?

Note that certain sub-populations are at higher risk of exposure to COVID-19, such as essential workers, health workers, and people who need to take public transit frequently during the pandemic. However, there has been little research on the adoption of contact-tracing apps among these groups. Therefore, our survey asks participants to self-report whether they belong to any of the above high-risk sub-populations so that we can answer the following research question:

To what extent are people at higher risk of exposure to COVID-19 (e.g., essential workers, health workers, frequent public transit users) willing to install a COVID-19 contact-tracing app?

Although some past work examined people's reactions to different app designs [38, 73, 44, 76, 68], it focused on finding the designs that are likely to achieve a high adoption rate for the entire population. We take a step further to understand how the installation intentions of different sub-populations (e.g., men vs. women, older vs. younger people) are moderated by different app design choices. Hence, the following research question studies the interaction effects between factors related to app design choices and demographic factors:

To what extent do app design choices moderate the intentions of different sub-populations to install a COVID-19 contact-tracing app?

Recent qualitative research has identified the risks of increased surveillance and privacy invasion, and the benefits to society and to the users themselves, as two main reasons that explain why a person would or would not install a COVID-19 contact-tracing app [61, 44, 74, 3].
These findings are in line with the Privacy Calculus theory [25], which states that individuals view privacy as a trade-off problem and make data disclosure decisions by weighing the potential risks against the potential benefits. Correspondingly, some prior work has drawn on the Privacy Calculus theory to examine the influence of perceived risks and benefits on users' decisions and how perceived risks and benefits mediate the relationship between abstract attributes and app adoption intentions. Specifically, Hassandoust et al. [34] used structural equation modeling and found that technical attributes (anonymity and information sensitivity) could influence adoption intentions by affecting users' risk beliefs. Despite the theoretical insights, it is hard to link these abstract features to existing app designs and translate the results into practical design recommendations. In our survey, we use a method similar to the above work [34] to further explain why certain app design choices and individual differences have significant influences on app installation intention. We also use perceived risks and benefits as mediators, but our independent variables are factors related to app design choices grounded in real-world contact-tracing app designs (Section 2.2) rather than abstract features, which contributes more directly to our understanding of the design space. The following research questions are extensions of RQ3: "How do people's perceived risks and benefits about a contact-tracing app mediate the influence of app design choices and individual differences on the app adoption intention?":

To what extent do security and privacy risks mediate the relationship between independent variables (i.e., app design choices and individual differences) and the installation intention of a COVID-19 contact-tracing app?

To what extent does perceived protection of the users themselves mediate the relationship between independent variables (i.e., app design choices and individual differences) and the installation intention of a COVID-19 contact-tracing app?

To what extent does perceived effectiveness in slowing the spread of COVID-19 mediate the relationship between independent variables (i.e., app design choices and individual differences) and the installation intention of a COVID-19 contact-tracing app?

Because a contact-tracing app needs to achieve widespread adoption to be effective, how much a person believes other people would install the app could affect their perception of the app's efficacy [44, 68]. Therefore, we also include perceived adoption as a potential mediator in our analysis:

To what extent does perceived adoption of the app mediate the relationship between independent variables (i.e., app design choices and individual differences) and the installation intention of a COVID-19 contact-tracing app?

To answer the research questions and test the hypotheses about factors that affect people's intentions to adopt a COVID-19 contact-tracing app, we conducted a randomized between-subjects survey experiment on a representative sample of the U.S. population (N = 1963) recruited using a Qualtrics panel. The sample size was determined before the formal study based on power analysis results (power 1 − β > 0.8), with the effect size estimated using data collected in pilot studies. Our survey was programmed and hosted on Qualtrics.
The data were collected in November 2020. Our study was reviewed and approved by our institution's IRB. We recruited participants based in the U.S. using a Qualtrics online panel. To obtain a nationally representative sample, we employed a quota-sampling method [23] for recruiting participants and controlled for gender, age, race, and living region to make the distributions of these variables consistent with U.S. census data. We required participants to be fluent English speakers, aged 18 or older, and smartphone users. Qualtrics handled the entire data collection process, including recruiting, survey distribution, and compensation. We paid $6.50 for each complete response. We obtained 2026 responses that passed all understanding-check and attention-check questions 3 . 63 responses were removed because they did not provide a valid ZIP code, which yields a final sample of 1963 unique responses. The survey was configured to allow a respondent to take it only once, so they could not re-attempt the survey after failing attention checks.

As summarized in Table 1, our study follows a 3 (Decentralized vs. Anonymized Centralized vs. Identified Centralized) x 3 (No location use vs. Location on device vs. Location uploaded) x 4 (State health authorities vs. Federal health authorities vs. Tech company vs. Employer or school) x 4 (No security risk vs. Data breach risk vs. Secondary data use risk vs. Re-identification risk) factorial design. Table 1 also briefly describes each security risk condition; for example, the secondary data use risk is described as "Data may be stored longer than needed and used for other purposes," and the re-identification risk as "Exposed users could guess who were infected and led to their exposure." Each participant was randomly assigned to one condition and saw the app description created with the selected values of the four variables. Note that, unlike prior work, each participant in this study was presented with only one contact-tracing app design, to simulate a more realistic setting and reduce the effect of fatigue. Then they reported their willingness to install and use the app and their perceived risks and benefits of the app.

Figure 1: Our experiment consists of three main steps. The first step presents the app description and requires participants to correctly answer all quiz questions to proceed. The second step asks participants to report their intentions to install and use the app and their perceived risks, benefits, and community adoption rate of this app. The third step asks questions about the participants themselves, including validated scales that measure personal characteristics such as prosocialness and common demographic questions.

These manipulations allow us to study the effects of the four factors related to app design choices on adoption intentions for contact-tracing apps (RQ1.1-1.4, see Section 2.2) and to study how app design choices affect adoption intentions through perceived risks and benefits (RQ3.1-3.4, see Section 2.4). We intentionally had each participant see only one app design to emulate the real-world situation in which there is only one COVID-19 contact-tracing app available in a region. This design also reduces the potential fatigue caused by reading and evaluating multiple app designs. We also asked participants to provide their demographic information, which allows us to study the effects of individual differences on adoption intentions for contact-tracing apps (RQ2.1-2.4, RQ2.5, and RQ2.6, see Section 2.3) and the interaction effects between app design choices and individual differences (RQ2.7, see Section 2.3).
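As a concrete illustration of this 3 x 3 x 4 x 4 assignment, the sketch below enumerates the 144 app-description variants and randomly assigns a participant to one of them. It is a minimal Python sketch of the randomization logic only, not the survey instrument itself (Qualtrics handled randomization and data collection in the actual study), and all identifiers in it are hypothetical.

```python
import itertools
import random

# Levels of the four experimentally manipulated variables (cf. Table 1).
ARCHITECTURE = ["Decentralized", "Anonymized Centralized", "Identified Centralized"]
LOCATION_USE = ["No location use", "Location on device", "Location uploaded"]
PROVIDER = ["State health authorities", "Federal health authorities",
            "Tech company", "Employer or school"]
SECURITY_RISK = ["No security risk", "Data breach risk",
                 "Secondary data use risk", "Re-identification risk"]

# The full crossing yields 3 x 3 x 4 x 4 = 144 app-description variants.
CONDITIONS = list(itertools.product(ARCHITECTURE, LOCATION_USE, PROVIDER, SECURITY_RISK))
assert len(CONDITIONS) == 144

def assign_condition(rng: random.Random) -> dict:
    """Randomly assign one participant to a single between-subjects condition."""
    architecture, location_use, provider, security_risk = rng.choice(CONDITIONS)
    return {
        "architecture": architecture,
        "location_use": location_use,
        "provider": provider,
        "security_risk": security_risk,
    }

if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed only to make the illustration reproducible
    print(assign_condition(rng))
```

Each participant sees only the single app description generated from their assigned combination, which is what makes the design between-subjects rather than within-subjects.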
Our experiment consisted of three steps, as shown in Figure 1. An example of the complete survey can be found at https://github.com/covid19-hcct/HCCT-documents/blob/master/national_survey_design_example.pdf. Participants were first presented with a description of the COVID-19 contact-tracing app randomly selected from 144 variations (3x3x4x4 factorial design). We include a screenshot of one of the app descriptions as an example in the appendices (Figure A.7). To ensure participants correctly understood the app's features and data practices, we required participants to answer quiz questions. If a participant gave an incorrect answer, they could go back and read the description again. However, they could not proceed to the next step until they answered all the quiz questions correctly. This method is borrowed from previous research that used a similar experimental design [72]. All quiz questions were multiple-choice questions except for the questions about security risks, which asked participants to type the name of the security risk (ignoring spaces and case differences). This is because we did not want to prime users in the "No security risk" (control) condition about any security risk through the options in the quiz question.

This step contains two pages, and both pages began with the same app description as in Step 1. On the first page, participants were asked to answer questions about their intentions to install and use the app. There were five questions corresponding to the five aspects of app adoption introduced in Section 2.1, which covered the general intention to install the app, the intention to report a positive case to the app, and intentions to keep the app installed. On the second page, participants were asked to rate their perceived risks, benefits, and other people's adoption intentions. We inserted an attention-check question after all other questions ("This is an attention check, please go ahead and select strongly agree"). When the participant clicked the next-page button, the survey would automatically terminate if they did not pass the attention check. At the end of this step, there was an open-ended question that allowed participants to freely express their opinions regarding the contact-tracing app.

After answering app-related questions, participants were asked to fill out validated scales that measure their prosocialness [15], general privacy concerns [46], technology readiness [50], and COVID-19 risk perceptions [26]. The four scales were presented on four different pages in random order. We inserted an attention-check question similar to the one in Step 2 for each scale, and the survey would terminate when participants clicked the next-page button if they failed the attention check on that page. Finally, participants were asked to fill out demographic questions (e.g., age, gender, race). The complete list of demographic factors can be found in Section 3.4.3.

We asked participants to report their adoption intentions in five aspects on a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree) in Step 2, Page 1 (Section 3.3.2). Install app: We asked participants to rate to what extent they agreed or disagreed with the statement "I will install this app if it becomes available in my area." We then asked participants to assume they had already installed the app and to rate to what extent they agreed or disagreed with the following statements: Report positive case: "I will report to this app if I test positive."
Shorter battery life: "I will keep this app installed even if my phone battery seems to last less long." Fewer cases: "I will keep this app installed even if COVID-19 cases are steadily decreasing in my area." Vaccine available: "I will keep this app installed even if a COVID-19 vaccine becomes widely available."

We asked participants to rate their perceived risks, benefits, and other people's adoption intentions regarding the contact-tracing app presented to them on a 7-point Likert scale (1 = strongly disagree, 7 = strongly agree) in Step 2, Page 2 (Section 3.3.2). The statements for each variable are as follows:
Security and privacy risks: "Installing this app presents a risk to my security and privacy."
Self benefits: "Installing this app helps me protect myself against COVID-19."
Societal benefits: "This app helps slow the spread of COVID-19 in my area."
Perceived adoption: "Most people in my area would install this app if it became available."

For factors related to app design choices, each presented contact-tracing app description was coded using four variables. We chose the condition "Decentralized, No location use, State health authorities developed, No security risk mentioned" as the reference levels for the four variables because they correspond to how contact-tracing apps have been built in the U.S. (as of December 2020): different apps are developed for each state using the Google/Apple Exposure Notification framework, which implements the decentralized architecture and forbids the use of location in the same app.
Proximity-based contact tracing: We operationalize the three types of designs as two indicator variables, Anonymized Centralized and Identified Centralized, which take the value of 1 for participants in the respective condition and 0 otherwise.
Location use: We operationalize the three types of designs as two indicator variables, Location on device and Location uploaded, which take the value of 1 for participants in the respective condition and 0 otherwise.
App providers: We operationalize the four app provider options as three indicator variables, Federal health authorities, Tech company, and Employer or school, which take the value of 1 for participants in the respective condition and 0 otherwise.
Security risks: We operationalize the four types of designs as three indicator variables, Data breach risk, Secondary use risk, and Re-identification risk, which take the value of 1 for participants in the respective condition and 0 otherwise.

For individual differences, we first used validated scales to measure the following personal characteristics of interest:
Prosocialness: We used the 16-item scale developed by Caprara et al. [15] to measure participants' prosocialness. The sixteen questions are on a 5-point Likert scale, and a higher score means higher prosocialness. We define the prosocialness value for each individual as the average rating of the 16 questions, so the range of this variable is still [1, 5]. The internal consistency (Cronbach's alpha) of the 16 questions was 0.94 in our sample, indicating high reliability.
General privacy concerns: We used the 10-item Internet Users' Information Privacy Concerns (IUIPC) scale developed by Malhotra et al. [46] to measure participants' general privacy concerns. The ten questions are on a 7-point Likert scale, and a higher score means higher privacy concerns. We define the general privacy concern value for each individual as the average rating of the 10 questions, so the range of this variable is still [1, 7].
The internal consistency (Cronbach's alpha) of the 10 questions was 0.86 in our sample, indicating high reliability.

We also asked participants to report demographic factors:
Age: We provided a text input box to allow participants to enter their age.
Gender: We provided five options for participants to select: "Male", "Female", "Non-binary", "Prefer not to disclose", and "Prefer to self-describe". In our regression and mediation analysis, we only included participants who identified themselves as "Male" or "Female" and coded the variable as 1 for "Female" and 0 for "Male", because the other groups contained too few responses.
Race: We provided 9 options for participants to select: "American Indian or Alaska Native", "Asian", "Black or African American", "Hispanic or Latino", "Middle Eastern", "Native Hawaiian or Pacific Islander", "White", "Prefer not to disclose", and "Prefer to self-describe". In our regression and mediation analysis, we only included participants who identified themselves as White, Asian, Black or African American, or Hispanic or Latino, because the other groups contained too few responses. We operationalize this variable using three indicator variables, Asian, Black or African American, and Hispanic or Latino, which take the value of 1 for participants belonging to the corresponding group and 0 otherwise (with White as the reference level).
Education: We provided 11 options for participants to select: "No schooling completed", "Nursery school to 8th grade", "Some high school, no diploma", "High school graduate, diploma or the equivalent (for example: GED)", "Some college credit, no degree", "Trade/technical/vocational training", "Associate degree", "Bachelor's degree", "Master's degree", "Professional degree", "Doctorate degree". Because "Education" is an ordinal variable, we converted the 11 options to integers 1 to 11, with 1 corresponding to "No schooling completed" and 11 to "Doctorate degree".
Income: We provided 7 options for participants to select: "Less than $25,000", "$25,000 to $34,999", "$35,000 to $49,999", "$50,000 to $74,999", "$75,000 to $99,999", "$100,000 to $149,999", "$150,000 or more". Because "Income" is an ordinal variable, we converted the 7 options to integers 1 to 7, with 1 corresponding to "Less than $25,000" and 7 to "$150,000 or more".
Health workers: We asked participants to self-report whether they were health workers. This variable takes the value of 1 for participants who answered "Yes", and 0 for "No".
Essential workers: We asked participants to self-report whether they were essential workers 4 . This variable takes the value of 1 for participants who answered "Yes", and 0 for "No".
Transit use: We asked the question "How often do you take public transportation during the pandemic?" and provided 5 options: "Never", "Rarely", "Monthly", "More than once a week", "Every day". Because "Transit use" is an ordinal variable, we converted the 5 options to integers 1 to 5, with 1 corresponding to "Never" and 5 to "Every day".
Urban area percentage: We asked participants to provide their ZIP code to identify which county they resided in when taking the survey. We then used the most recent U.S. Census data (2010) 5 to look up what percentage of the county's area is urbanized and operationalized this variable using this number.

To answer our research questions, we used two statistical analysis methods: linear regression analysis and mediation analysis.
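To make the operationalization above and the regression setup described in the next paragraph concrete, here is a rough sketch of how an additive model with treatment-coded design factors and a collinearity check could be fit. It assumes a pandas DataFrame with hypothetical column names (e.g., install_intent for the 7-point rating) and uses plain VIFs rather than the generalized GVIF reported in the paper; it is an illustration, not the analysis code used in the study.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor

# One row per participant: condition labels, scale scores (item means),
# demographics, and the 7-point adoption-intention ratings.
df = pd.read_csv("survey_responses.csv")  # hypothetical file

# Additive model with the deployed-in-practice reference levels:
# Decentralized, No location use, State health authorities, No security risk.
model = smf.ols(
    "install_intent ~ C(architecture, Treatment('Decentralized'))"
    " + C(location_use, Treatment('No location use'))"
    " + C(provider, Treatment('State health authorities'))"
    " + C(security_risk, Treatment('No security risk'))"
    " + prosocialness + privacy_concern + covid_risk_perception + tech_readiness"
    " + age + C(gender) + income + education + transit_use + urban_pct",
    data=df,
).fit()
print(model.summary())

# Simple VIF check on the fitted design matrix (column 0 is the intercept).
exog = model.model.exog
vifs = [variance_inflation_factor(exog, i) for i in range(1, exog.shape[1])]
print("max VIF:", max(vifs))
```

The same formula can be refit with each of the five adoption-intention ratings as the outcome to reproduce the structure of the five additive models described next.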
We created five additive linear regression models to study the main effects of app design choices (RQ1.1-1.4) and individual differences (RQ2.1-2.6) on each outcome variable, and an interactive linear regression model to study the interaction effects between demographic factors and app design choices (RQ2.7) on installation intentions. Multicollinearity was not a problem in any of our linear regression analyses because the maximum generalized variance inflation factor ($GVIF^{1/(2 \cdot Df)}$) across our models is 1.21, which is lower than the cutoff value of 2.25. To answer RQ3 (Section 2.4), we analyzed the mediation effects of the four mediator variables (Section 3.4.2) using structural equation modeling (SEM), following guidelines from prior literature [58, 53]. For our mediation analysis, we focus on the main outcome variable, the "Install app" intention rating. We first selected independent variables that had a significant effect in our additive linear regression model for this outcome variable. We then operationalized the mediation analysis as a set of regressions in which each mediator is regressed on the selected independent variables and the installation intention rating is regressed on the independent variables together with the four mediators; the indirect effect of an independent variable through a mediator is the product of the coefficients along these two paths.

We summarize the demographics of our survey sample (N = 1963) in Table 2. Our sample's demographic statistics are consistent with the latest U.S. Census data 6 . For the questions measuring the five aspects of adoption (Section 3.4.1), we grouped the options "Somewhat agree", "Agree", and "Strongly agree" to estimate the percentage of people who would install and use contact-tracing apps. Table 3 summarizes the results. 58.9% of participants reported that they at least somewhat agreed that they would install the app, which is close to findings of previous studies with U.S. smartphone users, such as 55% in Li et al. [44] and 59% in a national poll [1]. When participants were asked about actions they would take if they had installed the app, 76.2% reported that they at least somewhat agreed to report to the app if they tested positive for COVID-19. Note that this is higher than the estimated install rate, which suggests that there are people who do not want to be tracked in general but are less concerned about sharing the same information if they are infected, in order to facilitate contact tracing.

6 For age and race, we used https://web.archive.org/web/20201220221336/https://www.census.gov/data/tables/time-series/demo/popest/2010s-national-detail.html. For education, we used https://web.archive.org/web/20201117011544/https://www.census.gov/content/census/en/data/tables/2019/demo/educational-attainment/cps-detailed-tables.html. For income, we used https://web.archive.org/web/20201215160528/https://www.census.gov/data/tables/2020/demo/income-poverty/p60-270.html.

Then we estimated the long-term install retention rate in three different situations. The Fewer cases situation achieved the highest retention rate (63.7%) and the Vaccine situation the lowest (57.6%), which is in line with our expectations, although it is surprising that more than half of the participants indicated they would keep the app installed even when a vaccine becomes widely available. This may be because some people distrust vaccines, or because they do not consider these apps a big threat and tend not to actively uninstall an app once it is installed. We also note that the install retention rate when the app drains the battery quickly (58.8%) is close to that of the Vaccine situation. This suggests that practical concerns such as the impact on battery life can have a crucial influence on users' decisions, which echoes findings of prior work [55].
Table 3: Estimates of adoption rates (%). A participant is considered likely to install and use the app if they chose "Somewhat agree", "Agree", or "Strongly agree" for the corresponding statement (presented in Section 3.4.1). The first column of each variable is the reference condition, and the conditions with significantly different adoption intentions in our linear regression analyses in Table 5 are marked in bold.

We also calculated estimates for the four mediator variables using the same method as in Table 3. Table 4 presents the results. More people believed that installing the app could provide benefits to themselves (68.2%) and to society (64.9%) than believed that installing the app would present a risk to their privacy and security (54.8%). Interestingly, only 41.7% of our participants at least somewhat agreed that most people would install this app if it became available, which is much lower than the estimate of their own installation rate (58.9%). This suggests people generally hold an overly pessimistic attitude towards the adoption of contact-tracing apps in the U.S. These estimates also help validate the manipulations of our survey design; for example, more people assigned to the identified centralized architecture condition at least somewhat agreed that installing the app presents a risk to their security and privacy. These results are in line with our expectations when designing the conditions, which demonstrates that our between-subjects design effectively conveyed the key characteristics of the app and that our participants were able to correctly understand these characteristics before reporting their subjective feelings.

Table 4: Estimates of the percentage of people who at least somewhat agreed that the app has security and privacy risks / self benefits / societal benefits and that other people would install this app (%). The first column of each variable is the reference condition.

In RQ1, we are interested in investigating how app design choices such as decentralized vs. centralized architecture, location use, app providers, and the description of security risks in the app affect one's adoption intentions. According to the linear regression results presented in Table 5, location use (RQ1.2) and the disclosure of the secondary data use security risk (RQ1.4) had significant effects on several aspects of adoption intentions. Conversely, the difference between decentralized and centralized architectures (RQ1.1) and the app provider (RQ1.3) did not have significant effects on adoption intentions. We calculated the $f^2$ scores of the factors related to app design choices to measure their effect sizes for the five outcome variables. The $f^2$ values for the five outcome variables are 0.006, 0.004, 0.003, 0.002, and 0.004, respectively, which shows that app design choices have a very small effect on adoption intentions in general 7 . For location use, the conditions Location on device and Location uploaded both had significant positive effects on some aspects of adoption intentions. For example, the coefficient of the Location uploaded condition for the Install app outcome variable is 0.258, which represents the estimated increase in the 7-point installation intention rating when the app collects location data and uploads it to servers for analyzing infection hotspots, compared to not providing location features at all. This suggests that contributing a little more location data in exchange for more useful information could drive more adoption of the app.
We want to note that the two location features follow privacy-by-design principles by storing and uploading the data only when necessary (e.g., only uploading location data of users who test positive rather than of all users). These design considerations should also be taken into account when interpreting the positive effects of these features. For security risk, although prompting participants about all three types of risks consistently increased their perceptions of the risks to their security and privacy caused by installing this app (Table 4), only showing the secondary data use risk significantly decreased installation intentions. This suggests that one's security and privacy concerns may not be a determinant of their adoption intentions, which is further supported by our mediation analysis (Section 4.4). In addition, the difference between the reactions to the secondary data use risk and the other two types of risks provides another angle on the comparison between decentralized and centralized architectures. Because a centralized architecture requires more data to be stored on central servers, its users are more vulnerable to the data breach and secondary data use risks than users of decentralized architectures. Therefore, although there is no significant difference in people's adoption intentions when directly comparing the two architectures, our results suggest that using a decentralized architecture could help reduce the security risks that people are more concerned about.

In RQ2, we are interested in investigating how individual differences such as demographic factors and other personal characteristics like prosocialness and general privacy concerns affect one's adoption intentions. Table 5 presents the results. The $f^2$ values of all factors related to individual differences for the five outcome variables are 0.475, 0.286, 0.380, 0.385, and 0.397, respectively, which shows that factors related to individual differences have a very large effect on adoption intentions, especially app installation intentions. We found that prosocialness, COVID-19 risk perception, and technology readiness all had significant positive effects on the five aspects of adoption intentions. Conversely, general privacy concern had a significant negative effect on three out of the five aspects of adoption intentions. These results are consistent with our expectations and answer RQ2.1-2.4.

We found that multiple demographic factors had significant effects on adoption intentions (RQ2.5). Females had significantly lower intentions to adopt the app in all five aspects, especially for installing the app (Coef. = -0.362, i.e., our model predicts females' 7-point installation intention rating to be 0.362 points lower than males'). Higher household income had significant positive effects on intentions to install the app and keep the app installed but had no significant effect on intentions to report positive cases. Higher education had a significant positive effect on intentions to keep the app installed when COVID-19 cases are steadily decreasing.

Table 5: Linear regression results: the main effects of app design choices and individual differences on app adoption intentions. As described in Section 3.4.3, we excluded the data from groups that contained too few responses (e.g., non-binary gender, Native Hawaiian or Pacific Islander). The sample used for the regression analysis contains 1889 responses.
Older people had significantly lower intentions to install the app, while they had significantly higher intentions to keep the app installed when the battery seems to last less long. Unlike other demographic factors, the significant effects of race mostly appeared in the intentions to keep the app installed rather than the intentions to install the app. For example, although only Hispanics had significantly higher intentions to install the app than Whites, Asians, Blacks, and Hispanics all had significantly higher intentions to keep the app installed even if a vaccine becomes widely available. Note that the causes of the higher intentions to keep the app installed could differ across the three groups. For example, Pew Research recently found that Black Americans are less inclined to get vaccinated than other racial and ethnic groups and that Asians are the most inclined to get vaccinated [31]. This requires further investigation in future work. For people at higher risk of exposure to COVID-19, we found that frequent public transit users during the pandemic and people living in more urbanized areas had significantly higher adoption intentions. To our surprise, essential workers had significantly lower adoption intentions in several aspects, especially for reporting positive cases to the app (Coef. = -0.316). Our mediation analysis provides more insights into possible causes of this finding (Section 4.4). Note that the average app installation intention rating of all essential workers (mean = 4.77) is actually slightly higher than that of other participants (mean = 4.54). This may be because essential workers are generally younger (median age = 35) than the rest of the sample (median age = 49), and we have shown that younger people are more inclined to adopt contact-tracing apps, which counteracted the influence of being an essential worker.

In RQ2.7, we focus on app installation intentions and study whether the same app design could result in different installation intentions for different sub-populations. This could help us predict the adoption of a certain app design by people at different levels of risk of being exposed to or infected with COVID-19 and analyze the implications of potentially unbalanced app adoption. To answer this research question, we built an interactive model for the "Install app" outcome variable that includes the interactions between factors related to app design choices and demographic factors. Due to space constraints, we only present the interactions that had significant effects in Figures 2, 3, 4, and 5. The complete results can be found in the Appendices (Table B.7).

For the interaction between proximity-based contact-tracing design and demographic factors (Figure 2), we found that the effects of the different architectures for proximity-based contact tracing are moderated by gender, race, and education level. (In Figure 2, the vertical bars represent the estimated 95% confidence intervals of the "Install app" intention rating, and we group the eleven education levels into two classes for illustrative purposes.) Specifically, females tended to prefer the identified centralized architecture while males tended to prefer the decentralized architecture (Coef. = 0.624, p < .01). The difference in installation intentions between Black and White people was exacerbated when changing from the decentralized to the anonymized centralized architecture (Coef. = 0.662, p < .05).
For the interaction between the proximity-based contact-tracing design and demographic factors (Figure 2), we found that the effects of the different architectures for proximity-based contact tracing are moderated by gender, race, and education level. (In Figures 2-5, the vertical bars represent the estimated 95% confidence intervals of the "Install app" intention rating; note that we group the eleven education levels into two classes for illustrative purposes.) Specifically, females tended to prefer the identified centralized architecture while males tended to prefer the decentralized architecture (Coef. = 0.624, p < .01). The difference in installation intentions between Black and White people was exacerbated when changing from the decentralized to the anonymized centralized architecture (Coef. = 0.662, p < .05). People who received higher education preferred the identified centralized architecture to the decentralized architecture (Coef. = 0.118, p < .05).

For the interaction between location use and demographic factors (Figure 3), we found that the effects of location use are moderated by whether the person is an essential or health worker. Although the "Location uploaded" feature drove a significantly higher installation intention rating at the population level, essential workers preferred the "No location use" condition (Coef. = -0.544, p < .05). Similarly, health workers preferred the "No location use" condition much more than the "Location on device" condition (Coef. = -0.939, p < .05).

For the interaction between app provider and demographic factors (Figure 4), we found that the effects of the app provider are moderated by gender and urban area percentage. Females (Coef. = -0.621, p < .01) and people living in less urbanized areas (Coef. = 0.0119, p < .05) tended to prefer contact-tracing apps provided by the state health authorities over those provided by a large tech company.

For the interaction between security risk presentation and demographic factors (Figure 5), we found that the effects of security risk are moderated by gender and by whether the person is a health worker. Specifically, females were more discouraged by the secondary data use risk than males (Coef. = -0.471, p < .05), and people who are not health workers were more discouraged than health workers by the secondary data use risk (Coef. = 0.878, p < .05) and the re-identification risk (Coef. = 1.05, p < .05).

In RQ3, we aim to explain how certain app design choices and individual differences had significant effects on one's app installation intentions through the four mediator variables: security and privacy risks, self benefits, societal benefits, and perceived adoption. To answer this research question, we conducted a mediation analysis using structural equation modeling, following the methods of previous research [53]. We measured the relative magnitude of the indirect effects through the four mediators and calculated the 95% confidence intervals of these ratios using a bootstrap approach. Table 6 presents the ratios of the indirect effects to the total effects and their 95% confidence intervals for each pair of independent variable and mediator variable. These ratios can be interpreted loosely as the percentages of an independent variable's total effect on installation intention that are achieved through each of the four mediator variables. A cell that contains a ratio and its 95% confidence interval indicates that the independent variable (e.g., "Location uploaded") affects app installation intentions through the corresponding mediator variable (e.g., "Self benefits") and that the indirect effect is significant. A positive number means the indirect effect is in the same direction as the total effect, and a negative number means the indirect effect is in the opposite direction of the total effect, counteracting the other variables' positive effects. We leave a cell blank ("-") if the indirect effect is not significant. Figure 6 illustrates the significant correlations between the independent variables and the mediator variables and between the mediator variables and the outcome variable, the installation intention rating.
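To make the ratio interpretation concrete, consider a simplified single-mediator version of this analysis (a sketch only; the actual model estimates the four mediators jointly within a structural equation model). With an independent variable X, a mediator M, and installation intention Y,

\begin{align*}
M &= aX + \varepsilon_M,\\
Y &= c'X + bM + \varepsilon_Y,
\end{align*}

the indirect effect is $ab$, the total effect is $c = c' + ab$, and the reported ratio is $ab/c$. A negative ratio therefore means the pathway through that mediator runs against the total effect. The bootstrap confidence intervals are presumably obtained by resampling respondents with replacement and re-estimating these quantities on each resample, as is standard practice for bootstrap mediation analysis.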
Our model fit is acceptable according to the Standardized Root Mean Square Residual (SRMR = 0.057). Table 6 shows that when the perceptions of risks and benefits both had significant indirect effects, the effect sizes of the two benefit factors were almost always larger than that of the security and privacy risk factor. For Age and Transit use, the indirect effect through security risk was even negative, meaning that although these two independent factors had significant effects on security risk perceptions, this pathway ran in the opposite direction of the total effect, which was dominated by the larger effects through the other mediators. Therefore, we conclude that one's perceptions of the benefits of COVID-19 contact-tracing apps are more powerful determinants of app installation intentions than one's perceptions of the security and privacy risks caused by the app. Furthermore, Table 6 shows that Perceived adoption often had an even larger effect size than the two benefit factors. This result is not surprising, as the efficacy of contact-tracing apps largely depends on whether they achieve widespread adoption. If a person is not confident that enough other people will install a contact-tracing app, they may refrain from installing it themselves.

Figure 6 provides more information about the two parts of an indirect effect: the correlation between the independent variables (the first-row nodes) and the mediator variables (the four second-row nodes on the right), and the correlation between the mediator variables and the outcome variable, the "Install app" intention rating. (In Figure 6, edges in solid lines, e.g., Secondary use risk → Security risk rating, indicate positive correlations, and edges in dashed lines, e.g., Technology readiness → Security risk rating, indicate negative correlations. Only edges with significant effects are plotted, and edge weight and transparency correspond to the standardized coefficients, not the effect sizes presented in Table 6. An independent variable has a significant indirect effect on the outcome variable, i.e., the install intention rating, through a mediator variable if there is a pathway from the independent variable to the outcome variable through that mediator.) This helps us better understand the results of RQ1 and RQ2. For example, the negative correlation between being an essential worker and the installation intention rating can be partly attributed to a decreased perception of societal benefits. However, we want to note that these four mediators were not able to explain all the effects. For example, none of them had a significant indirect effect explaining why Hispanics had significantly higher intentions to install the app than Whites, which requires further investigation in future work.

Our research has several key practical implications for the design, marketing, and deployment of COVID-19 contact-tracing apps in the U.S., many of which could also apply in broader contexts, such as increasing the adoption of digital technologies that help contain the spread of COVID-19 and building effective contact-tracing apps for infectious diseases in general. Overall, our regression analysis showed that app design choices such as the decentralized vs.
centralized architecture, location use, the app provider, and disclosures about app security risks had very small effects on participants' adoption intentions toward COVID-19 contact-tracing apps (RQ1, Section 4.2). Since the baseline levels in our study represent the current design of contact-tracing apps in the U.S. (state-level provider, decentralized architecture, no location collection), which features the strictest restrictions on data use, our results convey a positive signal: U.S. mobile users are open to, or may even slightly prefer, alternative designs that collect more sensitive data in a privacy-friendly way and offer additional benefits. Participants also showed similar adoption intentions for app providers other than state health authorities, which suggests that a piecemeal solution, as proposed by Blasimme and Vayena [12], that leverages resources from different entities (e.g., Google/Apple OS-level support, apps provided by employers or schools) to complement the systematic yet slow responses from state-level authorities is a viable approach.

The few app design factors that did have significant effects on adoption intentions also point to sweet spots in the current design space of contact-tracing apps for optimizing adoption. For the "Location uploaded" feature, although the current GAEN API does not allow collecting location directly in the same app, researchers have proposed creative solutions to gather information about the places that infected users visited without logging location traces at the individual level [22]. The key idea is to treat places as people, so that the GAEN API can be extended to monitor a place's exposure to infected users and gather anonymized location traces of infected users at an aggregated level. We consider this a promising solution, as it greatly reduces the security risk while maintaining the benefits that, according to our results, seem very attractive to users. For the security risk presentation, we learned that people were more concerned about the risk of secondary data use (which is more of an issue for centralized architectures) and less concerned about the risk of re-identification (one of the few security risks that decentralized apps are vulnerable to). These results provide further empirical support for the current deployment of decentralized architectures for contact-tracing apps. Furthermore, since our results suggest that priming users about security risks does not reduce their adoption intentions in most situations, app developers should be more candid about possible security risks when presenting contact-tracing apps to users, to help them make informed decisions.

In contrast to the small effects of app design choices, we found that individual differences had large effects on adoption intentions toward COVID-19 contact-tracing apps. First, people with higher prosocialness, higher COVID-19 risk perceptions, and higher technology readiness are significantly more inclined to install and use contact-tracing apps. This points to a marketing opportunity: contact-tracing apps can appeal to people with these characteristics by emphasizing related values such as helping society combat the disease, protecting oneself and others, and using new technology to alleviate the work of human contact tracers.
Second, we found that certain demographic groups had significantly higher or lower adoption intentions than others regardless of the app design choices (RQ2.1-2.6, Section 4.3.1). Some findings show positive signals for the effectiveness of COVID-19 apps. For example, public transit use is positively correlated with intentions to install COVID-19 contact-tracing apps, which corresponds to one of the scenarios these apps are expected to be most useful for. Other findings are particularly concerning. For example, older people had significantly lower intentions to install COVID-19 contact-tracing apps although they are at higher risk for severe illness from COVID-19. Similarly, essential workers also had significantly lower intentions to install COVID-19 contact-tracing apps although they are at higher risk of exposure to COVID-19. Based on our mediation analysis results (Section 4.4), we speculate that the lower installation intentions of older people could be because they are less tech-savvy and did not feel this technical solution provides much benefit to them. For essential workers, the mediation analysis only showed a significant indirect effect through a reduction in the perceived societal benefit rating of the app. We hope future research will conduct qualitative studies regarding the adoption intentions of essential workers in particular to better explain their preferences and rationales.

Third, we found that different demographic groups preferred different options of the same app design choice (RQ2.7, Section 4.3.2). Although these interaction effects did not change the general trends of adoption intentions for different demographic groups, we want to caution potential developers of contact-tracing apps about the unequal effects of certain app design choices on different demographic groups. For example, although introducing location features sometimes increased the adoption intentions of participants in general, many essential workers and health workers seemed to prefer apps that do not collect location over those that do. We speculate this may be because essential workers face a greater privacy risk, as their jobs require them to go outside and visit more places than other people. This suggests that if app designers do want to incorporate location features for greater public health benefits, enabling these features should be completely voluntary and require users to explicitly opt in. By protecting these vulnerable groups, we could also help better protect the general population through increased adoption among people who are at higher risk of exposure. For people living in rural areas, installation intention was drastically lower for apps developed by a large tech company than for apps developed by their state health authorities. That is to say, contact-tracing apps developed by a large tech company may not be as effective in rural areas as in urban areas. Note that in the real world the app provider may not be as obvious as it was in our study's app descriptions, so users' perceived app provider could affect their adoption intentions in ways similar to the provider effects tested in our study. Since current U.S. contact-tracing apps are all built with the GAEN API provided by Google and Apple, it is important for the marketing of the app to clearly convey to users who built the app and who has access to their data.
The findings of our mediation analysis showed that although both security and privacy risks and public health benefits had significant indirect effects, the indirect effects of perceptions of contact-tracing apps' benefits (i.e., protecting users themselves and the societal benefit of slowing the spread of COVID-19) were consistently larger than the indirect effects of perceived security and privacy risks. This suggests that emphasizing the apps' benefits could increase user awareness of these benefits and drive more adoption, while efforts to reduce users' perceptions of security and privacy risks are likely to have less impact. This result echoes Trang et al.'s [66] finding that variations in the app description in terms of the benefits provided by the app had a larger effect size than variations in terms of privacy protection levels. Accordingly, we derive two recommendations for designing and deploying COVID-19 contact-tracing apps. First, contact-tracing app designers need to make sure the system works accurately, so that it actually offers the key benefits. Opt-in features (e.g., progressive requests for location data) could allow users who are willing to contribute more data to obtain more useful features, while enabling users who are more concerned about security and privacy risks to share only the minimum amount of data. Second, contact-tracing app design and marketing should also serve an educational purpose and place more emphasis on the public health benefits, both to the users themselves and to society. In addition to providing clear app descriptions, presenting basic statistics with proper visualizations to help users get a better sense of how the app works in real life is also a direction worth exploring.

This research has several limitations. First, because our study tested hypothetical app designs to achieve a thorough and systematic exploration of the design space, we could only investigate people's adoption intentions rather than their actual behaviors. Therefore, our findings may not fully represent the corresponding actions people would take for a real-world contact-tracing app. Second, due to constraints on survey length, we had to trade off some details in the app descriptions and questions, such as to what extent battery life is affected. Since the main goal of our study is to understand and compare the implications of a wide range of factors for adoption intentions, we consider this type of trade-off acceptable and leave a more in-depth examination of specific factors for future work. Third, participants read app descriptions that presented more app design and implementation details (even including security risks in some conditions) than they could obtain in real-world situations, which could affect the generalizability of the results. That said, our findings suggest contact-tracing app providers should be more open about both the benefits the app offers, to motivate adoption, and the potential risks it can cause, to give people more transparency, since such disclosures did not heavily discourage interest in using the app. Fourth, we only surveyed mobile users aged over 18, so the findings may not generalize to minors or to people who do not use a mobile phone (but who could use other approaches, such as IoT devices or infrastructure, to participate in digital contact tracing [52, 65, 40]). Also, we only surveyed people in the U.S., so the estimates of app adoption rates may not generalize to other contexts.
Lastly, due to the general limitations of quantitative study methodologies, we could not fully uncover the nuances of people's rationales behind their perceptions and adoption intentions, such as why Hispanic and Black people had higher adoption intentions in some situations and why essential workers were less willing to install contact-tracing apps. We hope future work will investigate these aspects specifically.

In this research, we conducted a national-scale survey experiment (N = 1963) in the U.S. following a between-subjects factorial design to examine the effects of app design choices and individual differences on the adoption intentions of COVID-19 contact-tracing apps, and how participants' perceptions of security and privacy risk, public health benefit, and community adoption rate mediate these effects. Our results showed that individual differences had a larger impact on participants' app adoption intentions than app design choices, and that both app design choices and individual differences affected adoption intentions more through perceptions of public health benefit and community adoption rate than through perceptions of security and privacy risk. Based on these findings, we derived practical implications for app design, marketing, and deployment. Specifically, we identified sweet spots in the contact-tracing design space that could drive higher adoption. We discussed app design considerations and marketing strategies with regard to individual differences, especially the importance of protecting certain vulnerable groups, such as essential workers, health workers, and people living in rural areas, when designing and promoting the app. Lastly, we emphasized public health benefit as an effective lever to promote contact-tracing app adoption.

References

France's Covid-19 tracing app fails to engage, chalking up roughly 1.5 million users
COVID-19 contact-tracing technology: acceptability and ethical issues of use
A survey of covid-19 contact tracing apps
Acceptability of app-based contact tracing for COVID-19: Cross-country survey evidence
Most Americans don't think cellphone tracking will help limit COVID-19, are divided on whether it's acceptable
Privacy-Preserving Contact Tracing - Apple and Google
Belief of having had unconfirmed Covid-19 infection reduces willingness to participate in app-based contact tracing
Mind the GAP: Security & Privacy Risks of Contact Tracing Apps
BlueTrace: A privacy-preserving protocol for community-driven contact tracing across borders
What's next for COVID-19 apps? Governance and oversight
Automated and partly automated contact tracing: a systematic review to inform the control of COVID-19. The Lancet Digital Health
Comparison of model fit indices used in structural equation modeling under multivariate normality
A new scale for measuring adults' prosocialness
UC Campuses Pilot Google-Apple Notification Technology to Help Prevent COVID-19 Outbreaks
2020. ROBERT: ROBust and privacy-presERving proximity Tracing
Pact: Privacy sensitive protocols and mechanisms for mobile contact tracing
COVID-19 Contact Tracing Apps Reach 9% Adoption In Most Populous Countries
Contact tracing mobile apps for COVID-19: Privacy considerations and related trade-offs
Statistical power analysis for the behavioral sciences
CoVista: A Unified View on Privacy Sensitive Mobile Contact Tracing Effort
Is probability sampling always better? A comparison of results from a quota and a probability sample survey
Why Aren't COVID-19 Contact Tracing Apps Working? - Time
An extended privacy calculus model for e-commerce transactions
Risk perceptions of COVID-19 around the world
Coupled contagion dynamics of fear and disease: mathematical and computational explorations
Quantifying SARS-CoV-2 transmission suggests epidemic control with digital contact tracing
Centers for Disease Control and Prevention. 2020. Prioritizing Case Investigations and Contact Tracing for COVID-19 in High Burden Jurisdictions - CDC
Covid-19 app: 150,000 uninstalled app after August battery issue
Intent to Get a COVID-19 Vaccine Rises to 60% as Confidence in Research and Development Process Increases
Exposure Notifications: Helping fight COVID-19 - Google
states are using Apple's Exposure Notification API for COVID-19 contact tracing?
Individuals' privacy concerns and adoption of contact tracing mobile applications in a pandemic: A situational privacy calculus perspective
Effective configurations of a digital contact tracing app: A report to NHSX
UC Berkeley launches COVID-19 exposure notification system - Berkeley News
Who does or does not use the "Corona-Warn-App" and why
Citizens' Attitudes to Contact Tracing Apps
Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal
IoT-based Contact Tracing Systems for Infectious Diseases: Architecture and Analysis
Can digital contact tracing make up for lost time?
Times of Crisis: Public Perceptions Towards COVID-19 Contact Tracing Apps in China, Germany and the US
Google will build coronavirus contact tracing software right into your phone
Decentralized is not risk-free: Understanding public perceptions of privacy-utility trade-offs in COVID-19 contact-tracing apps
CMU-Created COVID Early Warning App to Assist Campus Community
Internet users' information privacy concerns (IUIPC): The construct, the scale, and a causal model
Demystifying COVID-19 digital contact tracing: A survey on frameworks and mobile apps
Department of Health. 2020. COVID in South Dakota
A national survey of attitudes to COVID-19 digital contact tracing in the Republic of Ireland
An updated and streamlined technology readiness index: TRI 2.0
Pan-European Privacy-Preserving Proximity Tracing - High-Level Overview
An Internet of Things Approach to Contact Tracing - The BubbleBox System
Effect size measures for mediation models: quantitative strategies for communicating indirect effects
Apps gone rogue: Maintaining personal privacy in an epidemic
User Concerns & Tradeoffs in Technology-facilitated COVID-19 Response
The PACT protocol specification. Private Automated Contact Tracing Team
Conspiracy theories as barriers to controlling the spread of COVID-19 in the US
Mediation analysis in social psychology: Current practices and new recommendations
Predicting public take-up of digital contact tracing during the COVID-19 crisis: Results of a national survey
Survey of Decentralized Solutions with Mobile Devices for User Location Tracking, Proximity Detection, and Contact Tracing in the COVID-19 Era
COVID-19 Contact Tracing and Privacy: Studying Opinion and Preferences
Contact Tracing Survey: U.S. Workforce Surpasses 50,000 But Falls Short Of Need - Shots - Health News - NPR
Survey Of Contact Tracing Workforce Shows Little Growth, Despite Surging Cases - Shots - Health News - NPR
Reuters Staff. 2020. Active Irish COVID-19 tracing app users drop on battery problem - HSE
IoTrace: A Flexible, Efficient, and Privacy-Preserving IoT-enabled Architecture for Contact Tracing
One app to trace them all? Examining app specifications for mass acceptance of contact-tracing apps
Martin Degeling, and Markus Dürmuth. 2020. Apps Against the Spread: Privacy Implications and User Acceptance of COVID-19-Related Smartphone Apps on Three Continents
Are COVID-19 proximity tracing apps working under real-world conditions? Indicator development and assessment of drivers for app (non-)use
Adoption of a contact tracing app for containing COVID-19: A health belief model approach
Ready or Not for Contact Tracing? Investigating the Adoption Intention of Contact-Tracing Technology Using an Extended Unified Theory of Acceptance and Use of Technology Model
Factors Influencing Perceived Fairness in Algorithmic Decision-Making: Algorithm Outcomes, Development Procedures, and Individual Differences
Predicted Adoption Rates of Contact Tracing App Configurations - Insights from a choice-based conjoint study with a representative sample of the UK population
Public attitudes towards COVID-19 contact tracing apps: A UK-based focus group study
South Korea is reporting intimate details of COVID-19 cases: has it helped?
Americans' perceptions of privacy and surveillance in the COVID-19 Pandemic

Acknowledgments

The authors would like to acknowledge Cori Faklaris, Ruotong Wang, and Laura Dabbish for their help on the study design. This work is supported in part by the CMU Block Center and the National Science Foundation under Grant No. 1801472.