authors: Johnson, Courtney A.; Tran, Dan N.; Mwangi, Ann; Sosa-Rubí, Sandra G.; Chivardi, Carlos; Romero-Martínez, Martín; Pastakia, Sonak; Robinson, Elisha; Jennings Mayo-Wilson, Larissa; Galárraga, Omar
title: Incorporating respondent-driven sampling into web-based discrete choice experiments: preferences for COVID-19 mitigation measures
date: 2022-01-11
journal: Health Serv Outcomes Res Methodol
DOI: 10.1007/s10742-021-00266-4

To slow the spread of COVID-19, most countries implemented stay-at-home orders, social distancing, and other nonpharmaceutical mitigation strategies. To understand individual preferences for mitigation strategies, we piloted a web-based respondent-driven sampling (RDS) approach to recruit participants from four universities in three countries to complete a computer-based discrete choice experiment (DCE). Used in combination, these methods can increase the external validity of a study by enabling recruitment of populations underrepresented in sampling frames, thus allowing preference results to be more generalizable to targeted subpopulations. A total of 99 students or staff members were invited to complete the survey, of whom 72% started the survey (n = 71). Sixty-three participants (89% of starters) completed all tasks in the DCE. A rank-ordered mixed logit model was used to estimate preferences for COVID-19 nonpharmaceutical mitigation strategies. The model estimates indicated that participants preferred mitigation strategies that resulted in lower COVID-19 risk (i.e., sheltering in place more days per week), financial compensation from the government, fewer health (mental and physical) problems, and fewer financial problems. The high response rate and survey engagement provide proof of concept that RDS and DCE can be implemented as web-based applications, with the potential for scale-up to produce nationally representative preference estimates.

In the initial phases of efforts to lessen the spread and mortality of the COVID-19 pandemic, countries around the world enacted nonpharmaceutical mitigation strategies such as social distancing, stay-at-home orders, closure of non-essential businesses, and mask use in public (Flaxman et al. 2020; Lyu and Wehby 2020; Medline et al. 2020; Teslya et al. 2020; The Economist 2020). Research suggested that doing so could save millions of lives and decrease burdens on overstretched health care systems (Walker et al. 2020a, b; Pei et al. 2020). Although vaccines are being administered in some countries, COVID-19 cases and mortality remain high (Anderson et al. 2020; Zimmer et al. 2020; Center for Systems Science and Engineering (CSSE) at Johns Hopkins University 2021). Therefore, nonpharmaceutical mitigation strategies will remain key to reducing disease spread and may be the 'new normal' while vaccination infrastructure is scaled up globally. However, the effectiveness of nonpharmaceutical mitigation strategies depends on population adherence, which in turn is partially driven by individual preferences. The impact of the COVID-19 pandemic on the quality of life and mental health of selected populations, such as health workers, has been previously studied; however, there are limited data on the quality-of-life impact and tradeoffs within the general population (Young et al. 2020; Zhang and Ma 2020).
To understand individual preferences for varying levels of mitigation strategies across a diverse population, discrete choice experiments (DCEs) can be combined with respondent-driven sampling to promote representative sampling of specific target populations. In a DCE, respondents are presented with a choice that consists of two or more discrete (i.e., mutually exclusive) scenarios with various combinations of alternatives. The respondents choose the scenario that best aligns with their preferences (Hensher et al. 2005; Train 2009). An example of a scenario is choosing whether to wear (or not wear) a mask in public during the COVID-19 pandemic. In the mask use example, alternatives could include wearing a mask 'none of the time', 'some of the time', or 'all of the time'. DCEs were originally developed for economic and market research and are an established tool for eliciting individual preferences (Ryan et al. 2001; de Bekker-Grob et al. 2012). Increasingly, DCEs have been used in health economics to inform healthcare decision making (de Bekker-Grob et al. 2015; Flynn 2010; Ghijben et al. 2014; Ryan et al. 2006; Wilson et al. 2014). Results from a DCE can be used to estimate which attributes increase or decrease utility, a concept used to model worth or value (Hensher et al. 2005; Train 2009). Because of the need to limit in-person contact during the COVID-19 pandemic, web-based DCEs can be used to survey participants remotely. To make the preference results representative of a diverse population, web-based respondent-driven sampling (RDS) can be used to target different demographic characteristics during recruitment. RDS is a probability sampling method that uses participant-driven referral to recruit hard-to-reach populations for which a sampling frame may not exist (e.g., persons engaged in sex work or persons who inject drugs) (Abdul-Quader et al. 2006; Bengtsson et al. 2012; Heckathorn 1997; Hequembourg and Panagakis 2019; Jennings Mayo-Wilson et al. 2020; Magnani et al. 2005; Salganik and Heckathorn 2004; Wang et al. 2005, 2007). Study samples recruited using RDS are usually more heterogeneous than samples recruited using other sampling strategies and, as a result, can be more generalizable to a population of interest (Kendall et al. 2008). RDS is traditionally done in person, and participants are issued unique referral coupons that are used to trace recruitment patterns (Heckathorn 2007; Jennings Mayo-Wilson et al. 2020). Due to COVID-19 stay-at-home orders and travel restrictions, web-based RDS can be used in place of in-person recruitment. Web-based RDS has been used to recruit populations that are hard to reach using in-person methods or that are well networked but hard to identify (e.g., persons with substance use disorder, in financial distress, or sexual minorities) (Bauermeister et al. 2012; Bengtsson et al. 2012; Hildebrand et al. 2015; Wejnert and Heckathorn 2008). Web-based RDS has also demonstrated an ability to overcome temporal and physical barriers to traditional in-person RDS by allowing participants to refer a peer using features on social networking sites, such as a wall post, status update, or personal message (Hildebrand et al. 2015). Despite the ability of DCEs to assess whether a scenario results in an increase or decrease in utility and of RDS to recruit diverse study samples, there is currently limited published evidence on the feasibility of using both techniques in combination.
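As background for the estimation approach used later in this paper, the DCE literature cited above (Hensher et al. 2005; Train 2009) analyzes such choices with a random-utility model. The following is a minimal textbook sketch rather than an equation taken from this study; x_{nj} denotes the vector of attribute levels of scenario j shown to respondent n:

U_{nj} = x_{nj}'\beta + \varepsilon_{nj}

Assuming independent type-I extreme value errors \varepsilon_{nj}, the probability that respondent n chooses scenario j from choice set C_n takes the conditional logit form

P_n(j) = \frac{\exp(x_{nj}'\beta)}{\sum_{k \in C_n} \exp(x_{nk}'\beta)}

A positive element of \beta corresponds to an attribute level that increases utility (is preferred), and a negative element to one that decreases it.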
Therefore, the primary objective of this paper is to demonstrate the feasibility, measured via the overall response rate, of using these two methodologies in combination. The secondary objective is to report preliminary findings on preferences for COVID-19 nonpharmaceutical mitigation strategies from a small sample of university students and staff members at four universities in three countries. Participants were recruited from universities in countries representative of three World Bank income groups: high income (U.S.), upper-middle income (Mexico), and lower-middle income (Kenya). Participants meeting the following criteria were eligible for inclusion: 18 years or older; staff member or enrolled student at Brown University, Purdue University, Moi University, or the National Institute of Public Health (INSP); resided in the same country as their university (the U.S., Kenya, or Mexico), regardless of legal status or citizenship, from April 2020 to June 2020 (the time frame in which mitigation measures were initially implemented in all three countries); and able to read English in the U.S., English or Kiswahili in Kenya, and Spanish in Mexico. Eligibility was confirmed using institution-provided emails and screening questions. This pilot study aimed to recruit approximately 60 participants (i.e., 10-20 individuals per university) using RDS methodology, as illustrated in Fig. 1, to provide an adequate sample to demonstrate feasibility. To begin RDS recruitment, five 'seed' individuals per institution were invited to complete the survey (wave 0, n ≤ 5 per site). Eligible seeds were 18 years or older, affiliated with one of the respective universities, and willing to invite 2 additional people in their network to participate in the survey. After survey completion, each seed was asked to invite 2 eligible recruits to complete the survey (wave 1, n ≤ 10 per site). A seed was considered productive if their invitees completed the survey, and unproductive if the seed or their invitees did not complete the survey. Emails were sent to recruit additional seeds to replace unproductive seeds. After completing the survey, wave 1 recruits were asked to invite another 2 eligible recruits (wave 2, n ≤ 20 per site). Participants were sent up to three email reminders if they had not responded to the survey link. With the exception of one site (INSP), participants received small incentives ($5-10) for survey completion in the form of a gift card, internet, or airtime. Incentives were provided for survey completion only (Brown University) or for both survey completion and successful recruitment (Moi University and Purdue University). Successful recruitment meant that the recruiter's recruit completed the survey; however, a decision not to recruit did not prevent a participant from receiving an incentive for survey completion. To reduce the potential for repeat enrollments, institutional emails were required for enrollment and tracked for duplications by survey programmers, and the distribution of incentives was tracked by administrative personnel at each institution. RDS process measures (Jennings Mayo-Wilson et al. 2020) were used to assess the RDS recruitment process (Table 1).
RDS process measures included: the individual's self-reported social network size (participants were asked 'How many currently-enrolled students or staff members are there that you know by name and that know you by name?'), the recruiter-recruit relationship (length of the relationship [years], how they met), the number of recruiter reminders to complete the survey, and the nature (friendly, aggressive, exciting, worrisome, other) of the invitation from their recruiter. Participants were also asked their willingness to recruit peers on a scale from 0 to 10 (0 = not willing, 10 = very willing). The DCE was designed to elicit participants' preferences for nonpharmaceutical COVID-19 mitigation strategies and to identify attributes that were associated with increases or decreases in utility. DCE creation was an iterative process based on the established literature, which aimed to ensure that attributes were generalizable across countries, described individual-level characteristics (e.g., mask ease of use, ability to work from home) rather than societal-level characteristics (e.g., capacity of indoor locations, stay-at-home orders), and were not so numerous as to impose a cognitive burden on survey respondents. The DCE was adapted from questions in the Understanding Society-UK Household Longitudinal Study COVID-19 Survey (University of Essex Institute for Social and Economic Research 2020). In addition to the DCE, participants completed a survey that assessed depression using the Patient Health Questionnaire-2 (PHQ-2) (Kroenke et al. 2003), health-related quality of life using the EQ-5D-3L (The EuroQol Group 1990), socioeconomic status, health (i.e., presence of chronic conditions), and sociodemographic characteristics. The survey was designed to take about 30 min to complete and was tested by the study investigators prior to finalization and distribution to participants. Figure 2 is an example of the DCE that was administered to participants. The DCE consisted of nine individual-level attributes based on experiences during the initial three months (April 2020-June 2020) of social distancing and stay-at-home orders: (1) risk of contracting COVID-19 due to frequency of sheltering-in-place, (2) frequency of mask use in public, (3) relationship problems, (4) mental health problems, (5) physical health problems, (6) problems performing daily activities, (7) financial problems, (8) level of support received, and (9) financial compensation received from the government (Table 2). These attributes represented both subjective and objective components of quality of life (Baker and Intagliata 1982; Caron 2012; Fleury et al. 2013) and were applicable across countries. Each attribute was described using 'levels' (Hensher et al. 2005). For example, the attribute 'reduction in COVID-19 risk due to days per week sheltering-in-place' consisted of three levels: 'high risk (0-2 days per week sheltering in place)', 'medium risk (3-4 days per week sheltering in place)', and 'low risk (5-7 days per week sheltering in place)' (Table 2). Eight of the attributes had three levels. One attribute, 'financial compensation received from the government', had 7 levels because of the wide variation in compensation across countries. The final DCE generated a large number of scenarios (S) per country (S = 3^8 × 7^1 = 45,927), which made a full factorial design infeasible.
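As a check on the scenario count just reported, the full-factorial design space can be enumerated directly. The short sketch below is illustrative only; the attribute names are abbreviations of the Table 2 attributes rather than the wording used in the survey.

from itertools import product

# Eight attributes with 3 levels each, plus government compensation with 7 levels.
levels = {
    "covid_risk": 3, "mask_use": 3, "relationship_problems": 3,
    "mental_health_problems": 3, "physical_health_problems": 3,
    "daily_activity_problems": 3, "financial_problems": 3,
    "support_received": 3, "government_compensation": 7,
}

# Every combination of levels is one full-factorial profile.
profiles = list(product(*(range(n) for n in levels.values())))
print(len(profiles))  # 45927 = 3**8 * 7, far too many for a full factorial design

A D-efficient partial factorial design, as described next, selects a much smaller subset of these profiles while preserving the ability to estimate main effects.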
Thus, we used the dcreate module (Hole 2015) in Stata SE (College Station, TX) to implement a D-efficient partial factorial design (Carlsson and Martinsson 2003). During the DCE, participants completed choice tasks in which they chose their preferred mitigation strategy from the randomly presented options. Each task comprised three choice profiles: Shelter in Place Situation A, Shelter in Place Situation B, or None (Fig. 2). The None option represented the pre-COVID-19 status quo (opt-out) option, which explicitly allowed for higher-risk preferences. Each profile comprised the nine attributes listed in Table 2. To ascertain preferences, participants were presented with tasks that contained different combinations of the levels of each of the nine attributes (Hensher et al. 2005). Participants were presented with 10 tasks, each answered twice, resulting in 20 choice tasks. The first task asked participants to choose the best option from the three choice profiles (Shelter in Place Situation A, Shelter in Place Situation B, or None), and the second task asked them to choose the best option from the remaining two choice profiles (best-best analysis) (Ghijben et al. 2014; Lancsar et al. 2017). Since the study had three alternatives, use of a best-best DCE allowed for full preference ranking of the choice sets (Ghijben et al. 2014; Lancsar et al. 2017). The number of tasks was chosen in alignment with the literature to prevent participant fatigue, limit cognitive overload, and ensure participants were able to consider all attributes (Coast and Horrocks 2007; Mangham et al. 2008; Clark et al. 2014). Participants received the survey via an email link sent from the study team. The survey was administered using Qualtrics (Seattle, WA) (Weber 2019). It was translated from English into Spanish and Kiswahili, and provided in English in the U.S., English or Kiswahili in Kenya, and Spanish in Mexico. Use of Qualtrics allowed for tracking of the recruiter-recruit relationship using automated identification numbers that identified the RDS lineage. Time stamps and minimum time requirements were programmed into the DCE to ensure participants were taking time to read the responses. The minimum sample size needed for a DCE to estimate main effects can be calculated from the number of choice tasks (t), the number of alternatives per task (a), and the largest number of levels for any attribute (c), using the rule of thumb N > 500c/(t × a) (de Bekker-Grob et al. 2015). The minimum required sample size for this DCE was 58 participants (solving with t = 20, a = 3, c = 7). Therefore, the final sample of 63 participants (with complete data) exceeded the minimum requirement. (For future research, larger sample sizes will be required to allow for subgroup analyses and heterogeneity.) A mixed rank-ordered logit model with normally distributed random parameters was used to estimate preferences for COVID-19 nonpharmaceutical mitigation strategies (Lancsar et al. 2017). The mixed rank-ordered logit is an extension of the conditional logit that takes into account ranked choices and is expressed as the product of logit formulas (Galárraga et al. 2020; Ghijben et al. 2014; Lancsar et al. 2017). Normally distributed random parameters were included to capture unobserved preference heterogeneity in choosing a social distancing scenario versus no social distancing (Galárraga et al. 2020; Lancsar and Louviere 2008).
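To spell out the 'product of logit formulas' referenced above: the rank-ordered (exploded) logit used in best-best analyses gives, for a respondent who ranks alternative j first and alternative k second out of the three profiles {j, k, l}, the following choice probability. This is the standard formulation from the cited literature (Train 2009; Lancsar et al. 2017), not an equation reproduced from this paper:

P(j \succ k \succ l) = \frac{\exp(x_j'\beta)}{\exp(x_j'\beta) + \exp(x_k'\beta) + \exp(x_l'\beta)} \times \frac{\exp(x_k'\beta)}{\exp(x_k'\beta) + \exp(x_l'\beta)}

In the mixed version, elements of \beta are treated as normally distributed random coefficients, and this probability is averaged (integrated) over their distribution.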
Since this was a best-best analysis, the first choice set contained data representing the three alternatives (i.e., Shelter in Place Situation A, Shelter in Place Situation B, or None), with the dependent variable (choice) = 1 for the first-best and = 0 for the remaining alternatives. A dummy variable controlling for block effects was included in the model to account for any effect of the survey block a participant answered on model results (Lancsar and Louviere 2008). To check robustness, and because of possible correlation between attributes, a Bonferroni-corrected significance threshold was used to test the statistical significance of the coefficients (VanderWeele and Mathur 2018). The Bonferroni-corrected threshold was calculated by dividing the nominal significance level (α = 0.05) by the number of tests/attributes: 0.05/9 = 0.0056. The distribution (level balance) of the nine attributes was checked across participants' first and second choices to ensure that the properties of the logit model were satisfied (Albert and Anderson 1984; Cook et al. 2018) and that the frequencies of the attribute levels exceeded the rule of thumb of 10 events per variable (EPV) for logistic regression (de Jong et al. 2019). RDS process measures were reported descriptively using means, standard deviations, and percentages. All analyses were performed using Stata SE (College Station, TX, version 16). Data collection was conducted from September through November 2020 and lasted an average of 17.5 days at each institution. A total of 99 people were invited to complete the survey, of whom 71 started the DCE (72% overall response rate, Table 3). Of those who started, 63 of the 71 completed all tasks in the DCE (89% engagement rate). For DCE completers, mean age was 26.4 years (SD 7.6), 64% were assigned female at birth, and 49% did not have a partner (Table 4). Approximately 65% of participants worked full or part time, and 56% were unable to telecommute or work from home. The participants' preferences are shown in Table 5. Twenty seeds were recruited, five per institution. Half (10) of these original seeds were unproductive, meaning the seed or their invitee did not complete the survey, and 8 seeds were replaced (Appendix Table 6). The final sample consisted of 28 seeds: 17 were productive (10 original seeds and 7 replacement seeds) and 11 were unproductive. Data on RDS process measures were collected from the 63 individuals who completed the survey (Table 1). The mean network size of participants was 57 members (standard deviation [SD] 72). Most participants were friends with their recruiter (78%), had met them at school (57%), and had known them for an average of 3.8 years (SD 4.4). Approximately 86% of participants described the invitation from their recruiter as "friendly" and received one follow-up reminder to complete the survey. No participants reported safety concerns related to their participation in the study. Mean willingness to share the survey with others was 8.1 (SD 3.0) on a scale from 0 to 10 (0 = not willing, 10 = very willing). The DCE results, preferences for COVID-19 nonpharmaceutical mitigation strategies, are reported in Table 5. The attributes were evenly distributed across first (Appendix Table 7) and second choices (Appendix Table 8), meaning that the DCE was balanced and the properties of the logit model were satisfied.
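The first- and second-choice datasets summarized in Appendix Tables 7 and 8 reflect the exploded data layout described above (choice = 1 for the first-best among three alternatives, then for the best of the remaining two). For readers implementing a similar best-best analysis, the following is an illustrative sketch with hypothetical column names, not the study's own code:

import pandas as pd

# One respondent ranked Situation A first and Situation B second (None last).
task = pd.DataFrame({
    "alternative": ["A", "B", "None"],
    "rank": [1, 2, 3],  # 1 = first-best, 3 = least preferred
})

# Stage 1: all three alternatives; choice = 1 marks the first-best.
first = task.assign(stage=1, choice=(task["rank"] == 1).astype(int))

# Stage 2: drop the first-best; choice = 1 marks the best of the remaining two.
second = (task[task["rank"] > 1]
          .assign(stage=2)
          .assign(choice=lambda d: (d["rank"] == 2).astype(int)))

exploded = pd.concat([first, second], ignore_index=True)
print(exploded)  # the stacked pseudo-choice sets used by a rank-ordered logit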
The coefficients on reduction in COVID-19 risk and financial compensation from the government were statistically significant and positive, meaning that participants preferred to shelter in place more days per week in order to have a lower COVID-19 risk (0.230) and to receive financial compensation from the government (0.097). Attributes that were less preferred (statistically significant, with negative coefficients) included relationship problems (−0.239), mental health problems (−0.726), problems performing daily activities (−0.295), financial problems (−0.520), and physical health problems (−0.436) due to sheltering in place. The coefficient of the random intercept ('constant' in Table 5) was statistically different from zero (3.32, p < 0.001), indicating that there was significant heterogeneity in preference for social distancing scenarios, but that participants on average preferred social distancing scenarios over no social distancing. Since the block effect ('block * constant' in Table 5) was not significant (p = 0.556), the randomized version of the survey (i.e., the variation of the levels) that a participant responded to had no effect on preference answers or model results. When the Bonferroni-corrected threshold (p = 0.0056) was applied, reduction in COVID-19 risk due to higher frequency of sheltering-in-place and relationship problems were no longer statistically significant predictors of respondents' preferences for COVID-19 nonpharmaceutical mitigation strategies. Due to small sample sizes, comparisons between countries could not be calculated. (Similarly, heterogeneity by subgroup, e.g., race/ethnicity, age, gender, or region, will be the subject of future research with larger samples.) This study used web-based respondent-driven sampling (RDS) to recruit participants from four universities in three countries to complete a web-based discrete choice experiment (DCE) on preferences for COVID-19 nonpharmaceutical mitigation strategies. The overall response rate was 72%, and engagement with the DCE was approximately 89%; this compares favorably with response rates reported for both RDS and DCE methods (Watson et al. 2017). Participants preferred strategies that resulted in lower COVID-19 risk (e.g., more days per week sheltering-in-place) and financial compensation from the government. The former result may be driven in part by the fact that 55.6% of respondents reported not being able to work from home or telecommute; the latter may be explained by the fact that about a third of respondents reported working a full-time job while also attending school full-time, which could make financial compensation a particularly valued strategy. Participants also preferred scenarios that caused fewer health (mental and physical), interpersonal, or financial problems. Interpreting DCE coefficients that are not statistically significant (i.e., neither more nor less preferred) presents a challenge. For example, the lack of strong preferences for or against mask use may indicate ambivalence about mask wearing, or it may reflect heterogeneity across respondents' preferences for mask use (e.g., half of the respondents may be strongly against mask use while the other half are strongly supportive). Given the small sample size (N = 63), these are preliminary findings. Nevertheless, to our knowledge, this is the first instance of RDS and DCE being used in combination.
Use of these methodologies can increase the external validity of an experiment by ensuring that preference results are more generalizable to a specific target population (e.g., college students, university staff members, or racial/ethnic minorities) regardless of what country they are in. DCEs assessing population-level interventions can lack generalizability because their samples are commonly recruited from persons receiving services in clinics, databases of willing research participants, disease registries, or convenience samples, or by using time-and-place or snowball sampling (Galárraga et al. 2014; Ghijben et al. 2014; Hobden et al. 2019; Lokkerbol et al. 2019; Sharma et al. 2020; Vallejo-Torres et al. 2018). By using well-networked individuals, RDS enabled the creation of a heterogeneous study population representing the perspectives of diverse participants. Heterogeneity is especially important in the context of COVID-19 mitigation, since impact varies widely by country and within countries, and relationships have emerged between COVID-19 morbidity/mortality and race, age, socioeconomic status, and health. Lessons learned from this pilot can be used to collect useful and more representative data at larger scale. Prior engagement with 'seeds', use of small incentives (gift cards), and broad inclusion criteria contributed to successful pilot implementation. While there were decreases in the number of subsequent recruits in waves 1 and 2, study sites that provided participants with incentives both to complete the survey and to recruit additional participants were the most successful. Such recruitment incentives are traditionally a feature of RDS and will be used when scaling up the study. Additional challenges included managing computer timeout issues during peak times of internet use (Kenya) and misunderstanding of referral expectations (Mexico). Because the studies were rolled out at different times (Brown University and Moi University were first), we were able to refine the directions and expectations for the 'seeds', which may explain the more successful wave 1 and wave 2 recruitment at Purdue University. Future work should explore heterogeneity in preferences by field of study or occupation: responses may differ between students or staff members in medicine, pharmacy, or public health and those in business, administration, culinary arts, or other fields. While most DCEs have an average of 7 attributes and the literature suggests 10 attributes as ideal (de Bekker-Grob et al. 2012; Mangham et al. 2008), our DCE had 9 attributes. In future versions of this DCE, 'physical health problems' may be removed, since 'problems performing daily activities' is an established quality-of-life metric used in the EQ-5D (The EuroQol Group 1990) and physical health problems would likely preclude performance of daily activities. Also, 'level of support received' may be removed, since the attributes related to relationship, mental health, and financial problems encompass similar themes. We created a DCE with attributes that are globally relevant and applicable to participants in high-, middle-, and lower-middle-income countries. Additionally, we were able to successfully carry out recruitment and administration of the DCE over the internet, and we believe our current infrastructure can be reused to recruit much larger and more diverse samples. Response rates were generally high, and not drastically lower at sites that did not issue incentives at all or that only issued incentives for survey completion.
The topic of COVID-19 remains highly relevant and may stay so for years to come. Furthermore, these methods can also be used to examine how other public health interventions affect preferences and risk-mitigation behaviors. The final sample size for this pilot was slightly above the minimum requirement for a DCE, but the results are not powered to be disaggregated by country (or other characteristics) and imply associations only at the general level. Recall bias is possible, since we asked participants in September-November 2020 to recall preferences and feelings from April-June 2020. It is possible that a participant's current emotional state could influence their preferences (Lerner et al. 2015). The ever-changing COVID-19 news cycle, school and administrative work, and other concurrent global events may have influenced how participants felt and responded to our pilot. Participants needed access to an internet-enabled device to participate in the pilot; therefore, when expanding this study, steps need to be taken to ensure individuals without internet access can participate. Our timely and relevant pilot project on nonpharmaceutical COVID-19 mitigation preferences among university students and staff members has shown that using web-based respondent-driven sampling (RDS) to recruit participants for a web-based discrete choice experiment (DCE) from multiple sites across three countries is feasible and implementable. The combination of these techniques is promising because it can enable recruitment of hard-to-reach populations that are underrepresented in sampling frames, allow higher-risk populations to participate in research, and be completed anywhere in the world with access to the internet or a smartphone. Appendix Table 7 shows the level balance of the nine attributes included in the DCE, across choice categories, for the participants' first choice. The 'Total' column shows the frequency with which the particular level of an attribute appeared in the experiment. The column 'Chosen (Choice = 1)' shows how many times the particular level of an attribute was chosen, and the column 'Not Chosen (Choice = 0)' shows how many times the particular level of an attribute was not chosen. For example, for the attribute 'COVID-19 risk due to frequency of sheltering-in-place', the attribute levels (high, medium, low) each appeared about one third of the time, showing that the DCE was balanced. For the 'High (0-2 days)' risk due to sheltering in place category, the option was presented 894 times ('Total' column) in the experiment, chosen 114 times ('Chosen (Choice = 1)' column), and not chosen 780 times ('Not Chosen (Choice = 0)' column). Appendix Table 8 shows the level balance of the nine attributes included in the DCE, across choice categories, for the participants' second choice. The 'Total' column shows the frequency with which the particular level of an attribute appeared in the experiment. The column 'Chosen (Choice = 1)' shows how many times the particular level of an attribute was chosen, and the column 'Not Chosen (Choice = 0)' shows how many times the particular level of an attribute was not chosen. For example, for the attribute 'COVID-19 risk due to frequency of sheltering-in-place', the attribute levels (high, medium, low) each appeared about one third of the time, showing that the DCE was balanced.
For the 'High (0-2 days)' risk due to sheltering in place category, the option was presented 894 times ('Total' column) in the experiment, chosen 270 times ('Chosen (Choice = 1)' column), and not chosen 624 times ('Not Chosen (Choice = 0)' column).

References

Implementation and analysis of respondent driven sampling: lessons learned from the field
On the existence of maximum likelihood estimates in logistic regression models
Challenges in creating herd immunity to SARS-CoV-2 infection by mass vaccination
Quality of life in the evaluation of community support systems
Innovative recruitment using online networks: lessons learned from an online study of alcohol and other drug use utilizing a web-based, respondent-driven sampling (webRDS) strategy
Implementation of web-based respondent-driven sampling among men who have sex with men in Vietnam
Design techniques for stated preference methods in health economics
Predictors of quality of life in economically disadvantaged populations in Montreal
Discrete choice experiments in health economics: a review of the literature
Developing attributes and levels for discrete choice experiments using qualitative methods
A warning on separation in multinomial logistic models
Sample size requirements for discrete-choice experiments in healthcare: a practical guide. The Patient Patient Cent
Sample size considerations and predictive performance of multinomial logistic prediction models
Covid-Response Team Imperial College: Estimating the effects of nonpharmaceutical interventions on COVID-19 in Europe
Predictors of quality of life in a longitudinal study of users with severe mental disorders. Health Qual
Using conjoint analysis and choice experiments to estimate QALY values
iSAY (incentives for South African youth): Stated preferences of young people living with HIV
Willingness-to-accept reductions in HIV risks: conditional economic incentives in Mexico. The Eur. J. Health Econ. HEPAC Health Econ
Preferences for oral anticoagulants in atrial fibrillation: a best-best discrete choice experiment
Respondent-driven sampling: a new approach to the study of hidden populations
Extensions of respondent-driven sampling: analyzing continuous variables and controlling for differential recruitment
Applied Choice Analysis: A Primer
Maximizing respondent-driven sampling field procedures in the recruitment of sexual minorities for health research
Potential and challenges in collecting social and behavioral data on adolescent alcohol norms: comparing respondent-driven sampling and web-based respondent-driven sampling
Oncology patient preferences for depression care: a discrete choice experiment
DCREATE: Stata module to create efficient designs for discrete choice experiments
Lessons learned from using respondent-driven sampling (RDS) to assess sexual risk behaviors among Kenyan young adults living in urban slum settlements: a process evaluation
An empirical comparison of respondent-driven sampling, time location sampling, and snowball sampling for behavioral surveillance in men who have sex with men, Fortaleza, Brazil
The patient health questionnaire-2: validity of a two-item depression screener
Discrete choice experiments: a guide to model specification, estimation and software
Conducting discrete choice experiments to inform healthcare decision making
Emotion and decision making
A discrete-choice experiment to assess treatment modality preferences of patients with anxiety disorder
Shelter-in-place orders reduced COVID-19 mortality and reduced the rate of growth in hospitalizations
Review of sampling hard-to-reach and hidden populations for HIV surveillance
How to do (or not to do) … Designing a discrete choice experiment for application in a low-income country
Evaluating the impact of stay-at-home orders on the time to reach the peak burden of Covid-19 cases and deaths: Does timing matter?
Differential effects of intervention timing on COVID-19 spread in the United States
Use of discrete choice experiments to elicit preferences
Using discrete choice experiments to estimate a preference-based measure of outcome: an application to social care for older people
Sampling and estimation in hidden populations using respondent-driven sampling
Heterogeneity in individual preferences for HIV testing: a systematic literature review of discrete choice experiments
Impact of self-imposed prevention measures and short-term government-imposed social distancing on mitigating and delaying a COVID-19 epidemic: a modelling study
The Economist: Covid-19 is now in 50 countries, and things will get worse. accessed January
The EuroQol Group: EuroQol - a new facility for the measurement of health-related quality of life
Discrete choice methods with simulation
Discrete-choice experiment to analyse preferences for centralizing specialist cancer surgery services
Some desirable properties of the Bonferroni correction: Is the Bonferroni correction really so bad?
The global impact of COVID-19 and strategies for mitigation and suppression
The impact of COVID-19 and strategies for mitigation and suppression in low- and middle-income countries
Respondent-driven sampling to recruit MDMA users: a methodological assessment
Respondent-driven sampling in the recruitment of illicit stimulant drug users in a rural setting: findings and technical issues
Discrete choice experiment response rates: a meta-analysis
A step-by-step procedure to implement discrete choice experiments in Qualtrics
Web-based network sampling: efficiency and efficacy of respondent-driven sampling for online research
Patient centered decision making: use of conjoint analysis to determine risk-benefit trade-offs for preference sensitive treatment choices
Health care workers' mental health and quality of life during COVID-19: results from a mid-pandemic, national survey. Psychiatr
Impact of the COVID-19 pandemic on mental health and quality of life among local residents in Liaoning Province, China: a cross-sectional study
Coronavirus vaccine tracker

We would like to thank the members of the study team that made this work possible. Computer-assisted DCE programming and data management were done by Timothy Souza, Suzanne Sales, and Michelle Loxley at Brown University. Project management and research assistance were provided by Marta Wilson-Barthes at Brown, and project management was provided by Kathryn Rodenbach at Purdue. Design and analytical guidance was provided by a team of advisers including: Juddy Wachira, Stavroula