title: Lessons Learned from Conducting Internet-Based Randomized Clinical Trials During a Global Pandemic
authors: Pullen, Matthew F; Pastick, Katelyn A; Williams, Darlisha A; Nascene, Alanna A; Bangdiwala, Ananta S; Okafor, Elizabeth C; Hullsiek, Katherine Huppler; Skipper, Caleb P; Lofgren, Sarah M; Engen, Nicole; Abassi, Mahsa; McDonald, Emily G; Lee, Todd C; Rajasingham, Radha; Boulware, David R
date: 2020-12-28
journal: Open Forum Infect Dis
DOI: 10.1093/ofid/ofaa602

As the SARS-CoV-2 pandemic evolved, it became apparent that well-designed, rapidly conducted randomized clinical trials were urgently needed. However, traditional clinical trial design presented several challenges. Notably, disease prevalence initially varied by time and region, and pockets of outbreaks shifted geographically over time. Coupled with the occupational hazard of in-person study visits, timely recruitment would have proven difficult in a traditional in-person clinical trial. Our team therefore opted to launch nationwide internet-based clinical trials using patient-reported outcome measures. In total, 2795 participants were recruited using traditional and social media, with screening and enrollment performed via an online data capture system. Follow-up surveys and survey reminders were similarly managed through this online system, with manual participant outreach in the event of missing data. Here, we present a narrative of our experience running internet-based clinical trials and provide recommendations for the design of future clinical trials during a world pandemic.

Background: In late 2019, as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was spreading throughout China, there was significant interest in rapidly repurposing existing antiviral medications (e.g., chloroquine, hydroxychloroquine, and lopinavir/ritonavir) for treatment. A primary challenge was to create robust study protocols that captured high-quality data while remaining agile enough to adapt to the rapidly changing scientific landscape of a global pandemic. Initially, no coronavirus disease 2019 (COVID-19) clinical trials focused on outpatients, despite this being a vital population to target to reduce the number of hospitalizations from SARS-CoV-2 infection. This was likely due to several pandemic-related factors that made traditional clinical trial models impractical. On March 9, 2020, our group at the University of Minnesota began two randomized clinical trials to address SARS-CoV-2 post-exposure prophylaxis and early treatment, enrolling our first trial's initial participant on March 17, just 8 days after our initial meeting [1, 2]. Within another 8 days, the trials had gone international, with Canadian partners joining both. We later launched a third trial, focused on pre-exposure prophylaxis (PrEP), on April 6. All three studies employed an internet-based trial format, recruiting 2795 participants in total [1-3]. Internet-based clinical trial formats are increasingly gaining attention due to their ability to reach large, geographically distant audiences without expanding to additional physical sites [4-6]. This trial design can overcome many of the barriers to performing traditional clinical trials in a pandemic setting. Here, we present the unique challenges our group faced using this trial format and how they were addressed.
While our experience can inform future pandemic trial designs, we also hope that many of the unique advantages of this study methodology can be applied to pragmatic studies in other areas where knowledge is needed.

As COVID-19 cases grew exponentially in the United States in March 2020, the need for rapid exploration of possible therapeutics became apparent. Small observational studies had been conducted in China and France, but randomized controlled trials were lacking. To recruit a large number of participants quickly, we needed to reach outside our local region. Further complicating rapid trial development were the closure of in-person clinics due to the pandemic and a shortage of the personal protective equipment (PPE) that would be required for in-person screening of outpatient participants. Given this, we opted to conduct our trials virtually, employing an internet-based format to recruit participants and direct them to an online survey portal [7]. Following enrollment, eligible participants were randomized to either hydroxychloroquine or placebo, and study medication was sent to them by overnight courier. Not only did this avoid in-person interactions with participants (and the risk of occupational exposure to COVID-19), it permitted rapid nationwide recruitment. Trial endpoints were designed as patient-reported outcomes.

The Research Electronic Data Capture (REDCap) digital survey tool was used to capture self-reported participant survey data. We designed the surveys to guide participants toward accurate responses. Branching logic and field tags in REDCap were used to dynamically hide or show fields based on user input, establish upper and lower bounds on numeric fields, and capture the date of survey completion without the participant entering it manually. Every attempt was made to ensure participants clearly understood each question, using concise, plain English (or French in Quebec). Study-specific terms were defined at the beginning of the screening survey, and the wording of follow-up survey questions was chosen to maximize participants' understanding even if a prior follow-up survey was missed. The online survey format allowed us to make minor revisions as needed, often prompted by participant inquiries. All of our surveys and automated features were designed and tested in 5 days; further testing with non-medical persons and patient partners likely would have identified clarity issues earlier.

As the global understanding of SARS-CoV-2 increased, adjustments were needed to our inclusion criteria and directed questioning. These changes most often related to what constituted a high-risk exposure or which symptoms were considered consistent with infection following an exposure. Examples were anosmia and gastrointestinal symptoms, both of which were sporadically reported during the early outbreak in China but became clearly associated with COVID-19 several months later. The U.S. clinical case definition was not standardized until April 6, 2020 [8], and CDC guidance regarding healthcare worker risk evolved multiple times during the trial. Figure 1 provides a timeline of milestones during our trials' development and progress, from project launch to publication.
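To make the survey configuration described above concrete, the sketch below generates a fragment of a REDCap data dictionary showing a branching-logic field, a bounded numeric field, and an automatically captured completion date. The field names are hypothetical illustrations, not the actual trial instruments; the column headers and the @TODAY and @HIDDEN-SURVEY action tags follow standard REDCap data dictionary conventions, but the exact rows here are an assumption.

```python
# Minimal sketch: emit a REDCap data dictionary fragment (CSV) illustrating
# branching logic, numeric bounds, and automatic date capture.
# Field names are hypothetical examples, not the actual trial instruments.
import csv

HEADERS = [
    "Variable / Field Name", "Form Name", "Section Header", "Field Type",
    "Field Label", "Choices, Calculations, OR Slider Labels", "Field Note",
    "Text Validation Type OR Show Slider Number", "Text Validation Min",
    "Text Validation Max", "Identifier?",
    "Branching Logic (Show field only if...)", "Required Field?",
    "Custom Alignment", "Question Number (surveys only)", "Matrix Group Name",
    "Matrix Ranking?", "Field Annotation",
]

rows = [
    # Yes/no gate question.
    {"Variable / Field Name": "hcw", "Form Name": "screening",
     "Field Type": "yesno", "Field Label": "Are you a healthcare worker?"},
    # Shown only when the gate answer is yes; integer bounded 0-14.
    {"Variable / Field Name": "exposure_days", "Form Name": "screening",
     "Field Type": "text",
     "Field Label": "Days since your highest-risk exposure",
     "Text Validation Type OR Show Slider Number": "integer",
     "Text Validation Min": "0", "Text Validation Max": "14",
     "Branching Logic (Show field only if...)": "[hcw] = '1'"},
    # Completion date captured automatically and hidden on the survey.
    {"Variable / Field Name": "screen_date", "Form Name": "screening",
     "Field Type": "text", "Field Label": "Screening date",
     "Text Validation Type OR Show Slider Number": "date_ymd",
     "Field Annotation": "@TODAY @HIDDEN-SURVEY"},
]

with open("data_dictionary_fragment.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=HEADERS, restval="")
    writer.writeheader()
    writer.writerows(rows)
```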
To facilitate screening, we created a survey that prompted users to provide data relevant to our inclusion and exclusion criteria. A calculated field, hidden from participants, automatically determined eligibility based on their responses. Once the screening survey was completed, eligible participants were automatically sent an email containing a link to the enrollment survey; those who were ineligible were also notified by email. Emails were sent via the REDCap Automated Invitation system. Sending the enrollment survey only to eligible participants provided two benefits: (1) it required participants to complete and submit the screening survey before learning whether they qualified, preventing them from changing answers to alter their eligibility before submission, and (2) it verified that enrolled participants had a valid email address that could receive automated emails from the REDCap system, which was vital for complete collection of follow-up data. Participants needed to be warned to check their spam/junk folders, as some spam filters erroneously blocked REDCap emails even when sent from university email accounts.

Although screening and enrollment were automated, we instituted a manual quality control process to prevent randomizing ineligible participants. The most common reason for not randomizing a screened participant was an incomplete enrollment form, most often an incomplete mailing address for the study medication shipment. If contacting the participant by email did not resolve the issue, they were not randomized. Similarly, participants who completed the screening survey more than once (identified by use of the same email or mailing address) were contacted; if no adequate explanation was given for the duplicate, they were not randomized. Even though many of our exclusion criteria were not publicly available to potential enrollees, we needed to take precautions against participants who screened multiple times to "fish" for variables they could change on subsequent screening surveys to alter their eligibility. In one extreme case, an individual completed the screening survey 16 times over an hour, changing only one or two variables each time. Another example of record duplication involved a for-profit trial recruitment company completing our screening survey multiple times to probe our internal survey logic. Having a standard operating procedure (SOP) for carefully reviewing each new enrollment is key to minimizing these issues (Table 1).

As the COVID-19 pandemic evolved rapidly, the inclusion and exclusion criteria were updated as necessary between March 17 and April 24, in response to additional FDA-mandated exclusion criteria or to newly recognized risks. One example was expanding the definition of healthcare workers to include first responders in late March, with a final change in late April to "occupational exposure." As updates were made to our algorithm, we created new calculated REDCap variables to determine eligibility and updated the automated invitation logic. In some situations, the data collected were not as clear-cut as they could have been, due largely to evolving case definitions, the availability of testing, and public perception of the trials affecting participation. Agility is required to make rapid adjustments as the circumstances of a pandemic change.
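As a minimal sketch of the two automated screening checks described above, the following Python/pandas fragment computes an eligibility flag from screening responses (mirroring the hidden REDCap calculated field) and flags repeat screeners by duplicate email or mailing address for manual review. All field names and criteria shown are hypothetical stand-ins, not the actual trial logic.

```python
# Minimal sketch of two automated screening checks described above.
# Field names and criteria are hypothetical, not the actual trial logic.
import pandas as pd

screens = pd.read_csv("screening_export.csv")  # REDCap flat export (assumed)

# (1) Eligibility flag, mirroring the hidden REDCap calculated field.
def is_eligible(row: pd.Series) -> bool:
    return (
        row["age"] >= 18
        and row["days_since_exposure"] <= 4   # hypothetical exposure window
        and row["hospitalized"] == 0
        and row["contraindication"] == 0
    )

screens["eligible"] = screens.apply(is_eligible, axis=1)

# (2) Flag repeat screeners by normalized email or mailing address so staff
# can review possible "fishing" for eligibility before randomization.
screens["email_norm"] = screens["email"].str.strip().str.lower()
screens["address_norm"] = (
    screens["mailing_address"].str.strip().str.lower()
    .str.replace(r"\s+", " ", regex=True)
)
dupes = screens[
    screens.duplicated("email_norm", keep=False)
    | screens.duplicated("address_norm", keep=False)
]
dupes.to_csv("duplicates_for_manual_review.csv", index=False)
print(f"{screens['eligible'].sum()} eligible; {len(dupes)} flagged for review")
```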
Our recruitment strategy focused on spreading awareness of the trials through traditional and social media. Initially, recruitment ran through a study email address: incoming emails received an automated response containing a participant information sheet describing the trial and a URL link to the REDCap screening survey. We later created a University of Minnesota-supported website for our project (https://covidpep.umn.edu) to provide information about the trial for potential participants and link to our screening survey. Team members promoted this website in social media posts and mentioned it in most interviews in which they participated. We also maintained a "call schedule" for monitoring study-related email addresses, aiming to quickly answer participant questions and feedback. In several instances, this allowed us to clarify whether someone was indeed eligible for our trials.

Using social media to spread knowledge of the trials created unique challenges. We attempted to advertise on a prominent social media platform and a search engine using geographically targeted advertisements based on emerging COVID-19 hotspots. However, their algorithms rejected our ads as promoting a commercial product, and attempts to reach representatives at these companies were fruitless. Word-of-mouth dissemination via social media produced mixed results. Posts in state-specific physician groups and broader healthcare worker groups were useful, particularly for our PrEP trial, which focused on persons in these professions. Social media was less effective in non-healthcare groups. For example, two weeks into our study, roughly 2600 screening surveys had been completed. Unbeknownst to the study team, an inaccurate graphic (claiming the trial was giving out free COVID-19 medication) had spread via WeChat, a popular social media platform in the Chinese American community. As a result, over 6000 screening surveys were entered over the following 48 hours, creating a large amount of data that needed to be parsed for accuracy and legitimacy. Fortunately, most of these screening surveys were left incomplete or were screened out by automated algorithms (~99%). This type of social media "enroll to get free hydroxychloroquine (or placebo!)" attention was, overall, unhelpful. Figures 2a and 2b provide a timeline of significant external events in relation to daily screening and enrollment numbers.

One component of recruitment we did not focus on, largely due to time, was creating recruitment materials in non-English languages or materials targeted at minority communities. In Canada, where all study documents were translated into French, the bilingual materials greatly increased recruitment in the French-speaking province of Québec. Efforts to translate important study documents into other languages can both aid recruitment and create a more generalizable study population.

Overall, our efforts led to 7139 completed screening surveys by the end of the PEP/PET trials, plus an additional 7081 incomplete screening surveys from people who started but did not finish. Of the completed screening surveys, 18.4% (1312/7139) resulted in a participant being enrolled and randomized.
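Returning to the email-based recruitment channel that opened this section, the sketch below shows one way the automated response could be implemented: polling an inbox and replying to each new message with the information sheet and the screening survey link. The host names, credentials, URL, and attachment path are placeholders; the actual trial may well have used a mail-server rule rather than a script.

```python
# Minimal sketch of a recruitment autoresponder: reply to each unseen email
# with the participant information sheet and the screening survey link.
# Host names, credentials, URL, and attachment path are placeholders.
import imaplib
import smtplib
from email import message_from_bytes
from email.message import EmailMessage

SCREEN_URL = "https://redcap.example.edu/surveys/?s=XXXXXX"  # placeholder

REPLY_BODY = (
    "Thank you for your interest in the COVID-19 trial.\n"
    "A participant information sheet is attached. To see whether you are\n"
    f"eligible, please complete the screening survey: {SCREEN_URL}\n"
)

# Collect the senders of all unread messages in the study inbox.
with imaplib.IMAP4_SSL("imap.example.edu") as imap:
    imap.login("studyinbox@example.edu", "app-password")
    imap.select("INBOX")
    _, data = imap.search(None, "UNSEEN")
    senders = []
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")  # also marks message seen
        msg = message_from_bytes(msg_data[0][1])
        senders.append(msg["From"])

# Send one templated reply, with the info sheet attached, to each sender.
with smtplib.SMTP_SSL("smtp.example.edu") as smtp:
    smtp.login("studyinbox@example.edu", "app-password")
    for sender in senders:
        reply = EmailMessage()
        reply["From"] = "studyinbox@example.edu"
        reply["To"] = sender
        reply["Subject"] = "COVID-19 trial: information and screening link"
        reply.set_content(REPLY_BODY)
        with open("participant_info_sheet.pdf", "rb") as f:
            reply.add_attachment(f.read(), maintype="application",
                                 subtype="pdf",
                                 filename="participant_info_sheet.pdf")
        smtp.send_message(reply)
```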
Having an existing relationship with an academic pharmacy at our institution allowed us to quickly arrange for the assignment and distribution of study medications Monday through Saturday. This required a substantial effort from the pharmacy, so much so that when we launched our PrEP trial, we recruited a commercial pharmacy to assist with medication management. The study team transmitted new prescriptions for blinded study medicine to the pharmacy daily. Randomization was performed at the pharmacy, with a log recording the randomization sequence. Assigned prescriptions were filled by the pharmacy team, placed into sealed opaque brown bags for further blinding, and picked up by our study coordinator for packaging and shipment. In the early weeks of our trials, we did not have a dedicated team member for these tasks, instead rotating duties among team members; efficiently managing medication shipments requires dedicated personnel as recruitment increases.

Only one-third of participants completed enrollment surveys on weekdays between 8 AM and 4 PM. Participants who enrolled outside of daytime work hours (44.4%, 583/1312) or during the weekend (22.3%, 292/1312) typically experienced longer delivery times. The median time from initial enrollment survey completion to delivery was 36 hours (interquartile range [IQR], 23.7 to 42.4 hours). Figure 3 provides the frequencies of estimated shipping times, with variation driven by the time of day or day of the week at which the participant completed their initial enrollment survey. While waiting until the next working day to enroll someone would have artificially decreased the measured shipping time, it would simply have delayed time to study entry. In the post-exposure prophylaxis trial [1], the total time from reported high-risk exposure to receipt of study medicine was a median of 3 days (IQR, 2 to 4), very similar to the Barcelona trial's median of 4 days (IQR, 3 to 6 days) [9]. The ability to enroll in a research study during non-traditional hours likely increases the number of persons able to participate. This is especially true for busy healthcare workers, who comprised two-thirds of participants in our studies. Planning study workflow to account for after-hours screening and/or enrollment is essential.

Delivering medication via courier created a few challenges. In the USA, shipment was handled via FedEx and their Ship Manager software, which enabled batch processing of shipping labels based on data exported from REDCap. In Canada, shipping was donated by Purolator Inc., which likewise used data exported from REDCap in their software. Study medicines were generally shipped overnight. We asked participants where they would be at 10:30 AM the next day, which might be home, work, or another address (e.g., a hotel while quarantined). Our Day 1 survey was sent at ~11 AM, shortly after the anticipated delivery time, to verify delivery and whether the participant had taken their first dose. When deliveries went astray (~2%), troubleshooting with the courier usually allowed us to arrange re-delivery. Though rare, a few packages went missing (0.2%, 3/1312) or were undeliverable and returned.

We used REDCap to send automated emails on a predefined schedule with links to follow-up surveys. Consistent with the pragmatic design, follow-up surveys were concise and focused on collecting only current symptoms, symptom severity on a 10-point visual analogue scale, hospitalization, and side effects. Additionally, medication adherence was asked at completion of study medicine (day 5), and participants were asked to guess their randomization assignment to assess adequacy of blinding (day 14).
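As a sketch of the batch-label step described above, the following pulls newly randomized participants from a REDCap export and writes a CSV suitable for import into courier batch-shipping software. The field names and the import layout are hypothetical; FedEx Ship Manager's actual import template would dictate the real columns.

```python
# Minimal sketch: build a courier batch-import CSV from a REDCap export of
# newly randomized participants. Column names are hypothetical placeholders;
# the courier's real import template would dictate the exact layout.
import pandas as pd

enrolled = pd.read_csv("enrollment_export.csv", parse_dates=["enroll_time"])

# Ship only participants randomized since the last batch and not yet labeled.
to_ship = enrolled[(enrolled["randomized"] == 1)
                   & (enrolled["label_printed"] == 0)]

batch = pd.DataFrame({
    "RecipientName": to_ship["full_name"],
    "Address1": to_ship["ship_address"],
    "City": to_ship["ship_city"],
    "State": to_ship["ship_state"],
    "PostalCode": to_ship["ship_zip"],
    "Phone": to_ship["phone"],
    "ServiceType": "PRIORITY_OVERNIGHT",  # overnight shipping, per protocol
    "Reference": to_ship["record_id"],    # study ID for shipment tracking
})

batch.to_csv("courier_batch_import.csv", index=False)
print(f"Prepared {len(batch)} shipping labels")
```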
These concise follow-up surveys took ~2 minutes to complete, and approximately 75% were completed without additional prompting. Tracking down the missing surveys to minimize loss to follow-up proved more difficult and was where the majority of labor was expended. We created reports in REDCap listing contact information for participants with missing surveys, allowing us to follow up with them specifically. Assume there will be incomplete surveys, and develop an SOP for participant follow-up utilizing a predetermined number of voice calls, SMS messages, and emails. We primarily used text and email, believing these to be the least bothersome to participants; however, phone calls resulted in a high rate of data completion, especially on weekends. We developed pre-written scripts, including voicemail messages, to ensure consistent information across calls. Calls also gave participants a more direct means of asking questions, which often allowed us to clarify issues that could otherwise have resulted in loss to follow-up. SMS messaging was usually reserved for last attempts to obtain outcome data from participants who had proven difficult to contact. These messages were sent using a study-associated Google Voice number linked to one of the study's email accounts for responses. A branded caller ID name for phone calls would likely have been better, as it would clearly identify who was attempting to reach the participant. Using multiple outreach strategies, only 12.6% (165/1312) of participants did not complete follow-up surveys, and only 7.2% (95/1312) overall had unknown vital status, less than the 20% loss assumed when designing the studies [1, 2]. In the 12-week PrEP trial, 84% (1247/1483) completed the final study termination survey [3]. Reducing the loss-to-follow-up rate ultimately decreased the overall sample size requirement.

Accurately confirming all hospitalizations and deaths in our study population was also challenging. Despite being asked, not all participants provided accurate emergency contact information, and of those who did, not all emergency contacts responded to follow-up calls. When a participant ceased all follow-up, it was often unclear whether this resulted from hospitalization or death or simply from a lack of interest in continued participation. Developing a system for tracking down vital status is key; for instance, we used online searches for obituaries as one way of determining whether participants had died. Having social media contact information may have been a better way to assess vital status for younger participants.

In retrospect, we could have incorporated calls at an earlier stage of follow-up outreach, using them both to ensure data completeness and as an open channel for participants to ask questions. During follow-up calls, we discovered common trends in participants' misunderstanding of the trials, so adjustments were made to our email scripts to address the common questions directly. Common questions included whether participants could still take part if they skipped a medication dose or delayed taking the first dose after receiving the package, how many surveys they needed to complete, and how often surveys were sent. Despite a downloadable consent form and a comprehension assessment during enrollment, it became clear that a more dynamic means of explaining participant expectations would have helped. Methods we retrospectively considered included an introductory video in the enrollment survey with a team member giving a verbal description of the study, consent processing over a brief Zoom meeting with a team member, or a "question line" staffed by a team member during business hours for study-specific questions, in addition to the email line.
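A minimal sketch of the escalating outreach SOP described above: given an export of expected versus completed surveys, generate a daily worklist assigning the next contact method by the number of prior attempts. The survey days, column names, and escalation thresholds are hypothetical, not the trial's actual schedule.

```python
# Minimal sketch: build a daily outreach worklist for overdue follow-up
# surveys, escalating email -> phone call -> SMS by prior attempt count.
# Survey days, column names, and thresholds are hypothetical placeholders.
import pandas as pd

SURVEY_DAYS = [1, 5, 10, 14]   # hypothetical follow-up schedule
GRACE_DAYS = 1                 # allow a day before chasing a survey
ESCALATION = {0: "email", 1: "email", 2: "phone call", 3: "SMS"}

records = pd.read_csv("followup_export.csv", parse_dates=["enroll_date"])
today = pd.Timestamp.today().normalize()

worklist = []
for _, row in records.iterrows():
    for day in SURVEY_DAYS:
        due = row["enroll_date"] + pd.Timedelta(days=day + GRACE_DAYS)
        completed = row[f"day{day}_complete"] == 1
        if today >= due and not completed:
            attempts = int(row[f"day{day}_attempts"])
            worklist.append({
                "record_id": row["record_id"],
                "survey": f"day {day}",
                "next_contact": ESCALATION.get(attempts,
                                               "SMS (final attempt)"),
                "phone": row["phone"],
                "email": row["email"],
            })

pd.DataFrame(worklist).to_csv("outreach_worklist.csv", index=False)
```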
One significant advantage of our follow-up strategy was the ability to recruit current participants into new sub-studies. Two sub-studies were conducted: a serology study and a hydroxychloroquine pharmacokinetic study. Using our online database of enrolled participants, we sent an email asking those interested in further research to complete a separate consent form via REDCap. Once participants consented, we created packets containing Whatman® 903 Proteinsaver cards and lancets for the serology study, and Neoteryx® volumetric absorptive microsampling kits (Neoteryx, Torrance, CA) for the pharmacokinetic study, and sent them by courier. These packets also contained written instructions for self-collection of either blood spots (Proteinsaver cards) or whole blood (microsampling kits). While we had significant success in having participants collect and return samples, we found microsampling to be the more useful method of collecting usable blood samples for a variety of possible downstream purposes, such as pharmacokinetics, serology, and immunology. Table 2 summarizes the challenges and lessons learned.

Medication adherence was mixed. In the PEP/PET trials, approximately 15% of persons never started the blinded study medicine (and/or were immediately lost to follow-up and did not report taking it). Negative media attention, input from personal physicians or family, and FDA warnings about the dangers of hydroxychloroquine all influenced participants' decisions not to take the study medicine. Whether an in-person trial would have altered this is unclear. We would recommend a day 1 contact, via automated personalized text message or video call, to enhance a personal connection and answer questions or concerns, in order to minimize loss to follow-up and trial withdrawal. In retrospect, we should have used a modified intent-to-treat analysis, excluding those who never initiated the study medicine. This did not alter our results, as a per-protocol analysis of those 100% adherent to the study medicine was included as a sensitivity analysis.

These trials used an existing FDA-approved medicine with a known safety profile that did not require laboratory monitoring [10]. Utilizing patient-reported outcomes kept the surveys relatively concise and focused on relevant topics of interest, though not as comprehensive as would be needed for an FDA registrational trial. This tradeoff was made to keep questionnaires short, requiring <2 minutes to complete each follow-up survey; more comprehensive surveys may have resulted in greater loss to follow-up. With these tradeoffs, the total budget to conduct all three trials was approximately $350,000, funded by generous, unsolicited private donors. Four applications for U.S. federal funding were unsuccessful.
Numerous investigator salaries were covered by other sources, including the provincial governments of Quebec and Manitoba, institutional sources, NIH career development awards (n=3), a T32 fellowship, a Fogarty International Center fellowship, the Doris Duke Foundation, and NIH R01 research project awards (n=4), while other research was paused due to COVID-19. In comparison, the budgets for traditional in-person trials using NIH networks were $10 million for an early treatment trial and $50 million for a PrEP trial. While in-person trials can collect much more data, the cost is orders of magnitude greater.

Internet-based trials are a promising means of designing agile, wide-reaching clinical trials, particularly for rare or neglected (unfunded) diseases, a category that includes much of infectious diseases clinical research. However, internet-based trials do come with unique challenges and limitations. In our trials, these challenges were compounded by conducting clinical research during a global pandemic using a study medication that became politicized. Having team members dedicated to participant outreach and follow-up, and a dynamic flow of information between participants and the team, may help reduce loss to follow-up and inadvertent deviations from study protocols. Additionally, developing a more nuanced relationship with major social media platforms would provide more accurate and consistent messaging about clinical trials, though the onus for improving this relationship may primarily rest on the platforms themselves. The challenges presented here are not insurmountable, though they are essential to consider when launching a primarily internet-based trial. We have recommended possible solutions, and new technological innovations in telemedicine and data collection could provide even more efficient means of communicating with participants in internet-based trials and of ensuring high-quality data capture using patient-reported outcomes.

The authors have no conflicts of interest to disclose. All participants signed an electronic consent form prior to enrolling in the studies described above. The designs of the studies mentioned in this manuscript were approved by the University of Minnesota Institutional Review Board (IRB).

Table 2. Challenges and Lessons Learned for Internet-Based Trials

Case identification
- Laboratory testing was lacking in the U.S. during March-April 2020, and PCR results for outpatients were delayed.
- Use of epidemiologic linkage allowed rapid enrollment of symptomatic cases for early treatment (e.g., symptomatic household contacts of PCR-positive persons or exposed healthcare workers); this group was also defined as an a priori subgroup for analysis.
  - Persons enrolling via epidemiologic linkage enrolled a mean of 1.2 days faster than those with PCR confirmation [2].
  - Over 4 times as many persons were excluded as enrolled due to inability to access PCR testing or receive results within 4 days of symptom onset for the early treatment trial [2].
- Use a clinical case definition with independent adjudication in addition to PCR confirmation.
  - The false-negative rate of PCR is ~38% on day 1 of symptoms, creating challenges for early diagnosis [11].
  - "Probable cases" meeting the clinical case definition with epidemiologic linkage to a PCR-positive contact cannot be ignored [8].

Recruitment
- Internet-based, automated recruitment enabled self-enrollment 24 hours per day, useful for persons working full time or doing shift work, and expedited enrollment.
- Advertising through both traditional (print, radio, TV) and nontraditional media (Twitter, Facebook, Instagram) accessed a wider audience.
- More formal collaboration with co-investigators to recruit participants would have been preferable to setting up competing trials.
- An email hotline, staffed via an on-call schedule, answered questions.
- Transparency in recruitment was maintained by posting to social media.

Recruitment of minorities
- While these trials over-enrolled persons of Asian descent, owing to circulation in the Asian-American social media community, other minority populations were significantly underrepresented.
- Targeted outreach to other minority communities is required, involving local community groups and leaders in advertising the study.
- Translate study materials into multiple languages.
- Videos explaining the research should include diverse speakers.

Eligibility criteria
- Not all criteria were publicly revealed, to prevent persons from gaming their answers to meet enrollment eligibility.
- Enrollment criteria implemented via REDCap survey logic were easily modified; survey logic for the prophylaxis trials was more restrictive than the broader protocol criteria in order to target a moderate/high-risk population.

Obtaining complete and accurate follow-up
- Continuous quality improvement of survey instructions and questions in response to feedback, common questions, and evolving knowledge of COVID-19, particularly in the first two weeks of implementation.
- A two-part screen-then-enroll process verified email addresses and prevented altering of answers regarding eligibility.
- Sending the survey timeline in reminders may have been helpful.
- Collect the type of phone number (e.g., mobile vs. landline).
- Ask the preferred manner of contact for follow-up (email vs. text messages); preference varies by age and by person.
- REDCap has an integrated system to send out surveys via text messages with URLs.
- Many people do not answer telephone calls from unknown numbers; a branded caller ID may improve the answer rate.
- Obtain a back-up contact for people who become hospitalized or are too ill to respond to follow-up questionnaires.
- Collect social media usernames as an alternative method of determining vital status.

External events
- The ability to rapidly communicate updates to participants regarding external events was valuable, but the requirement for IRB-approved messages slowed the process.

Figure 3. Frequencies of estimated shipping times. Two-thirds of participants enrolled outside of weekday daytime hours (875/1312). The first peak represents those enrolling during weekday daytime hours. The second peak represents those enrolling in the evening or at night, whose medication shipped the next day. The final peak represents those enrolling after 3 PM on a Saturday or on a Sunday, whose medication shipped on Monday and was delivered Tuesday morning. For the post-exposure prophylaxis trial, the median time from highest-risk exposure to starting the study drug was 3 days (IQR, 2 to 4 days).

References
1. A Randomized Trial of Hydroxychloroquine as Postexposure Prophylaxis for Covid-19.
2. Hydroxychloroquine in Nonhospitalized Adults With Early COVID-19: A Randomized Trial.
3. Hydroxychloroquine as preexposure prophylaxis for COVID-19 in healthcare workers: a randomized trial.
4. High adherence and low dropout rate in a virtual clinical study of atopic dermatitis through weekly reward-based personalized genetic lifestyle reports.
5. Implementation of clinical research trials using web-based and mobile devices: challenges and solutions.
6. Is the future for clinical trials internet-based? A cluster randomized clinical trial.
7. Post-exposure prophylaxis or pre-emptive therapy for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2): study protocol for a pragmatic randomized-controlled trial.
8. Interim-20-ID-01: Standardized surveillance case definition and national notification for 2019 novel coronavirus disease (COVID-19).
9. A Cluster-Randomized Trial of Hydroxychloroquine as Prevention of Covid-19 Transmission and Disease.
10. Safety of Hydroxychloroquine Among Outpatient Clinical Trial Participants for COVID-19.
11. Variation in False-Negative Rate of Reverse Transcriptase Polymerase Chain Reaction-Based SARS-CoV-2 Tests by Time Since Exposure.
12. Effect of High vs Low Doses of Chloroquine Diphosphate as Adjunctive Therapy for Patients Hospitalized With Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) Infection: A Randomized Clinical Trial.
13. Outcomes of Hydroxychloroquine Usage in United States Veterans Hospitalized with COVID-19.