Technical procedures and REDCap tools for internet-based clinical trials
Ananta S. Bangdiwala; David R. Boulware
Contemp Clin Trials, 2022-01-25. DOI: 10.1016/j.cct.2021.106660

In March of 2020 our team of researchers developed and opened three clinical trials to investigate hydroxychloroquine as prophylaxis or treatment for coronavirus disease 2019 (COVID-19). We simultaneously built corresponding Research Electronic Data Capture (REDCap) projects for these low-touch, remote trials, which relied on participant-reported data. REDCap has built-in features that support pragmatic, internet-based studies, and it is flexible enough to allow creative solutions for specific trials. We describe challenges, choices, and suggestions based on our experience with REDCap for our COVID-19 trials.

Eligible participants were sent an email with a link to the Enrollment survey. This two-part screening-then-enrollment process verified that participants had working email addresses. Participants were warned that an email would arrive immediately and, if it was not received, to check their spam folder. Another option would have been to put the Screening and Enrollment surveys into a Survey Queue so that the Enrollment survey appeared immediately once qualifications were met. By using Automated Invitations instead, eligible participants had to find our trials' emails in their inboxes. We therefore enrolled only participants who successfully interacted with our emailed links, which was critical since all of our follow-up data were collected via emailed surveys. Participants who were ineligible received an email notifying them of their status. To achieve this, we created a "Not enrolled" instrument, which was activated as a survey.
If the conditions for non-enrollment were met on the Screening survey, the Automated Invitation for this survey was sent, but with the survey-completion link removed from the body of the emailed invite. Instead, we included links to the CDC website for further information about COVID-19.

Eligibility requirements changed quickly and multiple times as our understanding of COVID-19 evolved and as FDA communications were issued. For the postexposure prophylaxis (PEP) and early treatment trials, which started enrolling in March 2020, this meant that our calculated eligibility fields needed to be updated even as enrollment was underway. So as not to disturb the eligibility values of already-enrolled participants, we created a new eligibility Calculated field with each update and edited the logic of the Automated Invitation for Enrollment accordingly. We found it useful to include the date of the change in the new field name, to easily reference when each updated eligibility field was implemented. REDCap's Data Quality application includes "Rule H," which updates "incorrect values for Calculated fields." To ensure that the Automated Invitations for the Enrollment survey only triggered for new participants (and did not re-send to previous participants), we made sure we never executed Rule H in the project for the PEP and treatment trials. We also updated the Conditions on the Automated Invitation to include appropriate logic. The preferable logic would have been something like "[scr_date] > (date of update)"; however, REDCap cannot read dates written directly into logic. In addition, as we were screening hundreds of participants a day for these trials, the exact time of the update mattered. We therefore used logic based on the last participant screened before the update was implemented (e.g., "[study_id] > 13098").
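When analyzing exported data, each participant later had to be mapped back to the eligibility version in force when they screened. A minimal Python sketch of that lookup under the id-based cutoff approach described above (the dated field names and the first cutoff are hypothetical; the 13099 boundary follows the "[study_id] > 13098" example):

```python
# Each tuple: (first study_id screened under the new rules, dated eligibility field).
# Field names and the first cutoff are hypothetical illustrations.
ELIGIBILITY_VERSIONS = [
    (1, "eligible_v1"),
    (13099, "eligible_v2"),  # participants after "[study_id] > 13098"
]

def eligibility_field_for(study_id):
    """Return the dated eligibility field that applies to this participant."""
    applicable = ELIGIBILITY_VERSIONS[0][1]
    for first_id, field in ELIGIBILITY_VERSIONS:
        if study_id >= first_id:
            applicable = field
    return applicable
```

Keeping this mapping alongside the dated field names makes CONSORT accounting by screening period straightforward.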
Additionally, REDCap has a very useful option on the Automated Invitation that can verify logic. When the "ensure logic is true before sending automated invitation" option is selected, "REDCap will re-evaluate the logic against the record's data values whenever the record values are changed AFTER the invitation has been scheduled but BEFORE it has been sent to the respondent."

We tracked changes with a diagram and detailed notes, which aided intra-team communication and ensured we all agreed. Tracking these changes both in notes and in dated field names also aided data management in determining who was eligible based on when they were screened, and guided development of the CONSORT diagram for the final results. Figure 1 depicts the final flow diagram of inclusion criteria for the PEP and treatment trials [2,3]. Notes on updates were also important for communication with the Canadian database developers, who had to make exactly the same changes to their parallel REDCap project. These projects were required to remain separate to abide by privacy guidelines specific to each country. Maintaining two separate but identical REDCap projects added another layer of challenge to managing the data for these pragmatic trials. Consistent and detailed communication was vital to identical data collection. Thus, in addition to comprehensive notes, we downloaded the full Data Dictionary after every update, filed it by date in our records, and delivered it to our Canadian co-investigators.

These were high-volume studies, and the immediate nature of the pandemic necessitated expeditious enrollment. Our studies were designed entirely around participant self-screening. This allowed us to reach as many people as possible, from all over North America, very quickly, and to enroll participants even as the pandemic was limiting face-to-face interactions.
Given the emergency created by the pandemic, the IRB and FDA approved this design and the use of e-consent. Our website, surveys, and emails were our primary interactions with participants, so wording and study definitions had to be very clear. We listed study-specific definitions in a REDCap Descriptive field at the top of the screening form (Supplemental Material 2). We embedded our informed consent documentation into a Descriptive field in the Enrollment survey, then followed it with a series of questions to test comprehension. Hidden Calculated fields assessed comprehension and, if it was achieved, marked the participant as enrolled. Participants signed their informed consent directly into a Signature field. (REDCap also offers an eConsent feature in the Survey Settings.)

Although REDCap has randomization applications, we conducted randomization manually. In an internet-based trial, persons could potentially try to screen themselves multiple times in order to meet eligibility criteria. There was no mechanism within REDCap to stop the same email address or name from being entered multiple times, or to block nonsensical data, so accuracy of information was manually verified by study personnel. Our statistician created the randomization schedule using a permuted block design with random block sizes of 2, 4, and 8. The generated list of randomization codes and assignments was sent to the pharmacy, while a blinded list of randomization codes was sent to the study team. Further details of the logistics of this process are summarized in Pullen et al. [6]. Once randomization was confirmed, our team tracked the result and study medication shipping in a separate Enrollment Processing instrument. The fields confirming randomization and shipment of study medication in the Enrollment Processing instrument triggered the Automated Invitations that delivered surveys for follow-up data collection.
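The paper does not include the statistician's code, but a 1:1 permuted block schedule with random block sizes of 2, 4, and 8 can be sketched in Python as follows (the arm labels and randomization-code format are illustrative, not the trial's actual conventions):

```python
import random

def permuted_block_schedule(n, block_sizes=(2, 4, 8), arms=("active", "placebo"), seed=42):
    """Build a 1:1 randomization list: each block holds an equal number of
    each arm in shuffled order, keeping allocation balanced throughout."""
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n:
        size = rng.choice(block_sizes)            # random block size: 2, 4, or 8
        block = list(arms) * (size // len(arms))  # equal arms within the block
        rng.shuffle(block)
        assignments.extend(block)
    assignments = assignments[:n]
    # Pair each assignment with a randomization code (format illustrative)
    return [(f"RZ{i:04d}", arm) for i, arm in enumerate(assignments, start=1)]
```

The unblinded list (codes plus assignments) would go to the pharmacy, while the study team would receive only the blinded codes.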
The PEP and treatment trials had follow-up surveys sent on days 1, 3, 5, 10, and 14, while the PrEP trial had surveys sent weekly for 12 weeks. Any participant who reported hospitalization during follow-up was sent surveys past the trial's end to continue tracking their vital status. The PEP trial and the treatment trial involved the exact same intervention, nearly the same follow-up schedule, and participants from the same outpatient population, with the major difference being a positive test or the onset of symptoms [7]. Given this, we developed one REDCap project to serve both trials at once. With the PEP and treatment trials housed in the same REDCap project, when it was determined that PEP participants who became symptomatic within 1 day of enrollment, prior to receiving the study medication, would be considered in the treatment trial, we could transition these participants seamlessly. We collected the same data on all participants.

Structuring fields and wording to guide self-reported data and cover possible scenarios is important. We wrote the text of our follow-up surveys so that if someone skipped a day or week, the phrasing still made sense to the participant and for our analysis. For example, in the PrEP study, every weekly follow-up survey began with the following text in a Descriptive field: "The following questions pertain only to the past week, not since enrollment." This was reinforced in the wording of each question on the survey (e.g., "In the past week, how many tablets have you taken?"). Since the PrEP trial followed participants weekly for up to 12 weeks, we structured the fields regarding hospitalizations to allow for the possibility of multiple hospitalizations and missed visits. The likelihood of a participant completing a survey while hospitalized was low, but by asking on every survey we were more likely to capture the data if available.
We also considered having separate hospitalization surveys, but we decided that streamlined surveys were more acceptable for participants and simpler for data management. In any given week a participant could be not hospitalized, be currently hospitalized, or have been discharged from a hospitalization since the previous survey. We therefore asked a specific hospitalization question on every weekly survey: "Have you spent any days hospitalized (including a discharge day) since the last survey?" By specifying that a discharge day counted as a day in the hospital, we covered the various scenarios. An example of one week (week 8) of fields and branching logic is displayed in Supplemental Material 3.

REDCap outlines four steps to schedule an automated survey invitation. In "STEP 1" one selects whether the invitation will be sent by email, SMS message, or a combination. "STEP 2" involves composing the invitation message and specifying the "sender." There are then two options for scheduling the timing of a follow-up survey: indicating the number of days after a specific status is achieved (e.g., enrollment) in "STEP 4," or programming the "STEP 3: Conditions" logic using the datediff function. Both can achieve the same timing. The benefit of the latter is that it puts any delay condition in STEP 3, freeing up STEP 4 so that one can choose the exact time of day a survey is sent. Additionally, the invites do not enter the queue until the day they are scheduled, which can be useful for a pragmatic trial in which changes might need to be made. Automated survey invitation scheduling is an extremely useful feature of REDCap, though it is not without its challenges.
For example, when the PEP and treatment trials opened, the Day 3 survey was only meant for treatment participants, so we wrote the Automated Invitation logic so that the Day 3 survey was delivered if a participant was enrolled into treatment or symptomatic by Day 1:

STEP 3: Enrollment survey complete and ([enroll_study] = 2 or [symptoms_ind_d1] = 1)
STEP 4: Send the invitation 4 days after the automated survey invitation has been triggered

However, this meant that PEP participants ([enroll_study] = 1) could not meet the STEP 3 Conditions until their symptoms were reported on the Day 1 survey. As a result, their Day 3 survey was scheduled 4 days after Day 1 instead of 4 days after enrollment. As Automated Invitations can only have one timing set, we opted to send the Day 3 survey to all participants in both the PEP and treatment trials, based on the day of enrollment.

It is important to note that updating the Automated Invitations for any reason can affect previous participants. We only wanted this change to apply to participants who had not yet reached Day 3, but in changing the logic we inadvertently set the Automated Invitation to send to previous participants as well. We noticed this before it was actually sent, but because these invitations had been saved and scheduled, we had to manually delete all unnecessary Day 3 survey emails using the Survey Invitation Log.

Scheduling Automated Invitations should take into account real-world issues. For example, scheduling follow-up surveys based on completion of the previous survey can cause problems because (a) if a participant missed a survey, the next survey would never be sent, and (b) if a participant was delayed in completing a survey, the next survey would be delayed in being sent. For the PEP and treatment trials especially, we wanted data from short visit windows, as close to our scheduled days as possible.
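The slip described above can be illustrated with a small sketch: STEP 4 counts from the moment the STEP 3 Conditions first become true, so a PEP participant whose conditions were only met at the Day 1 survey had the Day 3 survey scheduled one day late (the dates below are illustrative):

```python
from datetime import date, timedelta

def day3_send_date(trigger_date, delay_days=4):
    """STEP 4: send the invitation 4 days after STEP 3 first evaluated true."""
    return trigger_date + timedelta(days=delay_days)

enrollment = date(2020, 3, 20)                             # illustrative enrollment day
treatment_send = day3_send_date(enrollment)                # conditions true at enrollment
pep_send = day3_send_date(enrollment + timedelta(days=1))  # conditions true at Day 1 survey
```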
We therefore found it most useful to time all follow-up Automated Invitations based on completion of the Enrollment survey. Another option we considered, which could be useful for other trials, is the Survey Queue, which "displays a list of your surveys to a participant all on a single page, in which the queue comprises all surveys that are to be completed (like a 'to-do' list) as well as the surveys that the participant has already completed." Because our follow-up surveys needed to be completed over several days to weeks, we decided this would not be useful for our participants. We did include language in our emails informing participants what surveys to expect. With more planning time, it would have been better to include the entire schedule in the initial email with the Enrollment survey, including the time of day to expect survey emails.

Assuring complete data was a priority, and we configured the Automated Invitations so that reminder emails were sent repeatedly, up to three times or until the survey was completed. Even with automated email reminders, some participants missed surveys. The reporting system in REDCap allowed us to identify participants in the PrEP trial who were consistently missing surveys and to check in with them throughout follow-up with manual phone calls and mobile phone text messages. To be as proactive as possible in assuring follow-up, we established running Reports of participants who had missed each weekly survey, as well as Reports of participants who had missed multiple surveys in a row (Weeks 1-2, Weeks 1-4, Weeks 5-8, Weeks 9-12). We used the datediff function in the filter logic to keep these Reports updating continuously. For example, the Week 2 survey was automatically sent on day 16 post-enrollment.
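Anchoring every invitation to the Enrollment survey amounts to a fixed offset per follow-up survey. A sketch using the PEP/treatment schedule (days 1, 3, 5, 10, 14), where a missed or delayed survey never blocks or shifts the next one:

```python
from datetime import date, timedelta

FOLLOWUP_DAYS = (1, 3, 5, 10, 14)  # PEP/treatment trial follow-up schedule

def followup_send_dates(enrollment_date, offsets=FOLLOWUP_DAYS):
    """Map each follow-up day to the calendar date its survey email goes out,
    anchored to enrollment rather than to completion of the previous survey."""
    return {d: enrollment_date + timedelta(days=d) for d in offsets}
```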
One email reminder was sent 32 hours later, but if by day 19 a participant had still not completed the Week 2 survey, they would appear on the "Missed Week 2 Report" for 3 days or until their survey was complete:

([followup_week_2_complete] <> "2") AND datediff([enr_date],"today","d","mdy",true) >= 19 AND datediff([enr_date],"today","d","mdy",true) <= 22

Our team reviewed these Reports and acted upon them on a coordinated schedule; the logistical aspects are summarized in Pullen et al. [6]. We encouraged participants to complete their surveys, but we also implemented a vital status instrument to record basic information from any brief phone call or text message exchange. The REDCap programming for Automated Invitations and Reports for the PrEP trial is presented in Table 2. We also created a field enr2today = datediff([enr_date],"today","d","mdy"), which was useful for reporting how far into follow-up a participant was. To update the "today" element, we ran Rule H every day for the PrEP trial.

We found in all of our trials that participants who ultimately decided not to take the study drug often also decided not to complete the surveys. A common misconception was that if people stopped taking the study medication, we were no longer interested in their outcome. The standard templated wording of informed consent, stating that one can stop participating in research at any time, is unhelpful for intention-to-treat analysis. Clarity is needed to separate stopping the study intervention while continuing follow-up from stopping all research participation. We amended the wording of our emails and screening forms to remind participants of the importance of collecting their follow-up data regardless of adherence or side effects. We used the example that if only the people who did well on the study medication completed follow-up, the study medication would look as if it performed better than it actually did.
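The same filter can be expressed in Python for checking a data export outside REDCap (the record layout is an assumption; the field names mirror the REDCap logic above):

```python
from datetime import date

def missed_week2_report(records, today):
    """Return study ids matching the 'Missed Week 2 Report' filter:
    Week 2 survey not complete AND 19 <= days since enrollment <= 22."""
    flagged = []
    for rec in records:
        days = (today - rec["enr_date"]).days  # datediff([enr_date], "today")
        if rec["followup_week_2_complete"] != "2" and 19 <= days <= 22:
            flagged.append(rec["study_id"])
    return flagged
```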
We cared about everyone's experience with the study medication, even if they stopped taking it due to side effects or other reasons. Particularly for remote trials, this distinction should be made clear up front, giving participants the option to stop the study medication without stopping all research participation.

Project set-up

Testing the REDCap project as much as possible is critical. Creating a duplicate, practice REDCap project was a useful tool for testing issues both during development of the data collection system and throughout enrollment as changes were made. REDCap makes it easy to copy an entire research project to a duplicate project. When a project is in production, one can easily download an updated Data Dictionary and upload it to the duplicate project to ensure that the two remain identical. Collaboration is indispensable on any clinical trial, even a completely internet-based study. Prior to starting the trial, our team of investigators, colleagues, friends, and family played the roles of mock participants in the duplicate practice project in order to test different scenarios. What if a participant misses a survey? What if a participant needs to make changes later? Additional beta-testing by non-medical personnel is recommended to ensure the language is appropriate; physicians routinely underestimate the complexity of medical language for the general public.

Although we had set up our fields and wording to be as specific as possible, some issues did arise once we were ready for analysis. For example, we had to use the survey completion date as the date of an event because we did not ask specifically for the date of onset of illness. This is important to consider depending on whether a time-to-event analysis is planned. In the PrEP trial, with weekly follow-up surveys, we collected the date of PCR testing but not the date of symptom onset.
This created a variable time lag of up to 6 days in a 12-week trial. While such a lag would be equally distributed across treatment arms, it was less precise than we would have liked in retrospect. Conversely, the PEP and treatment trials did not use time-to-event analyses, so this detail was unnecessary there. Once our trials were underway, our team developed sub-studies including COVID-19 antibody testing and hydroxychloroquine drug level testing [4,8]. We were able to easily implement consenting and follow-up surveys for these sub-studies as further surveys nested within the parent trials' REDCap projects.

We established email addresses for the trials that would interact with participants. For the PrEP trial, for example, we wrote text at the end of the Screening survey (using the Survey Settings) stating that potential participants should expect all surveys to be sent from this email: covid19prepfaq@umn.edu. By using a University of Minnesota email address, we were less likely to be delivered to potential participants' spam folders. This University-designated account was blocked from automatic forwarding, so we also set up faq.covid19@gmail.com to allow our team to receive participant questions forwarded to their personal emails as alerts. They could then log into this account to respond. We created an on-call schedule of investigators to field questions from this email address in a timely manner, approximately 18 hours per day. We could have further customized our email messaging to include pictures of the investigators to make it more personal. REDCap allows pictures in messages via the addition of HTML, e.g., <img src="URL" alt="Alternative text">, where the URL points to an internet-accessible image. We also used REDCap in each of the trials to send a final email to all trial participants on the day of publication. Many participants appreciated receiving prompt notification of the trial results as well as learning their randomization assignment.
To send these messages en masse, we piped the participants' randomization assignment, from a variable added into REDCap after trial completion, into the email message of a new Final Result survey. We specified the STEP 3: Conditions with logic requiring that the participant was randomized:

datediff([enr_date],"today","d","mdy",true) > 0 AND [rz_assign] <> ""

Sending an email to many participants that is not tied to a survey instrument is not straightforward within REDCap. When necessary, we made a Report of the email addresses of the participants we wanted to contact, downloaded the Excel file for that Report, and used an external program to send the batch email.

For internet-based trials, consider verifying important endpoint-related information by email or telephone. The FDA recommended that at least 10% of endpoint information be verified, which we did. We also verified information for outliers, such as PCR-positive persons who remained asymptomatic, to confirm that this was indeed correct. The REDCap Alerts feature can be useful for immediate follow-up of events. Logic can be set so that an email is sent to a specified recipient with the relevant participant ID and fields. We set Alerts to notify our PIs of hospitalizations and other adverse events reported on a follow-up survey. We also used Alerts in the treatment trial for Screening surveys that required expert review to determine eligibility (see Figure 1).

When designing the data collection system, fields can be marked as "required," forcing a pop-up to alert the participant if they skip the field and try to submit the survey. However, this does not prevent the participant from submitting the survey with the required field skipped. Such an incomplete survey, once submitted, cannot be updated by the participant. There is an option in the Survey Settings to allow participants to "return and modify completed responses."
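For batch emails not tied to a survey, the external step can be as simple as reading the Report export and building one message per row. A hedged sketch with Python's standard library (the column names email and rz_assign are assumptions about the export; the actual SMTP send, e.g. via smtplib, is left to the sender):

```python
import csv
from email.message import EmailMessage

def build_batch_emails(report_csv_path, subject, body_template):
    """Build one email per participant row exported from a REDCap Report.
    body_template may reference {assignment}, filled from the export."""
    messages = []
    with open(report_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            msg = EmailMessage()
            msg["To"] = row["email"]
            msg["Subject"] = subject
            msg.set_content(body_template.format(assignment=row["rz_assign"]))
            messages.append(msg)
    return messages
```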
We allowed this initially but discovered that it led to one participant deleting all of their data. Fortunately, the value History is recorded on each field, so the data could be manually re-entered. However, on the Enrollment survey the Signature field was lost from both the survey and the value History; thankfully, the REDCap administrator for our institution was able to retrieve it.

One survey on which we did allow participants the option to save and return was the Drug Start survey in the PrEP trial. We wanted to make sure participants completed this survey after they had physically received the study medication. Since we could not ensure that the survey arrived after the study medication, we started with a basic question: "Did you receive the study medication?" If the answer was "No," branching logic displayed a Descriptive field with instructions to contact the team and select "Save & Return Later." We based the follow-up surveys on the field [shipped] = 1, which our team entered manually into the Enrollment Processing instrument once randomization was completed and the study medication was shipped. To assist with tracking study medication delivery, we later included the FedEx shipping number as a piped field in the email with the Drug Start survey. As this field was completed manually, it was important to enter the FedEx number into Enrollment Processing before or at the same time as [shipped] was set to 1 (yes) and the instrument was saved; otherwise a blank would have appeared in the email where the number should have been.

On numeric fields there is an option to set minimums and maximums; however, on date fields one cannot easily stop future dates from being entered, especially when there is not a set timeframe for when the survey could be completed. Relatedly, REDCap Calculated fields cannot read dates written within a function.
To help prevent a future date from being entered, we created hidden Calculated fields that compared the date entered to the date the form was completed, using the datediff function. As soon as a future date was entered, the negative value of the Calculated field triggered branching logic to display a Descriptive field stating: "Error! The date ... cannot be a future date. Please correct."

For those with COVID-19 symptoms, we collected data on symptom severity using a 0-10 visual analog scale. We discovered that the default appearance of the visual analog scale field in REDCap displays the marker in the middle of the scale, at 5 on a scale of 0 to 10. It was possible for a participant to feel a level 5 and therefore not move the marker; however, if the marker was unmoved, REDCap recorded the score as 0. Also, since the field was required, these surveys were registered as Incomplete in REDCap. Once we discovered this issue, we changed the instructions and followed up with any participant who might have been affected.

We designed and ran three large-scale, internet-based, double-blind randomized clinical trials for outpatients during the first 3 months of the U.S. COVID-19 pandemic [2-4]. We knew from the start that these trials would be pragmatic and that our data collection system would have to respond to such needs. With REDCap we were able to update automatic eligibility assessment fields and Automated Invitation logic while maintaining data integrity. For trials that rely on self-screening and self-report, structuring and wording survey fields to guide participants through the process is critical. Automated Invitations allow surveys to be scheduled strategically, prompting participants to record their own data. Proactively tracking and reaching out to participants who miss surveys allows for more complete data. Testing the REDCap projects is essential.
We tested our REDCap projects as much as possible in the limited time prior to enrollment and continued to test features in identical practice REDCap projects throughout follow-up. Pragmatic, internet-based clinical trials are possible and are a useful tool for low-cost studies in the modern world. REDCap has excellent features and the flexibility to support such trials.

Branching logic: Allows fields to be hidden or shown based on the survey responder's input. One can embed a document such as the informed consent. NOTE: These days many participants will complete surveys on their mobile phones. Consider instructions asking them not to use emojis, as they will cause errors in programming.

Date verification: Fields can be validated as a date format with a minimum and maximum set. NOTE: One cannot prevent future dates as entries with simple Field Validation. A combination of a Calculated field (using the datediff function to compare the date entered to the date the form was completed, set by action tag @TODAY and hidden by action tag @HIDDEN-SURVEY), branching logic, and a Descriptive field can be used to alert a participant to a date entry error. As soon as a future date is entered, the negative value of the Calculated field triggers the branching logic to display a Descriptive field stating the error: "Error! The date ... cannot be a future date. Please correct." NOTE: It may seem counterintuitive, but if participants must be above a specific age, be sure to set the birth date as the maximum on the birthdate Field Validation.

Copy the project: Duplicate the project exactly. When constructing the database prior to production mode, test the project as much as possible. Once in production mode, copy the whole project to a new project to continue testing issues as necessary.
Data Dictionary and Automated Survey Invitations: Both of these data structures can be downloaded as Excel files. Once a project is in production, keep the duplicate test project up-to-date easily by downloading the Data Dictionary and automated survey invitations and uploading them to the duplicate project.

Emailed survey invitations: Include instructions in the email text. Use a trial-specific email address as the sender. Keep the Screening and Enrollment surveys separate to be certain to enroll only participants who interact with an emailed survey link. To send an informational email to a batch of participants, schedule an Automated Invitation for a survey without a link.

Survey Settings: Allows modifications to survey appearance and access. The splash screens that accompany a REDCap survey are useful for including instructions to participants; these can be edited throughout the trial as FAQs come in from participants. Allow or limit a participant's ability to return to and change a survey once submitted.

Ensure logic is true before sending automated invitation: Especially useful for a project that gets updated often. "REDCap will re-evaluate the logic against the record's data values whenever the record values are changed AFTER the invitation has been scheduled but BEFORE it has been sent to the respondent."

Piping: Among other things, this can be useful for including the study medication shipping number at study start, reminding a participant of an answer on a previous survey during follow-up, and informing the participant of their randomization assignment at study end.

Field History: All values ever entered for a field are automatically saved within that field's History. These can be retrieved if a value is ever accidentally deleted (e.g., if a participant manages to return to a survey, changes something, and re-saves the survey).
Table 2 (excerpt): Reports and follow-up actions for the PrEP trial
- Accessed link*: n/a
- Report, missed Weeks 1-2: ([followup_week_1_complete] <> "2") AND ([followup_week_2_complete] <> "2") AND datediff([enr_date],"today","d","mdy",true) > 18. Action: send SMS message (specific to this survey) to participants on the Report. Note: the Report should be reviewed and acted upon every 3 days.
- Report, missed Weeks 1-4 (W4): ([followup_week_1_complete] <> "2") AND ([followup_week_2_complete] <> "2") AND ([followup_week_3_complete] <> "2") AND ([followup_week_4_complete] <> "2") AND datediff([enr_date],"today","d","mdy",true) >= 28 AND datediff([enr_date],"today","d","mdy",true) < 43
- [Partially recoverable row]: datediff([enr_date],"today","d","mdy",true) >= 61 AND datediff([enr_date],"today","d","mdy",true) < 69. Send every 1 day, up to 2 times.
- Report, missing study termination survey (10 July 2020): ([followup_end_of_study_complete] <> "2") AND ([death] <> 1). Action: send SMS message.
- 11 July 2020: ([followup_end_of_study_complete] <> "2") AND ([death] <> 1). Action: call the participant in the morning (try to complete the study termination survey by phone, or at least the Vital Status Form); send SMS message in the evening.
- 12 July 2020: (([followup_end_of_study_complete] <> "2") AND ([death] <> 1)) OR last PCR test marked pending. Action: call the participant in the morning (try to complete the study termination survey by phone, or at least the Vital Status Form); send SMS message in the evening.
- Data cut date: 13 July 2020.
* Anyone could access the link for screening. Eligibility was determined through survey questions.

References
1. Review: Hydroxychloroquine and chloroquine for treatment of SARS-CoV-2 (COVID-19).
2. A randomized trial of hydroxychloroquine as postexposure prophylaxis for Covid-19.
3. Hydroxychloroquine in nonhospitalized adults with early COVID-19: a randomized trial.
4. Hydroxychloroquine as pre-exposure prophylaxis for coronavirus disease 2019 (COVID-19) in healthcare workers: a randomized trial.
5. The REDCap consortium: building an international community of software platform partners.
6. Lessons learned from conducting internet-based randomized clinical trials during a global pandemic.
7. Symptoms of COVID-19 outpatients in the United States.
8. Safety of hydroxychloroquine among outpatient clinical trial participants for COVID-19.

Acknowledgments
We acknowledge and thank our co-investigators with whom we collaborated on these studies and the colleagues who tested our REDCap projects, including: Dr