key: cord-0183906-s8p0g4r1
authors: Kunz, Cornelia Ursula; Jorgens, Silke; Bretz, Frank; Stallard, Nigel; Lancker, Kelly Van; Xi, Dong; Zohar, Sarah; Gerlinger, Christoph; Friede, Tim
title: Clinical trials impacted by the COVID-19 pandemic: Adaptive designs to the rescue?
date: 2020-05-28
journal: nan
DOI: nan
sha: 1af0c0dab777ddb7bd11224d16fe4e42b07a3898
doc_id: 183906
cord_uid: s8p0g4r1

Very recently the new pathogen severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) was identified and the coronavirus disease 2019 (COVID-19) declared a pandemic by the World Health Organization. The pandemic has a number of consequences for ongoing clinical trials in non-COVID-19 conditions. Motivated by four currently ongoing clinical trials in a variety of disease areas, we illustrate the challenges posed by the pandemic and sketch out possible solutions, including adaptive designs. Guidance is provided on (i) where blinded adaptations can help; (ii) how to achieve type I error rate control, if required; (iii) how to deal with potential treatment effect heterogeneity; (iv) how to utilize early readouts; and (v) how to utilize Bayesian techniques. In more detail, approaches to resizing a trial affected by the pandemic are developed, including considerations of stopping a trial early, the use of group-sequential designs, and sample size adjustment. All methods considered are implemented in a freely available R Shiny app. Furthermore, regulatory and operational issues, including the role of data monitoring committees, are discussed.

In Wuhan, China, pneumonia cases of a new pathogen, which was subsequently named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), were identified in December 2019 (Guan et al, 2020). In the meantime, the coronavirus disease 2019 (COVID-19) was declared a pandemic by the World Health Organization (WHO). At the time of writing (end of May 2020), more than 5 million cases had been confirmed worldwide according to the COVID-19 Dashboard by the Center for Systems Science and Engineering at Johns Hopkins University (https://coronavirus.jhu.edu/map.html). To fight the COVID-19 pandemic, a number of clinical trials were initiated or are in planning to investigate novel therapies, diagnostics and vaccines. Some of these make use of novel, efficient trial designs including platform trials and adaptive group-sequential designs. An overview and recommendations are provided by Stallard et al (2020). While considerable efforts have been made to set up trials in COVID-19, the vast majority of ongoing trials continue to be in other disease areas. In order to effectively protect patient safety in these trials during the COVID-19 pandemic, clinical trials answering important healthcare questions across the world were stopped or temporarily paused, possibly to re-start later, some with important modifications. Here we consider the impact of the COVID-19 pandemic on running trials in non-COVID-19 indications. The challenges posed to these trials by the pandemic can take various forms, including the following:
(1) The amount of missing data may preclude definitive conclusions being drawn with the original sample size.
(2) Incomplete follow-up (possibly not at random) may invalidate the planned analyses.
(3) Reduced on-site data monitoring may cast doubt on data quality and integrity.
(4) Missed treatments, due to the interruptions but also due to acquiring the SARS-CoV-2 virus, may not be random and may require a different approach than one based on the intention-to-treat principle.
(5) Circumstances (e.g. in usual care, trial operations, drug manufacturing) before, during and after the interruptions induced by the pandemic may differ substantially, with an impact on the interpretability of the clinical trial data, through which the original research question may become more difficult or even impossible to answer.
(6) Heterogeneity in the patients included in the trial associated with the pandemic may impact results.
(7) Potential heterogeneity in the included patients in multi-center trials, as the prevalence/incidence of infected patients varies from region to region.
Regulatory authorities have produced guidance on the implications of COVID-19 for methodological aspects of ongoing clinical trials (EMA, 2020a,b; FDA, 2020). The EMA guideline states that the current situation should not automatically encourage unplanned interim or early analyses (EMA, 2020b). Despite strong scientific reasons to conduct trials as planned, there may be situations where an unplanned or early analysis is required to minimize the effect of COVID-19 on the interpretability of the data and results. Potential situations include trials where data collection is nearly finished, where an interim analysis is planned in the near future, or where recruitment of new patients is slowing down or interrupted. In particular, the impact of the pandemic depends on the timing of the pandemic relative to the timeline of the trial, the length of follow-up needed to observe the primary endpoint, and the recruitment rate. Figure 1 illustrates the different scenarios. For example, when recruitment has been paused and will be restarted after the pandemic, the trial duration will be prolonged. A two-stage adaptive design might then be considered for the clinical trial. An interim analysis evaluating the first-stage data, which include participants not affected by COVID-19, should guide the investigators in deciding whether it is worthwhile to restart recruitment after the pandemic and, if so, with which sample size. Nevertheless, as any unplanned interim analysis needs to protect the trial integrity (e.g., blinding) and validity (e.g., type I error rate), appropriate statistical methodology for testing and estimation at the end of the trial is an essential aspect. The adaptive design literature offers potential solutions to deal with these concerns in modified trial designs. This has also been recognized by Anker et al (2020) in the context of clinical trials in heart failure, a chronic condition. The manuscript is organized as follows. In Section 2, four ongoing clinical trials are introduced which are all impacted by the COVID-19 pandemic. These serve as examples and illustrate the many ways trials might be affected by the pandemic. In Section 3, general comments are made on how adaptive designs might be used to overcome the various challenges posed by the pandemic before the issue of resizing trials is considered in more detail in Section 4. Regulatory and operational issues, including the role of data monitoring committees or data safety monitoring boards, are considered in Section 5. In Section 6, we close with a brief discussion. Clinical trials are affected in many different ways by the COVID-19 pandemic. On the one hand, patients may get infected, leading to missed visits, missing data, or even COVID-19 related adverse events. On the other hand, the various lockdown and quarantine measures may disrupt the trial conduct: Patients may be unable to attend their scheduled visits or the study medication cannot be delivered to the patients as planned.
While these issues apply to all trials recruiting patients or collecting data during the pandemic, trials are affected quite differently depending on the stage a trial is in and also on its endpoint. One important point was still open at the time this paper was written: When and how should trials restart that have had their recruitment interrupted, or even the study treatment stopped, by the onset of the pandemic? The only thing that seems clear is that the conditions under which a trial is restarted will be very trial-specific and can be elaborated only provisionally at the end of this paper.
For our first example, consider a study to assess the contraceptive efficacy of a hormone-releasing intrauterine device (IUD) beyond 5 years and up to 8 years (Jensen et al, 2020). At the onset of the pandemic all participating women had had their IUD in place for more than 6 years, but only a few had already completed 8 years of treatment. The primary outcome of the trial is the contraceptive failure rate in years 6 to 8 measured by the Pearl Index (Gerlinger et al, 2003). The trial uses a treatment policy estimand, albeit the term estimand had not yet been coined when the contraceptive trial was conceived. COVID-19 related intercurrent events such as missed or postponed visits to the study center can be ignored for the primary analysis. There will be no interruption of study treatments, as the IUD has been in the woman's uterus for 5 years at the beginning of the trial and remains there for up to 8 years in total. Even if the pandemic lasts past the scheduled end of the trial, the primary outcome (pregnant yes/no) can still be ascertained even if a woman is not able to attend the final visit in person on time, although according to the statistical analysis plan the continued exposure to the IUD needs to be confirmed by the investigator. Nevertheless, the contraceptive failure rate observed over the whole trial may be impacted not only by a potential loss in confirmed exposure time but also by other COVID-19 related intercurrent events. For instance, a couple who usually commutes long-distance on weekends is not at risk of contraceptive failure during the lockdown if they observe the lockdown living apart, but they are possibly at a higher risk if they observe the lockdown living together. However, given the treatment policy estimand and the very low rate of contraceptive failure with an IUD (Mansour et al, 2010), these intercurrent events are not likely to be relevant for the interpretation of the trial's results. It should be noted that other endpoints of the trial may also be impacted by COVID-19 related intercurrent events. The regular safety assessments planned at the scheduled visits might be at least partially missing if women need to skip the physical visit. While details of adverse events can be obtained by phone, laboratory values will definitely be missing in such instances.
Our second example is the Subacromial spacers for Tears Affecting Rotator cuff Tendons: a Randomised, Efficient, Adaptive Clinical Trial in Surgery (START:REACTS), an adaptive design multi-center randomized controlled trial conducted in the United Kingdom comparing arthroscopic debridement with the In-Space balloon (Stryker, USA) to arthroscopic debridement alone for people with a symptomatic irreparable rotator cuff tear (Metcalfe et al, 2020).
Recruitment to the trial started in February 2018, with a planned sample size of 221 and the potential to stop the study for efficacy or futility at a number of interim analyses. The primary endpoint was shoulder function 12 months after surgery measured using the Constant Shoulder Score (CS) recorded at a hospital out-patient visit, with assessments taken at 3 and 6 months following surgery also used for interim decision-making (Parsons et al, 2019). Due to the coronavirus pandemic, recruitment to the study was delayed by the cancellation of elective surgery in UK hospitals. The study team are working closely with the Data Monitoring Committee in reviewing the planned timing of the interim analyses to reflect this, and the resulting change in the anticipated numbers of patients with 3, 6 and 12 month follow-up data at different time-points in the study. The pandemic also threatened to disrupt the collection of follow-up data for patients for whom surgery had already been completed, as even prior to lockdown many patients in the study, a large proportion of whom are in vulnerable groups, were unwilling to attend planned appointments for assessment. In order to be able to obtain follow-up data from as many patients as possible, the study team decided to change the primary endpoint to the 12 month measurement of the Oxford Shoulder Score (OSS), as this does not require face-to-face data collection but can be completed by post, over the phone, or via an app. As this had originally been included as a secondary endpoint in the study, data were available for all completed patients. The OSS is known to be well correlated with the CS, with the same minimum clinically important difference on a standardized scale, so that the power of the trial is maintained and, as the change was made prior to interim data being observed, there is no loss of trial integrity.
The ATALANTE 1 trial: Premature study discontinuation to avoid endangering sensitive patients during the COVID-19 pandemic
The ATALANTE 1 clinical trial (NCT02654587) aimed to evaluate and compare the medicinal product tedopi (OSE2101) to standard treatment (docetaxel or pemetrexed) as second- and third-line therapy in HLA-A2 positive patients with advanced non-small cell lung cancer (NSCLC) after failure of an immune checkpoint inhibitor. This clinical trial was planned in two stages: (1) a randomized controlled trial (RCT) on a small sample of patients estimating the overall survival rate at 12 months (with about 100 patients) and (2) an RCT comparing overall survival (with about 363 patients in total). After the first stage, 99 patients had been included (63 in the experimental arm and 36 in the standard arm); the overall survival rate at 12 months was 46% (95% confidence interval: 33% - 59%) in the experimental arm and 36% (95% confidence interval: 21% - 54%) in the standard arm (OSE Immunotherapeutics, 2020). The second stage of the study was supposed to include patients during 2020. However, this trial was stopped because of the COVID-19 pandemic. Indeed, as the patients were suffering from lung cancer, the DSMB decided that it was too risky to continue. They stated that it was impossible to expose patients suffering from lung cancer to COVID-19 infection; this could endanger them and might end up biasing the results of the trial (OSE Immunotherapeutics, 2020).
As the results of the first stage were promising, the trial stakeholders decided to consult the FDA and the EMA shortly, asking whether an additional clinical trial would be required, knowing that there are crucial treatment needs in this indication.
The CAPE-Covid and the CAPE-Cod (Community-Acquired Pneumonia: Evaluation of Corticosteroids) studies: Embedding a COVID-19 trial within an ongoing trial
Our fourth example is the CAPE-Cod trial (NCT02517489), which aims to assess the efficacy of hydrocortisone in ICU patients suffering from severe community-acquired pneumonia. At the beginning of the COVID-19 pandemic the trial was active and including patients. As SARS-CoV-2 pneumonia was not an exclusion criterion of CAPE-Cod, centers started to include COVID-19 infected patients into the study. As the clinical characteristics of the two indications differed, the trial stakeholders decided to temporarily put the inclusions in the CAPE-Cod study on hold and to use the information from COVID-19 patients by embedding a specific study considering the COVID-19 indication only. A group-sequential design using the alpha-spending approach by Kim-DeMets (Kim and DeMets, 1987a,b) was chosen for the COVID-19 substudy to account for the considerable uncertainty with regard to the treatment effect in this new group of patients. If the CAPE-Covid study does not achieve the required sample size or is not stopped (for efficacy or futility) before next autumn, patients will potentially be included into two studies, as community-acquired pneumonia is a seasonal disease and COVID-19 will still be present. Taking into account patients' heterogeneity will be a major methodological challenge for this trial.
3 How adaptive designs might be used to overcome COVID-19 challenges
In this section, guidance is provided on (i) where blinded adaptations can help; (ii) how to achieve type I error rate control, if required, with unblinded adaptations; (iii) how to deal with potential treatment-effect heterogeneity; (iv) how to utilize early read-outs; and (v) how to utilize Bayesian techniques. By blinded data we mean here, more generally, non-comparative data, i.e. data pooled across treatment arms (FDA, 2019). Although the trial could be open, the adaptation could be informed by blinded data in the sense that they are non-comparative. Generally speaking, potential inflation of the type I error rate is less of a concern when adaptations are informed by blinded data (EMA, 2007; FDA, 2019). Therefore, they might be considered first before looking into unblinded adaptations with knowledge of treatment effect estimates. In the introduction an outline of the potential challenges posed to clinical trials by the COVID-19 pandemic was provided. In order to assess the extent to which a trial is affected by these, blinded data may be interrogated with regard to the following: baseline patient characteristics; premature study or treatment discontinuations; missing data during follow-up; protocol violations; and nuisance parameters of the outcomes, including event rates and variances. The findings may be compared with the planning assumptions. Furthermore, time trends can be explored in the blinded data and any changes might be attributed to COVID-19 if these coincide with the onset of the pandemic (see e.g. Friede and Henderson (2003)). The findings of such blinded analyses might trigger investigations into resizing the trial. The resizing could be based on blinded or unblinded data; appropriate procedures will be considered in Section 4.
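As a simple illustration of such a blinded review, the following minimal R sketch recalculates the total sample size of a 1:1 randomized trial with a normally distributed endpoint from the pooled (blinded) variance, in the spirit of the procedures reviewed by Friede and Kieser (2013); the function name and all numbers are our own hypothetical choices.

```r
# Blinded sample size review: the pooled one-sample variance of the
# blinded interim data replaces the planning assumption, and the total
# sample size of a 1:1 randomized two-arm trial is recalculated.
blinded_ssr <- function(x, delta, alpha = 0.025, power = 0.90) {
  s2_blinded <- var(x)  # variance of the pooled data, arms unknown
  N <- 4 * s2_blinded * (qnorm(1 - alpha) + qnorm(power))^2 / delta^2
  ceiling(N)            # recalculated total sample size
}

set.seed(42)
x_interim <- rnorm(120, mean = 0, sd = 1.3)  # hypothetical blinded data
blinded_ssr(x_interim, delta = 0.5)
```

Note that the pooled variance slightly overestimates the within-group variance in the presence of a treatment effect; Friede and Kieser (2013) discuss this and possible adjustments.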
However, adaptations are not restricted to sample size reestimation but also include other adaptations such as changes in the statistical model or the test statistics to be used. For instance, observed changes in baseline characteristics might be reflected in the statistical model by including additional covariates. Similarly, findings regarding missing data, e.g. due to missed visits, might suggest adopting a more robust analysis approach. It is well known that repeated analyses of accumulating clinical trial data can lead to estimation bias and to inflation of the type I error rate (Armitage et al, 1969). For this reason there is generally a reluctance to modify the design of a clinical trial during its conduct, for fear that the scientific integrity will be compromised. The necessity of a severe pause in recruitment in many trials due to the current pandemic, however, raises the question of whether additional analyses can be added to an ongoing trial to enable the data obtained so far to be analysed now, with a decision on whether or not to continue with the trial at a later, post-COVID-19 time. Although the current situation of clinical trials being conducted in the setting of a global pandemic is without precedent, the particular question of adding interim analyses to a trial is not a new one. Proschan and Hunsberger (1995) introduced the concept of a conditional error function, specified prior to the first analysis of accumulating data, as a function that gives the conditional probability of a type I error given the stage 1 data, summarized by a standardized normal test statistic z_1. In order to control the type I error of the test at level α, the conditional error function A(z_1) must satisfy

∫ A(z_1) φ(z_1) dz_1 = α,

where φ is the standard normal density function. Wassmer (1998) and Müller and Schäfer (2001) showed how this approach can be used to change a single-stage trial to have a sequential design equivalent to that obtained using a group-sequential or combination function test. The conditional error principle thus enables a trial planned with a single final analysis to be modified at any point prior to that analysis to have a sequential design, constructed in such a way that the type I error rate is not inflated. It should be noted, however, that it is necessary to specify how any data before and after the interim analysis are combined before the first interim analysis is conducted. Modification of the design to include initially unplanned interim analyses will also generally lead to a reduction in the power of the trial, as considered in more detail below. A similar application of the conditional error principle can be used to modify a trial initially planned with interim analyses. For ongoing clinical trials initially planned with interim analyses, the impact of the COVID-19 pandemic may lead to a desire to modify the timing of the planned analyses. Analyses are often taken at times specified in terms of the information available, which may be proportional to the number of patients for a normally distributed endpoint, to the number of events for a time-to-event endpoint, or given by the number of events for a binary endpoint. Changes to the timing of the interim analyses do not generally lead to an inflation of the type I error rate provided these are not based on the observed treatment difference, and the spending function method (Lan and DeMets, 1983) can be used to modify the critical values to allow for such changes.
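To illustrate the conditional error principle, the following minimal R sketch computes the conditional error of an originally single-stage test at an unplanned interim look; the function name and numbers are ours, and the formula assumes that the final statistic of the original design decomposes into independent increments.

```r
# Conditional error of an originally single-stage one-sided level-alpha test,
# given the interim z-statistic z1 observed at information fraction tau.
# The remainder of the trial may then be redesigned, as long as the new
# second-stage test is carried out at this conditional level
# (conditional rejection probability principle, Mueller & Schaefer, 2001).
cond_error <- function(z1, tau, alpha = 0.025) {
  1 - pnorm((qnorm(1 - alpha) - sqrt(tau) * z1) / sqrt(1 - tau))
}

cond_error(z1 = 1.2, tau = 0.6)  # conditional level for the rest of the trial

# Sanity check: averaging over z1 ~ N(0,1) recovers alpha under the null
integrate(function(z) cond_error(z, tau = 0.6) * dnorm(z), -Inf, Inf)$value
```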
If the timing of interim analyses is based on the estimated treatment difference, the type I error rate of a group-sequential test can be inflated (see, for example, Proschan et al (1992)). The combination testing approach could be used to control the type I error rate in this setting (see, for example, Brannath et al (2002)). Homogeneity over the stages of a multistage design has always been a topic of discussion. Even without pandemic disruptions, there are various reasons why studies could change over time: Some sites may only contribute to part of the study, the study population may change over time, e.g. for reasons of a depleted patient pool, and the disease under study itself may vary over time. While many of these reasons also apply to fixed sample size designs, multistage and especially adaptive trials are under an obligation to deliver justifications of why the stages can be considered sufficiently homogeneous in order to test a common hypothesis. The EMA reflection paper on adaptive designs states that "Using an adaptive design implies [...] that methods for the assessment of homogeneity of results from different stages are pre-planned" (EMA, 2007). One option they give is the use of heterogeneity tests as known from the area of meta-analysis. However, as Friede and Henderson (2009) point out, this can reduce the power of studies substantially even in the case of no heterogeneity: Such tests are typically carried out at a higher significance level than the standard ones, thus accepting a higher false positive rate. A further suggestion they bring forward is to search for timewise cutpoints in the data. Conclusions about the relationship between the timing of change and the occurrence of interim analyses can then be drawn from the resulting findings. In the current COVID-19 situation, the challenge statisticians face is similar to the general challenge described above. The nature and the severity of the impact will very much depend on the actual situation of the trial and the disease under study. Consequently, the way to deal with them may differ, as described elsewhere in this paper. Here, we will focus on the question of whether the COVID-19-related changes are such that a rescue by introducing an adaptive design seems justifiable from the homogeneity aspect. There is one major difference to the situation described in the preceding paragraph: The presence of one or two cutpoints, depending on whether the trial will continue both during and after COVID-19, can be taken as a given. Also, the question of whether the changes are due to a possibly performed interim analysis or due to COVID-19 seems moot; the question we need to answer is whether a combination is justified. In some cases, it will be obvious that a combination is not warranted. One example of such a case could be studies in respiratory diseases with hospitalizations included in the endpoint, where a COVID-19 related hospitalization may be an intercurrent event. In other cases, it may not be that obvious and there might be reasons to believe that the pooled patient set is suitable to answer the study hypothesis. For the reasons listed above, again a formal heterogeneity test will not be the tool of choice. The EMA Draft Points to Consider on COVID-19 (EMA, 2020b) does not make mention of a burden of proving homogeneity; rather, it states the need for "additional analyses [...] to investigate the impact of the three phases [...] to understand the treatment effect as estimated in the trial".
While this does not give sponsors carte blanche to combine as they wish, it clearly leaves room for a number of approaches to justifying a combination, both from a numerical and a medical perspective. The estimand framework will be an important factor in the decision on pooling or not pooling the data, as it will make arguments visible in a structured way: If estimands differ between study parts, then no meaningful estimator for them will be obtained from pooled data (see also Section 5). What can statistical methodology contribute if it must be conceded that pooling the patients is not justifiable? In some situations, the number of patients before the COVID-19 impact may already be sufficient to provide reasonable power (see Section 4.1). In this case, the patients in the COVID-19 timeframe would also need to be analyzed, but it is unclear how they might be included. General guidance for such patients is to repeat the analysis including all patients and to discuss changes in the treatment effect estimate. Medical argumentation will then be needed to underpin the assumption that changes are due to COVID-19. In some cases, causal inference can help estimate outcomes for those patients under the assumption that COVID-19 had not happened. If interested in the treatment effect in a pandemic-free world, it might be worth clarifying the question of interest by relating it to the estimand framework (ICH, 2019), where COVID-19 is seen as an intercurrent event. Alternatively, one could standardize results from all patients to the subgroup of patients pre-COVID-19 (e.g., Shu and Tan (2018) and Hernan and Robins (2020)). Sometimes an artificial censoring at the time of the COVID-19 impact and the use of short-term information (see Section 3.4) to estimate final outcomes will also provide a helpful sensitivity analysis. If it is not feasible to gain sufficient evidence from the pre-COVID-19 patients and a combination does not seem justifiable, then it may be advisable to pause the trial and to re-start it after the COVID-19 time. The during-COVID-19 patients should be included in supporting analyses, but the main evidence will come from the patient pool not directly affected by the pandemic (see also Anker et al (2020)). Short-term endpoints from during-COVID-19 patients may be used in addition to completed patients to inform decisions on the future sample size. The efficiency of adaptive designs depends on the timing and frequency of interim analyses. Decision making in adaptive study designs requires the availability of information on the endpoints utilized for decision making for a sufficient number of patients. Therefore, care should be taken when conducting an unplanned (or early) interim analysis in trials affected by COVID-19. Focusing on the primary endpoint data only will routinely exclude the many individuals for whom the primary endpoint is not available and might therefore potentially be misleading. In particular, this is relevant to studies interrupted by COVID-19, as investigators may already wish to assess the treatment effect using the pre-pandemic data. Several proposals have been made to use the information on early read-outs to inform the adaptation decision (e.g. Friede et al, 2011; Rufibach et al, 2016; Jörgens et al, 2019). Although this information is different from the primary outcome, with all the limitations that this might have (Zackin et al, 1998), a greater proportion of subjects can contribute to the analysis.
This is especially useful in trials where only information about the short-term endpoint would be available at the interim analysis (Friede et al, 2011). If primary endpoint data are available, another approach is to retain the pre-specified long-term endpoint as the primary focus of the interim analysis, but to support it with information on short-term data. In particular, such methodology exploits the possible statistical association between the short- and long-term endpoints to provide information about the long-term primary endpoint for patients who have not yet reached their primary endpoint. This provides an efficient compromise between the less efficient approach of using only information on subjects who have been followed through to the long-term primary endpoint, and the potentially misleading approach of using only short-term information. A likelihood-based estimator for binary outcomes that combines short- and long-term data assessed at two timepoints was discussed by Marschner and Becker (2001). They use a likelihood function that depends on three binomial distributions, which model the probability of success at both timepoints as well as the conditional probability of success at the last timepoint given the short-term endpoint. The maximum likelihood estimator for the probability of a successful primary endpoint in each treatment arm is

p̂_j = P̂(S_j = 1) P̂(L_j = 1 | S_j = 1) + P̂(S_j = 0) P̂(L_j = 1 | S_j = 0),

where L_j ∈ {0, 1} and S_j ∈ {0, 1} denote the outcomes observed in the trial on the long-term primary endpoint and the short-term endpoint, respectively, in treatment arm j ∈ {0, 1}. Implementing their estimator for the treatment difference in the conditional power approach improves decision making at an interim analysis both for futility stopping and for sample size re-assessment (Niewczas et al, 2019). To preserve the type I error, the combination test with final test statistic w_1 z_1 + w_2 z_2, with pre-specified weights satisfying w_1² + w_2² = 1 (see Section 3.2), can be used. Here, the first-stage test statistic z_1 is defined by the cohort of patients included before the interim analysis, but the data used come from the primary endpoint. For patients who were included before the COVID-19 impact but for whom the primary endpoint was observed after that impact, this would mean that their primary endpoints would need to be analysed together with those occurring before the impact. A disadvantage of this test procedure is that it prohibits early stopping for efficacy. In situations where it is not feasible to obtain sufficient information from the pre-COVID-19 patients, it may be advisable to support the interim analysis with historical data (Van Lancker et al, 2019). Sooriyarachchi et al (2006) proposed a score test for incorporating binary assessments taken at three fixed time points. Galbraith and Marschner (2003) expanded the likelihood-based approach described in Marschner and Becker (2001) to continuous endpoints assessed at an arbitrary number of follow-up times, with a proposal to extend the approach to group-sequential designs or conditional power approaches. This approach was generalized by Hampson and Jennison (2013) to a group-sequential design with delayed responses. A shortcoming of these approaches is that they do not take into account baseline covariates in order to make fully efficient use of the information in the data. In addition, the incorporation of baseline covariates and longitudinal measurements of the clinical outcome is likely to involve a heavier dependence on statistical modelling, which raises concerns that model misspecification may result in bias.
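A small numerical sketch of this estimator for a single treatment arm, with hypothetical data: patients with the long-term endpoint observed contribute to the conditional success probabilities, while all patients with the short-term endpoint observed contribute to the estimate of P(S = 1).

```r
# Likelihood-based estimate of P(L = 1) combining short- and long-term
# binary endpoints, in the spirit of Marschner & Becker (2001), one arm.
# s: 0/1 short-term endpoint; l: 0/1 long-term endpoint, NA if unobserved.
p_long <- function(s, l) {
  p_s  <- mean(s)                        # P(S = 1), all patients with S
  p_l1 <- mean(l[s == 1], na.rm = TRUE)  # P(L = 1 | S = 1), completers
  p_l0 <- mean(l[s == 0], na.rm = TRUE)  # P(L = 1 | S = 0), completers
  p_s * p_l1 + (1 - p_s) * p_l0          # ML estimate of P(L = 1)
}

set.seed(1)
s <- rbinom(80, 1, 0.55)                               # hypothetical data
l <- ifelse(runif(80) < 0.6, rbinom(80, 1, 0.3 + 0.4 * s), NA)
p_long(s, l)
```

As noted above, baseline covariates do not enter this estimate.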
This problem is overcome by Van Lancker et al (2020). They propose an interim procedure that is applicable to binary and continuous outcomes assessed at an arbitrary number of follow-up times and that allows the incorporation of baseline covariates in order to increase efficiency. They realise this by considering the estimation of the treatment effect at the time of the interim analysis as a missing data problem. This problem is addressed by employing specific prediction models for the long-term endpoint which allow, besides multiple short-term endpoints, the incorporation of baseline covariate information. When measurements are taken at three different timepoints (baseline, intermediate and final), the probability of a successful primary endpoint in each treatment arm is estimated by
1. fitting a model for L_j on S_j and the baseline covariates in the cohort of patients for whom all data are available, and using the model to make predictions L̂_j for all patients for whom S_j is observed,
2. fitting a model for L̂_j on the baseline covariates in the cohort of patients for whom S_j is observed,
3. taking the mean of the predictions based on the model in Step 2 for all patients recruited at the time of the interim analysis.
A code sketch of this procedure is given at the end of this section. Implementing their estimator for the treatment difference in the conditional power approach improves decision making at an interim analysis both for futility stopping and sample size re-assessment without compromising the type I error rate, even if the prediction models are misspecified. The method differs from the previous ones in that the proposed interim test statistic, denoted by z_τ, is used as the first-stage test statistic z_1. This is allowed as they provide a second-stage test statistic, (z − √τ z_τ)/√(1 − τ), that is independent of the proposed interim test statistic z_τ, where z is the final test statistic based on primary endpoint data only and τ is the information fraction at the time of the interim analysis. This allows for statistical hypothesis testing at the interim analysis. However, the type I error will be compromised if information other than the current test statistic is used for interim decisions. In that case, the type I error can be preserved by defining the first-stage p-value by the cohort of patients included before the interim analysis (e.g. Friede et al, 2011; Jenkins et al, 2011). Similarly, care has to be taken when applying flexible study designs to time-to-event data (Brückner et al, 2018). When data are separated into stages by the occurrence of the primary event, the type I error will be compromised if information other than the current logrank test statistic is used for interim decisions (Bauer and Posch, 2004). If short-term endpoints are to be used, Jenkins et al (2011) proposed to base the separation on patients instead of on events. Similarly to other endpoints, this would mean that the primary events of patients who were included before the COVID-19 impact but experienced their primary event only after that impact would need to be analysed together with those occurring before the impact. Depending on the actual impact, it may be appropriate either to use these patients as a separate cohort (in which case their short-term endpoint should not be used for decision making) or to artificially censor them at the impact timepoint and use their complete data for supplemental analyses only.
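The three-step procedure above can be sketched as follows for a binary long-term endpoint L, a single short-term endpoint S and one baseline covariate x, within one treatment arm; the logistic and linear working models, the function name and all data are our own hypothetical choices.

```r
# Sketch of the three-step interim estimator of Van Lancker et al (2020).
# compl: patients with L observed; short: patients with S observed.
p_interim <- function(dat) {
  compl <- subset(dat, !is.na(L))
  short <- subset(dat, !is.na(S))
  # Step 1: model L given S and covariates, predict for all with S observed
  m1 <- glm(L ~ S + x, family = binomial, data = compl)
  short$Lhat <- predict(m1, newdata = short, type = "response")
  # Step 2: regress the predictions on the baseline covariates
  m2 <- lm(Lhat ~ x, data = short)
  # Step 3: average the Step-2 predictions over all recruited patients
  mean(predict(m2, newdata = dat))
}

set.seed(2)
n <- 150
x <- rnorm(n)
S <- rbinom(n, 1, plogis(0.2 + 0.5 * x))
L <- rbinom(n, 1, plogis(-0.5 + 1.0 * S + 0.3 * x))
S[101:150] <- NA   # recruited, but no follow-up yet
L[61:150]  <- NA   # long-term endpoint observed for 60 patients only
dat <- data.frame(x, S, L)
p_interim(dat)     # interim estimate of P(L = 1) in this arm
```

The robustness claim above refers to the validity of the interim decision procedure, not to this particular choice of working models.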
In addition, the described methodology is also appropriate for interim analyses that take into account post-COVID-19 data if one can assume that the missingness mechanism due to COVID-19-related drop-out is completely at random. In the Appendix of Van Lancker et al (2020) an extension of their method that allows the weaker assumption that missingness is at random is discussed. An alternative for the other methods is to consider more detailed informative missingness models (e.g., via multiple imputation (Sterne et al, 2009)). Inter-patient heterogeneity as well as intra-patient heterogeneity are both very common in clinical trials. With COVID-19, however, several types of heterogeneity might add to the usual level of variability. These include (1) patients infected or not by COVID-19, and especially the variability of COVID-19 incidence across countries in international multi-center trials; (2) patients' outcomes (e.g., in cancer studies, whether the observed mortality is due to the disease or to an immunosuppressed system); (3) patients' follow-up; and (4) patients' compliance due to missed treatments. One way of considering these types of heterogeneity is to use Bayesian approaches during the trial, if possible at all, or at its end. Using hierarchical Bayesian methods together with Bayesian evidence synthesis methods allows accounting for different types of heterogeneity (Friede et al, 2017; Röver et al, 2016; Thall and Wathen, 2008). These approaches take into account the uncertainty in estimating the between-trial or subgroup heterogeneity, but they can also be used in the setting of within-trial heterogeneity. Varying the scale parameter of the heterogeneity prior would facilitate sensitivity analyses. Friede et al (2017) proposed Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values. In the setting of within-trial heterogeneity, prior calibration for each source of heterogeneity is of utmost interest; indeed, one should not be limited to methods accounting for only one source of heterogeneity, as more than one type can be present. Let φ be the within-trial standard deviation; it determines the degree of heterogeneity across patients included either before or after the onset of the COVID-19 pandemic, or across patients infected or not by COVID-19 (or any other COVID-19-related source of heterogeneity). Let µ be the parameter of interest. Under Bayesian inference, the uncertainty about φ is automatically accounted for, and inference for µ and φ can be captured by the joint posterior distribution of the two parameters. The key point is the choice of the prior distribution of φ, in particular when subgroups are small or unbalanced. In the absence of relevant external data or information about within-trial heterogeneity, the 95% prior interval of φ should capture small to large heterogeneity. Moreover, the use of a Bayesian approach entails the question of what constitutes sensible prior information in the context of COVID-19, in which there is a continual updating of information that is still not considered reliable. This may be argued on the basis of the endpoint in question, that is, what is the plausible amount of heterogeneity expected, what constitutes relevant external data, and how this information may be utilized. A relatively simple solution would be the use of weakly informative priors.
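As an illustration, the following sketch synthesizes hypothetical treatment effect estimates from a pre-pandemic and a during-pandemic stratum in a random-effects model with a weakly informative half-normal prior on the heterogeneity parameter, in the spirit of Friede et al (2017), using the bayesmeta package (Röver et al, 2016); all estimates and the prior scale are hypothetical.

```r
# Random-effects synthesis of pre- and during-pandemic strata with a
# weakly informative half-normal prior on the heterogeneity parameter.
library(bayesmeta)

y     <- c(0.32, 0.18)   # hypothetical stratum-wise effect estimates
sigma <- c(0.10, 0.15)   # their standard errors
fit <- bayesmeta(y = y, sigma = sigma,
                 labels = c("pre-COVID-19", "during-COVID-19"),
                 mu.prior.mean = 0, mu.prior.sd = 1,
                 tau.prior = function(t) dhalfnormal(t, scale = 0.5))
fit$summary              # joint inference for the effect and heterogeneity
```

Repeating the fit with, say, scale = 0.25 or scale = 1 for the heterogeneity prior provides the kind of sensitivity analysis mentioned above.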
For priors of effect parameters, adaptive priors using power or commensurate prior approaches have proved to be efficient in determining if, when and how to incorporate external information (Hobbs et al, 2012; Ollier et al, 2019).
In the following we develop approaches to resizing a clinical trial affected by the COVID-19 pandemic. Specifically, we consider stopping a trial early, the use of group-sequential designs, and sample size adjustment. All methods are implemented in a freely available R Shiny app, which is briefly introduced in Section 4.6. In case data collection is nearly finished at the time of the COVID-19 impact, a natural question that comes to mind is whether one should analyze the trial early based on the data collected so far, accepting some loss in power. In the following, we focus on superiority trials comparing a treatment versus placebo (or standard treatment) with allocation ratio 1 : r for placebo versus treatment. Let α denote the one-sided significance level and 1 − β the desired power at the planning stage of the trial. Assume that the endpoint of interest follows a normal distribution. Let δ denote the assumed difference between the means under the alternative hypothesis and let σ² denote the common variance for both arms. The total sample size N needed to achieve a desired power of 1 − β is then given by

N = (z_{1−α} + z_{1−β})² σ² (r + 1)² / (δ² r),    (1)

with z_{1−α} and z_{1−β} denoting the (1 − α)- and (1 − β)-quantiles of the standard normal distribution. We assume that the trial was originally planned to be analyzed based on N observed patients but so far has only data available for n = τN patients. Solving Equation (1) for 1 − β and replacing N with n yields the power if the trial is analyzed early based on the data observed:

1 − β(n) = Φ( δ √(rn) / (σ (r + 1)) − z_{1−α} ),    (2)

where Φ(·) denotes the cumulative distribution function of the standard normal distribution. However, as n = τN = τ (z_{1−α} + z_{1−β})² σ² (r + 1)² / (δ² r), we can show that the resulting power can be rewritten as

1 − β(τ) = Φ( √τ (z_{1−α} + z_{1−β}) − z_{1−α} ),    (3)

which is independent of the allocation ratio r, the difference between the means δ, and the variance σ². Instead, the power based on the n available patients only depends on the fraction τ as well as the significance level α and the desired power 1 − β at the planning stage. Resulting values of the power depending on the information fraction τ are shown in Figure 2 (black dotted line) and Table 1. For a desired power of 1 − β = 0.80, if data are available for about 80% (τ = 0.80) of the planned patients, the absolute loss in power for the fixed design is about 10 percentage points, while for a planned power of 1 − β = 0.90 the loss in power is about 7 percentage points.
In Section 3.1 we touched upon blinded adaptations. Here we make some comments specifically on blinded sample size reestimation. Blinded sample size reestimation procedures are well established to account for misspecifications of nuisance parameters in the planning phase of a trial (Friede and Kieser, 2013). In the situation considered here, namely the impact of the COVID-19 pandemic, a number of circumstances might make a resizing of the trial necessary and, as discussed, could be addressed in a blinded sample size review. In particular, censoring of follow-up might make it necessary to reassess the sample size and the length of follow-up. This would be of particular importance in long-running trials, which are particularly prevalent in chronic conditions. In the context of heart failure trials, Anker et al (2020) suggested censoring observations due to regional COVID-19 outbreaks. Such actions would imply a resizing of the trial, potentially in terms of the number of patients recruited and the length of follow-up, to maintain previously set timelines or timelines revised in the light of the pandemic (Friede et al, 2019).
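Before turning to group-sequential options, the calculation in Section 4.1 can be made concrete in a few lines of R; the function name and the example values are ours.

```r
# Power of the fixed design analyzed early with a fraction tau of the
# planned data (Equation (3)); it depends only on alpha, the planned
# power and tau.
power_early <- function(tau, alpha = 0.025, power_planned = 0.90) {
  pnorm(sqrt(tau) * (qnorm(1 - alpha) + qnorm(power_planned)) -
        qnorm(1 - alpha))
}

power_early(tau = 0.85)                       # approximately 0.848
power_early(tau = c(0.7, 0.8, 0.9), power_planned = 0.80)
```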
Assume that a fixed design was planned with N patients to be analyzed, but only n patients have been observed so far. The question is now whether we can change to a group-sequential design (GSD) using the n patients for the first stage while the final sample size is still N. That is, the trial is analyzed as a group-sequential design using the total sample size from the fixed design. The difference between Section 4.1 and the situation here is that we adjust the critical values to allow for two tests of the null hypothesis. Let c_1 and c_2 denote the critical values for the two-stage design and let τ = n/N denote the fraction of data being used for the first stage. The variance-covariance matrix of the two test statistics for the first stage and the final analysis is given by

Σ = ( 1  √τ ; √τ  1 ).    (4)

Using Φ to denote the cumulative distribution function of the bivariate standard normal distribution, the type I error rate is then given by

α = 1 − Φ( (c_1, c_2); 0, Σ )    (5)

and the power is given by

1 − β = 1 − Φ( (c_1 − √τ θ, c_2 − θ); 0, Σ ),    where θ = δ √(rN) / (σ (r + 1)).    (6)

Replacing N in Equation (6) by the right-hand side of Equation (1) yields

1 − β = 1 − Φ( (c_1 − √τ (z_{1−α} + z_{1−β}), c_2 − (z_{1−α} + z_{1−β})); 0, Σ ).    (7)

As in Section 4.1, the resulting power only depends on the values of the significance level α, the desired power 1 − β at the planning stage, and the fraction of data available at interim, τ. It also depends on the critical values chosen to control the type I error rate. The top row of Figure 2 shows the resulting power depending on the information fraction τ for a planned desired power of either 1 − β = 0.80 (left-hand panel) or 1 − β = 0.90 (right-hand panel). Table 1 lists the resulting power for some values of τ for a desired power of either 80% or 90%. The first column gives the value of τ, columns 2 to 6 give the resulting power for the fixed design as well as for both stages of the Pocock and the O'Brien-Fleming designs for a desired power of 1 − β = 0.80, and columns 7 to 11 give the achieved power for a desired power of 1 − β = 0.90. The first set of lines assumes that there is no dilution effect (see Section 4.4) for patients enrolled into the trial after the COVID-19 outbreak, while the second set assumes a dilution effect of η = 0.10. For example, if 80% of the planned data have been collected before the COVID-19 outbreak, the resulting power for a fixed design is 0.707 if the planned power is 1 − β = 0.80. Using a Pocock GSD, the power for the first stage is 0.653 while the overall power at the end of the second stage is 0.78. For the O'Brien-Fleming GSD, the power for the first stage is 0.597, while the overall power is 0.792.
Another likely scenario is that, due to the COVID-19 outbreak, the response to the treatment, and maybe even the response to the control treatment, has changed. Let µ_c0 and σ²_c0 denote the mean and the variance for the control group before the outbreak and let µ_c1 and σ²_c1 denote the mean and variance for the control group after the outbreak. Analogously, the means and variances for the treatment group before and after the outbreak are denoted by µ_t0, µ_t1, σ²_t0, and σ²_t1. Let δ = µ_t0 − µ_c0 denote the treatment effect before the outbreak started. The difference between the means after the outbreak started can then be expressed as a fraction of the difference before the outbreak started, i.e. µ_t1 − µ_c1 = (1 − η)δ. In the following, η will be called the dilution effect.
The patients are randomized to control and treatment in a 1 : r ratio. Assume that at the time of the outbreak n = τN patients have been enrolled into the trial and that it is planned to enroll a total of N patients. Let t_0 denote the test statistic based only on the patients enrolled before the outbreak and let t_1 denote the test statistic based only on the patients enrolled after the outbreak. Furthermore, let t denote the test statistic based on all enrolled patients. With respect to the change of the means after the outbreak, two possible definitions can be thought of:
1. relative change of means: µ_c1 = (1 − η_c)µ_c0 and µ_t1 = (1 − η_t)µ_t0,
2. absolute change of means: µ_c1 = µ_c0 − ε_c and µ_t1 = µ_t0 − ε_t.
Using ε_c = η_c µ_c0 and ε_t = η_t µ_t0 (or alternatively, η_c = ε_c/µ_c0 and η_t = ε_t/µ_t0), it can be shown that both approaches can be converted into one another. For the variances, we only consider a relative change of the variance and define σ²_c1 = ψ_c σ²_c0 and σ²_t1 = ψ_t σ²_t0. A common assumption is that the variances for the treatment and the control group are the same. Here, we consider the case σ²_t0 = σ²_c0 = σ²_0 = σ² and σ²_t1 = σ²_c1 = σ²_1 with ψ_t = ψ_c = ψ and σ²_1 = ψσ²_0. That is, we assume equal variances for the two arms but not necessarily equal variances before and after the outbreak. The joint distribution of t_0, t_1, and t is then given by

(t_0, t_1, t)' ~ N₃( µ*, Σ* ),    (8)

with mean vector

µ* = ( δ √(rτN) / (σ(r + 1)), (1 − η) δ √(r(1 − τ)N) / (√ψ σ(r + 1)), (1 − (1 − τ)η) δ √(rN) / (σ(r + 1) √(τ + ψ(1 − τ))) )'

and correlations Corr(t_0, t_1) = 0, Corr(t_0, t) = √(τ / (τ + ψ(1 − τ))), and Corr(t_1, t) = √(ψ(1 − τ) / (τ + ψ(1 − τ))). The general solution for the joint distribution can be found in Appendix A.1. As before, we assume that the original sample size was planned using a one-sided significance level of α to achieve a desired power of 1 − β based on Equation (1). Replacing N with (z_{1−α} + z_{1−β})² σ² (r + 1)² / (δ² r) yields

µ* = ( √τ (z_{1−α} + z_{1−β}), (1 − η) √((1 − τ)/ψ) (z_{1−α} + z_{1−β}), (1 − (1 − τ)η)(z_{1−α} + z_{1−β}) / √(τ + ψ(1 − τ)) )'.

By setting ψ = 1 (assuming equal variances before and after the outbreak), the mean vector reduces further to

µ* = ( √τ (z_{1−α} + z_{1−β}), (1 − η) √(1 − τ) (z_{1−α} + z_{1−β}), (1 − (1 − τ)η)(z_{1−α} + z_{1−β}) )',

with Corr(t_0, t) = √τ and Corr(t_1, t) = √(1 − τ). As shown in Sections 4.1 and 4.3, the resulting distribution depends on the values of the significance level α, the desired power 1 − β, and the fraction of data available before the outbreak, τ. In the case considered here, the only additional variable is the dilution effect η. Figure 2 shows the resulting values of the power depending on the information fraction τ for dilution effects of η = 0, η = 0.10, and η = 0.20. The upper two plots show the achieved power for a dilution effect of η = 0 (see Section 4.4), the middle plots show the power for a dilution effect of η = 0.1, and the bottom plots for a dilution effect of η = 0.5. The black dotted line gives the resulting values of the power for the fixed design if analyzed early, the black lines give the resulting power for the Pocock design for the first stage (dashed line) and overall (solid line), and the gray lines give the resulting power for the O'Brien-Fleming design for the first stage (dashed line) and overall (solid line). It should be noted that the power for the fixed design as well as the power for the first stage of the GSDs do not change, as these analyses only use first-stage data collected before the outbreak. In conclusion, if at least 85% of the data are available and no considerable dilution effect is expected, then the recommendation would be to stop the trial immediately. In all other scenarios, the consequences of any decision would need to be explored carefully using the approaches developed. As we will see in Section 4.6 below, these are implemented in an R Shiny app to support this process.
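The following sketch shows how such power values can be computed with the mvtnorm package; the constant (Pocock-type) boundary calibrated to the unequal spacing is our own simple choice, so the numbers are illustrative rather than a re-implementation of Table 1.

```r
# Overall power of a two-look design with the interim at information
# fraction tau and a dilution effect eta on the post-outbreak patients
# (equal variances before and after the outbreak, psi = 1).
library(mvtnorm)

gsd_power <- function(tau, eta, c1, c2, alpha = 0.025, power_planned = 0.90) {
  drift <- qnorm(1 - alpha) + qnorm(power_planned)  # planned noncentrality
  m1 <- sqrt(tau) * drift                 # interim: pre-outbreak data only
  m2 <- (1 - (1 - tau) * eta) * drift     # final: diluted overall effect
  corr <- matrix(c(1, sqrt(tau), sqrt(tau), 1), 2)
  # reject at either look: 1 - P(no rejection at both looks)
  1 - pmvnorm(upper = c(c1, c2), mean = c(m1, m2), corr = corr)[1]
}

# Constant (Pocock-type) boundary calibrated to the unequal spacing
pocock_c <- function(tau, alpha = 0.025) {
  corr <- matrix(c(1, sqrt(tau), sqrt(tau), 1), 2)
  uniroot(function(c) 1 - pmvnorm(upper = c(c, c), corr = corr)[1] - alpha,
          c(qnorm(1 - alpha), 4))$root
}

c0 <- pocock_c(tau = 0.8)
gsd_power(tau = 0.8, eta = 0.1, c1 = c0, c2 = c0)
```

Setting eta = 0 recovers the undiluted group-sequential power of Section 4.3.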
As shown in Section 4.4, some loss in power is to be expected if the means and variances for the treatment and control arm change due to the outbreak. In order to regain the desired power of 1 − β, the sample size would need to be adjusted. For a fixed design that is analyzed only once, i.e. after all data have been collected from all patients enrolled before and after the outbreak, the sample size for patients enrolled after the outbreak can be calculated as shown below. Let n_0 denote the number of patients already enrolled into the trial before the outbreak and let ñ_1 denote the number to be enrolled after the outbreak started. We wish to determine ñ_1 so that the power based on a total of Ñ = n_0 + ñ_1 enrolled patients is 1 − β. Based on Equation (8), we know that the final test statistic t follows a normal distribution with mean

(1 − (1 − ξ)η) δ √(r(n_0 + ñ_1)) / (σ(r + 1) √(ξ + ψ(1 − ξ))),    where ξ = n_0/(n_0 + ñ_1),

and unit variance. Setting this mean equal to z_{1−α} + z_{1−β} and solving for ñ_1 yields the required number of post-outbreak patients; the derivations can be found in Appendix A.2. In order to find the sample size for the second part of the trial for a GSD, a search algorithm based on Equation (8) has to be used. Please note that the dilution effect η cannot be estimated from the data but needs to be hypothesized. Of course, sensitivity analyses can be conducted based on different assumptions. The R Shiny app introduced in the next section was devised to support such processes.
To facilitate the implementation of the proposed methods, an R Shiny app was developed as a simple-to-use web-based application. It provides insights into the power properties on the fly, given user-defined input of design parameters. Specifically, it has a module for the calculations shown in Section 4.1 to answer the following question: If a trial was designed for 90% power for an assumed treatment effect at a significance level α = 0.025, what is the power if we conduct the analysis with only 85% of the patient data? Following (3), the app provides the power (84.8%) and a plot for different proportions of data available, in addition to 85%. A screenshot of the app is provided in Figure 3.
Figure 3: Screenshot of the R Shiny app.
The app was originally designed to facilitate the discussion by Akacha et al (2020), where the same calculation as in (3) was independently developed. The app has been expanded to implement the group-sequential design with an interim analysis conducted with the data available and a final analysis when the planned data are obtained (see Section 4.3). Two popular group-sequential designs are considered, namely the Pocock and the O'Brien-Fleming schemes. In addition, the incorporation of dilution effects allows for more general considerations, as demonstrated in Section 4.4. Outputs similar to those displayed in Figure 3 are provided by the app for the various scenarios considered above. The app can be accessed at https://power-implications.shinyapps.io/prod/ and comes with a help tab that contains more information about its usage.
The COVID-19 pandemic affects all clinical trials, with implications for studies intended for drug regulation well beyond statistical aspects (EMA, 2020a,b; FDA, 2020). For example, on-site monitoring of most trials is suspended during the lockdown, and with the interdiction of non-essential travel the recording of adverse events might not be as good if a site visit is replaced by a telephone consultation or a local laboratory is used instead of the central laboratory.
Similarly, the mode of administration of a patient-reported outcomes questionnaire might have been changed from an electronic collection at the site on a tablet computer to a paper-based version mailed to the patient's home. All these examples may lead to a reduced quality of the trial data, which may need to be taken into consideration when interpreting the trial. As much remains to be learned about the COVID-19 disease manifestations, treatments and pandemic distribution, it appears necessary to monitor the status and integrity of the trial on an ongoing basis. However, it may not be clear in some situations how this can be done in a way that protects the integrity of trial conduct. Care has to be taken that the original responsibilities of a DMC are not expanded beyond reasonable limits. Many of the responsibilities arising during the pandemic might more naturally seem to belong to trial management personnel, as the associated issues can often be addressed adequately without access to unblinded data; this might involve sponsor personnel, steering committees, etc. If important decisions are advised by unblinded results, then of course this should be done through a DMC. But many other decisions may not require unblinded access. Some, including initiating a sample size re-assessment or updating a study's final statistical analysis plan (SAP), could be very problematic in terms of validly interpreting final analysis results if initiated by a party with access to unblinded interim results such as a DMC. In current practice, and supported by prior regulatory guidance, such decisions are generally initiated by parties remaining blinded. Of course the DMC should be kept fully aware of any changes implemented in a trial, and should comment if they have any concerns. But for actions taken based upon blinded data, there are generally no confidentiality concerns, and sponsors can enlist any experts who can help arrive at the best decisions. Establishing a qualified DMC when one was not previously felt to be needed can be challenging and time consuming during the pandemic. Attempting to ensure that DMC members have a full understanding of all relevant background for the important tasks they will be assigned, compared to trial personnel or steering committee members who will already have such perspective, could be risky. Thus, if an unblinded DMC is felt necessary to be established, given the challenges of identifying and implementing such a group quickly, an internal firewalled group might be considered as an option in some cases. COVID-19 affects ongoing clinical trials in many different ways, which in turn affects many aspects of statistical inference; these are best described in the estimand framework laid out by ICH (2019). An estimand provides a precise description of the treatment effect reflecting the clinical question posed by the trial objective. It summarizes at a population level what the outcomes would be in the same patients under the different treatment conditions being compared. Central to the estimand framework introduced in ICH (2019) are intercurrent events, i.e. events occurring after treatment initiation that affect either the interpretation or the existence of the measurements associated with the clinical question of interest. Generally, the intercurrent events due to COVID-19 can be categorized into those that are more of an administrative or operational nature (e.g.
treatment discontinuation due to drug supply issues), and those that are more directly related to the effect of COVID-19 on the health status of subjects (e.g. treatment discontinuation due to COVID-19 symptoms); see Akacha et al (2020). However, the additional intercurrent events introduce ambiguity into the original research question, and teams need to discuss how to account for them (Akacha et al, 2017a,b; Qu and Lipkovich, 2020). Care has to be taken when employing adaptive design methodology to combine, for example, the information before and after the COVID-19 outbreak. When each stage of an adaptive design is based on a different estimand, the interpretability of the statistical inference may be hampered. If, for example, the pandemic markedly impacts the trial population after the outbreak because the elderly and those with underlying conditions such as asthma, diabetes etc. are at higher risk and therefore excluded from the trial, then this would lead to different stagewise estimands (due to the different population attributes) and limit the overall trial interpretation. The situation is different in adaptive designs with a preplanned selection of a population at an interim analysis (as this does not change the estimand), when following the usual recommendations for an adequately planned trial (which include the need to pre-specify the envisaged adaptation in the study protocol). Care has also to be taken if the pattern of intercurrent events is different before and after an interim analysis, in line with the usual recommendations to assess consistency across trial stages in an adaptive design. Generally speaking, as the definition of an adaptive design implies that we are considering a trial design, it needs to be aligned to the estimands that reflect the trial objectives according to ICH (2019). The considerations in the previous paragraph are closely related to the trial homogeneity issues discussed in Section 3.3. One particular concern is a possible shift in the study population after the onset of the pandemic. At present we see a notable decline in hospital admissions for non-COVID-19 related diseases. It can be assumed that patients with less severe problems tend to postpone a hospital stay for fear of an infection in the hospital or to avoid putting stress on the already overloaded health system in some countries. Although standard trial procedures like randomization ensure the validity of the statistical hypothesis test, it is unclear which population's treatment effect is actually being estimated. The challenges imposed by the pandemic will lead to difficulties in meeting protocol-specified procedures in many instances, thus creating the need to change aspects of ongoing trials. It is then important to be mindful of the fact that pre-specification of the study protocol and the SAP is the cornerstone for avoiding operational bias in any clinical trial. Although the ICH (1998) guideline allows changing the SAP even shortly before unblinding a trial, this is often viewed as critical by stakeholders. Changing the characteristics of a trial based on unblinded trial data always requires appropriate measures to control the type I error rate, whereas changes triggered by external data are often viewed more leniently. In the case of changes to the conduct and/or analysis of a trial caused by the pandemic, it is reasonable to assume that such changes are not triggered by knowledge gained from the ongoing trial.
Still, changes will have to be pre-specified and documented, as appropriate. It is recommended to pre-specify in the statistical analysis plan the key analyses important for interpreting the objectives of the trial, in particular analyses related to the inferential testing strategy. Therefore, we suggest considering first whether different analyses are needed for the primary or key secondary objectives. Other analyses of a more exploratory character can be included in a separate exploratory analysis plan. If any impact is detected that warrants additional analyses in the clinical study report, these can be added later.

After lockdown measures are eased, medical practice may not return to its state before the onset of the pandemic. Social distancing measures may be kept in place, and the trend of, for example, fewer hospital admissions for minor cases is expected to continue to some extent. Nevertheless, certain trials interrupted by the pandemic will be able to restart, albeit in a possibly changed environment. The trial of the long-acting contraceptive (Section 2.1) was largely unaffected by the onset of the pandemic. The START:REACTS trial (Section 2.2) changed its endpoint to a PRO measure that can be observed remotely if a patient does not wish to come to the clinic; this trial can restart recruitment once elective surgeries are possible again, albeit with the new endpoint, as the original endpoint could not always be measured during the lockdown. The ATALANTE 1 trial (Section 2.3) was stopped for ethical reasons because the study population is at high risk of COVID-19. As the trial did not proceed to its second stage, consultations with agencies have started to discuss the partial results in view of the unmet clinical need. Such discussions will likely also focus on the loss of power, even if the first stage was promising. This raises the question of how promising the first-stage results would have to be to provide convincing evidence when a dilution of the treatment effect cannot be excluded a priori; the considerations in Section 4 of this paper may support such discussions. Lastly, the CAPE-Covid and CAPE-Cod studies (Section 2.4) both address ICU patients, with two kinds of pneumonia and hence heterogeneity in disease and patient prognosis. For the moment, the CAPE-Cod trial is temporarily stopped but is planned to restart next autumn. As the investigators had no choice but to embed one trial within the other, heterogeneity will need to be addressed at the end of the study in order to preserve both results.

The COVID-19 pandemic has not only led to a surge in clinical research activities aimed at developing treatments, diagnostics and vaccines to fight the pandemic, but has also impacted ongoing trials in many ways. Here we illustrated the negative effects the pandemic might have on trials by giving four examples from ongoing studies and describing the considerations and consequences in reaction to the pandemic. Furthermore, we focused on the role of adaptive designs in mitigating the risks of the pandemic, which might otherwise result in a large number of inconclusive or misleading trials. Aspects of particular importance here are type I error rate control and treatment-effect heterogeneity. When trials are affected, the question of whether to stop the trial early or to continue it, possibly with modifications, is of particular interest. Considering normally distributed outcomes, we developed a range of strategies; one simple, related strategy is sketched below.
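As one concrete sketch in that spirit (under simplifying assumptions; this is not the exact procedure derived in Section 4, whose solutions involve τ, η and ψ), a blinded sample size re-estimation for a normally distributed endpoint can recompute the required per-arm sample size from the pooled variance of the blinded interim data, in the spirit of the blinded re-estimation literature cited in the references. The function name and all numbers are hypothetical:

```r
# Minimal sketch (hypothetical): blinded sample size re-estimation for a
# two-arm trial with a normally distributed endpoint. The variance is
# estimated from the blinded (pooled) interim data, so no treatment-group
# information is revealed.
blinded_ssr <- function(x_blinded, delta, alpha = 0.025, beta = 0.1) {
  s2 <- var(x_blinded)  # one-sample variance of the pooled interim data
  n  <- 2 * s2 * (qnorm(1 - alpha) + qnorm(1 - beta))^2 / delta^2
  ceiling(n)            # required sample size per arm
}

# Example: blinded interim data from both arms combined (labels unknown)
set.seed(1)
x <- c(rnorm(50, 0, 2.1), rnorm(50, 0.5, 2.1))
blinded_ssr(x, delta = 0.5)  # re-estimated per-arm sample size
```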
We believe that these are transferable to other types of outcomes with only limited modifications.

Appendix

The joint distribution of $t_0$, $t_1$, and $t$ is given by
$$
\begin{pmatrix} t_0 \\ t_1 \\ t \end{pmatrix}
\sim N\left(
\begin{pmatrix}
\mu_{t0}-\mu_{c0}\\
\mu_{t1}-\mu_{c1}\\
\dfrac{n_{t0}\mu_{t0}+n_{t1}\mu_{t1}}{n_{t0}+n_{t1}}-\dfrac{n_{c0}\mu_{c0}+n_{c1}\mu_{c1}}{n_{c0}+n_{c1}}
\end{pmatrix},
\begin{pmatrix}
\dfrac{\sigma_{t0}^2}{n_{t0}}+\dfrac{\sigma_{c0}^2}{n_{c0}} & 0 & \dfrac{\sigma_{t0}^2}{n_{t0}+n_{t1}}+\dfrac{\sigma_{c0}^2}{n_{c0}+n_{c1}}\\
0 & \dfrac{\sigma_{t1}^2}{n_{t1}}+\dfrac{\sigma_{c1}^2}{n_{c1}} & \dfrac{\sigma_{t1}^2}{n_{t0}+n_{t1}}+\dfrac{\sigma_{c1}^2}{n_{c0}+n_{c1}}\\
\dfrac{\sigma_{t0}^2}{n_{t0}+n_{t1}}+\dfrac{\sigma_{c0}^2}{n_{c0}+n_{c1}} & \dfrac{\sigma_{t1}^2}{n_{t0}+n_{t1}}+\dfrac{\sigma_{c1}^2}{n_{c0}+n_{c1}} & \dfrac{n_{t0}\sigma_{t0}^2+n_{t1}\sigma_{t1}^2}{(n_{t0}+n_{t1})^2}+\dfrac{n_{c0}\sigma_{c0}^2+n_{c1}\sigma_{c1}^2}{(n_{c0}+n_{c1})^2}
\end{pmatrix}
\right).
$$

Equation (14) can then be rewritten in terms of these quantities. Substituting $\xi = n_0/(n_0+\tilde{n}_1)$, we can rewrite Equation (12) accordingly. Replacing $n_0 = N\tau$, and noticing that the right-hand side of the equation also equals $N$, we can solve for $\xi$. If $\tau\eta^2 - 1 + \psi = 0$, i.e. $\psi = 1-\tau\eta^2$, we obtain one solution; for $\tau\eta^2 - 1 + \psi \neq 0$, we obtain the other. Re-substitution of $\xi$ finally yields $\tilde{n}_1$. As can be seen from Equations (19) and (20), $\xi$ and hence $\tilde{n}_1$ have two different solutions due to the square root. Evaluating both solutions for different values of $\tau$, $\eta$, and $\psi$ shows that only the second solution (with $+\sqrt{\cdot}$ in the numerator and $-\sqrt{\cdot}$ in the denominator) leads to a positive sample size.
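To make the covariance structure above tangible, the following minimal R sketch (with purely illustrative sample sizes, means and standard deviations; it is not part of the paper's shiny app) simulates the stagewise and pooled mean differences and compares their empirical covariance with one of the theoretical entries:

```r
# Minimal sketch with illustrative (assumed) parameter values: simulate the
# stagewise mean differences t0, t1 and the pooled difference t, and check
# the covariance structure given above by Monte Carlo.
set.seed(42)
n_t0 <- 100; n_c0 <- 100   # pre-pandemic sample sizes (treatment, control)
n_t1 <- 60;  n_c1 <- 60    # post-pandemic sample sizes
mu_t0 <- 1.0; mu_c0 <- 0   # stage-0 means
mu_t1 <- 0.8; mu_c1 <- 0   # stage-1 means (possibly diluted effect)
sd_t0 <- 2; sd_c0 <- 2; sd_t1 <- 2.5; sd_c1 <- 2.5

sim_once <- function() {
  x_t0 <- rnorm(n_t0, mu_t0, sd_t0); x_c0 <- rnorm(n_c0, mu_c0, sd_c0)
  x_t1 <- rnorm(n_t1, mu_t1, sd_t1); x_c1 <- rnorm(n_c1, mu_c1, sd_c1)
  c(t0 = mean(x_t0) - mean(x_c0),                     # stage 0 only
    t1 = mean(x_t1) - mean(x_c1),                     # stage 1 only
    t  = mean(c(x_t0, x_t1)) - mean(c(x_c0, x_c1)))   # pooled over stages
}
sims <- t(replicate(1e4, sim_once()))
round(cov(sims), 4)   # empirical covariance matrix of (t0, t1, t)

# Theoretical value of Cov(t0, t) for comparison:
sd_t0^2 / (n_t0 + n_t1) + sd_c0^2 / (n_c0 + n_c1)
```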
References

Impact on integrity and interpretability of clinical trials during the COVID-19 pandemic
Estimands and their role in clinical trials
Estimands in clinical trials – broadening the perspective
Continuing Clinical Trials in Heart Failure in the Face of the COVID-19 Crisis: An Expert Consensus Position Paper from the Heart Failure Association (HFA) of the European Society of Cardiology (ESC)
Repeated Significance Tests on Accumulating Data
Modification of the sample size and the schedule of interim analyses in survival trials based on data inspections, by H. Schäfer and H.-H. Müller
Recursive combination tests
Non-parametric adaptive enrichment designs using categorical surrogate data
Reflection paper on methodological issues in confirmatory clinical trials planned with an adaptive design
Guidance on the management of clinical trials during the COVID-19 (coronavirus) pandemic
Draft points to consider on implications of Coronavirus disease (Covid-19) on methodological aspects of ongoing clinical trials
Guidance for industry: Adaptive design clinical trials for drugs and biologics
FDA Guidance on Conduct of Clinical Trials of Medical Products during COVID-19 Public Health Emergency: Guidance for Industry, Investigators, and Institutional Review Boards
Intervention effects in observational survival studies with an application in total hip replacements
Exploring changes in treatment effects across design stages in adaptive trials
Blinded sample size re-estimation in superiority and non-inferiority trials: Bias versus variance in variance estimation
Blinded sample size reestimation in event-driven clinical trials: Methods and an application in multiple sclerosis
Designing a seamless phase II/III clinical trial using early outcomes for treatment selection: an application in multiple sclerosis
Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases
Interim analysis of continuous long-term endpoints in clinical trials with longitudinal outcomes
Recommendation for confidence interval and sample size calculation for the Pearl Index
Clinical Characteristics of Coronavirus Disease 2019 in China
ICH E9 guideline on statistical principles for clinical trials
ICH E9 (R1) addendum on estimands and sensitivity analysis in clinical trials to the guideline on statistical principles for clinical trials
An adaptive seamless phase II/III design for oncology trials with subpopulation selection using correlated survival endpoints
Contraceptive Efficacy and Safety of 52 mg LNG-IUS for up to Eight Years: Year 6 Data From the Mirena Extension Trial
Nested combination tests with a time-to-event endpoint using a short-term endpoint for design adaptations
Group sequential tests for delayed responses (with discussion)
Causal Inference: What If
Commensurate priors for incorporating historical information in clinical trials using general and generalized linear models
Design and analysis of group sequential tests based on the type I error spending rate function
Confidence Intervals Following Group Sequential Tests in Clinical Trials
Discrete sequential boundaries for clinical trials
Efficacy of contraceptive methods: A review of the literature
Interim monitoring of clinical trials based on long-term binary endpoints
Protocol for a randomised controlled trial of Subacromial spacers for Tears Affecting Rotator cuff Tendons: a Randomised, Efficient, Adaptive Clinical Trial in Surgery (START:REACTS)
Adaptive group sequential designs for clinical trials: combining the advantages of adaptive and of classical group sequential approaches
Interim analysis incorporating short- and long-term binary endpoints
An adaptive power prior for sequential clinical trials: Application to bridging studies
OSE Immunotherapeutics, press release
OSE Immunotherapeutics, press release
An adaptive two-arm clinical trial using early endpoints to inform decision making: design for a study of sub-acromial spacers for repair of rotator cuff tendon tear
Effects of assumption violations on type I error rate in group sequential monitoring
Designed extension of studies based on conditional power
Estimands and estimation for clinical trials beyond the COVID-19 pandemic
Evidence synthesis for count distributions based on heterogeneous and incomplete aggregated data
Comparison of different clinical development plans for confirmatory subpopulation selection
Improved Estimation of Average Treatment Effects on the Treated: Local Efficiency
The sequential analysis of repeated binary responses: a score test for the case of three time points
Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls
Bayesian designs to account for patient heterogeneity in phase II clinical trials
Evaluating futility of a binary clinical endpoint using early read-outs
Improving interim decisions in randomized trials by exploiting information on short-term endpoints and prognostic baseline covariates
A comparison of two methods for adaptive interim analyses in clinical trials
Perspective: Human Immunodeficiency Virus Type 1 (HIV-1) RNA End Points in HIV Clinical Trials: Issues in Interim Monitoring and Early Stopping

Acknowledgments

This work was inspired by discussions led by Lisa Hampson and Werner Brannath in the joint working group "Adaptive Designs and Multiple Testing Procedures" of the German Region (DR) and the Austro-Swiss Region (ROeS) of the International Biometric Society. The authors thank Andrea Schulze and Andy Metcalfe for their contributions to Examples 2.1 and 2.2, respectively. Furthermore, the authors are grateful to Gernot Wassmer for comments on the R code used in the R shiny app and to Werner Brannath for comments on an earlier version of this manuscript. The authors have declared no conflict of interest.