authors: Flight, Laura; Julious, Steven; Brennan, Alan; Todd, Susan
title: Expected Value of Sample Information to Guide the Design of Group Sequential Clinical Trials
date: 2021-12-03
journal: Med Decis Making
DOI: 10.1177/0272989x211045036

INTRODUCTION: Adaptive designs allow changes to an ongoing trial based on prespecified early examinations of accrued data. Opportunities are potentially being missed to incorporate health economic considerations into the design of these studies.
METHODS: We describe how to estimate the expected value of sample information for group sequential design adaptive trials. We operationalize this approach in a hypothetical case study using data from a pilot trial. We report the expected value of sample information and expected net benefit of sampling results for 5 design options for the future full-scale trial, including the fixed-sample-size design and the group sequential design using either the Pocock stopping rule or the O'Brien-Fleming stopping rule with 2 or 5 analyses. We considered 2 scenarios relating to 1) using the cost-effectiveness model with a traditional approach to the health economic analysis and 2) adjusting the cost-effectiveness analysis to incorporate the bias-adjusted maximum likelihood estimates of trial outcomes to account for the bias that can be generated in adaptive trials.
RESULTS: The case study demonstrated that the methods developed could be successfully applied in practice. The results showed that the O'Brien-Fleming stopping rule with 2 analyses was the most efficient design, with the highest expected net benefit of sampling in the case study.
CONCLUSIONS: Cost-effectiveness considerations are unavoidable in budget-constrained, publicly funded health care systems, and adaptive designs can provide an alternative to costly fixed-sample-size designs. We recommend that when planning a clinical trial, expected value of sample information methods be used to compare possible adaptive and nonadaptive trial designs, with appropriate adjustment, to help justify the choice of design characteristics and ensure the cost-effective use of research funding.
HIGHLIGHTS: Opportunities are potentially being missed to incorporate health economic considerations into the design of adaptive clinical trials. Existing expected value of sample information analysis methods can be extended to compare possible group sequential and nonadaptive trial designs when planning a clinical trial. We recommend that adjusted analyses be presented to control for the potential impact of the adaptive designs and to maintain the accuracy of the calculations. This approach can help to justify the choice of design characteristics and ensure the cost-effective use of limited research funding.

Conducting efficient research is a priority for funders with limited health research budgets. 1 Adaptive designs, which use data collected as a trial progresses, at prespecified time points, to inform modifications to the trial, are one way to make a trial more efficient. 2 They can directly benefit patients and health care providers both ethically and financially. 3, 4 Adaptive designs are increasingly used 5-7 and have been adopted in trials for the treatment of COVID-19. 8
School of Health and Related Research, University of Sheffield, Sheffield, UK (LF, SJ, AB); Department of Mathematics and Statistics, University of Reading, Reading, UK (ST). The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Laura Flight was funded by a National Institute for Health Research (NIHR) Doctoral Research Fellowship for this project. This article presents independent research funded by the NIHR. The views expressed are those of the authors and not necessarily those of the NHS, the NIHR, the Department of Health and Social Care, or the University of Sheffield. The CACTUS Pilot trial was funded by the NIHR under its Research for Patient Benefit (RfPB) Programme (grant reference number PB-PG-1207-14097). This study was also supported by the Stroke and Telehealth themes of the South Yorkshire Collaboration for Leadership in Applied Health Research and Care (CLAHRC). NIHR CLAHRC for South Yorkshire acknowledges funding from the NIHR. The study also received support from North of Tyne PCT. The views expressed are those of the authors and not necessarily those of the National Health Service, the NIHR, or the Department of Health. The Big CACTUS project was funded by the NIHR Health Technology Assessment Programme (12/21/01). Additional funding support was provided by the Tavistock Trust for Aphasia. The views and opinions expressed are those of the authors and do not necessarily reflect those of the Health Technology Assessment Programme, the NIHR, the National Health Service, the Department of Health and Social Care, the Tavistock Trust for Aphasia, or the Stroke Association. Financial support for this study was provided entirely by a National Institute for Health Research Doctoral Fellowship. The funding agreement ensured the authors' independence in designing the study, interpreting the data, writing, and publishing the report.

The methods of value-of-information analysis (VOIA) provide a framework for quantifying the value of collecting more information to determine whether a treatment should be adopted, balancing the benefits of additional research against the costs. To date, opportunities are potentially being missed to apply VOIA methods to the design and analysis of adaptive designs. 9, 10 In this article, we extend existing VOIA methods, specifically the expected value of sample information (EVSI) for assessing the cost-effectiveness of proposed fixed-sample-size designs, to the adaptive design setting. 11 This approach helps to increase the efficiency of trials while reflecting stakeholder preferences for adaptive decisions to be informed by clinical effectiveness during the trial. 11 We highlight how this approach differs from the fixed-sample-size design setting using the ISPOR Value of Information Emerging Good Practices Task Force framework. 10, 12 Key considerations include appropriately adjusting estimates for the adaptive nature of the design as well as capturing the costs associated with conducting an adaptive design. Using a hypothetical case study, based on the CACTUS pilot trial, 13 the methods are used to guide the design of a trial, focusing on the number of interim analyses and the choice of clinical effectiveness stopping rule while making appropriate adjustments for the adaptive nature of the design.

Pallmann et al. 4 provided a summary of available adaptive designs. In this article, we focus on the commonly used group sequential designs (GSDs). 5, 14 During a GSD, data are examined multiple times. At an interim analysis, a test statistic comparing the intervention and control arms for the outcome of interest (typically a clinical rather than a cost-effectiveness outcome) is calculated. This test statistic is then compared with the prespecified stopping boundary (also known as a stopping rule). If the test statistic falls within the boundary, the trial continues to the next analysis. This process is repeated at each interim analysis using all accumulated evidence until the test statistic crosses the boundary or the final analysis is reached. Examples of stopping rules include those proposed by Pocock 15 and O'Brien-Fleming. 16 Each rule has different characteristics and a different impact on the design and subsequent analyses. 17 The Pocock rule requires a larger maximum sample size if the trial does not stop early; however, there is a lower hurdle for stopping the trial at early analyses compared with the O'Brien-Fleming rule. It is important to account for the adaptive nature of the design during analysis to avoid introducing bias into the trial results. 14 Flight 18 explored the impact that a GSD can have on the health economic analysis following a clinical trial and showed how it is important to adjust for the adaptive design to maintain an accurate health economic analysis. In this article, we consider the appropriate adjustment for the adaptive nature of the trial when extending VOIA methods to guide the design of GSDs, highlighting the additional considerations in the adaptive design setting. Our approach allows researchers to determine a cost-effective design by comparing different stopping rules and numbers of interim analyses and to compare adaptive with fixed-sample-size designs.

To conduct a VOIA for a fixed-sample-size design, Fenwick et al. 12 proposed 7 steps in the ISPOR Value of Information Emerging Good Practices Task Force guidance. The following sections discuss each of these steps in the context of employing an adaptive design, highlighting how they differ from the fixed-sample-size design case. Here, step 6 has been modified for adaptive designs. This work was supported by a public advisory group who ensured that the development of the methods was relevant and appropriate from the public perspective. More information on their role is provided by Flight. 18

Steps 1 and 2: Conceptualize and Construct a Health Economic Model and Parametrize with Evidence

As for fixed-sample-size designs, a health economic model needs to be constructed for the population of interest. This may be an existing model that has been developed for the disease of interest or one developed from previous work such as a pilot study. A probabilistic sensitivity analysis (PSA) is generated based on available prior evidence for the model parameters, denoted by θ. Model parameters might include transition probabilities, costs, and benefits for a health state. We denote the PSA sample by θ^(1), ..., θ^(N_PSA), where N_PSA is the number of PSA samples. For each row of the PSA sample, the model is evaluated to give a per-person net benefit for each intervention, denoted by NB(d, θ), where d represents the interventions. There is no difference between the VOIA approach for a fixed-sample-size and an adaptive design at this stage.
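As a minimal sketch of Steps 1 and 2, the R code below (R being the language used for the case study) draws a PSA sample from assumed prior distributions for a simple hypothetical two-intervention model, not the CACTUS model, and evaluates the per-person net benefit NB(d, θ) in every PSA row; all parameter names, distributions, and values are illustrative assumptions.

```r
## Sketch of Steps 1-2: PSA sample and per-person net benefit (hypothetical model).
set.seed(42)
n_psa  <- 5000        # number of PSA samples
lambda <- 20000       # willingness-to-pay per QALY (GBP)

psa <- data.frame(
  p_response = rbeta(n_psa, 20, 30),       # probability of a good response (assumed prior)
  qaly_gain  = rnorm(n_psa, 0.10, 0.03),   # QALY gain given response (assumed prior)
  cost_int   = rnorm(n_psa, 1500, 200),    # additional intervention cost (assumed prior)
  cost_ctrl  = rnorm(n_psa, 300, 50)       # control-arm cost (assumed prior)
)

## Per-person net benefit NB(d, theta) for each intervention (d = 1 control, d = 2 intervention),
## evaluated for every row of the PSA sample.
nb <- cbind(
  control      = -psa$cost_ctrl,
  intervention = lambda * psa$p_response * psa$qaly_gain - psa$cost_int
)

## Decision under current information: the intervention with the highest expected net benefit.
colMeans(nb)
which.max(colMeans(nb))
```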
Step 5: Establish whether Further Research is Worthwhile

The expected value of perfect information (EVPI) considers the scenario in which further research would eliminate all decision uncertainty. 19-21 Further research is potentially worthwhile if the associated costs are less than the EVPI. 22 Using the same approach as for a fixed-sample-size design, this can be calculated for the adaptive design using 19

EVPI = E_θ[ max_d NB(d, θ) ] − max_d E_θ[ NB(d, θ) ].

It may be possible to resolve all the uncertainty about a subset of the model parameters. 10 This can be measured using the expected value of partial perfect information (EVPPI). 20

Step 6, Stage 1: Identify the trial designs for comparison.

The first stage is to choose the trial designs for consideration, which includes the sample size for the trial and the criteria on which the trial might stop early (the stopping rule). As discussed by Flight et al., 9 this is typically informed by the clinical primary outcome, as cost-effectiveness outcomes are rarely used in the design of an adaptive trial. As with the fixed-sample-size design, and using a frequentist approach to sample size calculation, this will require an estimate of the clinically important difference for the primary outcome, an estimate of the population variance (for a normal outcome), and the type I and type II error rates (typically chosen to be 0.05 and 0.1, respectively). 23 These choices are the same regardless of the adaptive nature of the trial and are usually informed by prior information or discussions with the clinical research team. Additional considerations for an adaptive design (specifically, a GSD here) include the choice of stopping rule (based on the clinical primary outcome) and the number of interim analyses. We consider GSDs with the Pocock stopping rule and the O'Brien-Fleming stopping rule with up to 5 equally spaced analyses of the data. The sample size is informed by these choices.

Stage 2: Simulate the trial results and analysis data sets accounting for the adaptive design.

A trial results data set representative of the population to be randomized into the future trial is simulated for each row of the PSA sample. This is based on the likelihood function informed by existing information such as a pilot study or observational study. Rothery et al. 10 suggested that data sets should be simulated taking into account how the data from the trial would be analyzed. Flight 18 showed that the adaptive nature of a trial can affect the subsequent health economic analysis. Failing to adjust for this could result in a spurious estimate of the EVSI, potentially wasting limited resources. They also describe how bias-adjusted maximum likelihood estimate methods for the adjustment of the point estimate, and the sample mean ordering approach to calculate adjusted confidence intervals of primary and health economic outcomes, can be extended to adjust a within-trial and model-based health economic analysis. 24-26 In this article, the bias-adjusted methods are referred to as the "adjusted analysis," and the usual maximum likelihood estimate is referred to as the "unadjusted analysis." The data simulation is informed by the PSA parameter estimates to give a trial analysis data set in each row of the PSA. The trial analysis for the design under consideration is applied to each trial results data set. For an adaptive design, this establishes whether the trial would have stopped early at any of the interim analyses. For example, the first group of simulated participants forms the analysis set at the first interim analysis.
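To make the Stage 2 data simulation concrete, the sketch below draws an individual-level trial results data set for a single PSA row; the distributions, parameter names, and values are illustrative assumptions rather than the CACTUS likelihood. The interim analysis of such a data set is described next.

```r
## Sketch of the Stage 2 data simulation for one PSA row (hypothetical likelihood).
## theta holds the "true" parameter values for this PSA row; we draw individual patient
## data for the primary outcome, costs, and utilities in both arms.
simulate_trial_data <- function(theta, n_per_arm) {
  arm <- rep(c("control", "intervention"), each = n_per_arm)
  ## Primary outcome: improvement in proportion of words named correctly, treated as
  ## continuous (assumed normal likelihood for illustration).
  y <- c(rnorm(n_per_arm, theta$mu_ctrl, theta$sd_y),
         rnorm(n_per_arm, theta$mu_int,  theta$sd_y))
  ## Health economic outcomes per patient (assumed distributions for illustration).
  cost <- rgamma(2 * n_per_arm, shape = theta$cost_shape, rate = theta$cost_rate)
  qaly <- rbeta(2 * n_per_arm, theta$qaly_a, theta$qaly_b)
  data.frame(arm = arm, y = y, cost = cost, qaly = qaly)
}

## Example call for one PSA row (hypothetical values).
theta_row <- list(mu_ctrl = 0.05, mu_int = 0.16, sd_y = 0.20,
                  cost_shape = 2, cost_rate = 0.002, qaly_a = 5, qaly_b = 3)
dat <- simulate_trial_data(theta_row, n_per_arm = 143)
head(dat)
```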
The primary outcome is calculated and compared with the prespecified stopping boundary. If the estimate crosses the boundary, the trial stops and the trial analysis data set is formed from the participants randomized into the trial up to that point. If the boundary is not crossed, the trial continues to the next interim analysis, until the point estimate crosses the boundary or the final analysis is reached. This is repeated for each row of the PSA sample. The accumulating cost-effectiveness data are not used to inform whether the trial should stop early.

Stage 3: Calculate summary statistics.

Summary statistics for primary and secondary outcomes informing the health economic model are estimated from the trial analysis data set in each row of the PSA sample. This will include the primary and secondary clinical outcomes and health economic outcomes, such as health care resource use and health-related quality of life. These statistics are denoted by ỹ for the adjusted analysis and by ŷ for the unadjusted analysis.

Stage 4: Calculate the EVSI.

The EVSI is the difference between the expected net benefit given sample information and the expected net benefit given current information. The health economic model has input parameters (θ) to estimate the net benefit of each intervention (d = 1, ..., D) under consideration. This gives a per-person EVSI of

EVSI = E_Y[ max_d E_(θ|Y)[ NB(d, θ) ] ] − max_d E_θ[ NB(d, θ) ]

for data Y to be collected. 27 A population-level EVSI is estimated by multiplying the individual-level EVSI by the time horizon (T) and the annual prevalence for the population (N_p) to give 28

population EVSI = EVSI × N_p × T.

Methods for efficiently calculating the EVSI are summarized by Heath et al. 29 and Kunst et al. 30 We use the nonparametric regression approach, as this does not require the existence of conjugate distributions or parametric assumptions. 27

Stage 5: Calculate the cost of sampling accounting for additional costs and cost savings.

We need to understand the costs associated with conducting the research, known as the cost of sampling. 31 The total cost of sampling is composed of fixed, variable, analysis, and opportunity costs and depends on the number of participants recruited and the number of analyses conducted. Fixed costs are incurred regardless of the trial design and include site recruitment and training, archiving costs, and dissemination. Variable costs include the costs of randomizing and following up participants, such as staff costs and database management. The analysis costs include costs associated with conducting an analysis of the endpoints used to inform interim decision making (typically clinical endpoints). For the fixed design, the cost of analysis is included in the fixed costs. For the adaptive design, however, we separate this out, as multiple analyses may take place depending on the design chosen. The opportunity cost can be thought of as the financial cost of delaying a decision to obtain more information. 32 Willan and Kowgier 33 suggested that, for a 2-arm trial, the opportunity cost is equal to the incremental net benefit (INB) of the new intervention compared with the control based on information available before the trial begins.
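Repeating the Stage 2 stopping logic over simulated trials also yields the expected sample size (n) and the expected number of analyses (N_a) that enter the cost of sampling formula below. A minimal, self-contained sketch follows; the effect size, looks, and boundary critical values are illustrative placeholders, not the case study's RCTdesign boundaries, and in a full EVSI run the outcome parameters would come from each PSA row.

```r
## Sketch of applying a 2-look stopping rule and estimating the expected sample size and
## expected number of analyses by simulation (illustrative values throughout).
set.seed(1)
looks <- c(72, 143)      # cumulative per-arm sample sizes at the 2 analyses (illustrative)
crit  <- c(2.80, 1.98)   # |Z| boundary at each analysis, O'Brien-Fleming-like (illustrative)

one_trial <- function(delta = 0.11, sd_y = 0.20) {
  n_max  <- max(looks)
  y_ctrl <- rnorm(n_max, 0,     sd_y)   # primary outcome, control arm
  y_int  <- rnorm(n_max, delta, sd_y)   # primary outcome, intervention arm
  for (k in seq_along(looks)) {
    n_k <- looks[k]
    z <- (mean(y_int[1:n_k]) - mean(y_ctrl[1:n_k])) /
         sqrt(var(y_int[1:n_k]) / n_k + var(y_ctrl[1:n_k]) / n_k)
    if (abs(z) >= crit[k] || k == length(looks)) {
      return(c(stage = k, n = 2 * n_k))  # analysis at which the trial stopped; total n used
    }
  }
}

sims <- t(replicate(5000, one_trial()))
colMeans(sims)   # expected number of analyses (N_a) and expected sample size (n)
```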
The cost of sampling, for a 2-arm trial, is calculated using

TC = C_f + (N_a × C_a) + (n × C_v) + (n_I × C_(v,I)) + (n_C × C_(v,C)) + (n × C_o),

where TC is the total cost of sampling, C_f is the fixed cost, N_a is the expected number of analyses, C_a is the cost of analysis, n is the expected sample size (ESS), C_v is the variable cost incurred by every participant in the trial, n_I is the expected number of participants in the intervention arm, C_(v,I) is the variable cost per participant incurred in the intervention arm only, n_C is the expected number of participants in the control arm, C_(v,C) is the variable cost per participant incurred in the control arm only, and C_o is the opportunity cost per participant. Additional information on how to calculate the cost of sampling for an adaptive design is given in the supplementary material.

Step 7: Compare the ENBS of trial designs and iterate with new evidence.

The ENBS is the difference between the population EVSI and the cost of sampling. This can be calculated using the adjusted approach, based on the adjusted summary statistics, or the unadjusted approach, giving the adjusted and unadjusted ENBS, respectively. The optimal design from a health economic perspective has the highest ENBS. We use the ENBS to guide the design of a clinical trial alongside discussions with clinical teams, including the use of adaptive as well as fixed-sample-size designs. As for the fixed-sample-size design, this process should be repeated once new evidence is available.

The steps for conducting a VOIA for a fixed-sample-size design, proposed by Fenwick et al., 12 have been extended for an adaptive design. These methods appropriately adjust the analysis for the adaptive nature of the trial and capture the potential additional costs and cost savings of these designs. The following sections outline a hypothetical case study used to illustrate the approach and summarize the results.

We use a hypothetical case study, based on a real trial, to illustrate how VOIA can be applied to an adaptive design. The Cost-effectiveness of Aphasia Computer Treatment Compared to Usual Stimulation (CACTUS) pilot clinical trial aimed to assess the feasibility of conducting a large-scale clinical trial into the effectiveness of self-managed computer treatment for people with longstanding aphasia post stroke. 13 Participants were randomized to either receive a computer-based intervention (CSLT) designed to improve word-finding ability through language exercises or a usual care control (UC). A model-based cost-utility analysis of pilot data provided an early analysis of the likely cost-effectiveness of CSLT, and full details are reported by Latimer et al. 34 We considered alternative designs for a full-scale clinical trial following the CACTUS pilot comparing CSLT and UC. Using R, we adapted the original model and analysis methods reported by Latimer et al., 34 and the proposed designs did not attempt to replicate the Big CACTUS clinical trial or health economic analysis that followed the CACTUS pilot trial. 35 Full details on the economic model used in this analysis are provided by Flight. 18

Trial design and data characteristics. We compared a fixed-sample-size design with the Pocock (POC) 15 and O'Brien-Fleming (OBF) 16 stopping rules with maximums of 2 and 5 analyses. Each design was applied in R (version 3.4.3) using the RCTdesign package (http://www.rctdesign.org/Welcome.html). For each design, the type I and type II error rates were 0.05 and 0.1, respectively.
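As an illustration of Stage 5 and Step 7, the sketch below implements the cost of sampling formula above and the ENBS comparison of candidate designs. All cost inputs, EVSI values, and design summaries here are illustrative placeholders and are not the Big CACTUS grant figures or case study results.

```r
## Sketch of the cost of sampling (TC) and ENBS comparison (illustrative inputs only).
cost_of_sampling <- function(n, n_analyses,
                             c_fixed, c_analysis, c_var, c_var_int, c_var_ctrl, c_opp) {
  n_int <- n / 2; n_ctrl <- n / 2      # assumes 1:1 allocation
  c_fixed + n_analyses * c_analysis +
    n * c_var + n_int * c_var_int + n_ctrl * c_var_ctrl + n * c_opp
}

calc_enbs <- function(evsi_pp, pop_per_year, horizon, total_cost) {
  evsi_pp * pop_per_year * horizon - total_cost   # population EVSI minus cost of sampling
}

## Example: compare two candidate designs (all numbers are illustrative).
designs <- data.frame(
  design = c("FIX", "OBF2"),
  ess    = c(286, 251),    # expected sample size (illustrative)
  n_an   = c(1, 1.8),      # expected number of analyses (illustrative)
  evsi   = c(5.10, 4.95)   # per-person EVSI in GBP (illustrative)
)
designs$tc   <- with(designs, cost_of_sampling(ess, n_an, c_fixed = 9e5, c_analysis = 2e4,
                                               c_var = 2500, c_var_int = 800,
                                               c_var_ctrl = 200, c_opp = 150))
designs$enbs <- with(designs, calc_enbs(evsi, pop_per_year = 27616, horizon = 10, tc))
designs[order(-designs$enbs), ]   # the optimal design has the highest ENBS
```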
The clinically important difference was the improvement in the proportion of words named correctly between the intervention and control arm (treated as a continuous variable). This difference and its associated standard deviation were calculated using the pilot trial data. To explore the impact of different trial designs and data characteristics on the choice of optimal design, the correlation between primary and health economic outcomes and the intervention costs were varied. We assumed there was a negative correlation between the primary outcome and costs and a positive correlation between the primary outcome and utilities. Absolute correlations of 0.0, 0.4, and 0.8 were explored, covering a range of no, medium, and high correlation. The cost of CSLT was varied over 15 values, and the INB from the pilot trial and the subsequent EVSI and ENBS were recalculated.

Data-generating mechanism. To generate the PSA sample, we bootstrapped the CACTUS pilot data 5000 times. We simulated a trial results data set for each of the PSA rows using copulas. This allowed the marginal distributions of the primary and the health economic outcomes (resource costs and utility) to be nonnormal and correlated. Full details are provided in the supplementary material. We used a willingness-to-pay threshold of £20,000 per quality-adjusted life-year as per National Institute for Health and Care Excellence guidance, with a discount rate of 3.5% applied to costs and benefits. 36 The time horizon and prevalent population were taken from the Latimer et al. 34 pilot health economic analysis, giving an average of 27,616 patients expected to be eligible for and compliant with CSLT per year over a 10-y period.

Trial results estimates. We calculated adjusted and unadjusted estimates of the health economic model parameters using the bias-adjusted maximum likelihood estimates described by Flight, 18 compared adjusted and unadjusted point estimates of the primary clinical outcome, and reported the width of the 95% confidence interval. We then calculated and compared the ENBS for each of the scenarios to determine the optimal trial design from a health economic perspective.

We summarize how the results of such analyses could be presented when exploring the optimal trial design, including both fixed-sample-size and adaptive designs: we first consider the impact of each design on the maximum sample size and ESS for the trial and the differences in the cost of sampling, then show how the EVSI and ENBS for the designs might be compared visually, and finally summarize the potential impact of the unadjusted versus the adjusted approaches. FIX denotes the fixed-sample-size design, and OBF2 and OBF5 and POC2 and POC5 denote the O'Brien-Fleming and Pocock designs with 2 and 5 analyses, respectively. The specific results from applying the new VOIA methods to the hypothetical case study are context dependent and not generalizable to all VOIA calculations using this approach.

Table 1 summarizes the ESS, number of analyses, and distribution of the sample size for each design over the interim and final analyses with zero correlation between the primary and health economic outcomes. OBF5 has the highest expected number of analyses (4.55 analyses) and POC2 the fewest (1.79 analyses). Both POC designs have a high maximum sample size because of the large penalty for early examinations of the data. The ESS for these designs is high, as a large proportion of trials reach the final analysis, where the sample size is larger than that of the fixed-sample-size design.
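The copula step in the data-generating mechanism described above can be sketched as follows; the marginal distributions, parameters, and correlation values are illustrative rather than those estimated from the CACTUS pilot data.

```r
## Sketch of a Gaussian copula for one arm: correlated draws of a (normal) primary outcome,
## (gamma) resource-use costs, and (beta) utilities, with nonnormal margins.
library(MASS)   # for mvrnorm()

r_copula_arm <- function(n, rho_yc = -0.4, rho_yu = 0.4) {
  ## Correlation matrix for (primary outcome, cost, utility) on the latent normal scale.
  sigma <- matrix(c(1,      rho_yc, rho_yu,
                    rho_yc, 1,      0,
                    rho_yu, 0,      1), nrow = 3)
  z <- mvrnorm(n, mu = rep(0, 3), Sigma = sigma)
  u <- pnorm(z)   # uniform margins that carry the desired dependence
  data.frame(
    y    = qnorm(u[, 1], mean = 0.16, sd = 0.20),     # primary outcome margin (illustrative)
    cost = qgamma(u[, 2], shape = 2, rate = 0.002),   # cost margin (illustrative)
    util = qbeta(u[, 3], shape1 = 5, shape2 = 3)      # utility margin (illustrative)
  )
}

dat_int <- r_copula_arm(200)
round(cor(dat_int), 2)   # empirical correlations close to the assumed targets
```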
A small number of trials (0.02%) stopped at the first interim analysis of OBF5, based on the accumulating evidence for the primary outcome, where the sample size was 60. In contrast, almost 5% of trials stopped at the first analysis of POC5, where the sample size was 72.

Financial information from the Big CACTUS grant application (not the actual costs incurred) was used to inform the cost of sampling for the hypothetical CACTUS case study. These detailed costs are routinely outlined in the planning of clinical trials and are a useful source for any trial team considering this approach. The components of the cost of sampling for the hypothetical case study are given in the supplementary material. The cost of sampling for each design is given in Table 1. The cost of sampling for FIX is £2,127,530, the highest cost of sampling of the 5 designs. POC2, OBF2, and OBF5 have similar costs of sampling because of their similar ESS. POC5 has the highest cost of sampling of the 4 adaptive designs; however, this is only £140,000 greater than the cheapest design (OBF5), which is relatively small given that each design has a cost of sampling greater than £2,000,000. Even when a trial can stop at the first analysis, large costs are incurred, especially when the first analysis is conducted halfway through the trial. The trials stopping at the first analysis of 5 have the smallest cost of sampling, as they have one-fifth of the maximum number of participants. This is slightly smaller for OBF, as its first analysis is conducted on the fewest participants.

Table 2 gives the unadjusted EVSI and ENBS for FIX; as there are no early examinations of the data, there is no need to adjust the final analysis. The adjusted EVSI and ENBS are presented for the 4 adaptive designs to reflect the adjustments required. OBF2 has the highest EVSI and ENBS. From a cost-effectiveness perspective, this is the optimal trial design. This design gives a high EVSI but incurs a smaller cost of sampling compared with FIX and POC (see Table 1). The cost savings from the additional, earlier analyses of OBF5 do not outweigh the reduction in EVSI resulting from the smaller ESS. Likewise, neither POC design performs well, as only a small number of trials stop early, and so the trial has a large cost of sampling with no gain in EVSI.

The 95% confidence intervals for the estimated EVSI are wide and overlapping for all scenarios. Increasing the number of PSA samples may reduce the variance; however, this will need to be balanced against the increased computation time. As this estimate is used to calculate the ENBS, the choice of design for the trial is uncertain if considering the EVSI and ENBS for 5000 PSA samples.

Figure 1 summarizes the ENBS for the 5 proposed designs for increasing intervention costs. This provides a useful way to visualize and compare the competing trial designs under increasing intervention costs and to identify scenarios in which different designs may be optimal. In the hypothetical case study, the OBF2 design performs best for all correlations when the intervention costs are low, as it has a higher EVSI and a low cost of sampling. The POC5 design also performs well for lower intervention costs, especially when correlations are equal to zero and 0.8, as it has a high EVSI that outweighs its high cost of sampling. However, once the intervention costs are higher than approximately £8000, the designs with the smaller cost of sampling are preferable because of the small EVSI gained from all designs.
Hence, the designs with 5 analyses perform better. The ENBS increases as the intervention cost increases, as there is greater uncertainty in the cost-effectiveness decision. Once the intervention costs reach £3846, this uncertainty decreases, as it becomes clearer that CSLT is unlikely to be cost-effective. POC5 performs well when uncertainty is highest, as this design has the highest ESS and thus the opportunity to learn more information from more participants.

Table 3 summarizes the adjusted and unadjusted estimates of the EVSI, ENBS, health economic model parameters, and the primary outcome from the PSA samples. There is no difference between estimates for the baseline utility in the control arm, as this is not affected by the design of the trial and is thus set to be equal.

Figure 1. Case study sensitivity to the intervention cost assumption: expected net benefit of sampling (ENBS) for the 5 designs (5000 PSA samples). The adjusted ENBS is presented for the adaptive designs and the unadjusted ENBS for the fixed-sample-size design.

The difference between the adjusted and unadjusted estimates of the primary outcome is greatest for the adaptive designs with 5 analyses compared with 2 analyses and highest for POC compared with OBF. The point estimates are likely to be based on less data, as the interim sample sizes at the early analyses of a 5-analysis design are small and the POC stopping boundary is, by design, more likely to be crossed at an early interim analysis. The adjusted confidence intervals are wider, reflecting the additional uncertainty introduced by the adaptive design. The differences between the model parameters are small and close to zero for the cost parameters and the utility improvement. The percentage differences are higher for the probability of good response and the probability of relapse, reaching 3.42% and 21.29%, respectively. The primary outcome is used to calculate these model parameters and is biased even when there is no correlation between primary and health economic outcomes. The differences for all parameters are greatest for POC5 within each correlation and greatest when the correlation is equal to 0.4. Overall, the impact of the adjustments for the hypothetical case study is small, with the optimal design changing only when the correlation is 0.4.

The adjusted EVSI estimates for the adaptive designs are larger than the unadjusted estimates. The increased EVSI could suggest greater uncertainty as a consequence of the analysis methods used to estimate the adjustments or could reflect the fact that unadjusted approaches underestimate the uncertainty introduced by the adaptive design. The EVSI values change when there is a change in the decision uncertainty. 12 If the bias adjustments have little impact on the decision uncertainty, there will be only small differences between the adjusted and unadjusted EVSI estimates, even if there are large differences between the adjusted and unadjusted model parameter estimates. This is illustrated using 2 hypothetical scenarios in the supplementary material.

In the hypothetical case study, we found small differences in the cost of sampling between designs, driven by small differences in the ESS. The ENBS was positive and similar for each of the designs, suggesting they were all cost-effective. The effect of the bias adjustment was small and had limited impact on choosing the optimal design. The O'Brien-Fleming stopping rule with 2 analyses had the highest EVSI and ENBS, suggesting this was the most cost-effective design.
As the intervention costs were increased and the ENBS was recalculated, the potential savings in ESS offered by the adaptive designs gave them a higher ENBS. The O'Brien-Fleming stopping rule with 5 analyses was preferred when variable costs were high, as it offered early interim analyses with a small number of participants and hence a lower ESS and cost of sampling when there was little to be gained in terms of EVSI. The financial benefits of stopping a trial early are likely to be small when the fixed costs are high relative to the variable costs and, likewise, when the variable costs associated with assessing the trial outcomes in all patients are high.

We have adapted existing EVSI methods for guiding the design of fixed-sample-size trials to the adaptive design setting. These methods appropriately adjust for the adaptive nature of the design and have been operationalized in the context of a hypothetical case study.

We have considered adaptive designs with clinical effectiveness stopping rules based on recommendations by Flight et al. 11 and suggestions from the public advisory group supporting this research. However, application of VOIA methods could be extended and applied at the interim analysis of an adaptive design to allow research teams to assess the cost-effectiveness of continuing and to inform the design of the rest of the trial. A simple approach would be to update the EVSI calculation with the available data at the interim; however, this does not take account of all possible future interim analyses. Using health economic outcomes during an adaptive trial has been discussed in the literature 33,37-39 ; however, care is needed to ensure the preferences of stakeholders are met. 11 The EcoNomics of Adaptive Clinical Trials (ENACT) 40 collaboration has explored how the value-based sequential approach of Chick et al. 39 and Alban et al. 41 can be applied in the context of publicly funded research in the United Kingdom. Using 2 retrospective case studies, they considered the methodology's strengths, such as incorporating the ultimate technology adoption decision in the design and analysis of a trial, and challenges, including the application of the methods within current funding structures. 42, 43

Implications for Practice and Research

We recommend researchers adjust analyses for the adaptive nature of the designs to avoid introducing bias, reflecting current reporting and regulatory guidance for adaptive designs. 44, 45 As discussed by Flight, 18 adjusted model parameters cannot always be directly estimated from the trial data, and so alternative methods are required. As such, the difference between adjusted and unadjusted estimates may be a consequence of the different analysis methods as well as biases introduced by the design. We reported the adjusted estimates for the adaptive designs and unadjusted estimates for the fixed-sample-size designs, as we felt that this best reflected the analysis approach that would be undertaken in practice and reflected current guidance. 10 The VOIA approach outlined offers a formal way to quantify and compare the value of fixed-sample-size and adaptive designs. This will enable researchers to provide a quantified justification for their choice of adaptive design as per the recent guidance from the Food and Drug Administration in the United States. 45 We anticipate these methods will also be used by research teams to inform discussions on the best choice of trial design.
We have compared the ESS, EVSI, and ENBS to identify the optimal design. Other factors may include the potential maximum sample size. As discussed, the Pocock stopping rule requires a larger sample size if the trial does not stop early compared with the O'Brien-Fleming rule. Funders, for example, will need to consider the financial and practical implications should the trial continue to the maximum sample size. Following the approach of Willan and Kowgier, 33 we have assumed the opportunity cost is equal to the a priori INB. However, this will be true only if the new intervention cannot be implemented in practice before the trial ends. The opportunity cost may be zero if the intervention is potentially cost-effective and can be used in practice while research is ongoing. 46 Research teams should select the appropriate opportunity cost for their setting. Using this approach may require a large investment of work before the trial is funded. In the CACTUS case study, pilot data and a health economic model were available, reducing the time burden of the VOIA. Application of this approach may be limited to contexts in which an economic model is available or a model can be developed quickly alongside the design of the trial. As highlighted by Flight et al., 11 for these methods to be used to their full potential, funding bodies need to consider alternative ways to fund this work. As with other EVSI methods, the computation time is high. 27, 47 For 5000 PSA samples, it took approximately 5 and 7 h to run the designs with 2 and 5 analyses, respectively. A full range of trial designs should be compared with a high number of PSA samples. 10 However, this may not be viable given the time constraints associated with designing clinical trials for a grant application. Alternative methods for the calculation of EVSI [47] [48] [49] [50] could decrease the computation time. We have focused on the commonly used GSD; however, this approach could be considered for other adaptive designs. For example, Ward et al. 51 used EVSI to compare the optimal design of a 3-arm trial with and without an interim futility analysis. To the best of our knowledge, this is the first adaptation of EVSI to guide the design of a GSD in which interim adaptations are focused on clinical effectiveness. These methods reflect the views of key stakeholders in health technology assessment on the use of health economics in adaptive design 11 and build on existing guidance and methods in VOIA. 10, 12, 27 These methods have the potential to affect the design of adaptive trials that are increasingly used in practice. 5 We have used a hypothetical case study to illustrate how the methods can be applied in practice. The results are context specific; for example, there were small differences between the ENBS for the designs considered, and the bias adjustments had a limited impact. We cannot draw generalizable conclusions about the performance of adaptive and fixed-sample-size designs. However, the adapted VOIA methods and the presentation of the results can be applied to different contexts. Health economics is rarely used in the design and analysis of adaptive clinical trials. We discuss how existing VOIA methods can be adapted to guide the design of a GSD based on the number of analyses and clinical effectiveness stopping rule. This can guide and justify the choice of characteristics and prevent limited research budgets being wasted. 
We recommend that adjusted analyses are presented to control for the potential impact of the adaptive designs and to maintain the accuracy of the calculations.

The authors would like to thank the public advisory group for all their insight and support throughout the project. We would also like to thank members of the CACTUS pilot and Big CACTUS trial team for sharing their expertise and reading early drafts of the work.

Laura Flight https://orcid.org/0000-0002-9569-8290

Supplementary material for this article is available on the Medical Decision Making website at http://journals.sagepub.com/home/mdm.

References
1. Annual efficient studies funding calls for CTU projects
2. Adaptive designs in clinical drug development: an executive summary of the PhRMA working group
3. Adaptive designs for confirmatory clinical trials
4. Adaptive designs in clinical trials: why use them, and how to run and report them
5. Adaptive designs undertaken in clinical research: a review of registered clinical trials
6. A literature review of applied adaptive design methodology within the field of oncology in randomised controlled trials and a proposed extension to the CONSORT guidelines
7. Adaptive design clinical trials: a review of the literature and ClinicalTrials.gov
8. Dexamethasone in hospitalized patients with Covid-19: preliminary report
9. A review of clinical trials with an adaptive design and health economic analysis
10. Value of information analytical methods: report 2 of the ISPOR Value of Information Analysis Emerging Good Practices Task Force
11. How can health economics be used in the design and analysis of adaptive clinical trials? A qualitative analysis
12. Value of information analysis for research decisions: an introduction. Report 1 of the ISPOR Value of Information Analysis Emerging Good Practices Task Force
13. Computer therapy compared with usual care for people with long-standing aphasia poststroke: a pilot randomized controlled trial
14. The Design and Analysis of Sequential Clinical Trials
15. Group sequential methods in the design and analysis of clinical trials
16. A multiple testing procedure for clinical trials
17. Group Sequential Methods with Applications to Clinical Trials
18. The Use of Health Economics in the Design and Analysis of Adaptive Clinical Trials
19. Expected value of sample information calculations in medical decision modelling
20. Research prioritization based on expected value of partial perfect information: a case-study on interventions to increase uptake of breast cancer screening
21. Exploring the research design space: the expected value of information for sequential research designs
22. Using an iterative approach to economic evaluation in the drug development process
23. Practical guide to sample size calculations: an introduction
24. On the bias of maximum likelihood estimation following a sequential test
25. Supplementary analysis at the conclusion of a sequential clinical trial
26. Parameter estimation following group sequential hypothesis testing
27. Estimating the expected value of sample information using the probabilistic sensitivity analysis sample: a fast, nonparametric regression-based method
28. Ades AE. Expected value of sample information for multi-arm cluster randomized trials with binary outcomes. Med Decis Making
29. Calculating the expected value of sample information in practice: considerations from three case studies
30. Computing the expected value of sample information efficiently: expertise and skills required for four model-based methods
31. Globally optimal trial design for local decision making
32. Enhancing R&D in science-based industry: an optimal stopping model for drug discovery
33. Determining optimal sample sizes for multi-stage randomized clinical trials using value of information methods
34. Cost-utility of self-managed computer therapy for people with aphasia
35. Computerised speech and language therapy or attention control added to usual care for people with long-term post-stroke aphasia: the Big CACTUS three-arm RCT
36. National Institute for Health and Care Excellence
37. One-sided sequential stopping boundaries for clinical trials: a decision-theoretic approach
38. Optimal Bayesian sequential sampling rules for the economic evaluation of health technologies
39. A Bayesian decision theoretic model of sequential experimentation with delayed response
40. Value-adaptive clinical trial designs for efficient delivery of NIHR research
41. Value-based clinical trials: selecting trial lengths and recruitment rates in different regulatory contexts
42. Value-adaptive clinical trial designs for efficient delivery of research: actions, opportunities and challenges for publicly funded trials
43. Application of a Bayesian value-based sequential model of a clinical trial to the HERO and CACTUS case studies
44. The Adaptive designs CONSORT Extension (ACE) Statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design
45. Guidance for industry: adaptive design clinical trials for drugs and biologics
46. Informing a decision framework for when NICE should recommend the use of health technologies only in the context of an appropriately designed programme of evidence development
47. Efficient Monte Carlo estimation of the expected value of sample information using moment matching
48. An efficient estimator for the expected value of sample information
49. Computing expected value of partial sample information from probabilistic sensitivity analysis using linear regression metamodeling
50. A Gaussian approximation approach for value of information analysis
51. PSU4: Value of information for adaptive trials: proof of concept study in multi-arm multi-stage trials of interventions for the prevention of surgical site infections