title: Cost-Effectiveness Analysis in Implementation Science: a Research Agenda and Call for Wider Application
authors: Krebs, Emanuel; Nosyk, Bohdan
date: 2021-03-20
journal: Curr HIV/AIDS Rep
DOI: 10.1007/s11904-021-00550-5

PURPOSE OF REVIEW: Cost-effectiveness analysis (CEA) can help identify the trade-offs decision makers face when confronted with alternative courses of action for the implementation of public health strategies. Application of CEA alongside implementation scientific studies remains limited. We aimed to identify areas for future development in order to enhance the uptake and impact of model-based CEA in implementation scientific research.

RECENT FINDINGS: Important questions remain about how to broadly implement evidence-based public health interventions in routine practice. Establishing population-level implementation strategy components and distinct implementation phases, including planning for implementation, the time required to scale up programs, and the sustainment efforts required to maintain them, can help determine the data needed to quantify each of these elements. Model-based CEA can use these data to determine the added value associated with each of these elements across systems, settings, population subgroups, and levels of implementation to provide tailored guidance for evidence-based public health action. There is a need to integrate implementation science explicitly into CEA to adequately capture diverse real-world delivery contexts and make detailed, informed recommendations on the aspects of the implementation process that provide good value.

SUMMARY: We describe examples of how model-based CEA can integrate implementation scientific concepts and evidence to help tailor evaluations to local context. We also propose six distinct domains for methodological advancement in order to enhance the uptake and impact of model-based cost-effectiveness analysis in implementation scientific research.

Now more than ever we need to carefully evaluate the implementation of public health programs, to ensure not only their effectiveness but also the effectiveness of the programs' implementation and, ultimately, their value relative to other competing interests. Economic evaluation, and cost-effectiveness analysis (CEA) specifically, can help identify the trade-offs, or opportunity costs, decision makers face when confronted with alternative courses of action for the implementation of public health strategies [1]. By determining the value provided by successful program implementation, CEA can not only help inform resource allocation decisions but also provide evidence for determining the optimal scalability and sustainability of implementation strategies addressing the health needs of diverse populations [2]. Furthermore, CEA can help determine the maximum returns on investment that can be obtained from improving implementation activities [3-6]; however, realizing the full public health benefit of systematically incorporating evidence-based interventions (EBIs) into routine practice requires generalizable evidence of how these have been broadly implemented and sustained in practice [3, 7, 8]. For instance, HIV testing provides outstanding value and can even be cost-saving in the long term in high-prevalence populations and settings [9]. Nonetheless, real-world evidence on the scale of implementation of HIV-testing programs is sparse [9].
Levels of implementation documented in the public domain fall short of what would be required, in combination with a wide range of EBIs to treat and protect against infection, to reach the targets of the US 'Ending the HIV Epidemic' initiative [10]. Similarly, medication for opioid-use disorder can provide exceptional value in addition to substantial health benefits [11]. However, there remains a pressing need for data to guide evidence-based decision-making in addressing the North American opioid crisis, particularly for assessing the impact of public health strategies over different timescales, for different populations or settings, and in combination with multiple interventions [12]. Of course, the global COVID-19 pandemic poses the most significant and immediate challenge in public health implementation. A sound understanding of what may limit the efficiency of large-scale implementation of testing and contact tracing programs across different contexts could have helped guide the effective delivery of COVID-19 vaccines [13].

While CEA has a 40-year history in population health research [14], its application alongside implementation scientific studies has been limited [1, 15, 16]. A recent systematic review of economic evaluations in the implementation science literature revealed only 14 articles focused on implementation [17], with only four CEAs using quality-adjusted life years (QALYs) [18]. While the review found improvements in the quality of CEAs over time [17, 19], best-practice recommendations, such as reporting the perspective from which the analysis is conducted, investigating parameter uncertainty, or reporting cost calculations, were not adhered to in most studies [14, 20-22]. Likewise, another recent systematic review found a dearth of economic evaluations of interventions designed to improve the implementation of public health initiatives, also noting the mixed quality of the evidence from these studies [23].

The use of model-based economic evaluation has been suggested as a way to help make CEA standard practice in implementation science [1]. Using mathematical relationships, models such as dynamic transmission models, agent-based models, state-transition cohort models, or individual-based microsimulation models can provide a flexible framework to compare alternative courses of action. As the COVID-19 pandemic has revealed, simulation modeling is playing a greater role than ever in public-health decision-making [24-26]. The context in which healthcare services are delivered has been shown to influence the cost-effectiveness of interventions [9, 27]. Simulation models that capture the heterogeneity across settings are uniquely positioned to offer guidance on contextually efficient strategies to implement [12, 28, 29]. Fundamentally, public-health programs aim to expand the implementation of evidence-based practices by increasing their reach, adoption, and sustainment to achieve better health outcomes. We outline areas for future development in CEA that would most complement the field of implementation science, in the interest of providing decision makers with pragmatic and context-specific information on the value of implementation and sustainment strategies.
Specifically, in order to enhance the uptake and impact of model-based CEA in implementation scientific research, we suggest the need for advancement in (1) determining the reach of EBIs at the population level and within subgroups; (2) reporting adoption of EBIs at a system level and across settings; (3) improving the documentation of pre-implementation planning activities; (4) generating evidence on scaling up EBIs to the population level; (5) measuring the sustainment of EBIs over time; and (6) generating precise and generalizable costs of each of the above components.

Increasing the impact of a public health program depends not only on the effectiveness of its individual components but also on the extent and quality of its implementation. While implementation scientists have developed frameworks to characterize implementation theory and practice [30], recommendations on how these may be incorporated into CEA are scarce. Applying these frameworks in CEA can ensure we fully capture the design features of a given EBI and the relevant costs for each of the implementation components. This can provide a foundation from which to improve the evaluation of implementation strategies and population health. While this article is not intended as a complete review of all individual approaches available, we describe examples of how model-based CEA can integrate implementation science to help tailor evaluations to local context and focus decision makers on implementation strategies that may provide the greatest public health impact and value for money.

The Reach, Effectiveness, Adoption, Implementation, Maintenance (RE-AIM) framework has been broadly applied to evaluate health policies and EBIs since it was initially published more than two decades ago [31, 32]. The framework's domains are recognized as essential components in evaluating population-level effects [30]. We applied the RE-AIM framework in a recent model-based CEA evaluating combinations of EBIs in HIV/AIDS [33]. We used RE-AIM to define the scale of delivery, the periods over which the implementation of EBIs is scaled up and sustained, and the costs of pre-implementation, implementation, delivery, and sustainment of each intervention [9]. Table 1 describes the definitions and assumptions we used for explicitly integrating intervention and implementation components in simulation modeling.

The scale of delivery refers to the extent to which an EBI ultimately reaches its intended population. In a prior application, we defined scale of delivery as the product of reach and adoption (Scale_ij = Reach_ijk × Adoption_ik) for each intervention i, intended population j, and delivery setting k. We used the best evidence available in the public domain for each healthcare setting in which an EBI would be delivered, using similar evidence specific to each population subgroup for which an EBI was intended. Reach was defined as the participation rate in a given intervention, conditional on the probability that an individual would both access services in setting k and accept the intervention. Adoption was defined as the proportion of a delivery setting actually implementing the intervention. Consequently, the population-level impact of each intervention was the product of the resulting scale of delivery and its effectiveness (Population-based Impact_i = Scale_ij × Effectiveness_i).
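A minimal sketch of these two relationships follows, with the intervention, population, and setting subscripts collapsed to a single combination for brevity; all input values are hypothetical placeholders, not estimates from the cited analyses.

```python
# Minimal sketch of the scale-of-delivery and population-impact relationships
# defined above, for one intervention, subgroup, and delivery setting.
# All input values are hypothetical placeholders.

def scale_of_delivery(reach: float, adoption: float) -> float:
    """Scale = Reach x Adoption for a given intervention, intended
    population, and delivery setting."""
    return reach * adoption

def population_impact(scale: float, effectiveness: float) -> float:
    """Population-based Impact = Scale x Effectiveness."""
    return scale * effectiveness

# Hypothetical example: an HIV testing intervention in primary care.
reach = 0.60      # P(access the setting) x P(accept the intervention)
adoption = 0.40   # share of practices in the setting implementing the EBI
effect = 0.25     # relative effectiveness among those reached

scale = scale_of_delivery(reach, adoption)
print(f"Scale of delivery: {scale:.2f}")         # 0.24
print(f"Population-level impact: {population_impact(scale, effect):.3f}")  # 0.060
```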
We incorporated a scale-up period to account for the time required to reach the defined scale of delivery, followed by a sustainment period necessary for maintaining that scale of delivery. While we strived to use the best publicly available data to inform each of the RE-AIM components, evidence for the population-level implementation of interventions, scale-up, adoption, and sustainment was very limited. Furthermore, as has been previously noted [6, 32], data on costs associated with implementation components other than those specific to the delivery of the intervention itself are mostly lacking. Consequently, our analysis required numerous assumptions for deriving costs, particularly for representative patient volumes across healthcare settings, physician practices, or HIV clinics. Nonetheless, it is encouraging that the Standards for Reporting Implementation Studies (StaRI) [34], a checklist for the transparent reporting of studies evaluating implementation strategies, distinguishes between costs specific to the interventions and to the implementation strategy, suggesting more evidence on costs specific to implementation may soon become available.

Hybrid designs combine evaluation of both the implementation process and the intervention. The typology proposed by Curran et al. outlines different considerations for choosing the appropriate hybrid design, with the Hybrid Type 2 design simultaneously evaluating the effectiveness of an intervention and of its implementation [35]. As with the StaRI reporting guidelines, the typology's recommendations differentiate cost data for the implementation strategy from cost data for the intervention, but they remain general and do not distinguish between the various elements of the implementation process (e.g., planning, scaling up, and sustainment). CEA conducted alongside Hybrid Type 2 designs provides an opportunity to also evaluate the different components of implementation strategies most relevant and useful for decision makers and implementers [35, 36]. Mirroring the recommended conditions for use of a Hybrid Type 2 design [35], CEA-specific considerations may include (1) accounting for the costs of pre-implementation activities that ensure the face validity of the implementation strategies being proposed; (2) emphasizing the need for evidence on individual implementation components (e.g., varying levels of reach and adoption) and their costs, supporting applicability to new settings and populations; (3) accounting for the potential displacement of other services, which may result in indirect costs not captured by an evaluation restricted to the intervention(s) of interest; (4) demonstrating the effectiveness (and value) of large-scale implementation to support momentum of adoption in routine care; (5) investigating hypothetical implementation dynamics to iteratively support the development of adaptive implementation strategies; and (6) conducting value of information analysis to estimate the additional value the study may produce by reducing uncertainty in decision-making. Simulation modeling and CEA can enhance the relevance of implementation science by measuring the effects of different, sometimes hypothetical, implementation components (e.g., reach, adoption, and sustainment, which are ultimately implementation outcomes themselves), while taking into consideration the impact of context and accounting for variation in costs, ultimately providing value-based recommendations to improve public health.
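To make consideration (6) concrete, the following is a minimal sketch of an expected value of perfect information (EVPI) calculation for a two-strategy choice: EVPI is the expected net benefit of always choosing correctly minus the net benefit of the best choice under current uncertainty. The willingness-to-pay threshold, population size, cost, and effect distribution are all hypothetical assumptions.

```python
# Minimal sketch of the expected value of perfect information (EVPI) for
# consideration (6) above: choosing between the status quo and a new
# implementation strategy with an uncertain effect. All inputs are
# hypothetical assumptions for illustration.

import random
random.seed(1)

WTP = 100_000          # willingness to pay per QALY (hypothetical)
POPULATION = 100_000   # size of the affected population (hypothetical)

def net_benefit(effect_qalys: float, cost_per_person: float) -> float:
    """Incremental net monetary benefit of the new strategy, population-wide."""
    return (effect_qalys * WTP - cost_per_person) * POPULATION

# Parameter uncertainty: simulated draws of the incremental QALY gain.
draws = [random.gauss(0.010, 0.006) for _ in range(10_000)]
nb_new = [net_benefit(e, cost_per_person=800) for e in draws]
nb_status_quo = 0.0

expected_max = sum(max(nb, nb_status_quo) for nb in nb_new) / len(nb_new)
max_expected = max(sum(nb_new) / len(nb_new), nb_status_quo)
print(f"EVPI: ${expected_max - max_expected:,.0f}")
```

A positive EVPI bounds how much further research to reduce decision uncertainty could be worth before implementation proceeds.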
The evidence base supporting many public-health interventions is strong and diverse, but important questions remain about how to broadly implement them in routine practice to achieve greater population-level impact [1]. From a methodological standpoint, there is a need to integrate implementation science explicitly into CEA to adequately capture diverse real-world delivery contexts. CEA can produce detailed, informed recommendations on the aspects of the implementation process that provide good value. An effective use of model-based CEA for decision-making thus requires considering the feasibility and costs of individual implementation components, including planning for implementation, the time required to scale up programs, reach, adoption, and sustainment (Fig. 1). We identify areas of advancement needed for model-based CEA to further support public health action by estimating the potential impact of these implementation components, discussing the resulting implications for cost considerations and further methodological development (Table 2).

In our prior work evaluating combinations of EBIs in HIV/AIDS, we derived the scale for individual interventions by determining their potential reach based on how population-level utilization of different healthcare settings (e.g., primary care, emergency departments) varied by geographic region, racial/ethnic group, and sex [9, 33]. For example, to determine the potential reach of an HIV testing intervention in primary care [9], we used regional US estimates of persons having seen a doctor in the past year (e.g., 59.3% of Hispanic/Latino men in the West, 93.9% of Black/African American women in the Northeast) [37]. CEAs can thus provide a realistic assessment of the potential value of implementing public-health programs by explicitly accounting for underlying structural barriers to healthcare access. As adaptation of EBIs can increase population reach within settings, helping to achieve more equitable service provision [38], model-based CEA can also allow for determining the additional value (and health benefits) that can result from efforts to reduce existing disparities [39].

Implementation scientists have argued for health equity to serve as a guiding principle for the discipline [40]. Simulation models can help us understand the extent and deployment of resources required to overcome these disparities [39, 40]. The recent extension of the RE-AIM framework now incorporates health-equity considerations for each of its domains [41]. Specifically, it suggests the importance of determining reach in terms of population characteristics and social determinants of health [41]. However, reporting of real-world implementation data such as these is very limited [9, 42]. The emerging 'Distributional Cost-Effectiveness Analysis' framework aims to explicate the trade-offs in strategies intended to reduce health inequities [43], as opposed to simply maximizing total health, though of course explicit evidence to this end must be available to execute such analyses [40, 44, 45]. Capturing the population-level impact of EBIs requires information on adoption rates within service-delivery settings, as well as across delivery settings and geographic areas [9]; a minimal sketch combining these reach and adoption inputs follows.
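In this sketch, the two utilization figures are the regional estimates quoted above [37]; the population shares, test acceptance probability, and adoption rate are hypothetical assumptions for illustration only.

```python
# Sketch of how subgroup-specific utilization and setting-level adoption
# combine to bound the scale of delivery of a primary-care HIV testing EBI.
# The two utilization figures are the regional estimates quoted above [37];
# everything else is a hypothetical assumption.

subgroups = {
    # subgroup: (population share, P(saw a doctor in the past year))
    "Hispanic/Latino men, West":               (0.50, 0.593),
    "Black/African American women, Northeast": (0.50, 0.939),
}

acceptance = 0.80  # hypothetical P(accepting an offered HIV test)
adoption = 0.40    # hypothetical share of practices offering routine testing

scale = sum(share * utilization * acceptance * adoption
            for share, utilization in subgroups.values())
print(f"Population-weighted scale of delivery: {scale:.3f}")  # ~0.245
```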
For example, we estimated the potential adoption in primary care of an HIV testing intervention using the proportion of physicians who offer routine HIV tests (e.g., 31.7% of physicians accepting new patients following Affordable Care Act expansion in California, 54.0% of physicians responding to the United States Centers for Disease Control and Prevention's Medical Monitoring Project Provider Survey) [9, 46, 47]. Consequently, information on current levels of implementation (when the objective is to improve access to existing EBIs) or on feasible levels of implementation (for EBIs not previously implemented) is essential. Recently, Aarons et al. proposed a ranking of the evidence required to evaluate the transportability of EBIs [7]. Extensions to the value of perfect implementation framework have similarly explored the influence of relaxing assumptions of immediate and perfect adoption [4, 5] to determine whether it is worthwhile to collect more data to reduce decision uncertainty. Another recent extension to this framework has investigated the influence of heterogeneity in adoption across geographical areas [48]. Overly optimistic expectations of the transportability of findings from one setting to another are one of the major causes of unsuccessful policy implementation [49].

Model-based CEA can inform the design of implementation strategies by determining the long-term benefits of increasing adoption, reach, or both (Fig. 1 [A]). This can be particularly informative when promoting uptake outside of formal healthcare settings to reach different populations via adoption of EBIs in low-barrier, community-based settings. Addressing determinants of health such as income and employment, housing, geography and availability of transportation, stigma or discrimination, and health literacy offers great public health promise by delivering services to people who might not seek treatment through formal care settings and to historically underserved populations. While the Consolidated Framework for Implementation Research can help identify contextual determinants for increasing reach and adoption [50], adapting interventions to different delivery settings or target populations will be more complex and prolonged. Model-based CEA can help estimate the costs and benefits of prevention interventions that promote long-term improvements in public health, which are more difficult to measure than the effects of short-term clinical interventions.

Planning for Implementation, Scaling Up, and Sustaining Impact

While calls to explicitly account for the costs attributable to implementation strategies are not novel [51], reporting remains uncommon [23]. Proposals of pragmatic approaches are still recent [52, 53], particularly when considering pre-implementation activities that may have a direct impact on the scale of implementation [54] or on program feasibility and sustainability [55]. In estimating costs for implementing combinations of EBIs in HIV/AIDS for six US cities, we derived public health department costs attributable to planning activities to coordinate implementation in local healthcare facilities [9]. We derived personnel needs and used full-time equivalent salaries based on work we have done with local health departments in determining programmatic costs associated with planning for the adoption of EBIs [56]; a sketch of this type of calculation follows.
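The following is a minimal sketch of a full-time-equivalent (FTE) based planning-cost derivation; the roles, FTE fractions, and salaries are hypothetical placeholders, not the figures used in the cited analyses [9, 56].

```python
# Minimal sketch of deriving annual pre-implementation planning costs from
# personnel needs and full-time-equivalent (FTE) salaries. Roles, FTE
# fractions, and salaries are hypothetical placeholders.

planning_personnel = {
    # role: (FTE fraction per year, annual salary incl. fringe, USD)
    "program manager": (0.50, 110_000),
    "health educator": (1.00,  72_000),
    "data analyst":    (0.25,  85_000),
}

annual_planning_cost = sum(fte * salary
                           for fte, salary in planning_personnel.values())
print(f"Annual planning cost: ${annual_planning_cost:,.0f}")  # $148,250
```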
Additionally, we scaled personnel needs according to population size in each city to account for the number of healthcare facilities that would need to be targeted by such an initiative [9]. We propose these to be distinct from costs explicitly attributable to implementation activities such as EBI adaptation or ensuring implementation fidelity [6]. Accounting for planning costs in CEA is essential when considering the efforts required to improve adoption of EBIs across different service-delivery settings (e.g., HIV testing in clinical settings versus sexual health clinics or community-based organizations) [51]. By working in close collaboration with decision makers and forecasting the long-term costs and benefits of alternative implementation strategies, CEA can provide valuable information for making implementation decisions, thus helping determine the best fit for a given context.

Model-based CEA can also extrapolate the effects of shortening the scale-up period needed to reach a targeted level of implementation. Using dynamic transmission models to capture the impact of achieving herd immunity levels from mass vaccination serves as an example [57]. While constant, linear scale-up over the implementation period can be used as a simplifying assumption [9, 57-59], model-based CEA can also provide implementation scientists with a sense of the relative value of rapid or staggered patterns of scale-up (Fig. 1 [B]). For instance, implementing physician alerts in electronic medical records to promote adherence to HIV antiretroviral therapy can be rapidly scaled up within a healthcare system, whereas a multi-component intervention requiring training for medical personnel and changes to existing processes may require a more time-consuming sequential implementation if there are constraints on human resources [9]. If available, evidence on the cost and effectiveness of interventions designed to improve implementation could also be incorporated explicitly [5]. Recent retrospective work by Johannesen et al. has explored the relative gains of eliminating slow or delayed scale-up of prevention in cardiovascular disease [48]. Furthermore, evidence on the uptake of specific health technologies based on diffusion theory could provide a methodological foundation to prospectively model the impact on value provided by different patterns of adoption (and therefore scale-up) in the implementation of EBIs [60].

Determining the costs and health benefits of improved sustainment over time can allow decision makers to plan for the effort levels necessary to maintain an implementation strategy (Fig. 1 [C]). The concept of sustainment, however, lacks a consensus definition in implementation science, and there is limited research on how best to sustain an effective intervention after implementation [61, 62]. There is also a need for better reporting of strategies facilitating sustainment to reduce this large research-to-practice gap [62, 63]. There has been a growing focus on long-term evaluation for the sustainment of EBIs [41], with the recognition that ongoing interventions may be required to sustain the impact of EBIs [63]. CEA can be utilized to assess dynamic scenarios, including the timing for and frequency of retraining or other sustainment activities, ultimately determining how much it may be worth investing to sustain implementation strategies.
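A minimal sketch of such a dynamic scenario, assuming hypothetical values throughout: the effect of an implemented EBI erodes each year unless a retraining round (at a fixed cost) restores it, and net monetary benefit is compared across retraining intervals.

```python
# Sketch of a sustainment trade-off of the kind described above: the effect
# of an implemented EBI decays over time unless periodic retraining restores
# it. Decay rate, retraining cost, benefit value, and horizon are all
# hypothetical assumptions.

import math

BENEFIT_PER_FULL_EFFECT_YEAR = 1_500_000  # monetized benefit at full effect
RETRAIN_COST = 250_000                    # cost per retraining round
DECAY = 0.30                              # annual exponential decay of effect
HORIZON = 10                              # years

def net_benefit(retrain_every: int) -> float:
    """Net monetary benefit over the horizon when retraining occurs every
    `retrain_every` years, restoring the effect to 100%."""
    benefit, cost, effect = 0.0, 0.0, 1.0
    for year in range(HORIZON):
        if year > 0 and year % retrain_every == 0:
            effect, cost = 1.0, cost + RETRAIN_COST
        benefit += effect * BENEFIT_PER_FULL_EFFECT_YEAR
        effect *= math.exp(-DECAY)  # effect erodes until the next retraining
    return benefit - cost

for interval in (1, 2, 5, 10):
    print(f"retrain every {interval:>2} years: "
          f"net benefit ${net_benefit(interval):,.0f}")
```

Under these placeholder values, more frequent retraining dominates; with a higher retraining cost or slower decay the preferred interval lengthens, which is precisely the trade-off such an analysis is meant to expose.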
Of course, capturing the interaction between each of the above-noted implementation components is another advantage provided by simulation modeling. Working in close collaboration with decision makers, analysts can use simulation models to assess the long-term impact and value of strategies varying any combination of the populations reached; adoption within and across settings; planning capacity; the scale-up period; and the duration of sustainment (Table 3). Such collaborations can happen during planning for implementation or simultaneously during any stage of executing an implementation strategy, supporting adaptive implementation strategies in near real time.

Uncertainty surrounding the costs of implementation is a fundamental challenge to the broad implementation of EBIs, particularly in public health [64]. While there is a breadth of frameworks to guide adaptation, there has been little emphasis on how best to estimate the costs of implementation strategies according to the scale of delivery and in diverse settings offering a different scope of services. For instance, the Consolidated Framework for Implementation Research can help identify barriers to implementation and includes cost considerations for EBIs [50]; however, evidence on the costs of implementation components is necessary for CEA to be further incorporated into implementation scientific research (Table 2) [1].

Consistency in adherence to best practices and reporting guidelines such as the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) [65] has been important in helping establish model-based CEA as standard practice for decision-making in healthcare [14]. Similar guidelines for reporting implementation costs could further help promote the use of implementation science with CEA. There are a number of high-quality microcosting examples in both the public health [66] and the implementation science literature [54]. Similarly, some studies have proposed approaches for the standardized reporting of pre-implementation and implementation component costs [52, 54, 55, 67], but these have not yet been used widely or applied to population-level EBIs.

Moreover, CEA typically uses assumptions of constant average costs (i.e., linear cost functions), yet the functional form of costs can matter, particularly for widespread implementation of EBIs [68]. Assessing economies of scale and scope in delivering individual and combined interventions will no doubt be an area of intense inquiry and scrutiny in the coming years as health systems adapt to our new reality. A study using a regression-based approach to investigate factors determining costs for a large HIV prevention program conducted through non-government organizations in India incorporated a diverse set of population and setting characteristics and found decreasing average costs as the scale of delivery increased [69]. A similar approach in local public health agencies across the state of Colorado found evidence suggesting economies of scale in the surveillance of communicable diseases, with average costs one-third lower for high-volume agencies as compared to low-volume agencies [70]. Economies of scope offer the potential for improving the impact of public health programs [71-73]. The costs of implementation components for delivering more than one intervention at once may differ and require careful consideration [71].
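A minimal sketch of a nonlinear cost function of the kind estimated in the regression-based studies cited above [69, 70]; the functional form and all coefficients are hypothetical assumptions, chosen so that a scale elasticity below one yields falling average costs.

```python
# Sketch of a nonlinear total-cost function in which average cost per unit
# delivered falls as scale of delivery rises (economies of scale). The
# functional form and all coefficients are hypothetical assumptions.

def total_cost(volume: float, fixed: float = 50_000,
               unit_cost: float = 200, scale_elasticity: float = 0.8) -> float:
    """Total cost = fixed + unit_cost * volume**elasticity.
    An elasticity below 1 implies economies of scale."""
    return fixed + unit_cost * volume ** scale_elasticity

for volume in (100, 1_000, 10_000):
    avg = total_cost(volume) / volume
    print(f"volume={volume:>6}: average cost per unit = ${avg:,.2f}")
```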
Determining the value of integrated programs targeting multiple disease areas with CEA can be enhanced by incorporating budget-impact analysis, which can also help build consensus for financing arrangements across different sectors of the health system [74, 75]. As the paradigm for CEA in implementation science shifts to systematically account for the costs of the different components necessary for achieving population-level implementation of EBIs, the potential for model-based CEA to support and complement implementation scientific research in public health will continue to increase.

We believe in the promotion of implementation science and cost-effectiveness analysis as vital tools that should be applied in evaluating every meaningful public-health initiative. As the current pandemic has made painfully clear, it is imperative for public-health scientists to advance a range of new approaches to determine which implementation strategy may provide the greatest public-health value and thus promote sustainable and efficient use of limited resources. We conclude by acknowledging that our proposed agenda will require health researchers to continue strengthening partnerships with decision makers to advance how implementation science methods can be incorporated in CEA. Decision makers continuously need to account for courses of action that are shaped by social, political, and economic considerations. Model-based CEA is ideally positioned to play a central role in providing a sound understanding of how well diverse implementation strategies could work, and how we can strive to further advance public health.

Funding: This study was funded by the National Institute on Drug Abuse (NIDA grant no. R01DA041747).

Conflict of Interest: The authors declare no competing interests.

Human and Animal Rights and Informed Consent: This article does not contain any studies with human or animal subjects performed by any of the authors.
Intended populations
• Determining the value of expanding the reach of EBIs in the intended population and among different populations or subgroups
• Determining the impact of implementation strategies that aim to reduce health inequities

Service delivery
• Determining the value of expanding adoption in the target delivery setting or across different delivery settings
• Determining the impact of implementing EBIs outside of formal healthcare settings to deliver services to underserved individuals

Planning
• Working with decision makers during pre-implementation to project the costs and benefits of different implementation strategies
• Prospectively determining the potential budgetary impact of implementation strategy alternatives

Scaling up
• Determining the impact and value of speeding up implementation within and across delivery settings
• Exploring uncertainty resulting from different patterns in the timing of implementation

Sustaining impact
• Determining the impact of imperfect sustainment and how much it may be worth investing in sustainment efforts, both in terms of the required frequency and the extent of the required effort
• Exploring the impact of the de-implementation of existing EBIs

EBIs, evidence-based interventions; CEA, cost-effectiveness analysis

References

1. Economic evaluation of implementation strategies in health care
2. Six components necessary for effective public health program implementation
3. The value of implementation and the value of information: combined and uneven development
4. Adjusting estimates of the expected value of information for implementation: theoretical framework and practical application
5. Assessing the expected value of research studies in reducing uncertainty and improving implementation dynamics
6. Estimating the cost-effectiveness of implementation: is sufficient evidence available? Value Health
7. "Scaling-out" evidence-based interventions to new populations or new health care delivery systems
8. A mixed-methods study of system-level sustainability of evidence-based practices in 12 large-scale implementation initiatives
9. The impact of localized implementation: determining the cost-effectiveness of HIV prevention and care interventions across six US cities
10. Ending the HIV epidemic in the USA: an economic modelling study in six cities
11. Cost-effectiveness of publicly funded treatment of opioid use disorder in California
12. How simulation modeling can support the public health response to the opioid crisis in North America: setting priorities and assessing value
13. Planning for a COVID-19 Vaccination Program
14. Recommendations for conduct, methodological practices, and reporting of cost-effectiveness analyses: Second Panel on Cost-Effectiveness in Health and Medicine
15. Implementation science: a reappraisal of our journal mission and scope
16. Enhancing the impact of implementation strategies in healthcare: a research agenda
17. Use of health economic evaluation in the implementation and improvement science fields: a systematic literature review
18. QALYs: the basics. Value Health
19. The methodological quality of economic evaluations of guideline implementation into clinical practice: a systematic review of empiric studies
20. Model parameter estimation and uncertainty analysis: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force Working Group-6
21. Modeling good research practices: overview. A report of the ISPOR-SMDM Modeling Good Research Practices Task Force-1
22. Consolidated Health Economic Evaluation Reporting Standards (CHEERS): explanation and elaboration. A report of the ISPOR Health Economic Evaluation Publication Guidelines Good Reporting Practices Task Force
23. Economic evaluations of public health implementation-interventions: a systematic review and guideline for practice
24. Prediction models for diagnosis and prognosis of covid-19 infection: systematic review and critical appraisal
25. With COVID-19, modeling takes on life and death importance
26. Dissemination science to advance the use of simulation modeling: our obligation moving forward
27. Exploring the population-level impact of antiretroviral treatment: the influence of baseline intervention context
28. Implementation science for the prevention and treatment of HIV/AIDS
29. Mixed-method approaches to strengthen economic evaluations in implementation research
30. Making sense of implementation theories, models and frameworks
31. Evaluating the public health impact of health promotion interventions: the RE-AIM framework
32. The RE-AIM framework: a systematic review of use over time
33. What will it take to 'End the HIV epidemic' in the US? An economic modeling study in 6 cities
34. Standards for Reporting Implementation Studies (StaRI) statement
35. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact
36. Implementation science: relevance in the real world without sacrificing rigor
37. Public-use data file and documentation. Available from URL
38. The importance of context in implementation research
39. 'Ending the Epidemic' will not happen without addressing racial/ethnic disparities in the US HIV epidemic
40. Implementation research methodologies for achieving scientific equity and health equity
41. An extension of RE-AIM to enhance sustainability: addressing dynamic context and promoting health equity over time
42. Implementation science perspectives and opportunities for HIV/AIDS research: integrating science, practice, and policy
43. Using cost-effectiveness analysis to address health equity concerns
44. Economic dimensions of health inequities: the role of implementation research
45. Inequality in quality: addressing socioeconomic, racial, and ethnic disparities in health care
46. Initial health assessments and HIV screening under the Affordable Care Act
47. Routine HIV testing among providers of HIV care in the United States
48. Subcategorizing the expected value of perfect implementation to identify when and where to invest in implementation initiatives
49. Policy failure and the policy-implementation gap: can policy support programs help? Policy Design and Practice
50. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science
51. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda
52. A pragmatic method for costing implementation strategies using time-driven activity-based costing
53. Cost analysis of prevention research centers: instrument development
54. The cost of implementing new strategies (COINS): a method for mapping implementation resources using the stages of implementation completion
55. Costing the implementation of public health interventions in resource-limited settings: a conceptual framework
56. The cost-effectiveness of human immunodeficiency virus testing and treatment engagement initiatives in British Columbia
57. Impact of program scale and indirect effects on the cost-effectiveness of vaccination programs
58. Mandating the offer of HIV testing in New York: simulating the epidemic impact and resource needs
59. Estimating the cost of providing foundational public health services
60. Estimating future health technology diffusion using expert beliefs calibrated to an established diffusion model
61. Sustaining evidence-based interventions and policies: recent innovations and future directions in implementation science. Am Public Health Assoc
62. Evidence-based intervention sustainability strategies: a systematic review
63. RE-AIM Planning and Evaluation Framework: adapting to new science and practice with a 20-year review
64. (Committee on Public Health Strategies to Improve Health)
65. Consolidated Health Economic Evaluation Reporting Standards (CHEERS): explanation and elaboration. A report of the ISPOR Health Economic Evaluation Publication Guidelines Good Reporting Practices Task Force
66. Costs of expanded rapid HIV testing in four emergency departments
67. Measuring costs to community-based agencies for implementation of an evidence-based practice
68. Improved allocation of HIV prevention resources: using information about prevention program production functions
69. What determines HIV prevention costs at scale? Evidence from the Avahan programme in India
70. The economic cost of communicable disease surveillance in local public health agencies
71. Economic evaluations of mass drug administration: the importance of economies of scale and scope
72. Contact tracing for COVID-19: an opportunity to reduce health disparities and end the HIV/AIDS epidemic in the US
73. The potential epidemiological impact of COVID-19 on the HIV/AIDS epidemic and the cost-effectiveness of linked, opt-out HIV testing: a modeling study in six US cities
74. Sharing the costs of structural interventions: what can models tell us?
75. Budget impact analysis: principles of good practice. Report of the ISPOR 2012 Budget Impact Analysis Good Practice II Task Force
76. Development and testing of the Measure of Innovation-Specific Implementation Intentions (MISII) using Rasch measurement theory
77. Quantitative measures of health policy implementation determinants and outcomes: a systematic review
78. Implementation outcome instruments for use in physical healthcare settings: a systematic review

Publisher's Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.