key: cord-0077043-vq4ay5sz
authors: Brownson, Ross C.; Shelton, Rachel C.; Geng, Elvin H.; Glasgow, Russell E.
title: Revisiting concepts of evidence in implementation science
date: 2022-04-12
journal: Implement Sci
DOI: 10.1186/s13012-022-01201-y
sha: f929f00dfc88bf59d1950917a5ab2daa6b730dc6
doc_id: 77043
cord_uid: vq4ay5sz

BACKGROUND: Evidence, in multiple forms, is a foundation of implementation science. For public health and clinical practice, evidence includes the following: type 1 evidence on etiology and burden; type 2 evidence on effectiveness of interventions; and type 3 evidence on dissemination and implementation (D&I) within context. To support a vision for the development and use of evidence in D&I science that is more comprehensive and equitable (particularly for type 3 evidence), this article aims to clarify concepts of evidence, summarize ongoing debates about evidence, and provide a set of recommendations and tools/resources for addressing the "how-to" in filling the evidence gaps most critical to advancing implementation science.

MAIN TEXT: Because current conceptualizations of evidence have been relatively narrow and insufficiently characterized in our opinion, we identify and discuss challenges and debates about the uses, usefulness, and gaps in evidence for implementation science. A set of questions is proposed to assist in determining when evidence is sufficient for dissemination and implementation. Intersecting gaps include the need to (1) reconsider how the evidence base is determined, (2) improve understanding of contextual effects on implementation, (3) sharpen the focus on health equity in how we approach and build the evidence base, (4) conduct more policy implementation research and evaluation, and (5) learn from audience and stakeholder perspectives. We offer 15 recommendations to assist in filling these gaps and describe a set of tools for enhancing the evidence most needed in implementation science.

CONCLUSIONS: To address our recommendations, we see capacity as a necessary ingredient to shift the field's approach to evidence. Capacity includes the "push" for implementation science, where researchers are trained to develop and evaluate evidence that is useful and feasible for implementers and reflects community or stakeholder priorities. Equally important, there has been inadequate training and too little emphasis on the "pull" for implementation science (e.g., training implementers, practice-based research). We suggest that funders and reviewers of research adopt and support a more robust definition of evidence. By critically examining the evolving nature of evidence, implementation science can better fulfill its vision of facilitating widespread and equitable adoption, delivery, and sustainment of scientific advances.

Evidence, often informed by a complex cycle of observation, theory, and experiment [1], is a foundation of implementation science [2, 3]. Evidence is central in part because dissemination and implementation (D&I) science is based on the notion that there are practices and policies that should be widely used because scientific research concludes that they would have widespread benefits. In this context, an evidence-based intervention (EBI) is defined broadly to include programs, practices, processes, policies, and guidelines with some level of effectiveness [4].
Many of the underlying sources of evidence were originally derived from legal settings, taking on multiple forms including witness accounts, police testimony, expert opinions, and forensic science [5] . Building on these origins, evidence for public health and clinical practice comes in many forms, across three broad domains [6] [7] [8] : type 1: evidence on etiology and burden; type 2: evidence on effectiveness of interventions; type 3: evidence on implementation within context (Table 1 ). These three types of evidence are often not linear, but interconnected, iterative, and overlapping-they shape one another (e.g., if we have limited type 2 evidence then the ability to apply type 3 evidence is hampered). Across these three domains, we have by far the most type 1 evidence and the least type 3 evidence [6, 9] . Definitions of evidence and the associated processes (how evidence is used) vary by setting. In clinical settings, evidence-based medicine is "the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients" [10] . Evidence-based public health occurs across a range of community settings and is "the process of integrating science-based interventions with community preferences to improve the health of populations" [11] . Perhaps most relevant to implementation science, evidence-based decision-making is a multilevel process that involves collecting and implementing the best available evidence from research, practice, professional experience, and clinical or community partners [12] [13] [14] [15] . A robust, equitable, and sustainable approach to evidence-based decision-making takes both challenges and strengths into account (e.g., skills, leadership priorities, resources [16] [17] [18] [19] ) and places scientific evidence and stakeholder engagement in the center of the decision-making process [20] . For all types of evidence and particularly for type 3 evidence regarding D&I, complexity and context are essential elements [21] [22] [23] . Both PCORI [24, 25] and a recent update to the MRC guidance [26] have provided statements about researching complex health interventions that provide excellent recommendations and resources. We concur with most of these recommendations and add to their points and recommendations in this article. The most effective approaches often rely on complex interventions embedded in complex systems (e.g., nonlinear, multilevel interventions) where the description of core intervention components and their relationships involve multiple settings, audiences, and approaches [26] [27] [28] . Type 3 evidence is also highly context-dependent-the context for implementation involves complex adaptive systems that form the dynamic environment(s) in which discrete interventions and interconnected implementation processes are situated [29] . For example, in models such as the Dynamic Sustainability Framework, the EBI is embedded in the context of multiple factors in a practice setting (e.g., staffing, organizational climate) which is in turn embedded in a broader ecological system with a complex set of variables (e.g., policy, regulations, population characteristics) [30] . This embeddedness also should take into account dynamism-that an EBI may stay true to its original function but need to evolve form over time to adapt to changing population needs, new evidence, and the "fit" of evidence with complex and changing context [30] [31] [32] . 
Much has been written about the terminology of evidence-based practice and policy. The most widely used term is "evidence-based" practice (often evidence-based medicine [33, 34] or evidence-based public health [7, 35]). Especially in Canada and Australia, the term "evidence-informed" decision-making is commonly used [15, 36]. The term "informed" is used to emphasize that public health decisions are based on research but also require consideration of individual preferences and political and organizational factors [37, 38]. Others have used the terms "knowledge-based practice," "practice-based evidence," or "practice-relevant evidence" to emphasize the importance of practice wisdom from frontline practitioners and the lived experience of patients and community members [39-43]. To maximize the use of EBIs, research should inform practice and practice should inform research [44]. In our view, the most important issue is not which term to use, but rather that implementation decisions should be based on and informed by evaluation and research findings, while using rigorous methods to take into account a variety of contextual variables across multiple levels of influence (Table 2).

Fundamental issues for implementation science involve two questions: (1) evidence on what, for whom, in what settings, and under what conditions? and (2) when do we have enough evidence for D&I? While the answer to the latter question will always be "it depends," there are related questions that are useful to consider (Table 3). To facilitate the development and delivery of more equitable and sustainable interventions, we need to expand our thinking about evidence, especially for but not limited to type 3 evidence. We discuss a set of five core interrelated issues about evidence, examining (1) how the evidence base is determined, (2) context, (3) health equity, (4) policy implementation, and (5) audience/stakeholder perspectives. All areas concern some form of research or knowledge gap in D&I science. The evidence base discussion presents a broader perspective on what is considered evidence; the context, equity, and stakeholder sections cover neglected aspects of implementation science in need of more and higher quality research; and the policy implementation section points to the most pressing gaps in policy-relevant research for D&I. Across these areas, we provide a series of recommendations along with tools and resources for speeding the translation of research to practice and policy.

Here, we describe ongoing discussions and debates about the uses, usefulness, and gaps in evidence for implementation science, which lead to our recommendations (Table 4). While this is not an exhaustive list, it illustrates the need for more reflection and clarity across five core areas where there are major unresolved issues about evidence.

The evidence base for implementation science needs to be broadened to encompass a wider range of study designs, methods, stakeholders, and outcomes. For example, the decontextualized randomized controlled efficacy trial (RCT) that attempts to control for many potential confounding factors is generally considered the gold standard for obtaining evidence on internal validity and contributing to the determination of causality of a given intervention, practice, or treatment [45]. A property of the RCT is that, with large sample sizes, randomization tends to balance known and unknown confounders across study arms.
Despite the value and conceptual simplicity of the traditional efficacy RCT, its limitations have been noted [46-48]. For example, randomization may be impractical, costly, or unethical for some interventions (e.g., community-based interventions where partners have concerns about withholding a program from the community) and for many policy interventions, where the independent variable (the "exposure") cannot be randomized. Tools such as PRECIS-2 and the newer PRECIS-2-PS help enhance the real-world utility of RCTs (pragmatic trials) [49, 50]. For some settings and interventions, alternative and more rapid-cycle and adaptive designs are needed to elucidate effects, including quasi-experiments, observational studies, iterative assessments and actions, natural experiments, and mixed-methods studies [51-55]. Often in implementation science, what we want to know is how one strategy adds to a range of strategies already being delivered within an existing environment, a concept called "mosaic effectiveness" [56].

For clinical and public health practice, the generalizability of an EBI's effectiveness from one population and setting to another (and ideally across a diverse range of populations and settings), the core concept of external validity, is an essential ingredient. Systematic reviews and practice guidelines, which are often the basis for an implementation study, are mainly focused on whether an intervention is effective on average (internal validity) and have commonly given limited attention to specifying the conditions (settings, populations, circumstances) under which a program is and is not effective [57-59]. For implementation science, there are many considerations and layers to the notion of whether an evidence-based practice applies in a particular setting or population [59]. Tools such as ADAPT [60] or process models like ADAPT-ITT [61] can be useful in transferring EBIs from one setting to another while taking contextual variables into account. Models such as FRAME and FRAME-IS are helpful for tracking and building the evidence base around what types of adaptations are associated with improved or decreased effectiveness or implementation outcomes, and for which settings and populations [62, 63].

The question of whether an EBI applies involves a set of scientific considerations that may differ from simply knowing average treatment effects. These include balancing fidelity to the original EBI functions with the adaptations needed for replication and scale-up [64], as well as considering when there may be a need to "start from scratch" in developing a new intervention as opposed to refining or adapting an existing one (e.g., when the nature of the evidence for an EBI does not fit the sociocultural or community context). There is a pressing need for research on the strengths and limitations of practitioner-driven and community-centered adaptation of EBIs, which is likely to enhance relevance, feasibility, sociocultural appropriateness, and acceptability, as well as fit with implementation context [65-67]. There are also potential trade-offs to consider when adapting EBIs or implementation strategies (e.g., costs, resources needed, potential reduction in effectiveness) [63, 68, 69].
It has also been suggested that a greater emphasis is needed on both the functions of an intervention (its basic purposes, underlying theoretical premise) and its forms (the strategies and approaches used to meet each intervention function) [64], opening the door to inquiry about how fidelity to function may demand adaptations (or in some cases transformation or evolution) in form.

Additional evidence is needed on the inter-related concepts of null (ineffective) interventions, de-implementation, and mis-implementation [70-72]. From null intervention results, we can learn which parts of an EBI or implementation strategy need to be refined, adapted, or re-invented. Data on null interventions also inform for whom and under what conditions an EBI or implementation strategy is "evidence-based." De-implementation is the process of stopping or abandoning practices that are not proven to be effective or are possibly harmful [73], whereas mis-implementation involves one or both of two processes: the discontinuation of effective programs and the continuation of ineffective practices in public health settings [70]. Many of the contextual variables in Table 2 strongly affect de-implementation and mis-implementation.

Emerging perspectives in data science and causal inference may help advance type 3 evidence. If contextual heterogeneity is the norm, then the scientific task in any one study population is to produce data that address relevance across diverse external settings. Useful methods to do so are becoming available and suggest that the more we know about the mediators/mechanisms and modifiers of effects in implementation, the more interpretable findings could be in different settings and populations [74-76]. For example, consider the question of whether evidence for audit and feedback on the use of EBIs in HIV clinics, derived from randomized trials in Boston, could apply to HIV clinics in Nairobi, Kenya. Let us assume that in Boston, researchers learn that the credibility of the data is a key driver of successful implementation (e.g., clinicians who doubt the veracity of metrics from the electronic health record are less likely to respond). Given the widespread challenges of data accuracy in the nascent electronic health records in this specific setting in Africa (and the extensive literature documenting this challenge), audit and feedback as an implementation strategy can be anticipated to have limited implementation relevance as well as effectiveness. Using data from Boston to draw an inference about Nairobi (in this case, that the strategy might not work) depends on knowing the critical mediators of audit and feedback in Boston (i.e., the credibility of data on provider performance); a numerical sketch of this transport logic follows this passage. In some situations, a completely different implementation strategy may be needed that is better suited to local conditions. A further implication is that research efforts should be directed not only at finding effects in Boston, but also at understanding how they came about (type 3 evidence).

The complexity and dynamic nature of implementation necessitate continual attention to context (i.e., the active and unique factors that surround implementation and sustainability [77, 78]) [22, 79, 80]. When context is taken into account in research, the study findings are more likely to indicate the conditions under which evidence does or does not generalize to different populations, settings, and time periods [23]; yet too often context is inadequately described or not fully elucidated [81].
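To make the Boston-to-Nairobi reasoning concrete, the following is a minimal numerical sketch of transport-style reweighting. The numbers are hypothetical and not drawn from any study cited here; the sketch assumes that a single effect modifier (perceived credibility of electronic health record data) captures the relevant difference between settings and that stratum-specific effects carry over from the source setting.

```python
# Hypothetical illustration only: "transporting" the effect of an audit-and-feedback
# strategy from a source setting (e.g., Boston) to a target setting (e.g., Nairobi)
# by reweighting stratum-specific effects on one assumed modifier: whether clinicians
# view the electronic health record data as credible.
source_effects = {"credible_data": 0.15, "non_credible_data": 0.02}  # assumed risk differences
source_mix = {"credible_data": 0.80, "non_credible_data": 0.20}      # modifier prevalence, source
target_mix = {"credible_data": 0.30, "non_credible_data": 0.70}      # modifier prevalence, target

source_average = sum(source_effects[z] * source_mix[z] for z in source_effects)
transported = sum(source_effects[z] * target_mix[z] for z in source_effects)

print(f"Average effect observed in the source setting: {source_average:.3f}")  # 0.124
print(f"Effect anticipated in the target setting:      {transported:.3f}")     # 0.059
```

The particular numbers do not matter; the structure of the argument does. Without knowledge of the mediator/modifier (data credibility) and its distribution in the target setting, the source-average effect would simply be carried over and would overstate what audit and feedback is likely to achieve in the new context.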
Table 3 Questions useful to consider in determining whether there is enough evidence for dissemination and implementation:
- How pressing is the health issue?
- Is there an EBI? If so, what is the quality and quantity of evidence on the EBI?
- How long will it take to develop the evidence base?
- Are there emerging or established health equity issues?
- If the study addresses social or structural determinants, might multiple health conditions benefit?
- Is the issue a priority among stakeholders? How many? Which ones?
- Are you equipped to measure a range of contextual variables?
- Are there resources to implement a study?
- Might a hybrid trial that addresses both effectiveness and implementation be appropriate?
- Is there implementation already happening that you might evaluate?
- Is action going to be taken regardless of whether the program or policy is evidence-based or not?
- What are the consequences of not implementing?
- What are the consequences of getting it wrong?

Contextual conditions also drive and inform the adaptation of EBIs to populations and settings that differ from those in which they were originally developed [82]. It is useful to consider contextual issues of relevance for implementation across the levels of a socio-ecological framework (individual, interpersonal, organizational, community, policy) (Table 2) [79]. The challenging scientific task of "unpacking" context requires three activities. First, contextual effects in any study setting, or across settings and/or systems, should be enumerated (e.g., using a set of variables from Table 2). Second, since one cannot measure everything, part of building the evidence base involves determining which aspects of context are most salient for implementation within and across settings. Third, implementation research should also seek to measure the presence, distribution, and intensity of those contextual factors in target settings in which a research study is not being undertaken, but where one might want to apply the evidence.

Within an implementation research project, context is dynamic and should be assessed across all stages of a study [83]. Too often, dynamic contexts are not fully understood or assessed [30]. In some cases, the context for delivery (e.g., a particular clinical setting) is relatively stable, but the target of the intervention (e.g., a particular pathophysiology; guidelines for cancer screening) is dynamic and emergent. In a more complex intervention trial, both context and targets are dynamic and emergent [22, 84].

During implementation planning, a needs and assets assessment (formative research) should account for the historical, cultural, social, and system factors that may shape implementation and the implementation climate, including forms of structural or institutional racism (e.g., inequitable practices and policies), medical mistrust, and institutional and provider biases and norms that may create or reinforce inequities, as well as the community strengths and assets that may inform implementation efforts. Tools such as critical ethnography can be useful during needs assessment to understand interactions between the ensembles of actors, agencies, interventions, and other contextual variables [85].

When selecting EBIs to be tested in an implementation study, context may affect both internal validity and external validity. Systematic reviews, which are often the source of EBIs, use a relatively narrow hierarchy of evidence [86] and tend to strip out implementation context when trying to make a summary (often quantitative) judgement about the average effectiveness of an EBI (e.g., for most populations and settings).
For many settings in which we are conducting implementation studies (e.g., lower- and middle-income countries [87]), we may not have a strong evidence base, guidelines, or interventions that have been tested through "gold-standard" RCTs, and when they have been, it was often not under conditions similar to those in which the EBI will now be applied. Context in global settings presents unique considerations, particularly in lower- and middle-income countries (LMICs) and other settings that have limited resources and face numerous structural barriers to health (e.g., federally qualified health centers in the USA; donor-funded vertical health programs in LMICs). Among the considerations is the relevant evidence base for implementation when settings vary tremendously, particularly in their social and political context and systems/organizational infrastructure: do researchers and implementers need to start anew in building the evidence base for implementation, answering many of the questions in Table 3? There is some evidence that in settings with constrained resources, intervention and methods innovations may be fostered by the need for creativity and adaptation (e.g., task shifting [88]) when choices are restricted [89]. Adaptive designs (where interventions and strategies are modified in response to emerging data) may be particularly useful in LMICs because they may allow a team to begin with low-intensity/low-resource approaches and refine or intensify as needed [90-92].

Transportability theory has been applied to assess whether findings about the effects of an implementation strategy in one setting can be used to make inferences in another setting and, if so, whether the strategy is likely to work there [93]. Context, when defined narrowly as the causes of an outcome that differ from one setting to another, asks science to focus on two measurement tasks. In the initial context where a strategy is being tested, it is important to measure the steps that mediate or moderate the effects of the strategy on the outcome, as well as the factors that influence those steps. Hypotheses not only about effects but also about how and why they occur across diverse settings are important to inform the measurement architecture.

Context is also important during the process of broader dissemination of evidence-based approaches. There is a well-documented disconnect between how researchers disseminate their findings (including EBIs) and how practitioners and policy makers learn about the latest evidence [14]. Applying principles of designing for dissemination (D4D) allows researchers to better account for the needs, assets, priorities, and time frames of potential adopters and stakeholders [94, 95]. An active D4D process emphasizes the design phase of an implementation research project and anticipates dissemination of products (e.g., an evidence-based implementation strategy) by developing a dissemination plan that takes into account audience differences, product messaging, channels, and packaging [96]. In the future, this proactive D4D process could more fully address designing for equity and sustainment, as well as dissemination.

Addressing health disparities and promoting health equity is becoming a more central and explicit focus of implementation science [92, 97-102]. Health equity is a framing that shifts from a deficits approach (disparities) to one focused on what society can achieve (equity) [103].
An equity focus also recognizes the unjust nature of inequities, naming root/structural causes [104]. This emphasis is documented in publication trends over the past two decades. Figure 1 shows trends in publications from January 1, 2000, to December 31, 2021, using two search strings in PubMed: (1) "health disparities" AND ["implementation science" OR "implementation research" OR "knowledge translation"] and (2) "health equity" AND ["implementation science" OR "implementation research" OR "knowledge translation"]; a minimal query sketch appears at the end of this passage. For most of the past two decades, research has been framed more often with a disparities focus than with an equity focus: disparity publications were two- to three-fold more common than equity articles from 2006 to 2014. However, in 2021, the number of equity-framed publications greatly exceeded the number of disparities-framed publications.

Fig. 1 Number of annual publications on health disparities and health equity

To move towards the goal of achieving health equity, it is critical that implementation science expands the quantity, quality, and types of evidence produced and prioritized, as well as who and what settings are (1) reflected in that evidence (representativeness) and (2) involved in its generation and interpretation (representation). For many health conditions and populations, we have adequate descriptive (type 1) data that can guide what to address (e.g., the size and nature of disparities). However, we often lack sufficient data on EBIs and strategies that are effective in reducing inequities and/or promoting equity [92]. Often, available EBIs inadequately address or account for many of the relevant social, cultural, structural, and contextual conditions that both shape health inequities and have implications for EBI implementation [92, 105, 106]. There are challenges in generating evidence on inequities, including potentially smaller sample sizes across the various social dimensions through which inequities exist, which may limit subgroup heterogeneity analyses (e.g., by race or ethnicity) [107, 108] (see Table 2). As we build the evidence base of EBIs to actively promote equity, there is a need to understand the core elements of equity-focused interventions and strategies, and to do so for the range of social dimensions through which health inequities may exist (e.g., race, immigration status, gender, sexual orientation, location) and their intersection [109].

A foundational challenge here is that many EBIs were not developed with, or tested among, settings or populations that experience inequities, nor with the goal of promoting health equity, and may unintentionally contribute to or exacerbate inequities [110-112]. This results in part from the reductionist way in which EBIs are often developed, deployed (a linear, "cause and effect" approach), and tested [113], paying inadequate attention to the complex and interrelated social determinants of health and root causes of health inequities (e.g., structural racism, inequitable allocation of resources and opportunities) [114-118]. We need to engage a wider range of partners from lower-resource settings earlier, throughout the research process, and in meaningful ways to build a broader and more relevant array of equity-focused EBIs that are feasible, acceptable, culturally appropriate, and address root causes.
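As a companion to the Figure 1 search strategy described above, the following is a minimal sketch (not the authors' actual analysis code) of how annual counts for the two search strings could be retrieved from PubMed via the NCBI E-utilities esearch endpoint. The query strings are taken from the text; the exact field syntax, the use of parentheses for grouping, and the politeness delay are assumptions that readers may need to adapt.

```python
import time
import requests

# Minimal sketch: count PubMed records per year for the two search strings used for
# Fig. 1 (disparities-framed vs. equity-framed implementation science publications).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
QUERIES = {
    "disparities": '"health disparities" AND ("implementation science" OR '
                   '"implementation research" OR "knowledge translation")',
    "equity": '"health equity" AND ("implementation science" OR '
              '"implementation research" OR "knowledge translation")',
}

def annual_count(term: str, year: int) -> int:
    """Return the number of PubMed records matching `term` with a publication date in `year`."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",           # restrict by publication date
        "mindate": f"{year}/01/01",
        "maxdate": f"{year}/12/31",
        "retmode": "json",
        "retmax": 0,                  # only the count is needed, not the record IDs
    }
    response = requests.get(ESEARCH, params=params, timeout=30)
    response.raise_for_status()
    return int(response.json()["esearchresult"]["count"])

if __name__ == "__main__":
    for year in range(2000, 2022):
        counts = {label: annual_count(query, year) for label, query in QUERIES.items()}
        print(year, counts)
        time.sleep(0.5)  # stay well under NCBI's rate limit for unauthenticated requests
```

Plotting these yearly counts would give a rough analog of Figure 1, though the exact numbers depend on how the original search was fielded and will shift over time as PubMed records are added or re-indexed.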
We also need to expand what we "count" as EBIs in public health and clinical research, broadening the focus from a narrower view of individual, interpersonal, and organizational interventions to also include community, policy, and multi-sector interventions that have the potential to make larger shifts in health inequities. Such a broadening of evidence with an eye towards health equity means moving beyond a singular focus on our EBI repositories to also include and evaluate existing, promising, community-defined evidence and interventions [92, 119, 120]. In expanding the evidence base with the goal of promoting health equity, there are significant opportunities to develop and deploy EBIs in sectors outside of health (e.g., schools, workplaces, social services agencies, juvenile justice settings) where, in many cases, the reach and impact can be greater than in the health sector [121]. Additionally, as we expand this evidence base, it may be beneficial to prioritize the development and evaluation of interventions, practices, and policies that can reduce underlying structural and social factors (e.g., structural racism) and their downstream effects on health inequities [120].

Equity should be a core criterion for valuing evidence. This value statement should be reflected in the priorities of funders, in how research questions are framed, in how research resources and decision-making are distributed, and in how studies are conducted, evaluated, and reviewed. Implementation science has a role in recognizing that a negative consequence of our social and economic systems is the concentration of resources and health. These systems create inequities, so when thinking about closing an implementation gap, we should recognize the context: such a gap is often an outgrowth of these systems, which must themselves be addressed and transformed. Equity needs to be prioritized and made more explicit as part of engagement efforts, which includes consideration of power imbalances (who is and is not involved in making key decisions) and of the timing of when and how partners are engaged (e.g., who is involved in EBI development and deployment, how communities are reflected in co-creating the evidence) [95, 120]. Reflection questions and step-by-step guidance can help guide study planning with an equity focus [102, 120].

Health and social policies, in the form of laws, regulations, organizational practices, and funding priorities, have a substantial impact on the health and well-being of populations and create the conditions under which people can be healthy and thrive, or not [122, 123]. Clinical and public health guidelines inform policy implementation by providing the basis for legislation, informing covered services in health plans, and advancing policies that support health equity [124-128]. Policies often address the underlying social and structural conditions that shape health and inequities; this in turn provides opportunities for policy implementation to frame accountability for organizations and systems. Policy implementation research, which has been conducted since the 1970s across multiple disciplines [129, 130], seeks to understand the complexities of the policy process and to increase the likelihood that evidence reaches policymakers and influences their decisions, so that the population health benefits of scientific progress are maximized [131].
A key objective of policy implementation research is to examine the enactment, enforcement, and evaluation of evidence-based policies in order to (1) understand approaches that enhance the likelihood of policy adoption (process); (2) identify specific policy elements likely to be effective (content); and (3) document the potential impact of policy (outcomes) [132]. Especially in the USA, policy implementation research is underdeveloped compared with other areas of implementation science. For example, a content analysis of all projects funded by the US National Institutes of Health through implementation research program announcements found that only 8% of funded grants addressed policy implementation research [133]. Few of these studies had an explicit focus on equity or social determinants of health.

Policy researchers have utilized a variety of designs, methods, and data sources to investigate the development processes, content, and outcomes of policies. Much more evidence is needed, including which policies work and which do not (for what outcomes, settings, and populations), how policies should be developed and implemented, the unintended consequences of policies, and the best ways to combine quantitative and qualitative methods for evaluating "upstream" factors that have important implications for health equity [134]. There is also a pressing need for reliable and valid measures of policy implementation processes [135]. These knowledge gaps are unlikely to be addressed by randomized designs and are more likely to be addressed using quasi-experimental designs, natural experiments, stakeholder-driven adaptations, systems science methods, citizen science, and participatory approaches [51, 66, 136-139].

Several other areas in policy implementation research need attention. First, policy makers often need information on a much shorter time frame than researchers can deliver; this calls for the use of tools such as rapid-cycle research [140] and rapid realist reviews [141]. Second, we need to better understand the spread of policies, including the reasons that ineffective policies spread [142], the role of social media [131], and ways to address mis- and dis-information in the policy process [143]. Finally, more emphasis is needed on the reciprocal, often horizontal, interactions between organizations and the development of policy-relevant evidence [144]. For this inter-organizational research, the role of policy intermediaries (those who work between existing systems to achieve a policy goal) has gained attention because of their critical roles in policy implementation [145]. Strategies and tools to address several of these issues are provided in recent reviews [146, 147] and in Table 4.

There are multiple audiences of relevance for developing, applying, disseminating, and sustaining the evidence for implementation science [148]. When seeking effective methods to generate, implement, and sustain EBIs, it is important to take into account the characteristics of each audience and stakeholder group, what they value, how to balance different viewpoints, and how to combine stakeholders' experience with research evidence. Across these stakeholder groups, research evidence is only one of many factors influencing the adoption, implementation, and sustainment of EBIs [6, 15, 40]. Key audience categories include researchers, practitioners, and policy makers (Table 5). Researchers are one core audience.
These individuals typically have specialized training and may devote an entire career to studying a particular health issue. Another audience includes clinical and public health practitioners, who seek practical information on the scope and quality of evidence for a range of EBIs and implementation strategies relevant in their setting. Practitioners in clinical settings (e.g., nurses, physicians) have specialized and standardized training, whereas the training of public health practitioners is highly variable (most public health practitioners lack a public health degree [149]). A third group is policy makers at local, regional, state, national, and international levels. These individuals are faced with macro-level decisions on how to allocate public resources. Policy makers seek out information on distributional consequences (i.e., who has to pay, how much, and who benefits) [150], and in many policy settings, anecdotes are prioritized over empirical data [9]. The category of policy makers also includes funders; these may be elected officials or "small p" policy makers (organizational leaders) who make funding decisions within their settings.

The relevance and usefulness of evidence vary by stakeholder type (Table 5) [151]. Research usefulness can be informed by audience segmentation, in which a product promotion strategy is targeted to the characteristics of a desired segment, a widely accepted principle in marketing [152]. Audience segmentation can in turn be informed by user-centered design and decision-centered processes, in which the product (e.g., an implementation strategy) is shaped in a systematic way by the end-users of the product [153-155]. Framing is another important factor in considering audiences for D&I. Individuals interpret the same data in different ways depending on the mental model through which they perceive information [156]. For example, policy makers often perceive risks and benefits not in scientific terms but in relation to (usually short-term) emotional, moral, financial, or political frameworks [157, 158]. In practical terms for implementation science, framing of a particular health issue for a community member or patient might relate to the ability to raise healthy children, whereas framing for a policy maker might relate to cost savings from action or inaction. Cost and economic evaluation are key considerations for a range of stakeholders involved in implementation, yet too often the perspectives of diverse stakeholders are not well considered, acted upon, or reported [159].

The "how-to" for broadening the evidence base for implementation science will require several actions. First, we need to prioritize the evidence gaps and possible ways of filling them; many ideas are shown in Table 4. Next, resources and tools are needed to address evidence deficits (Table 6). All tools listed are available free of charge and provide enough background and instruction to make them useful for a wide range of users, from beginners to experts. The tools listed cover multiple, overlapping domains: (1) engagement and partnerships; (2) study planning; (3) research proposals, articles, reporting, and guidelines; and (4) dissemination, scale-up, and sustainability.
Table 6 (excerpt): The PSAT measures the sustainability of evidence-based practices in community settings. Users receive a tailored report that can be used by public health and community organizations to plan for and implement changes within their organization (https://www.sustaintool.org/psat/). The table is illustrative and is not meant to be comprehensive; we have focused on sources that are more regularly updated.

In addition to the resources in Table 6, there are many other portals that provide valuable information and resources for implementation research across multiple domains (e.g., technical assistance, mentorship, conferences, archived slides, webinars) [160-168].

Capacity is a core element for building a stronger, more comprehensive, and equitable evidence base. Capacity can be developed in multiple ways, including supporting the "push" for implementation science, where researchers are trained to develop the evidence for implementation and to build skills in evaluation. Evaluation skill building should take into account the principles of realist evaluation, a mixed-methods approach that considers multiple contextual variables [169]. There is a significant number of implementation science training opportunities across countries [160, 170, 171], though few have an explicit focus on many of the issues we have highlighted (e.g., health equity, designing for dissemination, sustainability, policy implementation). There has also been inadequate training and too little emphasis on the "pull" for implementation science (e.g., training the practitioners/implementers) [170, 172]. This emphasis on "pull" should embrace the audience differences in Table 5. There is even less evidence on who should conduct capacity building and how to do so, especially in low-resource settings [171, 173].

There are also macro-level actions that would facilitate a broader and more robust evidence base. For example, funders and guideline developers should adopt a more comprehensive definition of evidence, addressing many of the recommendations outlined in Table 4 and above. This could include an alternative or addition to GRADE, incorporating methods of appraising research that do not automatically elevate RCTs (particularly when answering policy-related research questions). Similarly, it is helpful for study sections to be oriented to a wide array of evidence, particularly type 3 evidence. This will require some learning as well as some unlearning; as an example, we need to broaden our understanding of the contextual mediators and moderators of implementation, which are likely to vary from those identified in highly controlled experiments.

Over the past few decades, there has been substantial progress in defining evidence for clinical and public health practice, identifying evidence gaps, and making initial progress in filling certain gaps. Yet to solve the health challenges facing society, we need new and expanded thinking about evidence and a commitment to context-based decision-making. This process begins with evidence, a foundation of implementation science. By critically examining and broadening current concepts of evidence, implementation science can better fulfill its vision of providing an explicit response to decades of scientific progress that has not translated into equitable and sustained improvements in population health [92].

References
Searching for evidence about health education and health behavior interventions Implementation research: what it is and how to do it What is dissemination and implementation science?: An Introduction and opportunities to advance behavioral medicine and public health globally Methodologic challenges in disseminating evidence-based interventions to promote physical activity Strengthening the evidence base for health promotion Evidence-based public health: a fundamental concept for public health practice Evidence-based decision making in public health A glossary for evidence based public health Evidence-Based Public Health Evidence-based medicine Evidence-based public health: an evolving concept Toward a theory of evidence based decision making Evidence-informed decision making in public health in action Building capacity for evidencebased public health: reconciling the pulls of practice and the push of research Shifting sands -from descriptions to solutions Barriers to evidence-based decision making in public health: a national survey of chronic disease practitioners Barriers to evidence-based practice A systematic review of barriers to and facilitators of the use of evidence by policymakers Azami-Aghdash S. Barriers to evidencebased medicine: a systematic review An evidence integration triangle for aligning science with policy and practice When complexity science meets implementation science: a theoretical and empirical analysis of systems change Implementation, context and complexity Context matters in implementation science: a scoping review of determinant frameworks that describe contextual determinants for implementation outcomes The Patient-Centered Outcomes Research Institute (PCORI) national priorities for research and initial research agenda A new framework for developing and evaluating complex interventions: update of Medical Research Council guidance Lessons from complex interventions to improve health Methods for exploring implementation variation and local context within a cluster randomised community intervention trial Towards a general theory of implementation The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change Fidelity and its relationship to implementation efffectiveness, adaptation and dissemination Dissemination and Implementation Research in Health: Translating Science to Practice Evidence based medicine has come a long way The need for evidence-based medicine Evidence-based public health practice Tools to support evidence-informed public health decision making Evidence-informed practice: from individual to context Old myths, new myths: challenging myths in public health What counts as ʻevidenceʼ in ʻevidence-based practiceʼ? What counts as evidence in evidence-based practice? Making research relevant: if it is an evidence-based practice, whereʼs the practice-based evidence? The use of tacit and explicit knowledge in public health: a qualitative study Evidence-based nursing practice: why is it important? 
Practice-based evidence Understanding randomised controlled trials A proposal to speed translation of healthcare research into practice: dramatic change is needed DʼEste C: Limitations of the randomized controlled trial in evaluating population-based health interventions Generalizability of the results of randomized trials The PRECIS-2 tool: designing trials that are fit for purpose Designing provider-focused implementation trials with purpose and intent: introducing the PRECIS-2-PS tool Natural experiment methodology for research: a review of how different methods can support real-world research Natural experiments: an underused tool for public health? Public Health Innovations in mixed methods evaluations Designing for Accelerated Translation (DART) of emerging innovations in health Aligning implementation science with improvement practice: a call to action Mosaic effectiveness: measuring the impact of novel PrEP methods External validity: the next step for systematic reviews? Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology Making health research matter: a call to increase attention to external validity Adapting interventions to new contexts-the ADAPT guidance The ADAPT-ITT model: a novel method of adapting evidence-based HIV Interventions Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare The FRAME: an expanded framework for reporting adaptations and modifications to evidencebased interventions Core functions and forms of complex health interventions: a patient-centered medical home illustration Horse Brave Heart MY, Das R, Farhat T: Building the evidence base to inform planned intervention adaptations by practitioners serving health disparity populations Participatory approaches for study design and analysis in dissemination and implementation research Participatory implementation science to increase the impact of evidence-based cancer prevention and control A systematic review of adaptations of evidence-based public health interventions globally The advantages and limitations of guideline adaptation frameworks Understanding mis-implementation in public health practice Theories, models, and frameworks for de-implementation of low-value care: a scoping review of the literature Unpacking the complexities of de-implementing inappropriate health interventions Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices Understanding causal pathways within health systems policy evaluation through mediation analysis: an application to payment for performance (P4P) in Tanzania Designing trials for transport: optimizing trials for translation to diverse External validity: from do-calculus to transportability across populations Advancing a conceptual model of evidence-based practice implementation in public service sectors Guidance for the assessment of context and implementation in health technology assessments (HTA) and systematic reviews of complex interventions: the context and implementation of complex interventions (CICI) framework. European Union How can we increase translation of research into practice? 
Types of evidence needed An extension of RE-AIM to enhance sustainability: addressing dynamic context and promoting health equity over time Developing robust, sustainable, implementation systems using rigorous, rapid and relevant science A two-way street: bridging implementation science and cultural adaptations of mental health treatments Partnering to translate evidence-based programs to community settings: bridging the gap between research and practice Evaluating complex interventions in context: systematic, meta-narrative review of case study approaches Using critical ethnography to explore issues in health promotion Expanding the evidence within evidencebased healthcare: thinking about the context, acceptability and feasibility of interventions Is evidence-based medicine relevant to the developing world Task shifting for non-communicable disease management in low and middle income countries--a systematic review Implementation science in resource-poor countries and communities Adaptive designs for randomized trials in public health An overview of research and evaluation designs for dissemination and implementation Implementation science should give higher priority to health equity Understanding HIV program effects: a structural approach to context using the transportability framework Designing for dissemination among public health researchers: findings from a national survey in the United States Dissemination and stakeholder engagement practices among dissemination & implementation scientists: results from an online survey Designing for dissemination and sustainability to promote equitable impacts on health The health equity implementation framework: proposal and preliminary study of hepatitis C virus treatment Implementation research methodologies for achieving scientific equity and health equity Reframing implementation science to address inequities in healthcare delivery Recommendations for addressing structural racism in implementation science: a call to the field Advancing health equity through CTSA programs: opportunities for interaction between health equity, dissemination and implementation, and translational science Addressing health disparities through implementation science-a need to integrate an equity lens from the outset Health equity is the issue we have been waiting for Defining equity in health Barriers and facilitators to implementing evidence-based interventions among third sector organisations: a systematic review Scaling up evidence-based interventions in US public systems to prevent behavioral health problems: challenges and opportunities Public health surveillance of lowfrequency populations Addressing the challenges of research with small populations The problem with the phrase women and minorities: intersectionality-an important theoretical framework for public health A scoping review of unintended harm associated with public health interventions: towards a typology and an understanding of underlying factors What types of interventions generate inequalities? 
Evidence from systematic reviews The effects of public health policies on health inequalities in highincome countries: an umbrella review Transcending reductionism in nutrition research The social determinants of health: coming of age Global action on the social determinants of health Action on the social determinants of health Racism and health: evidence and needed research Using syndemics and intersectionality to explain the disproportionate COVID-19 mortality among black men Community-defined evidence: a bottom-up behavioral health approach to measure what works in communities of color Application of an antiracism lens in the field of implementation science: recommendations for Reframing Implementation Research with a Focus on Justice and Racial Equity Expanding implementation research to prevent chronic diseases in community settings Ten great public health achievements--United States Ten great public health achievements--United States Consensus guidelines: improving the delivery of clinical preventive services. Health Care Manage Rev Developing and using the guide to community preventive services: lessons learned about evidence-based public health The evolving role of prevention in health care. Contributions of the U.S. Preventive Services Task Force Developing evidencebased clinical practice guidelines: lessons learned by the US Preventive Services Task Force Addressing systemic racism through clinical preventive service recommendations from the US Preventive Services Task Force Policy implementation: conceptual foundations, accumulated wisdom and new directions Never the twain shall meet?--a comparison of implementation science and policy implementation research Dissemination and Implementation Research in Health: Translating Science to Practice Understanding evidencebased public health policy A review of policy dissemination and implementation research funded by the National Institutes of Health Policy implementation science -an unexplored strategy to address social determinants of health Quantitative measures of health policy implementation determinants and outcomes: a systematic review Handbook of practical program evaluation Citizen science applied to building healthier community environments: advancing the field through shared construct and measurement development Creating synergies between citizen science and indigenous and local knowledge Community-based participatory research contributions to intervention research: the intersection of science and practice to improve health equity National Institutes of Health approaches to dissemination and implementation science: current and future directions A time-responsive tool for informing policy making: rapid realist review Why bad policies spread (and good ones don't) Leveraging media and health communication strategies to overcome the COVID-19 infodemic Mobilising modern facts: health technology assessment and the politics of evidence Understanding the supports needed for policy implementation: a comparative analysis of the placement of intermediaries across three mental health systems Strategies for effective dissemination of research to United States policymakers: a systematic review Understanding the implementation of evidence-informed policies and practices from a policy perspective: a critical interpretive synthesis
Who will keep the public healthy? Educating public health professionals for the 21st century Evidence-based health policy versus evidence-based medicine Integrating research, practice, and policy: what we see depends on where we stand Theory and method in health audience segmentation Aligning implementation and user-centered design strategies to enhance the impact of health services: results from a concept mapping study User-centered design for psychosocial intervention development and implementation Decision-centred design in healthcare: the process of identifying a decision support tool for airway management Can scientists and policy makers work together? The Social Issues Research Centre. Guidelines for scientists on communicating with the media. Oxford: The Social Issues Research Centre Implementation science issues in understanding, collecting, and using cost estimates: a multi-stakeholder perspective A content analysis of dissemination and implementation science resource initiatives: what types of resources do they offer to advance the field? The Center for Implementation Dissemination and Implementation Science Program Realist review--a new method of systematic review designed for complex policy interventions Mapping training needs for dissemination and implementation research: lessons from a synthesis of existing D&I research training programs Building capacity in dissemination and implementation science: a systematic review of the academic literature on teaching and training initiatives Competences for implementation science: what trainees need to learn and where they learn it Building capacity in implementation science research training at the University of Nairobi