Reviews of attitude research in implementation science require comprehensiveness, accuracy, and specificity
Gregory A. Aarons
Implementation Science, 2022-05-10. DOI: 10.1186/s13012-022-01198-4
This comment refers to the article available at https://doi.org/10.1186/s13012-021-01153-9.

Dear Editors-in-Chief, Implementation Science:

I write to voice concerns regarding the article by Fishman, Yang, and Mandell (2021, Vol 16, No. 87) recently published in Implementation Science. The concerns include attributions, interpretation of the meaning of measures, and an overly narrow consideration of the extant literature on attitudes in implementation research.

The first concern is attribution regarding the 15-item Evidence-Based Practice Attitude Scale (EBPAS) [1]. The authors state that "The EBPAS developers acknowledge that the EBPAS assesses other constructs, such as knowledge" (pg. 6), but they proceed to cite a paper not by the EBPAS developer [2] for support. The cited paper used the expanded EBPAS-50 [3], which was subsequently made more pragmatic in the EBPAS-36 [4]; both of these instruments assess eight additional dimensions of attitudes toward evidence-based practices (EBPs). The EBPAS assesses attitudes, not knowledge. The authors incorrectly attributed their assertion to the EBPAS developer (i.e., this letter's author). If the attribution were accurate, supporting evidence should have been clearly documented.

Second, the authors offer an example of an EBPAS item that they assert assesses "knowledge," but this is incorrect. They give the example of attitudes toward implementation of EBP given requirements to do so by supervisors, an organization, or a state (i.e., the policy level), which reflects the EBPAS Requirements subscale. These items were intended to, and do, assess attitudes conditional on the source of directives to use EBP in work with patients and clients. This is a critical issue in implementation, as directives to adopt and use new practices often come from those who manage, supervise, or set policy for direct services. This is part of the complex outer and inner contexts of implementation represented in frameworks that address such issues [5, 6]. The EBPAS Requirements subscale squarely focuses on attitudes conditional on the nuances of clinicians' work environments in practice. Fishman et al.'s claim suggests a misunderstanding or misinterpretation of the EBPAS and a potential misconstrual of the difference between the assessment of attitudes and the assessment of knowledge.

Third, the authors state that "EBPAS items refer only to vaguely defined behavioral goals, such as adoption of new practices" (pg. 6). Again, the authors misinterpret the EBPAS, as this widely used instrument was designed to assess attitudes relevant to EBP implementation [1, 7]. The original 15-item EBPAS focuses on EBPs in three subscales and on more general openness to new practices in one subscale (the Openness subscale). The EBPAS is easily adapted to assess attitudes toward specific innovations in health care, such as cognitive behavioral therapy, measurement-based care, or medication-assisted treatment, and toward behavioral targets such as EBP use or fidelity. Fishman et al.'s conclusions suggest a cursory evaluation of the EBPAS and ignore the breadth of empirical work conducted on attitudes using the EBPAS since its publication in 2004, a breadth readily apparent from even an informal search on Google Scholar.
Fourth, the authors neglect to include both older and newer studies pertinent to the stated goals of their review. While the exclusion of older studies was acknowledged as a limitation, the exclusion of newer studies is also a limitation. The decision to exclude articles not covered in the Lewis et al. review [8] (studies from 1990 to 2018) resulted in a narrow and unrepresentative sample of attitude studies and missed relevant work, including more recent studies. If the goal was to focus on attitudes and causal models, then that is the literature that should have been systematically searched and reviewed. Relying on a tangentially related review that did not specifically target attitudes led to the omission of studies squarely related to Fishman and colleagues' own declared issues of interest (i.e., attitudes and causal models). For example, Fig. 1 illustrates a more recent (2020) study of attitudes as part of a multilevel causal model examining leadership and attitudes in relation to implementation success [9].

Attitudes operate across multiple phases of complex implementation processes and contexts [5]. It is important for researchers to define key terms and constructs, as well as their theoretical origins and underpinnings, and to clearly specify the role of attitudes in the causal chain leading to implementation outcomes. While theories may place attitudes at particular points in a causal sequence, attitudes can serve as determinants, mechanisms, or outcomes in the causal relationships of interest. Specified in this way, attitudes can inform the development and testing of implementation theories and implementation strategies, provided that inferences are drawn using appropriate theoretical models and relevant study designs [10]. For example, Ajzen [11] notes that "feedback effects" (whereby the experience of performing a behavior can influence subsequent attitudes) are a frequently raised issue in research using the Theory of Planned Behavior. Empirical evidence for "reverse-causal relations" in the Theory of Planned Behavior suggests that reciprocal causal relations should be considered in attitude-focused implementation research [12]. Fishman and colleagues appeared to downplay the importance of research on issues such as the role of attitudes related to policies in subsequent intentions and health behaviors (p. 3). However, recent research demonstrates the relevance of policy, attitudes, and intentions during the COVID-19 health crisis [13]. Thus, causal theory should be clearly specified while remaining open to alternative hypotheses and empirical testing.

Reviews in the field of implementation science should have accurate attributions and should be rigorous and comprehensive to avoid selection bias and misinterpretation, consistent with the standards advocated by respected bodies such as Cochrane [14]. It is not only risk of bias in individual studies but also risk of bias in reviews that can compromise the scientific endeavor [15]. Reviews must be comprehensive with respect to the question(s) being asked, must identify and use the relevant literature, and must employ rigorous methods to avoid conclusions that do not reflect the extant literature. By employing such rigorous approaches to reviews, together we can advance the field of implementation science with the highest degree of rigor and relevance.
References

1. Mental health provider attitudes toward adoption of evidence-based practice: the Evidence-Based Practice Attitude Scale (EBPAS).
2. Sustaining clinician penetration, attitudes and knowledge in cognitive-behavioral therapy for youth anxiety.
3. Expanding the domains of attitudes towards evidence-based practice: the Evidence-Based Practice Attitude Scale-50.
4. The Evidence-Based Practice Attitude Scale-36 (EBPAS-36): a brief and pragmatic measure of attitudes to evidence-based practice validated in US and Norwegian samples.
5. Advancing a conceptual model of evidence-based practice implementation in public service sectors.
6. Systematic review of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework.
7. Psychometric properties and United States national norms of the Evidence-Based Practice Attitude Scale (EBPAS).
8. A systematic review of empirical studies examining mechanisms of implementation in health.
9. The influence of transformational leadership and leader attitudes on subordinate attitudes and implementation success.
10. Structural and probabilistic causality.
11. The theory of planned behavior: frequently asked questions.
12. Causality in the theory of planned behavior.
13. Predicting behavioral intentions to prevent or mitigate COVID-19: a cross-cultural meta-analysis of attitudes, norms, and perceived behavioral control effects.
14. What is a Cochrane review?
15. Overall bias methods and their use in sensitivity analysis of Cochrane reviews were not consistent.