Title: The Effectiveness of mHealth and eHealth Tools in Improving Provider Knowledge, Confidence, and Behaviors Related to Cancer Detection, Treatment, and Survivorship Care: a Systematic Review
Authors: Soloe, Cindy; Burrus, Olivia; Subramanian, Sujha
Journal: J Cancer Educ
Date: 2021-02-18
DOI: 10.1007/s13187-021-01961-z

Abstract: Mobile health (mHealth) and eHealth interventions have demonstrated potential to improve cancer care delivery and disease management by increasing access to health information and health management skills. However, there is a need to better understand the overall impact of these interventions in improving cancer care and to identify best practices to support intervention adoption. Overall, this review intended to systematically catalogue the recent body of cancer-based mHealth and eHealth education and training interventions and assess the effectiveness of these interventions in increasing health care professionals' knowledge, confidence, and behaviors related to the delivery of care along the cancer continuum. Our initial search yielded 135 articles, and our full review included 23 articles. We abstracted descriptive data for each of the 23 studies, including an overview of interventions (i.e., intended intervention recipients, location of delivery, topic of focus), study methods (i.e., design, sampling approach, sample size), and outcome measures. Almost all the studies reported knowledge gain as an outcome of the education interventions, whereas only half assessed provider confidence or behavior change. We conclude that there is some evidence that mHealth and eHealth interventions lead to improvements in cancer care delivery, but this is not a consistent finding across the studies reviewed.
Our findings also identify gaps that should be addressed in future research, offer guidance on the utility of mHealth and eHealth interventions, and provide a roadmap for addressing these gaps.

Supplementary Information: The online version contains supplementary material available at 10.1007/s13187-021-01961-z.

Cancer is the second leading cause of death globally. An estimated 9.6 million global deaths in 2018, approximately 1 in 6 deaths overall, were attributed to cancer [1]. The economic burden of cancer is substantial in all countries because of high health care spending and lost productivity caused by morbidity and premature death. As cancer treatment costs increase, prevention and early detection efforts become more cost-effective and potentially cost-saving [2]. Additionally, early detection, high-quality treatment, and survivorship care can lead to improved health outcomes. Mobile health (mHealth) has demonstrated substantial potential to improve health care delivery and disease management by increasing access to health information and health management skills [3]. Over the past several years, mHealth has increasingly been adopted to provide efficient and effective health care [4]. eHealth, which encompasses mHealth but includes a broader set of information and communication technologies, is also essential for enhancing health care education and training [5]. eHealth comprises multiple interventions, including telehealth, telemedicine, mHealth, electronic medical or health records (EMR/EHR), big data, wearables, and even artificial intelligence [1]. The emergence of the COVID-19 global pandemic has expedited the widescale adoption of virtual and digital care [6]. As such, understanding how to support mHealth and eHealth effectiveness is more critical than ever.
Although mHealth could improve patient care across the cancer continuum [7], there is a need to better understand the overall impact of these interventions in improving cancer care and to identify best practices to support intervention adoption. Most prior reviews on eHealth interventions have focused broadly on noncommunicable diseases or generally on professional education. The systematic review by Campbell et al. [8] was cancer specific but only evaluated the effectiveness of online cancer education for nurses and allied professionals. Others reviewed the effectiveness of mHealth approaches for noncommunicable disease (NCD) care but focused only on low- and middle-income countries [9, 10] or broadly explored components of eHealth (including mHealth) effectiveness regarding NCD management [11–13]. Still, no systematic review currently addresses cancer-specific eHealth and mHealth education and training interventions, and this limits our understanding of the effectiveness of these interventions for improving cancer care delivery. Cancer has many disease-specific aspects, including screening, early detection, and diagnostic processes, and a multitude of technologies to support these processes. This review intended to systematically catalogue the recent body of cancer-based mHealth and eHealth education and training interventions and assess the effectiveness of these interventions in increasing health care professionals' knowledge, confidence, and behaviors related to the delivery of care along the cancer continuum. Findings from this review can offer guidance on the utility of mHealth and eHealth interventions and provide a roadmap for addressing gaps in the literature.
Using the Community Guide Systematic Review Methods as a framework [14], we implemented the following steps in our review process: (1) identify what interventions the review will cover; (2) define a conceptual approach for evaluating the interventions; (3) identify and apply criteria for including or excluding studies; (4) search for, retrieve, and screen abstracts; (5) review the full text of selected studies and abstract relevant study characteristics (as determined in the conceptual approach); (6) assess the quality of each study; (7) summarize the evidence and identify gaps; and (8) develop recommendations and findings.

In spring 2020, we engaged RTI International's Library and Information Services unit to systematically search the literature on the use of mHealth learning interventions to improve health care professionals' delivery of cancer care. The search drew from three databases (PubMed, Embase, and Web of Science). Key search terms included mHealth; cancer, noncommunicable disease, and chronic disease terms; training; and health care provider. Although our initial search strategy included noncommunicable disease and chronic disease terms, we eliminated these terms after the search yielded a higher number of results than anticipated. The final search focused on cancer mHealth trainings. The full search strategy is in Appendix A. Because the field of mHealth is rapidly evolving, we restricted our search to studies published from 2010 to 2020. Only articles available in English were included, and no geographic parameters were applied. Two reviewers independently assessed titles/abstracts, then reviewed full-text articles, extracted relevant data, and assessed study quality. Articles were included if they presented evaluations of mHealth or eHealth approaches to train health care workers who provide cancer care.
Articles were excluded if they did not focus on cancer, described intervention development only (i.e., no evaluation), focused on pediatric care, included an evaluation limited to satisfaction assessment, or could not be retrieved in full (i.e., only an abstract available). Review articles were also excluded. For each study included, a single reviewer abstracted relevant study characteristics (i.e., mode of delivery, study design, sampling approach) and data for outcomes of interest into a structured form. A second reviewer checked all data for completeness and accuracy. A single reviewer assessed each study's methodological quality using applicable National Institutes of Health (NIH) Quality Assessment Tools [15] and a standardized approach to categorize manuscripts. The review team then discussed the quality scores to ensure consistency. We summarized the abstracted data and quality ratings into evidence tables, including an overview of reviewed manuscript characteristics (i.e., topic, study design, sampling details, primary outcomes; Table 1); knowledge outcome measurement and findings (Table 2); and confidence, behavior, and intention outcome measurement and findings (Table 3). Within the outcomes tables, we report within-group and between-group measurement design studies separately.

Our initial search yielded 135 articles. After reviewing the abstracts, 81 were eliminated because they did not meet our final inclusion criteria (Fig. 1). We requested 54 full articles; 31 of these were eliminated based on our exclusion criteria. One article was excluded because it described an intervention already included in our review (i.e., a duplicate article). Therefore, our full review included 23 articles.
We abstracted descriptive data for each of the 23 studies, including an overview of interventions (i.e., intended intervention recipients, location of delivery, topic of focus), study methods (i.e., design, sampling approach, sample size), and outcome measures. These data are presented in Table 1.

Interventions Assessed
Ten studies described interventions directed specifically toward nurses and five toward primary care providers/general practitioners; seven described interventions designed to address multiple provider types (e.g., physicians, physician assistants, and nurse practitioners), and six described interventions designed for specialists (e.g., pathologists, oncologists). Eleven studies were set in the USA.

Study Methodology
Four of the 23 studies used a post-only design to measure outcomes, 10 used pre-post, five used randomized controlled trials (RCTs), and four used other methods (combined post-only and pre-post, multiple time series, comparison group, and quasi-experimental). Of the pre-post design studies, two had more than one data collection timepoint for the "post" measure, whereas all others assessed outcomes immediately after intervention exposure. Sixteen studies used non-probability sampling, six used probability sampling, and one did not describe its sampling methods.

Outcome Measures
Twenty-one studies measured provider knowledge outcomes, 11 measured provider confidence (i.e., self-efficacy), and 10 measured provider behavior/intention. Thirteen studies measured only one type of outcome (knowledge only: 11; confidence only: 1; behavior/intention only: 1).
Most studies measured knowledge only (n = 11) or knowledge and one (n = 8) or two (n = 5) other outcomes. Knowledge was measured using many different study designs: post-only (n = 5), pre-post (n = 9), RCT (n = 4), and other (n = 2; i.e., multiple time series, quasi-experimental). Confidence was measured using post-only (n = 1), pre-post (n = 5), RCT (n = 3), and other (n = 2; i.e., multiple time series, quasi-experimental). Behavior/intention was measured using post-only (n = 2), pre-post (n = 4), RCT (n = 3), and other (n = 1; i.e., quasi-experimental). Using applicable NIH Quality Assessment Tools, eight articles were categorized as high quality, eight as medium quality, and seven as low quality (see Appendix C).

Table 2 presents provider knowledge outcome findings. Table 3 presents provider confidence and behavior/intention outcome findings. Both tables exclude findings from post-only design studies.

Knowledge Findings
Seventeen studies measured change in knowledge outcomes [16–32]. Among studies that measured change in knowledge within a group (n = 7), six reported statistical significance, with changes in mean knowledge scores ranging from 4 to 24%. Among studies that measured change in mean knowledge scores between groups (intervention vs. control; n = 6), four reported at least some statistically significant difference in knowledge between intervention and control participants immediately after the intervention. However, the one study that measured change 12 months post-intervention found that this difference was not sustained. Of the four studies that reported other methods of measuring knowledge (increased perceived knowledge, high agreement rates between trainees and experts, increased diagnostic accuracy in a simulated patient encounter, median improvement in knowledge scores), two reported a statistically significant change in knowledge outcomes.
Provider Confidence Findings
Eight studies measured the impact of their intervention on provider confidence (i.e., self-efficacy, or confidence in the ability to perform the behavior of focus in the intervention) [18, 23, 24, 26, 27, 29, 33, 34]. Five studies measured change in confidence score within group and three between groups (intervention vs. control). Two of the five studies reporting pre-post changes in mean confidence reported a statistically significant change in provider confidence following the intervention [33, 34]. Two of the three studies that calculated the difference in mean confidence scores between intervention and comparison or control groups found statistically significant differences between the groups [24, 27].

Provider Behavior/Intention Findings
Seven studies measured change in provider behavior/intention [23, 24, 27–29, 33, 35]. Five of these measured change within group and two between groups (intervention vs. control; see Table 3). Two of the four studies measuring pre-post changes in mean behavior or intention scores reported statistically significant increases in behavior following the intervention [28, 35]. One of the three studies that calculated the mean difference in post-intervention behavior between the intervention and comparison or control groups noted significant differences between the groups [27]. Three of these seven studies relied solely on provider self-report of behavior/intention. One study used a combination of provider self-report and observational data.

We conducted a systematic review to identify eHealth- and mHealth-based education interventions and to assess their effectiveness in improving cancer care. We identified a total of 23 studies that met the study inclusion criteria. Almost all the studies reported knowledge gain as an outcome of the education interventions, whereas only half assessed provider confidence or behavior change.
The majority of the studies with knowledge outcomes reported statistically significant improvement, but knowledge change exhibited wide variation, with a range from 4 to 24% among studies that reported percentage change based on assessments before and after the education intervention. Several studies also reported statistically significant changes in confidence levels and self-reported behavior, but there were also multiple studies that did not find the intervention to be effective. Similar to knowledge change, there was variation in the behavior change proportions, from 1 to 20%, among studies that reported percentage differences. Overall, we can conclude that there is some evidence that eHealth interventions lead to improvements in cancer care delivery, but this is not a consistent finding across the studies reviewed.

Almost 80% (18/23) of the interventions were delivered via online courses, and the remaining 20% were a blend of online and in-person education. The studies presented in this review do not offer clear insight as to whether multimodal approaches are more effective than stand-alone ones. There is evidence from other settings that the use of multimodal education methods can make teaching more effective [36, 37]. Future studies could compare combinations of approaches and methods to support evidence-based decisions on the selection of multimodal interventions. Furthermore, evidence on the role of mHealth interventions was limited, with only one study using SMS messaging, and we did not identify any systematic assessments of education apps. Cancer education apps, like those created by the American Society of Clinical Oncology for self-evaluation, do exist, but there may not be formal evaluations of the effectiveness of these tools in the peer-reviewed literature. Importantly, our findings may also indicate a preference for using online tools to deliver education materials rather than relying on mHealth approaches.
Text messaging and apps may be more appropriate for facilitating data collection and providing expert support during clinical care delivery [12, 38]. Almost all the studies identified for this review report on evaluations conducted in high-income settings. The paucity of research in low- and middle-income settings is an important finding from this review. Mortality from cancer remains high in limited-resource settings, and it is projected that the burden from cancer will grow in low- and middle-income countries [39, 40]. There is an urgent need to improve providers' knowledge to prevent, screen for, diagnose, and treat cancers, and therefore, more research is needed in low-resource settings to create the evidence base on optimal education interventions [41, 42].

Our review identified several other gaps that should be addressed in future research. First, behavior change was addressed in only a small number of studies, and all studies except one used self-reported behavior measurement. The ultimate objective of all education interventions is to foster optimal use of guideline-recommended cancer care. As such, intervention evaluations should include reports of observation-based behavior measurement. Second, studies generally reported changes immediately after the intervention was delivered, and there is a need for longer-term assessments to evaluate the sustainability of the impact of the education delivered. Third, no study included in this review conducted a cost-effectiveness assessment of the interventions to provide guidance for the adoption of the approaches studied. The importance of systematic economic evaluations has been highlighted in a prior review [8], and this is an important omission that should be addressed in future research. Fourth, the overall quality of the studies in this field should be improved. Our review assigned a high-quality rating to only about one-third of the studies included in this manuscript.
Fifth, our experience compiling the findings for this review reveals the importance of fostering consistent terminology, outcome measures, and metrics so that results from eHealth studies can be pooled consistently. We acknowledge that it might not always be feasible to implement consistent reporting because education interventions and target audiences differ; nevertheless, attempts to reach agreement on standardized measures will be extremely important for generating collaborative evidence to support the field.

This review has several limitations that should be considered when interpreting the findings. A key drawback is that outcome measures are not always reported consistently in the manuscripts reviewed, and this makes it difficult to synthesize the findings to reach concrete conclusions. Some of the differences in the selection of measures could be due to variation in the type of education tested, but nevertheless, a more uniform approach would be useful. Our review was also limited to cancer, and there may be lessons to learn from the delivery of care for other chronic and noncommunicable conditions. Furthermore, we only included studies published in English, and the review team also decided to exclude pediatric-focused physician education because these studies addressed specific issues that are likely not generalizable to the population as a whole. eHealth and mHealth technology are rapidly progressing in terms of content displays, interactive graphics and other tools, and virtual reality training approaches. Therefore, although we included all manuscripts published as of early 2020, the field will continue to evolve.

Conclusion
eHealth and mHealth interventions show promise, but the evidence is inconsistent. In general, our results indicate some evidence of a positive impact of mHealth interventions on provider knowledge but insufficient findings regarding the impact on provider behavior.
Findings from the studies currently available in the literature vary widely regarding the use of eHealth to improve provider delivery of cancer care, which highlights the need for additional methodologically rigorous studies with longer-term follow-up. An essential recommendation from our analysis is the need for consistent terminology, measures, and metrics to synthesize results from studies efficiently, which will build the evidence base required to adopt optimal and cost-effective interventions. Generalizability of the findings is an additional concern, and future studies will ideally evaluate eHealth and mHealth education interventions in a wide variety of settings, including those in low- and middle-income countries.

Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s13187-021-01961-z.

Code Availability Not applicable.

Author Contributions All authors contributed to the development of the research and evaluation questions. Cindy Soloe and Olivia Burrus reviewed and abstracted the articles, refining based on review by Sujha Subramanian. Cindy Soloe and Sujha Subramanian drafted and revised the manuscript, incorporating comments from Olivia Burrus.

Funding This research is supported by a grant from the US National Institutes of Health (R21CA224387).

Data Availability Data are available upon request.

Conflict of Interest The authors declare no conflict of interest.

Disclaimer The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the National Institutes of Health.

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References
[1] World Health Organization. WHO cancer report 2020: global profile.
[2] The Cancer Atlas, 3rd edn.
[3] World Health Organization (2016) mHealth: use of mobile wireless technologies for public health.
[4] The value of mHealth for managing chronic conditions.
[5] World Health Organization (2020) Digital health.
[6] To the lighthouse: embracing a grand challenge for cancer education in the digital age.
[7] The effectiveness of mHealth for self-management in improving pain, psychological distress, fatigue, and sleep in cancer survivors: a systematic review.
[8] Effectiveness of online cancer education for nurses and allied health professionals: a systematic review using Kirkpatrick evaluation framework.
[9] Scoping review assessing the evidence used to support the adoption of mobile health (mHealth) technologies for the education and training of community health workers (CHWs) in low-income and middle-income countries.
[10] Mobile health for non-communicable diseases in sub-Saharan Africa: a systematic review of the literature and strategic framework for research.
[11] Impact of mHealth chronic disease management on treatment adherence and patient outcomes: a systematic review.
[12] Recent worldwide developments in eHealth and mHealth to more effectively manage cancer and other chronic diseases: a systematic review.
[13] Evidence on feasibility and effective use of mHealth strategies by frontline health workers in developing countries: systematic review.
[14] The Community Guide (2020) Our methodology.
[15] National Institutes of Health. Study quality assessment tools.
[16] The International Society of Urological Pathology Education web: a web-based system for training and testing of pathologists.
[17] Effects on skills and practice from a web-based skin cancer course for primary care providers.
[18] Impact of a non-small cell lung cancer educational program for interdisciplinary teams.
[19] Development and evaluation of a web-based breast cancer cultural competency course for primary healthcare providers.
[20] Cervical cancer screening in adolescents: an evidence-based internet education program for practice improvement among advanced practice nurses.
[21] Impact of a web-based reproductive health training program: ENRICH (Educating Nurses about Reproductive Issues in Cancer Healthcare).
[22] Impact of an online survivorship primer on clinician knowledge and intended practice changes.
[23] Impact of web-based case conferencing on cancer genetics training outcomes for community-based clinicians.
[24] Evaluation of online learning modules for improving physical activity counseling skills, practices, and knowledge of oncology nurses.
[25] Video education on Hereditary Breast and Ovarian Cancer (HBOC) for physicians: an interventional study.
[26] Online training on skin cancer diagnosis in rheumatologists: results from a nationwide randomized web-based survey.
[27] Effect of a web-based curriculum on primary care practice: basic skin cancer triage trial.
[28] Learner's perception, knowledge and behaviour assessment within a breast imaging e-learning course for radiographers.
[29] Improving clinician confidence and skills: piloting a web-based learning program for clinicians in supportive care screening of cancer patients.
[30] The management of acute adverse effects of breast cancer treatment in general practice: a video-vignette study.
[31] Analysis of factors related to poor outcome after e-learning training in endoscopic diagnosis of early gastric cancer using magnifying narrow-band imaging.
[32] mHealth to train community health nurses in visual inspection with acetic acid for cervical cancer screening in Ghana.
[33] Is an online skin cancer toolkit an effective way to educate primary care physicians about skin cancer diagnosis and referral?
[34] Addressing educational needs in managing complex pain in cancer populations: evaluation of APAM, an online educational intervention for nurses.
[35] Enhancing communication between oncologists and patients with a computer-based training program: a randomized trial.
[36] A multimodal approach to teaching cultural competency in the doctor of pharmacy curriculum.
[37] Multimodality in language education: implications for teaching.
[38] Barriers to scale of digital health systems for cancer care and control in last-mile settings.
[39] Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries.
[40] Global cancer incidence and mortality rates and trends: an update. Cancer.
[41] Cancer control in low- and middle-income countries: is it time to consider screening?
[42] The central role of provider training in implementing resource-stratified guidelines for palliative care in low-income and middle-income countries: lessons from the Jamaica Cancer Care and Research Institute in the Caribbean and Universidad Católica in Latin America.