To describe the best evidence on the effectiveness of technology-based learning tools designed to improve knowledge of health care providers about clinical practice guidelines (CPGs).
We conducted a systematic review, searching MEDLINE, Embase, and CINAHL from inception to July 2018. Included studies investigated the effectiveness of any technology-based learning tools developed to improve knowledge of health care providers about CPGs. We used a 2-phase screening process to determine eligibility. Pairs of reviewers critically appraised relevant studies using the Scottish Intercollegiate Guidelines Network checklist for randomized controlled trials or the National Institutes of Health checklist for pre- and postintervention trials. Evidence from internally valid studies was described using a best-evidence summary. We conducted a sensitivity analysis to determine whether results varied according to methodological quality.
Twenty-five of 8321 articles met our selection criteria. Six studies had a low risk of bias and were included in this review. Spaced education was associated with improvement in knowledge; however, its effectiveness relative to other interventions is unknown. Module-based online educational interventions were associated with improvement in knowledge of CPGs; however, they may not be more effective than paper-based self-learning or in-person workshops. The sensitivity analysis determined that the evidence was similar between the high and low risk of bias studies.
Module-based and spaced-education interventions may be beneficial for improving health care providers' knowledge of CPGs; however, much of the evidence supporting their use is preliminary.
INTRODUCTION
Health care providers are expected to remain current with clinical evidence,1,2 yet the use of evidence in clinical practice is suboptimal.3–10 There are many ways to enhance the use of research evidence, including clinical practice guidelines (CPGs). CPGs include recommendations developed following an evaluation of the scientific literature.11 CPGs optimize patient care by allowing health care providers and patients to select the best evidence-based care consistent with the unique needs and preferences of patients.11 However, difficulty accessing CPGs and evidence-based information can inhibit health care providers from incorporating this information into their patient care.5–10
There is a need for the development of appropriately tailored knowledge translation (KT) activities to facilitate the exchange of evidence to health care providers.12–18 KT refers to the exchange, synthesis, and ethically sound application of knowledge to improve health and provide effective services in health care.19 Education is an important aspect of KT because it contributes to all phases, from development through implementation to the final phase of evaluation. Constructivism in education is the theory that learners construct their understanding through experiences and reflection.20,21 According to this theory, to learn something new, learners must reconcile new information with their previous knowledge and experiences.20,21
There is a significant shift toward the use of technology-based learning rather than the traditional in-person, classroom-based learning.22 Technology-based learning tools in health care education can improve access to information to meet the needs of health care providers.23–26 Moreover, such tools can be used to adapt information to the health care providers' learning styles and increase intrinsic motivation;24–27 however, there is no clear understanding of which technology-based educational interventions are most effective in improving knowledge in the health care provider population.28 In this review, we define technology-based learning tools as instruments of learning that incorporate digital technology as a method for the delivery of information.29 Terms such as Web-based learning, e-learning, computer-assisted learning, and online learning have been used synonymously with technology-based learning.4,30,31 Examples include but are not limited to websites, online courses/modules, and podcasts.
A previous systematic review aimed to identify health care providers' perceptions of the usability of technology-based educational interventions used to disseminate CPGs, as well as any associated practice behavior change.30 They identified 7 types of technology-based interventions, including websites, computer software, web-based workshops, computerized decision support systems, an electronic educational game, e-mail, and multimodal interventions consisting of at least 1 technological component. The results varied by intervention type with no clear superior method of dissemination. This review provides important information; however, additional pedagogical components should be explored to better inform the development of appropriate KT tools for this population.
Our review is set within a larger study that aimed to develop and evaluate a technology-based learning tool designed to improve knowledge of CPGs within chiropractors and chiropractic teaching faculty at a Canadian chiropractic college. To develop a learning tool tailored to the target population, we aimed to determine whether previous learning tools have been developed to disseminate CPGs and then use them to inform the development of this novel tool. Understanding that there is no single right learning tool design for every population, we must integrate information from a variety of resources to develop a tool more likely to be effective. We worked with an Advisory Committee, reviewed pedagogical theories and principles for online learning, and conducted the present review to collectively construct a body of evidence toward identifying the most appropriate and well-informed tool for this population and subject. Improving our understanding of technology-based educational interventions that are effective in improving knowledge is necessary to develop KT strategies for health care providers. Therefore, we aimed to describe the best evidence on the effectiveness of technology-based learning tools designed to improve knowledge of health care providers, in active practice, about CPGs.
METHODS
Registration
We registered our systematic review protocol with the International Prospective Register of Systematic Reviews (PROSPERO) on August 3, 2017 (CRD42017071308).
Eligibility Criteria
Population
We included studies targeting health care providers in active practice, including but not limited to physicians, residents, and nurses. We excluded populations not yet in practice, such as students.
Interventions
Interventions included technology-based educational interventions aiming to educate individuals about a CPG. Examples of technology-based educational interventions may include web-based modules or smartphone apps. We excluded educational simulation design interventions and clinical decision support systems in this review. Educational simulation design interventions refer to the artificial representation of a clinical scenario by a 3-dimensional application (eg, SimLab). A clinical decision support tool is any tool that provides clinicians, administrative staff, patients, caregivers, or other members of the care team with information that is filtered or targeted to a specific person or situation. Decision support tools are patient specific and provide information about the care of an individual patient rather than a general improvement in knowledge of CPGs. These interventions were excluded, as they do not fit within our definition of a technology-based learning tool aiming to disseminate CPGs.
Comparison Groups
Studies that compared technology-based educational interventions to other interventions (technology based or not) or no intervention were considered.
Outcomes
The primary outcome of interest was a measure of knowledge following the use of the educational intervention. We did not use a specific definition of knowledge. Instead, we accepted the authors' definitions and/or means of assessing knowledge and commented on their justification.
Study Characteristics
Eligible studies met the following inclusion criteria: (1) English language, (2) published in a peer-reviewed journal, and (3) study designs including randomized controlled trials (RCTs), cohort studies, case control studies, and pre- and postintervention trials. We excluded study designs such as case reports, case series, qualitative studies, literature reviews, and biomechanical and laboratory studies; studies not reporting a methodology; and publication types such as guidelines, letters, editorials, commentaries, reports, book chapters, conference proceedings/abstracts, lectures, and consensus development statements.
Information Sources
Our search strategy was developed in consultation with an experienced health sciences librarian, and a second health sciences librarian reviewed the search for completeness and accuracy using the Peer Review of Electronic Search Strategies (PRESS) Checklist. We searched MEDLINE and Embase (through Ovid Technologies Inc, New York, NY) and CINAHL Plus with Full Text (through EBSCOhost) from inception to July 2018. The search strategies (Fig. 1) were first developed for MEDLINE and subsequently adapted to the other databases. The search strategy combined Medical Subject Headings (MeSH) as well as text words (title and abstract) related to CPGs and technology-based education. We used EndNote X7 to create the bibliographic database.
MEDLINE search strategy. Search run April 11, 2017, and updated July 1, 2018, in Ovid MEDLINE: Epub Ahead of Print, In-Process & Other Non-Indexed Citations, Ovid MEDLINE® Daily and Ovid MEDLINE® 1946–Present.
Study Selection
We used a 2-phase screening process to select eligible studies. In phase I, 5 pairs of independent reviewers screened citation titles and abstracts to determine eligibility. Citations were classified as either relevant, irrelevant, or possibly relevant. In phase II, the same pairs of reviewers independently screened the possibly relevant articles from phase I to determine eligibility. Reviewers reached consensus through discussion following each phase.
Quality Assessment and Data Extraction
Four pairs of reviewers independently appraised the internal validity of eligible studies using the Scottish Intercollegiate Guidelines Network checklists for RCTs as well as the National Institutes of Health Checklist for pre- and postintervention trials.32,33 Reviewers reached consensus through discussion. Studies deemed to have a low risk of bias were included in this review. Those with a high risk of bias (presence of methodological fatal flaws, such as selection bias due to improper randomization) were excluded. We contacted the authors when additional information was needed to complete the appraisal. A study was considered to have a high risk of bias if reviewers considered that the study's internal validity was compromised because of biases and methodological flaws.
The lead author extracted data from low risk of bias studies into evidence tables to describe the type of educational intervention, population, the topic of the CPG, follow-up time points, and results of each study. A second reviewer independently verified the accuracy of the extracted data. Any disagreements were discussed until consensus was reached.
Sensitivity Analysis
We conducted a sensitivity analysis (1) to determine whether results varied between low and high risk of bias studies and (2) to assess the possible impact of misclassification bias from our risk of bias assessment. The lead author extracted data from the high risk of bias studies and created evidence tables. We stratified results of high risk of bias studies according to types of educational interventions and identified whether these interventions demonstrated an improvement, a reduction, or no change in knowledge. Finally, we described the similarities and discrepancies between the high and low risk of bias studies.
Data Summary
A narrative summary of the low risk of bias studies was performed. We stratified our results according to types of educational interventions.
Statistical Analysis
The interrater agreement for article screening was computed using the κ coefficient, and percentage agreement for critical appraisal was calculated. We computed the mean difference between groups and 95% confidence intervals (CIs) to quantify the effectiveness of interventions when possible. Where this was not possible, we reported median values and significance, as reported in the studies. More weight was given to results of RCTs. This systematic review complies with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.34
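As an illustrative sketch of the two statistics described above, the following Python functions compute Cohen's κ for two reviewers' screening decisions and a between-group mean difference with a normal-approximation 95% CI. This is not the authors' analysis code (the review does not state what software was used), and the function names and data are hypothetical:

```python
import math

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    labels = sorted(set(rater1) | set(rater2))
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement by chance, from each rater's marginal proportions
    pe = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in labels)
    # Note: kappa is undefined when pe == 1 (no variability in ratings)
    return (po - pe) / (1 - pe)

def mean_diff_ci(group_a, group_b, z=1.96):
    """Difference in means between two independent groups with a
    normal-approximation 95% CI (z = 1.96)."""
    ma = sum(group_a) / len(group_a)
    mb = sum(group_b) / len(group_b)
    # Sample variances (n - 1 denominator)
    va = sum((v - ma) ** 2 for v in group_a) / (len(group_a) - 1)
    vb = sum((v - mb) ** 2 for v in group_b) / (len(group_b) - 1)
    se = math.sqrt(va / len(group_a) + vb / len(group_b))
    d = ma - mb
    return d, (d - z * se, d + z * se)

# Hypothetical screening decisions for 6 citations
k = cohens_kappa(["relevant", "irrelevant", "relevant", "possibly",
                  "irrelevant", "relevant"],
                 ["relevant", "irrelevant", "possibly", "possibly",
                  "relevant", "relevant"])
```

In practice a dedicated routine (eg, `cohen_kappa_score` in scikit-learn) would typically be used; the sketch above just makes the arithmetic behind the reported κ values explicit.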
RESULTS
We identified 8321 articles. We removed 311 duplicates and screened 8010 articles for eligibility (Fig. 2). Phase I screening yielded 97 articles, and 25 articles were relevant following phase II screening. Reasons for exclusion from phase II (n = 72) were (1) ineligible intervention type (n = 10), (2) outcomes not relevant (n = 43), (3) ineligible study design (n = 3), (4) ineligible study population (n = 3), and (5) ineligible publication type (n = 13). The interrater agreement for phase I screening of articles was κ = 0.33 (fair agreement) and for phase II κ = 0.73 (substantial agreement). The percentage agreement for the critical appraisal of articles was 59%. There were no deviations from the protocol.
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram.
Study Characteristics
We critically appraised 25 articles. Of those, 6 had a low risk of bias and were included in our summary. Four low risk of bias studies were RCTs, and the remaining 2 were pre- and postintervention trials. The studies focused on (1) primary care following myocardial infarction in resident physicians;35 (2) detection, evaluation, and treatment of high blood cholesterol in physicians;36 (3) hematuria, priapism, staghorn calculi, infertility, and antibiotic prophylaxis in urologists and urology residents;37,38 (4) health care–associated infections in health care workers (nurses, physicians, and other health care workers, including pharmacists, paramedics, respiratory therapists, and physiotherapists);39 and (5) whiplash management in general practitioners.40 The educational interventions included module-based online education (n = 4),35,36,39,40 spaced-education combined with case studies (n = 1),37 and spaced-education combined with a game (n = 1).38 Module-based online education is a series of online sequentially ordered modules, each focusing on a particular topic. Modules are often combined to teach larger, more complex topics to learners. Spaced education refers to educational interventions delivered over a prolonged period. It includes “spaces” or times without the intervention between learning intervals, which is said to improve long-term memory.41 The prolonged period between learning intervals is variable. No standard length of time appears to exist for this type of intervention. The length of the intervals included in this review is noted.
Risk of Bias Within Studies
The low risk of bias RCTs had (1) clear research questions, (2) adequate randomization processes, (3) baseline similarities between groups, (4) interventions as the only differences between groups, (5) adequate outcome measurement tools, (6) loss to follow-up accounted for in analyses, and (7) intention-to-treat analyses (Table 1, available as online supplementary material attached to this article at https://www.journalchiroed.com).35–38 However, concealment methods were not clearly described for 3 studies,36–38 and blinding did not occur in 1 study35 and was not clearly described for 2 studies.37,38
Two pre- and postintervention studies had a low risk of bias.39,40 They had (1) clear research questions, (2) clearly described eligibility criteria, (3) representative study populations, (4) adequate enrollment procedures, (5) adequate sample sizes, (6) clearly described interventions, (7) loss to follow-up accounted for in analyses, and (8) adequate outcome measurement tools (Table 2, online). Neither study reported on blinding procedures (researcher blinding to participant allocation). All 6 studies justified the selection of their knowledge measurement through content expert review,35,39,40 pilot testing,36,37 or a previous trial.38
Interventions Involving Spaced Education
Two RCTs aimed to improve knowledge using spaced education in combination with a game or online case studies.37,38 These studies provide preliminary evidence suggesting that spaced education may be associated with improvement in knowledge; however, the effectiveness of spaced education is not well established because it was not compared to a different educational strategy. Additionally, the length of the spacing did not appear to influence knowledge change.
The first RCT randomized 1470 urologists to 1 of 2 spaced-education game intervention groups (n = 735 per group).38 Although knowledge improved in both groups (group A: 52/100 increase; group B: 53/100 increase), the difference between groups was not statistically significant (group A: 100/100 [interquartile range 3.0]; group B: 98/100 [interquartile range 8.0]) (Table 5, online).
The second RCT included urologists and urology residents (n = 240 per group) who received spaced education in combination with case studies focusing on 1 of 2 CPGs.37 The only difference between the 2 interventions was the CPG being instructed. The results of this study cannot be used to determine differences in the effectiveness of spaced education between groups; therefore, we used only within-group results. The results suggest that both groups significantly improved their knowledge following the intervention (cycle 3) (p < .05); within-group difference in means (95% CI) for group A was 29.1/100 (28.0–30.14) and for group B was 24.6/100 (23.73–25.47) (Table 5, online).
Interventions Involving Module-Based Online Education
Four studies (2 RCTs and 2 pre- and postintervention studies) aimed to improve knowledge using module-based online educational programs.35,36,39,40 Based on this review, preliminary evidence suggests that online module-based education may be effective in improving knowledge about CPGs in health care providers; however, they may not be superior to paper-based self-learning or face-to-face workshops.
The first RCT randomized resident physicians to either a module-based education program (n = 83) or a printed guideline group (n = 79).35 The results indicate that participants in the intervention group scored a median of 0.5/20 higher than the control group postintervention (F1) and 1.0/20 higher 4–6 months following the intervention (F2) (Table 5, online). These differences were not statistically significant between groups (intervention: F1: 15.0/20 [95% CI 14.0–15.0]; F2: 12.0/20 [95% CI 11.0–13.0]; control: F1: 14.5/20 [95% CI 14.0–15.0]; F2: 11.0/20 [95% CI 10.0–12.0]). Knowledge increased in both groups; however, the statistical significance is unknown (intervention F1: 5.0/20; F2: 2.0/20 increase; control F1: 5.5/20; F2: 2.0/20 increase).
In the second RCT, physicians were randomized to either an online multiformat education group (n = 52) or a live workshop (control) group (n = 51).36 There was no statistically significant mean difference between groups (F1: 1.01/39 [95% CI: −0.39–2.41]; F2: 0.66/39 [95% CI: −0.65–1.97]) (Table 5, online). However, participants in both groups significantly increased their knowledge (difference in mean test scores: intervention: F1: 11.62/39 [95% CI 10.58–12.66]; F2: 13.89/39 [95% CI: 12.86–14.92]; control: F1: 12.63/39 [95% CI: 11.70–13.59]; F2: 14.55/39 [95% CI: 13.75–15.36]).
The first pre- and postintervention study included 971 health care workers.39 The results indicated that each group (stratified by profession) significantly increased their knowledge immediately after (F1) as well as 3 months following the intervention (F2) (p < .05) (Table 6, online) (nurses: F1: 26/100; F2: 22/100 increase; physicians: F1: 24/100; F2: 15/100 increase; other health care workers: F1: 24/100; F2: 22/100 increase).
The second pre- and postintervention study included 233 general practitioners.40 The results indicated a statistically significant mean difference following the intervention (1.8/9 [95% CI: 1.65–1.95]) (Table 6, online).
Sensitivity Analysis
Nineteen studies (7 RCTs and 12 pre- and postintervention studies) with a high risk of bias were included in the sensitivity analysis. Of the 7 RCTs, 5 investigated the effectiveness of an online module-based educational intervention (Table 7, online). Three RCTs reported that online module-based interventions were superior to the control interventions (wait list/no intervention or printed guidelines).43,45,46 All 5 studies reported a within-group improvement in knowledge following the interventions.43–46,48 Moreover, another RCT reported that an electronic guideline (electronic point-of-care tool) was more effective than paper-based learning.42 Finally, the last RCT reported that an electronic guideline combined with in-person education led to improvements in knowledge compared to in-person education alone.47
The 12 pre- and postintervention studies (Table 8, online) included 7 studies investigating online module-based educational interventions. All 7 studies reported improvement in knowledge.50,51,54,56,57,60,61 Two studies investigated the effectiveness of a video (narrated presentations) and concluded positive improvements in knowledge.49,59 Finally, 2 studies investigated the effectiveness of social media campaigns.52,55 One study reported that a social media campaign was not associated with consistent improvements in knowledge,55 and the other found that traditional methods (print, email, and internet-based education) followed by a social media campaign were associated with improvement in knowledge following instruction with the traditional methods but not following the social media campaign.52
DISCUSSION
Summary of Evidence
Our systematic review aimed to examine the best available evidence on the effectiveness of technology-based learning tools designed to improve health care providers' knowledge of CPGs. We found preliminary evidence for the use of spaced education in combination with a game or case studies; however, because this intervention was not compared to a control intervention, the magnitude of the benefit cannot be accurately determined. Second, online module-based education may be effective in improving knowledge; however, preliminary evidence suggests that this intervention may not be superior to paper-based self-learning or in-person workshops.
The sensitivity analysis determined that the results of online module-based educational interventions did not differ across methodological quality. This analysis also provided preliminary information about the possible effectiveness of electronic guidelines with and without in-person workshops, short videos, and social media campaigns. These results could be used to generate hypotheses for future studies.
Additionally, considerable gaps in the quality of this literature were apparent throughout the conduct of this review. This limitation left only a small body of high-quality literature on which to base conclusions. We recommend that future research in the area address this concern.
Theories of Learning
The constructivist theory of learning influenced the main objective of this review and the overall larger study. We considered that learners' previous knowledge and experiences might play a significant role in the uptake of new information. Health care providers (the learners) will thus have some previous knowledge and clinical experiences of CPGs on which they can build. Further, constructivist-style learning supports the construction of knowledge rather than the regurgitation of a series of facts.21 If we want health care professionals to retain guideline information and apply it in clinical practice, the construction of this knowledge is paramount.
Informing KT
KT is an essential component of health research. It is imperative when developing KT strategies to understand end users and their needs, barriers, and preferences so that the activity/intervention can be tailored to their unique needs and preferences.
The previous systematic review by De Angelis et al30 provides insight into the use of technology-based educational interventions that may be beneficial in improving the perception of usability and practice behavior changes about CPGs. Our systematic review builds on this information and provides new evidence to further support the use of technology-based educational interventions.
Strengths and Limitations
This study has several strengths. We implemented a rigorous search strategy that was developed with an experienced health sciences librarian to help minimize errors. All reviewers were trained to screen and critically appraise to minimize error and bias. We eliminated studies of low quality to minimize the risk of bias; however, we also conducted a sensitivity analysis to understand the possible impact of misclassification bias on our results.
Some limitations are present in this study. We limited our search to studies published in the English language, which may have excluded relevant studies; however, this is an unlikely source of bias.62–66 In addition, the critical appraisal process involves scientific judgment that varies between reviewers. However, this methodology is widely used in systematic reviews, and variability was minimized by training reviewers a priori.67–69 Our review is also limited by the quality of the outcome measurements used in the low risk of bias studies. We restricted our review to studies that assessed knowledge following the use of a technology-based learning tool. We did not include studies assessing other measures, such as behavioral change and clinical outcomes. While we recognize that a change in knowledge does not guarantee the eventual implementation of a new practice, a change in knowledge is an important antecedent of behavior change and is typically needed if the implementation of a new practice is expected.70 Finally, due to the limited number of articles included in this review, our findings may not be generalizable.
CONCLUSION
Health care providers need to remain current with CPGs; however, the use of CPGs in clinical practice is suboptimal. A learning tool that incorporates findings from this systematic review stands to improve the use of CPGs. We have summarized the best available literature on educational interventions and their effectiveness in improving knowledge about CPGs in health care providers. This evidence will be used to inform the development of a novel technology-based educational intervention used to disseminate a CPG to chiropractic teaching faculty. This review may also be used to inform the development of other technology-based education KT interventions.
ACKNOWLEDGMENTS
The authors would like to acknowledge the invaluable contributions to this review from Ms Kristi Randhawa, Ms Kathleen Smith, Dr Hainan Yu, Dr Heather Shearer, Dr Jessica Wong, Dr Mana Rezai, and Ms Nancy Fynn-Sackey.
FUNDING AND CONFLICTS OF INTEREST
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors have no conflicts of interest to declare relevant to this work.
REFERENCES
Author notes
Leslie Verville is a Research Manager - Knowledge Translation at the Centre for Disability Prevention and Rehabilitation at Ontario Tech University (2000 Simcoe Street N, Oshawa, Ontario, L1H 7K4, Canada; [email protected]). Pierre Côté is a professor on the Faculty of Health Sciences, director of the Centre for Disability Prevention and Rehabilitation, and Canada research chair in Disability Prevention and Rehabilitation at Ontario Tech University (2000 Simcoe Street N, Oshawa, Ontario, L1H 7K4, Canada; [email protected]). Diane Grondin is an associate professor in the Department of Research and Innovation at the Canadian Memorial Chiropractic College (6100 Leslie Street North, Toronto, Ontario, M2H 3J1, Canada; [email protected]). Silvano Mior is the director for research partnerships and health policy at the Canadian Memorial Chiropractic College and a scientist at the Centre for Disability Prevention and Rehabilitation at Ontario Tech University (6100 Leslie Street North, Toronto, Ontario, M2H 3J1, Canada; [email protected]). Keshini Moodley is a master's degree student at the Centre for Disability Prevention and Rehabilitation at Ontario Tech University (2000 Simcoe Street N, Oshawa, Ontario, L1H 7K4, Canada; [email protected]). Robin Kay is a professor on the Faculty of Education at Ontario Tech University (2000 Simcoe Street N, Oshawa, Ontario, L1H 7K4, Canada; [email protected]). Anne Taylor-Vaisey is health sciences librarian for the Centre for Disability Prevention and Rehabilitation at Ontario Tech University (2000 Simcoe Street N, Oshawa, Ontario, L1H 7K4, Canada; [email protected]).
Concept development: LV, PC, RK. Design: LV, PC, RK. Supervision: LV, PC, RK. Data collection/processing: LV, PC, KM. Analysis/interpretation: LV, PC, DG, SM, RK. Literature search: ATV. Writing: LV, PC. Critical review: LV, PC, DG, SM, KM, RK, ATV.