Objective

The purpose of this review was to identify instruments, and their measurement properties, for assessing evidence-based practice (EBP) knowledge, skills, attitudes, and behavior among students of manual therapy education programs.

Methods

Seven electronic databases (MEDLINE, EMBASE, CINAHL, ERIC, EBSCO Discovery, LISA, and Google Scholar) were systematically searched from inception to May 19, 2023. Search terms consisted of subject headings specific to each database (eg, MeSH in MEDLINE) and free-text words relevant to evidence-based practice, assessment tools/instruments, and manual therapy healthcare professions. Eligible studies included students of manual therapy education programs (chiropractic, physiotherapy, occupational therapy, osteopathy) and provided evidence supporting instrument measurement properties (reliability, validity). Titles and abstracts were screened by 2 reviewers. Data on each instrument and its properties were extracted and tabulated by 2 reviewers. Instruments were compared using the Classification Rubric for EBP Assessment Tools in Education (CREATE) framework, including the 5 steps of the EBP model. Joanna Briggs Institute methodology and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews checklist were followed.

Results

Five studies were identified (3 physiotherapy, 2 chiropractic). Two studies used a physiotherapy-focused modification of the Fresno test, 1 study presented the Knowledge of Research Evidence Competencies instrument, and 2 studies presented original instruments. The instruments focused on the knowledge domain and did not assess all 5 steps of the EBP model.

Conclusion

The current literature does not address all 5 steps of the EBP model. The identified instruments have the potential to help ensure that chiropractic institutions graduate chiropractors who are highly skilled in evidence-based practice.

Since its beginnings in the early 1990s, evidence-based practice (EBP) has been adopted into professional practice in most healthcare disciplines worldwide. Most healthcare professions continue to survey their practitioners about their views on, and abilities to perform, evidence-based practice; however, leaders in EBP report ongoing challenges in implementation, despite some 30 years of focus.1–3 To better effect change in practice, there have been calls to incorporate EBP training into the curricula of healthcare professional (HCP) programs.2–4 Many HCP education programs have responded to this call and continue to develop their programs to include more EBP teaching, which is now included in accreditation criteria and graduate competencies.5,6 As these curricular changes occur, there is a need to assess their effects on students.

Instruments exist that attempt to measure change in the abilities of the EBP learner, commonly, though not exclusively, in the assessment categories of EBP knowledge, skills, attitudes, and behavior (KSAB).7,8 KSABs are 4 of the 7 categories of educational assessment used in the Classification Rubric for EBP Assessment Tools in Education (CREATE) framework.9 This framework was introduced in the 2011 Sicily statement on classification and development of EBP learning assessment tools, which followed the 2005 Sicily statement on evidence-based practice.4,9 The aim of the 2011 Sicily statement was “to provide guidance for purposeful classification and development of EBP assessment tools.”9

Current literature shows that the majority of testing of EBP instruments has been done with HCPs in the context of continuing education, using the domains of KSAB. To our knowledge, 10 systematic reviews exist that investigate EBP measurement instruments. These reviews studied different disciplines: 4 studied practitioners of medicine or a mix of healthcare professions in which medicine was the dominant discipline,7,8,10,11 2 studied nursing,2,12 2 studied occupational therapy,13,14 1 studied physiotherapy,15 and 1 studied rehabilitation sciences.16 Of these 10 reviews, only 2 exclusively studied student populations: Kumaravel et al studied medical students, and Boruff and Harrison studied students of rehabilitation sciences.7,16 Albarqouni et al and Buchanan et al did not exclude student studies in their reviews; however, students were not distinguished in their discussions.8,13 Nine of the 10 reviews included data on, or referenced, the 7 categories of educational assessment presented in the CREATE framework, most commonly the knowledge category.

To our knowledge, there are no reviews of EBP instruments measuring EBP KSAB in students of chiropractic, physiotherapy, occupational therapy, or osteopathy. The purpose of this review was to identify assessment instruments, and their measurement properties, for assessing EBP KSAB among students of manual therapy (MT) education programs. This review is embedded in a larger review that also includes practitioners. The instruments identified will be used to populate the CREATE framework presented in the Sicily statement on EBP assessment tools.4 The framework cross-references 7 categories of educational assessment with the 5 steps of EBP: Ask, Search, Appraise, Integrate, and Evaluate.4 The results of this review will inform MT educators about instruments available to evaluate EBP learning of students in their education programs.

Inclusion Criteria

Studies fulfilled the following inclusion criteria: (1) English language; (2) published in a peer-reviewed journal; (3) instruments including, but not limited to, self-reported and non-self-reported surveys and questionnaires; (4) study population including the following MT professions: chiropractic students or chiropractors, physiotherapy students or physiotherapists, occupational therapy students or occupational therapists, or osteopathic students or osteopaths; (5) assessment of at least 1 of the 4 EBP domains of interest (KSAB), as defined in the Sicily statement and the CREATE framework4,9; and (6) examination of at least 1 measurement property of the instrument used, including reliability or validity [eg, content, construct, discriminative (a subset of construct), and responsiveness].

We performed our search on MT professions, which we defined as chiropractic, physiotherapy, occupational therapy, and osteopathy, because of their similar patient base, in which primarily musculoskeletal conditions are managed. This combining of like professions is already seen in the literature, for example in the 2018 scoping review on measurement instruments by Boruff and Harrison.16 In this scoping review, only studies that reported on students of chiropractic, physiotherapy, occupational therapy, or osteopathy healthcare programs were included and reported on.

Exclusion Criteria

Studies fulfilling any of the following criteria were excluded: (1) letters, editorials, commentaries, unpublished manuscripts, dissertations, government reports, books and book chapters, conference proceedings, meeting abstracts, lectures and addresses, consensus development statements, or guideline statements; (2) study population including nurses, physicians, or other non–manual therapy healthcare professionals.

Search Strategy

The search strategy was developed in consultation with a health sciences librarian (KM) and reviewed by a second librarian using the Peer Review of Electronic Search Strategies (PRESS) Checklist.17,18  The following electronic databases were systematically searched from inception to May 19, 2023: MEDLINE (Ovid), EMBASE (Ovid), CINAHL (EBSCO), ERIC, EBSCO Discovery, LISA (Library Information Sciences Abstracts), and Google Scholar.

Search terms consisted of subject headings specific to each database (eg, MeSH in MEDLINE) and free-text words relevant to evidence-based practice, assessment tools/instruments, and manual therapy healthcare professions (see Supplementary File 1, available online with this article, for the complete electronic search strategies).
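
For illustration, such a strategy builds a controlled-vocabulary and free-text block for each concept and then intersects the blocks. The fragment below is a simplified, hypothetical MEDLINE (Ovid) sketch only; it is not our actual strategy, which is given in full in Supplementary File 1:

1. exp Evidence-Based Practice/
2. (evidence adj2 (practice or medicine)).tw.
3. 1 or 2
4. (questionnaire* or instrument* or survey* or assessment tool*).tw.
5. exp Chiropractic/ or exp Physical Therapy Specialty/ or exp Occupational Therapy/ or (chiropract* or physiotherap* or osteopath*).tw.
6. 3 and 4 and 5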

The inclusion of information literacy (IL) instruments in our search terms for EBP measurement instruments followed the inclusion criteria of a previously published 2018 scoping review by Boruff and Harrison, which reported on assessment of knowledge and skills in IL for rehabilitation sciences students.16 In previous works using IL as a measure of EBP, the Association of College & Research Libraries (ACRL) Information Literacy competency standards (Determine, Access, Evaluate, Apply, and Ethics) are used as equivalent to the 5 steps of EBP.16,19 Although we agree that IL training fits very well into training for the first 3 steps of EBP, it is the opinion of the authors and others that IL does not include components vital to performing steps 4 and 5; therefore, IL instruments would not fully assess all 5 steps of EBP.2

The final search results were exported into EndNote X9.3.3 (Clarivate Analytics) reference manager, and duplicates were removed by the lead investigator (LD).

Two reviewers (LD, JC) independently screened articles in 2 phases using prepiloted Excel (Microsoft Corp) spreadsheets (Supplementary File 1). Phase 1 involved screening titles and abstracts and rating citations as irrelevant (IR) or possibly relevant (PR). Phase 1 sheets were compared across raters, and disagreements were discussed to reach consensus. The original ratings of the 2 raters were compared, and agreement statistics were reported (% agreement and kappa with 95% confidence intervals). Citations rated possibly relevant in phase 1 were reviewed in full text in phase 2 and rated as relevant (R) or irrelevant (IR) in an Excel spreadsheet. Raters met to discuss disagreements and, where necessary, consulted a third investigator (SHJ) to reach consensus. Again, agreement statistics were calculated and reported.

A manual citation search of the included studies’ references was performed (LD, JC) on the finalized set of included studies after phase 2. A forward citation search of the included studies was performed with a health sciences librarian (KM) using Google Scholar. Copies of the EBP measurement instruments assessed in the included studies were obtained from lead authors via email correspondence where possible. No quality assessment of the selected literature was performed, as this is a scoping review.20

Two reviewers (LD, JK) independently extracted data from eligible sources of evidence to build evidence tables in an iterative process. A third reviewer with expertise in biostatistics and research methodology (SHJ) was consulted. Data extracted included study location, health discipline and population, study design, instrument name and description, outcome domains of interest (EBP KSAB), and the measurement properties investigated. Additionally, data were collected on instrument administration and development: scoring points, feasibility (time required to complete and score), floor/ceiling effects, and item discrimination values.

The EBP instruments were compared in the CREATE summary table, which originates from the framework developed in the Sicily statement on instruments measuring EBP.4 Revisions of the tables involved consultation with the third reviewer (SHJ) for consistency and accuracy in reporting validity and reliability data.

The format used to report this review followed the items outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR).21

The results of the search are outlined in PRISMA format in Figure 1.22 All disagreements between the 2 reviewers were resolved by consensus, without consultation with the third investigator (SHJ), in phases 1 and 2. Percentage agreement between raters for phase 1 was 87.2% (95% confidence interval [CI]: 85.9%–88.4%), with kappa 0.18 (95% CI: 0.13–0.23). Percentage agreement between raters for phase 2 was 76.8% (95% CI: 65.1%–86.1%), with kappa 0.48 (95% CI: 0.26–0.70). There were no prespecified target levels of agreement, but 2 reviewers were used to reduce the chance of missed studies. The combination of high agreement with low kappa, particularly in phase 1 screening, reflects what is known as the kappa paradox, which occurs when the marginal distributions in the table are uneven: there were substantially more ratings of irrelevant (∼90%) than possibly relevant (∼10%).23
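
To make the kappa paradox concrete, the following sketch (in Python; the 2 × 2 counts are hypothetical, chosen only to mimic the marginal imbalance of phase 1, and are not our screening data) computes percent agreement and Cohen’s kappa from a cross-tabulation of 2 raters’ decisions:

# Hypothetical counts, not the actual phase 1 data.
# Rows = rater 1 (IR, PR); columns = rater 2 (IR, PR).
a, b, c, d = 850, 60, 55, 35   # a = both IR; d = both PR; b, c = disagreements
n = a + b + c + d
p_obs = (a + d) / n                                      # observed agreement
p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2   # chance agreement from marginals
kappa = (p_obs - p_exp) / (1 - p_exp)
print(f"agreement = {p_obs:.1%}, kappa = {kappa:.2f}")   # agreement = 88.5%, kappa = 0.32
# With ~90% of all ratings irrelevant, chance agreement is already ~0.83,
# so kappa stays low even though raw agreement is high: the kappa paradox.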

Figure 1. PRISMA flow chart.

The manual citation search and forward citation searches did not yield any new sources of evidence. An updated search, conducted to May 2023, did not yield any new results.

General characteristics of the included studies are presented in Table 1. No studies evaluated EBP measurement instruments with occupational therapy or osteopathy students. Four different EBP measurement instruments were evaluated: the Modified Fresno test (mFT),24,27 the Knowledge of Research Evidence Competencies (K-REC) instrument,25 the EBP student knowledge 40-item questionnaire (K40-Q),26 and the Information Literacy Self-efficacy survey and Knowledge tests (IL-S/K25).19 Original versions of 3 of the 4 instruments were obtained: the mFT,24,27 K-REC,25 and K40-Q.26 The authors of 1 study were unavailable; thus, the information reported here was retrieved from the published study.19

Table 1. Included Studies

Table 2 describes the different EBP instruments assessed in the 5 studies. The mFT and K-REC use a mixture of short-answer, multiple-choice, and true-or-false items; the K40-Q is multiple choice; and the IL-S/K25 uses a Likert scale. The domains of KSAB addressed by each instrument were identified. All 4 instruments addressed the Knowledge domain, only the mFT addressed the Skills domain, and no instruments assessed the domains of Attitudes or Behavior. The measurement properties studied for each instrument are also summarized.

Table 2. Evidence-Based Practice Measurement Instruments, Outcome Domains (KSAB), and Measurement Properties Assessed

Specific values and findings for the measurement properties assessed for each instrument are presented in Table 3. Definitions of reliability and validity properties were consistent with the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) definitions.28 Internal consistency was reported in 3 studies: Tilson, Leo et al, and Tepe et al.19,24,26 Intra- and inter-rater reliability was reported in 3 of 5 studies: Tilson, Lewis et al, and Miller et al.24,25,27 Test–retest reliability was reported by Lewis et al, Miller et al, and Tepe et al.19,25,27 Miller et al provided the reliability properties of standard error of measurement (SEM) and minimal detectable change for the mFT.27 To allow interpretation of these reliability properties, information on the scoring points of the instruments was collected where reported. Content validity was reported in 4 studies: Tilson, Lewis et al, Leo et al, and Tepe et al; construct validity was reported in 2 studies: Tilson and Lewis et al.19,24–26
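
For reference, the SEM and minimal detectable change reported by Miller et al are conventionally defined by the following standard formulas from the measurement literature (generic definitions, not values specific to the mFT):

\mathrm{SEM} = \mathrm{SD}\sqrt{1 - r}, \qquad \mathrm{MDC}_{95} = 1.96 \times \sqrt{2} \times \mathrm{SEM}

where SD is the standard deviation of the sample’s scores, r is a reliability coefficient (eg, a test–retest or inter-rater intraclass correlation coefficient), and MDC95 is the smallest change in score that exceeds measurement error with 95% confidence.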

Table 3. Summary of Measurement Properties Assessed

Item discrimination values are not reported here, as they are beyond the scope of this paper. Reported times for students to complete the different instruments ranged from 10 to 41 minutes, whereas the time required to score was reported only for the mFT (10–20 minutes).24–27 No floor or ceiling effects were identified.24–27

The instruments were used to populate the Classification Rubric for EBP Assessment Tools in Education (CREATE) framework presented in the 2011 Sicily statement on EBP assessment tools (Table 4).4 All 4 instruments assessed in our review occupy the Knowledge assessment category of the framework; the mFT and IL-S surveys also occupy the Skills and Self-efficacy categories, respectively. No instruments assess the categories of reaction to the educational experience, attitudes, behaviors, or benefits to patients. The instruments in this review all address the first 3 steps of EBP: Ask, Search, and Appraise. The mFT is the only instrument addressing step 4, Integrate. None of the instruments addressed the Evaluate step.

Table 4. Classification Rubric for EBP Assessment Tools in Education (CREATE framework; Tilson et al, 2011)4

In our scoping review, we identified 5 studies examining the measurement properties of 4 instruments used to evaluate EBP KSAB among MT students: the Modified Fresno test (mFT),24,27 the Knowledge of Research Evidence Competencies (K-REC) instrument,25 the EBP student knowledge 40-item questionnaire (K40-Q),26 and the Information Literacy Self-efficacy survey and Knowledge tests (IL-S/K25).19 Our review included instruments that had undergone some psychometric assessment; none of the included studies examined all the measurement properties we considered. To be suitable for use in education programs, instruments should have established and adequate measurement properties, including reliability (intra- and inter-rater reliability and test–retest reliability), validity (face, content, construct), and responsiveness to change. Feasibility of instrument use, including the time required to complete and to score the instrument, is also important to consider in an educational setting.7 The 4 instruments showed combinations of acceptable internal consistency, inter- and intra-rater reliability, test–retest reliability, content (face) validity, and construct validity. In education, knowledge change is evaluated by cognitive assessment, and all 4 instruments in this review used cognitive testing to assess knowledge. The included studies provided inadequate information to perform a qualitative analysis, which is consistent with the findings of Roberge-Dao et al (2022), whose umbrella review on measures of EBP states that “it is unclear to what extent this guidance (COSMIN) can be applied to the range of EBP measures.”29,30

The K40-Q and IL-S/K25 were developed for use with chiropractic students. The IL-S/K25 instrument was not obtained, and a forward citation search of the authors did not find any other relevant publications. The K40-Q and the IL-S/K25 demonstrated acceptable internal consistency and face validity, and the IL-S/K25 had good test–retest reliability. The K40-Q is a multiple-choice format that, requiring no calculations, was completed by students in 20–30 minutes.26 We infer that the low demand on human and financial resources could make administration and scoring more feasible, though this was not directly reported.

The K-REC was developed for use with allied health professions and to date has been tested primarily in physiotherapy students. It showed good inter- and intra-rater reliability, face validity, and construct validity. A second study using the K-REC, by Long et al, examined whether an EBP intervention led to change in a student population.31 They showed that K-REC scores changed significantly before and after 2 training courses, with a Cohen’s effect size of 1.13 for knowledge; effect sizes are considered “small” (<0.2), “medium” (0.2–0.8), and “large” (>0.8).31,32 This large effect size could be considered some evidence of responsiveness, although responsiveness was not what they set out to study. Development of the K-REC was based on the Fresno test and, like the Fresno test, it starts with a clinical scenario followed by 9 short-answer-style questions.25 The small number of multiple-choice, true/false, ranking, and short-answer questions corresponded to an average completion time of 10 minutes. The time required to score was not reported.
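
For readers unfamiliar with the statistic, Cohen’s effect size for a pre–post comparison is conventionally computed as (a generic formula, not a recalculation of Long et al’s data):

d = \frac{\bar{x}_{\mathrm{post}} - \bar{x}_{\mathrm{pre}}}{s_{\mathrm{pooled}}}

so the reported value of 1.13 indicates that mean post-training knowledge scores exceeded pre-training scores by more than 1 pooled standard deviation, comfortably above the 0.8 threshold for a “large” effect.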

The original Fresno test was included in the first systematic review of instruments for evaluating education in EBP, by Shaneyfelt et al in 2006.10 In that review, 43% of the instruments studied were tested on students in medicine, nursing, or dentistry.10 Since then, the Fresno test has been adapted for other professions, including occupational therapy, physiotherapy, and nursing.2,24,33 Within our included studies, the mFT showed good internal consistency, inter- and intra-rater reliability, SEM, confidence in a single measure, minimal detectable change, and face and construct validity. Test–retest reliability for the mFT reached only moderate levels, as reported by Miller et al, with the explanation that this is “…perhaps attributable to the subjects’ novice level or individual motivation.”27 Evaluating test–retest reliability with a wider range of EBP competencies may help address this. The mixed question types, some of which require calculation, make the mFT the longest of the 4 instruments to administer, at 60 minutes. Both studies on the mFT involved training of markers for up to 7 hours over several sessions.24,27 For use with HCPs in the field, the training of markers may be prohibitive; however, in an educational institution this may be more feasible, as markers are routinely trained for diverse types of test situations.

We found no studies investigating the responsiveness of the instruments tested in MT students, although the effect size for the K-REC reported by Long et al, noted above, may be seen to support responsiveness.31 Responsiveness of an instrument would show change in scores after the introduction of an EBP educational intervention and would be an important statistic for assessing curricular change in an academic setting. We suggest this as a critical area for future research.
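
One standard index of responsiveness from the measurement literature (offered here for orientation; it was not computed by any of the included studies) is the standardized response mean:

\mathrm{SRM} = \frac{\bar{d}}{s_d}

where \bar{d} is the mean pre–post change in scores following an educational intervention and s_d is the standard deviation of those change scores; as with Cohen’s d, larger values indicate greater sensitivity to change.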

The CREATE framework includes the 5 steps of EBP reported in the 2011 Sicily statement: Ask, Search, Appraise, Integrate, and Evaluate.4 The 4 instruments in our review were distributed in the CREATE framework across the first 3 steps of EBP: Ask, Search, and Appraise. Only 1 (the mFT) was explicitly modified to assess step 4, Integrate. This gap is consistent with previous systematic reviews of the substantially larger number of instruments developed and used with students in medicine and nursing.2,7,8,10,30 The first introductions of EBP teaching into HCP curricula largely occurred in the 1990s, with a focus on step 3, Appraise. Recent statements by leaders of EBP research call for the training of clinicians and clinical learners to focus on understanding the trustworthiness of evidence, evidence summaries, and the interpretation of treatment effects, rather than critical appraisal.1 Researchers involved in instrument development prior to the Sicily statement on EBP assessment tools recognized that “…failing to assess this knowledge (Integrate) sends an implicit message to learners that it is not important.”24 The Sicily statement and CREATE framework call for teaching and assessing all 5 steps of EBP.4,8 Future research should focus on filling this gap.

In the CREATE framework, there are 7 assessment categories.4 The instruments included in this review covered 3: Self-efficacy (IL-S/K25), Skills (mFT), and Knowledge, the latter addressed by all 4 instruments using cognitive testing.16,26 We found no instruments that assess the other categories of the framework: Reaction to the educational experience, Attitudes, Behavior, or Benefits to the patients. This is consistent with the findings of the 10 existing systematic reviews on EBP instruments and the 2011 Sicily statement.

The right-hand section of the CREATE framework provides additional considerations for assessing EBP in different contexts, for example, clinicians versus researchers. This also applies to assessing practitioners and students with different instruments according to their role. Most instruments reported in the literature have been created for practicing professionals learning EBP as continuing education rather than for students in MT programs.34,35 Schools of manual therapy (physiotherapy, occupational therapy, chiropractic, and osteopathy) would benefit from more research on the adaptability of existing instruments to a student population in order to assess curricular changes while implementing EBP.

The right side of the framework also suggests the possibility of interdisciplinary use of an instrument. The Sicily statement notes: “…evidence-based practitioners may share more attitudes in common with other evidence-based practitioners than with non-evidence-based colleagues from their own profession…”.4 It is common to see multiple HCPs assessed in a single study evaluating an EBP assessment instrument.11,30 Our study population comprised professions with similarities in scope of practice and conditions treated, and as a result of this scoping review, we believe the assessment tools could be used with both chiropractic and physiotherapy students. It has been suggested that using a combination of instruments developed in other fields (ie, behavioral and communication sciences) may help address all 5 steps.4,36 Future research assessing interdisciplinary use of outcome measures evaluating EBP is needed. The value of interprofessional learning in EBP has been recognized by several groups, and sharing both EBP curricular competencies and assessments could facilitate stronger interprofessional research on EBP in healthcare.3,4,35,37 Future research should also focus on this group of HCPs, as the most recent of our included studies was published in 2015, almost 10 years ago.

Strengths of this study include following the PRISMA guidance for conducting and reporting scoping reviews and including a variety of manual therapy professions for increased applicability. Our scoping review also had several limitations. Our exclusion criteria may have omitted outcome measures for which validity or reliability had not been reported, and the grey literature was not searched. We had a high level of disagreement in phase 1 screening, and we were not able to obtain the IL-S/K25 instrument. We may also have missed instruments by not including the search term “shared decision making” as part of step 4 of EBP.

The 4 identified instruments principally assessed the Knowledge domain of student EBP KSAB, with some adequate reliability and validity. All addressed steps 1 to 3 of the EBP model within the CREATE framework, and 1 instrument assessed step 4. More research is needed on instruments assessing steps 4 and 5; other disciplines, namely the behavioral and communication sciences, may provide instruments that address all 5 steps and all KSAB domains. The mFT has been well tested in the literature thus far and assesses 4 steps of EBP, while the K-REC appears to be a good second option for our population of interest. Further assessment of instrument responsiveness will be of interest when following students’ responses to curricular changes over time. Furthermore, instruments used with practitioners may also be adaptable to a student population, given adequate validity studies.

The authors would like to warmly thank Dr. Hainan Yu for his expertise, patience, and guidance in the creation, software, and formatting of this study.

FUNDING AND CONFLICTS OF INTEREST

No funding was provided. The authors have no conflicts of interest to declare.

References

1. Tikkinen KA, Guyatt GH. Understanding of research results, evidence summaries and their applicability—not critical appraisal—are core skills of medical curriculum. BMJ Evid Based Med. 2021;26(5):231-233.
2. Belita E, Squires JE, Yost J, Ganann R, Burnett T, Dobbins M. Measures of evidence-informed decision-making competence attributes: a psychometric systematic review. BMC Nurs. 2020;19:1-28.
3. Lehane E, Leahy-Warren P, O’Riordan C, et al. Evidence-based practice education for healthcare professions: an expert view. BMJ Evid Based Med. 2019;24:103-108.
4. Tilson JK, Kaplan SL, Harris JL, et al. Sicily statement on classification and development of evidence-based practice learning assessment tools. BMC Med Educ. 2011;11(1):1-10.
5. Council on Chiropractic Education (CCE). CCE accreditation standards: principles, processes & requirements for accreditation. 2020. Accessed January 25, 2024. https://www.cceusa.org/uploads/1/0/6/5/106500339/2020_cce__accreditation_standards__current_.pdf
6. Federation of Canadian Chiropractic (FCC). Entry-to-practice competency profile for chiropractors in Canada. 2018. Accessed January 25, 2024. https://chirofed.ca/wp-content/uploads/2021/03/entry-to-practice-competency-profile-DCP-Canada-Nov-9.pdf
7. Kumaravel B, Hearn JH, Jahangiri L, Pollard R, Stocker CJ, Nunan D. A systematic review and taxonomy of tools for evaluating evidence-based medicine teaching in medical education. Syst Rev. 2020;9(1):1-12.
8. Albarqouni L, Hoffmann T, Glasziou P. Evidence-based practice educational intervention studies: a systematic review of what is taught and how it is measured. BMC Med Educ. 2018;18(1):1-8.
9. Dawes M, Summerskill W, Glasziou P, et al. Sicily statement on evidence-based practice. BMC Med Educ. 2005;5:1-7.
10. Shaneyfelt T, Baum KD, Bell D, et al. Instruments for evaluating education in evidence-based practice: a systematic review. JAMA. 2006;296(9):1116-1127.
11. Rengerink KO, Zwolsman SE, Ubbink DT, Mol BW, Van Dijk N, Vermeulen H. Tools to assess evidence-based practice behaviour among healthcare professionals. BMJ Evid Based Med. 2013;18(4):129-138.
12. Leung K, Trevena L, Waters D. Systematic review of instruments for measuring nurses’ knowledge, skills and attitudes for evidence-based practice. J Adv Nurs. 2014;70(10):2181-2195.
13. Buchanan H, Siegfried N, Jelsma J. Survey instruments for knowledge, skills, attitudes and behaviour related to evidence-based practice in occupational therapy: a systematic review. Occup Ther Int. 2016;23(2):59-90.
14. Glegg SM, Holsti L. Measures of knowledge and skills for evidence-based practice: a systematic review. Can J Occup Ther. 2010;77(4):219-232.
15. Fernández-Domínguez JC, Sesé-Abad A, Morales-Asencio JM, Oliva-Pascual-Vaca A, Salinas-Bueno I, de Pedro-Gómez JE. Validity and reliability of instruments aimed at measuring evidence-based practice in physical therapy: a systematic review of the literature. J Eval Clin Pract. 2014;20(6):767-778.
16. Boruff JT, Harrison P. Assessment of knowledge and skills in information literacy instruction for rehabilitation sciences students: a scoping review. J Med Libr Assoc. 2018;106(1):15.
17. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS peer review of electronic search strategies: 2015 guideline statement. J Clin Epidemiol. 2016;75:40-46.
18. Aromataris E, Munn Z. Chapter 1: JBI systematic reviews. In: Aromataris E, Munn Z, eds. JBI Manual for Evidence Synthesis. JBI; 2020. https://synthesismanual.jbi.global
19. Tepe R, Tepe C. Development and psychometric evaluation of an information literacy self-efficacy survey and an information literacy knowledge test. J Chiropr Educ. 2015;29(1):11-15.
20. Armstrong R, Hall BJ, Doyle J, Waters E. ‘Scoping the scope’ of a Cochrane review. J Public Health. 2011;33(1):147-150.
21. Tricco AC, Lillie E, Zarin W, et al. PRISMA Extension for Scoping Reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169:467-473.
22. Page MJ, McKenzie JE, Bossuyt PM, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg. 2021;88:105906.
23. Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990;43(6):543-549.
24. Tilson JK. Validation of the modified Fresno test: assessing physical therapists’ evidence based practice knowledge and skills. BMC Med Educ. 2010;10:1-9.
25. Lewis LK, Williams MT, Olds TS. Development and psychometric testing of an instrument to evaluate cognitive skills of evidence based practice in student health professionals. BMC Med Educ. 2011;11:1-1.
26. Leo MC, Peterson D, Haas M, LeFebvre R, Bhalerao S. Development and psychometric evaluation of an evidence-based practice questionnaire for a chiropractic curriculum. J Manipulative Physiol Ther. 2012;35(9):692-700.
27. Miller AH, Cummings N, Tomlinson J. Measurement error and detectable change for the modified Fresno Test in first-year entry-level physical therapy students. J Allied Health. 2013;42(3):169-174.
28. Amsterdam Public Health. COSMIN (Consensus-based standards for the selection of health measurement instruments). Accessed September 26, 2023. https://www.cosmin.nl
29. Terwee C, Mokkink LB, Knol DL, Ostelo R, Bouter LM, de Vet HC. Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist. Qual Life Res. 2012;21:651-657.
30. Roberge-Dao J, Maggio LA, Zaccagnini M, et al. Quality, methods, and recommendations of systematic reviews on measures of evidence-based practice: an umbrella review. JBI Evid Synth. 2022;20(4):1004-1073.
31. Long K, McEvoy M, Lewis L, Wiles L, Williams M, Olds T. Entry-level evidence-based practice training in physiotherapy students: does it change knowledge, attitudes, and behaviours? A longitudinal study. Internet J Allied Health Sci Prac. 2011;9(3):1-11.
32. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. New Jersey: Lawrence Erlbaum Associates; 1988.
33. Atler K, Stephens J. Pilot use of the Adapted Fresno Test for evaluating evidence-based practice knowledge in occupational therapy students. Am J Occup Ther. 2020;74(4):7404205100p1-9.
34. Lehane E, Agreli H, O’Connor S, et al. Building capacity: getting evidence-based practice into healthcare professional curricula. BMJ Evid Based Med. 2021;26(5):246.
35. Albarqouni L, Hoffmann T, Straus S, et al. Core competencies in evidence-based practice for health professionals: consensus statement based on a systematic review and Delphi survey. JAMA Netw Open. 2018;1(2):e180281.
36. Hoffmann TC, Montori VM, Del Mar C. The connection between evidence-based medicine and shared decision making. JAMA. 2014;312(13):1295-1296.
37. Straus S, Richardson W, Glasziou P, Haynes R. Evidence-Based Medicine: How to Practice and Teach EBM. Edinburgh: Churchill Livingstone; 2005.

Author notes

Lara deGraauw (corresponding author) is an assistant professor in clinical education at the Canadian Memorial Chiropractic College (6100 Leslie Street, North York, M2H3J1, Canada; [email protected]). Jocelyn Cox is an assistant professor of clinical education at the Canadian Memorial Chiropractic College (6100 Leslie Street, North York, M2H3J1, Canada; [email protected]). Jaclyn Kissel is an assistant professor in clinical education at the Canadian Memorial Chiropractic College (6100 Leslie Street, North York, M2H3J1, Canada; [email protected]). Kent Murnaghan is the associate reference librarian at the Canadian Memorial Chiropractic College (6100 Leslie Street, North York, M2H3J1, Canada; [email protected]). Sheilah Hogg-Johnson is a professor and research methodologist at the Canadian Memorial Chiropractic College (6100 Leslie Street, North York, M2H3J1, Canada; [email protected])

This is an award-winning paper presented at the Chiropractic Educators Research Forum (CERF), June 4, 2023 conference Keeping It Real: Practice Relevant Education. The CERF Awards are funded in part by sponsorships from NCMIC, ChiroHealth USA, Clinical Compass, World Federation of Chiropractic, and Brighthall. The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement, by these sponsors.

Author Contributions: Concept development: LD. Design: LD, SHJ. Supervision: LD. Data collection/processing: JC, LD, JK. Analysis/interpretation: LD, SHJ, JK, JC. Literature search: KM. Writing: LD, SHJ, JK, JC. Critical review: JC.
