ABSTRACT
Background: Theoretical frameworks provide a lens to examine questions and interpret results; however, they are underutilized in medical education.
Objective: To systematically evaluate the use of theoretical frameworks in ophthalmic medical education and to present a theory of change model to guide educational initiatives.
Methods: Six electronic databases were searched for peer-reviewed, English-language studies published between 2016 and 2021 on ophthalmic educational initiatives employing a theoretical framework. Quality of studies was assessed using the Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) approach; risk of bias was evaluated using the Medical Education Research Study Quality Instrument (MERSQI) and the Accreditation Council for Graduate Medical Education (ACGME) guidelines for evaluation of assessment methods. Abstracted components of the included studies were used to develop a theory of change model.
Results: The literature search yielded 1661 studies: 666 were duplicates, 834 studies were excluded after abstract review, and 132 after full-text review; 29 of 151 eligible studies (19.2%) employing a theoretical framework were included. The theories used most frequently were the Dreyfus model of skill acquisition and Messick's contemporary validity framework. GRADE certainty ratings were predominantly "low" or "very low," the average MERSQI score was 10.04, and all assessment development studies received the lowest ACGME recommendation. The theory of change model outlined how educators can select, apply, and evaluate theory-based interventions.
Conclusions: Few ophthalmic medical education studies employed a theoretical framework, and their overall rigor was low as assessed by GRADE, MERSQI, and ACGME guidelines. A theory of change model can guide integration of theoretical frameworks into educational initiatives.
Introduction
A theory is a set of logically related propositions that describe relationships among concepts and help explain phenomena.1 In medical education, theories serve as the basis of theoretical frameworks that provide a lens to explore questions, design initiatives, evaluate outcomes, measure impact, and disseminate findings.2 Studies grounded in theory guide best practices and may serve as “clarification” studies that evoke depth of understanding and propel the field forward.2,3 For example, the Shannon and Weaver Model of Communication has been used to analyze opportunities for error in clinician handoffs,4 and Ericsson's deliberate practice theory has been used to design a simulation course to teach advanced life support skills.5
However, theoretical frameworks are underutilized in medical education research.3,6 Many educational initiatives, especially within subspecialty medical education, continue to be developed based on the traditional teacher-apprentice model.2,7 Lack of theory-based educational initiatives can preclude meaningful interpretation of study methods and results, as theories ground new scholarly work within current literature, allow application of findings to other settings, and provide a framework for adaptation of existing theories or development of new theories.3,6 Additionally, there is a dearth of studies on the prevalence of theoretical framework usage in subspecialty medical education.8
This article has 2 purposes: to systematically review the role of theoretical frameworks in subspecialty medical education, using ophthalmology as an example, and to use the findings to construct a theory of change model9 for guiding the development of theory-based graduate medical education curricula. Our primary questions are: What is the prevalence of theoretical framework use in ophthalmic medical education, and how can educators best integrate theory-based educational initiatives? Our findings may benefit educators by highlighting the state of theoretical framework use in subspecialty medical education and by extending these findings into a theory of change model to encourage the more widespread use of theoretical frameworks.
Methods
Search Strategy
A research librarian (L.P.) was consulted to develop a comprehensive search strategy. Following updated Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines10 on the conduct and reporting of systematic reviews, we searched 6 online databases (PubMed, Embase, Web of Science, CINAHL, PsycINFO, and ERIC) for articles published between January 1, 2016 and January 16, 2021 (Figure 1). We selected a 5-year period prior to the writing of this review to capture current practices in medical education. Our searches included database-specific thesaurus terms, such as medical subject headings (MeSH) and Emtree, as well as keywords relevant to ophthalmic education and theoretical frameworks (online supplementary data).
Selection Criteria
Eligibility criteria included peer-reviewed, English-language studies discussing educational initiatives in an ophthalmology setting that employed a theoretical framework at the onset of the initiative. We used the definition of theoretical framework by Varpio et al: “a logically developed and connected set of concepts and premises—developed from one or more theories—that a researcher creates to scaffold a study.”1 Educational initiatives included development of curricula, learning interventions, training strategies, and evaluation methods (eg, rubrics). We also included studies that referenced initiatives informed by a theoretical framework and studies that assessed learners with clinical evaluation methods, such as rubrics, employing a theoretical framework. We excluded reviews, studies that were not explicitly informed a priori by a theoretical framework, and studies that focused on populations other than medical students, ophthalmology trainees, or ophthalmologists. We also excluded studies that employed best practice models without describing a theoretical framework.
Eligible studies were de-duplicated in EndNote (Clarivate Analytics, Philadelphia, PA) using the method by Bramer et al11 and imported into the systematic review software Covidence (Melbourne, Victoria, Australia) for screening, full-text review, and data extraction. Two reviewers (S.L.S., Z.Z.Y.) conducted abstract screening and full-text review independently and in duplicate, with disagreements arbitrated by the senior author (P.B.G.).
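The Bramer et al method is a sequence of field-matching steps performed inside EndNote; the general idea can be sketched in a few lines of Python under simplified, hypothetical assumptions (exact DOI match, then a normalized title combined with publication year). The record fields and example data below are illustrative, not taken from the review's actual search results.

```python
# Simplified illustration of reference de-duplication (NOT the exact
# Bramer et al EndNote procedure): match first on DOI, then on a
# normalized title combined with publication year.
import re

def normalize_title(title):
    """Lowercase, strip punctuation, and collapse whitespace."""
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9 ]", "", title.lower())).strip()

def deduplicate(records):
    """records: list of dicts with optional 'doi', 'title', 'year' keys."""
    seen, unique = set(), []
    for rec in records:
        keys = []
        if rec.get("doi"):
            keys.append(("doi", rec["doi"].lower()))
        if rec.get("title"):
            keys.append(("ty", normalize_title(rec["title"]), rec.get("year")))
        if any(k in seen for k in keys):
            continue  # duplicate of an earlier record
        seen.update(keys)
        unique.append(rec)
    return unique

records = [
    {"doi": "10.1/abc", "title": "Simulation in Ophthalmology", "year": 2019},
    {"doi": "10.1/ABC", "title": "Simulation in ophthalmology.", "year": 2019},
    {"title": "Deliberate Practice and Cataract Surgery", "year": 2020},
]
print(len(deduplicate(records)))  # 2 unique records
```

In practice, dedicated reference managers also handle near-matches (minor title variants, page-range differences), which is why a validated procedure such as Bramer et al's is preferred over ad hoc scripting.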
Data Extraction
A data extraction template developed in Covidence was used to extract relevant information, including year of publication, location, study design, characteristics of study participants, sample size, educational initiatives, theoretical frameworks, underlying theories, outcomes, and results. Data extraction was completed independently and in duplicate by 2 reviewers (S.L.S., Z.Z.Y.), with disagreements arbitrated by the senior author (P.B.G.).
Quality Assessment
The Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) guidelines12 were used to evaluate the overall quality of the studies. The GRADE approach scores quality of evidence based on risk of bias, inconsistency, indirectness, imprecision, and publication bias; ratings can be upgraded for large effects, dose-response gradients, and plausible confounding that would strengthen confidence in the effect. The GRADEPro Guideline Development Tool (Evidence Prime, Ontario, Canada) was used to create a GRADE evidence profile for outcomes.
Comprehensive risk of bias (methodological quality) for experimental, quasi-experimental, and observational studies was measured using the Medical Education Research Study Quality Instrument (MERSQI).13 MERSQI scores medical education studies on 10 questions across 6 domains for a maximum of 18 points.
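The MERSQI tally described above can be illustrated with a short sketch. The six domains are study design, sampling, type of data, validity of the evaluation instrument, data analysis, and outcomes, each worth up to 3 points for a maximum of 18; the domain values in the example are hypothetical and are not taken from any included study.

```python
# Illustrative tally of a MERSQI score: 6 domains, each worth up to
# 3 points, for a maximum total of 18. Example values are hypothetical.
MAX_PER_DOMAIN = 3.0

def mersqi_total(domain_scores):
    """Sum the six MERSQI domain scores after basic validation."""
    expected = {"design", "sampling", "data_type",
                "validity", "analysis", "outcomes"}
    if set(domain_scores) != expected:
        raise ValueError("expected exactly the 6 MERSQI domains")
    for name, score in domain_scores.items():
        if not 0 <= score <= MAX_PER_DOMAIN:
            raise ValueError(f"{name} score out of range")
    return sum(domain_scores.values())

example = {"design": 2, "sampling": 1.5, "data_type": 3,
           "validity": 1, "analysis": 2, "outcomes": 1.5}
print(mersqi_total(example))  # 11.0 of a possible 18
```

Note that the real instrument assigns points at the item level (10 items across the 6 domains); this sketch only shows how domain subtotals combine into the 18-point maximum reported in the text.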
Comprehensive risk of bias (methodological quality) for studies that developed clinical assessment methods (eg, rubrics) was determined using guidelines developed by the Accreditation Council for Graduate Medical Education (ACGME).14 Unlike the GRADE standards which evaluate overall quality of studies based on outcomes, the ACGME guidelines are the only published method to date that evaluates quality of clinical assessment methods.15 Studies are assigned a letter grade ranging from A to C on 6 domains (reliability, validity, ease of use, resources required, ease of interpretation, and educational impact), an overall level of evidence, and an overall recommendation for uptake into a program's evaluation system. All components of quality assessment and risk of bias analysis were completed independently and in duplicate by 2 co-authors (S.L.S., Z.Z.Y.) with disagreements arbitrated by the senior author (P.B.G.). This study adhered to the tenets of the Declaration of Helsinki.
Developing a Theory of Change Model
Theory of change models are commonly used in large-scale projects16,17 to delineate the steps needed to achieve a set of long-term outcomes by backward mapping the preconditions, assumptions, rationale, and interventions required to reach them. We used our findings to construct a theory of change model9 depicting the steps and resources required for an educational system to develop theory-based initiatives.
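Backward mapping can be pictured as walking a graph from a long-term outcome down through its preconditions. The sketch below is a minimal, hypothetical illustration of that idea; the node names are invented for this example and do not come from any included study.

```python
# A minimal sketch of "backward mapping" in a theory of change model:
# start from a long-term outcome and walk backward through the
# preconditions that must hold before it. Node names are hypothetical.
preconditions = {
    "improved resident exam performance": ["theory-based intervention delivered"],
    "theory-based intervention delivered": ["educators trained in theory",
                                            "protected curricular time"],
    "educators trained in theory": ["administrative support"],
    "protected curricular time": ["administrative support"],
    "administrative support": [],
}

def backward_map(outcome, chain=None):
    """Return every precondition reachable from `outcome`, outcome first."""
    chain = chain if chain is not None else []
    if outcome not in chain:
        chain.append(outcome)
        for pre in preconditions.get(outcome, []):
            backward_map(pre, chain)
    return chain

steps = backward_map("improved resident exam performance")
for step in steps:
    print(step)
```

Reading the output top to bottom gives the outcome first and its upstream requirements after it, which mirrors how a theory of change workshop elicits preconditions starting from the desired end state.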
Results
Study Selection
A total of 1661 results were identified: 700 from PubMed and 961 from the other electronic databases (Figure 1). After excluding 666 duplicates, 995 potential studies were identified; 834 were excluded following title and abstract screening. We reviewed 161 articles in full. We excluded 10 articles that were not studies or were not ophthalmology specific. Of the remaining 151 articles discussing educational initiatives in ophthalmology research, 29 (19.2%) were explicitly informed by a theoretical framework and made up the final analytic sample.
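The screening counts above form a simple arithmetic chain, which can be verified directly:

```python
# Sanity check of the PRISMA flow counts reported above.
identified = 700 + 961          # PubMed + other electronic databases
assert identified == 1661
after_dedup = identified - 666  # duplicates removed
assert after_dedup == 995
full_text = after_dedup - 834   # excluded at title/abstract screening
assert full_text == 161
in_scope = full_text - 10       # not studies / not ophthalmology-specific
assert in_scope == 151
included = 29
print(f"{included / in_scope:.1%}")  # 19.2%
```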
Study Quality
According to the GRADE approach for rating certainty of outcomes, 10 outcomes were rated as “very low” certainty, 7 were rated as “low” certainty, 1 as “moderate” certainty, and 1 as “high” certainty; this is consistent with reported ratings for non-randomized studies.12 The online supplementary data contain a GRADE evidence table for the 7 most important outcomes, rated by 3 authors (S.L.S., Z.Z.Y., P.B.G.) using the GRADE guidelines.
The average MERSQI score for all applicable studies was 10.04 out of 18 points; by comparison, recently published mean MERSQI scores ranged from 9.05 to 12 in other surgical subspecialties.18,19 The online supplementary data list MERSQI scores for each applicable study.
For studies that developed clinical assessment methods, overall ACGME guideline scores were mixed for reliability, relatively high for validity, high for ease of use, very high for resources, relatively high for ease of interpretation, and unclear for educational impact. In the absence of large-scale studies or randomized trials, the overall recommendation for all applicable studies was judged as “Class 3” (provisional usage as a component of a program's evaluation system), the lowest rating. These scores are consistent with other reviews of clinical skill assessment methods.15,20 The online supplementary data list ACGME guideline ratings for clinical assessment development studies.
Characteristics of Included Studies and Interventions
The most common study types were prospective cohort21-29 and cross-sectional.30-36 Studies were most commonly conducted in the United States,27,37-42 Denmark,23,28,36,43 and the United Kingdom.25,26,44,45 Only 7 studies22,26,31,32,35,44,46 had sample sizes over 50 participants (range 52-311). Ten studies22,29,34,36,38,44,45,47-49 included attending ophthalmologists, 9 studies21,27,30,33,37,39-42 included residents, 6 studies24,26,31,32,35,46 included medical students, and 4 studies23,25,28,43 included a mixed selection of participants. Seven educational intervention studies were conducted in a hospital/clinic,21,22,29,32,37,39,40 6 in an in-person classroom,24,30,31,35,41,46 9 in simulation centers,23,25,28,33,36,40,42,44,45 1 in a virtual classroom,26 and 1 at an academic conference.34 Table 1 contains characteristics of the included studies.
Theories and Theoretical Frameworks
Outcomes and Results
Table 2 describes outcomes and study results. Most studies measured components of surgical performance and skill, such as intraoperative complications,21,22,29,37,40 surgery performance,22,27-29,36 aesthetic grade,42 surgery completion,22 and surgical efficiency.39,42 Several studies investigated learning outcomes, such as examination performance,24,31,43 learning readiness,35 learning style,30 and learning barriers.33 Nine studies examined components of validity.23,28,38,43-45,47-49 Studies also assessed subjective participant evaluation of the initiative; 8 studies24,26,27,32,34,41,42,46 surveyed participants, and 1 study surveyed surgical teams.25
Theory of Change Model
Our theory of change model (Figure 2) aimed to provide a framework to guide educators in developing, implementing, and evaluating theory-based educational interventions. We abstracted key components of ophthalmic educational initiatives based on theoretical frameworks. Additionally, we analyzed studies that described how theoretical framework usage informed study design to reveal preconditions for designing theory-based initiatives. Given the relative dearth of studies that transparently reported theoretical framework usage, we also referenced literature on theory of change models and the Center for Theory of Change's guidelines50 to further inform the development of our model.
Assumptions that must hold true for developing theory-based interventions successfully include flexibility of the curriculum to accommodate change, educators' willingness to learn about and employ theoretical frameworks, participants' willingness to trial curricular interventions, and administrators' willingness and ability to support educators and participants. Resources include educators, participants, administrators, material resources, educational resources, and data collection systems.
In our hypothetical example of an ophthalmology residency curriculum initiative, an area for curriculum improvement or change must first be identified by analyzing performance trends and summative or formative evaluations or conducting a needs assessment. For example, if 40% of first-year residents in an ophthalmology program scored poorly on their national training examination, educators may be asked to develop an educational intervention to rectify the low scores.
Prior to developing the intervention, educators may undergo training in educational theory to better select and apply theoretical frameworks. Administrators may set aside protected time for learning and make funding available to provide educators with learning resources such as webinars and reading lists. Educators are then better equipped to conduct a literature review and select appropriate theories to inform their intervention. Educators may, for example, select Vygotsky's collaborative learning theory,51 which suggests that peer-to-peer learning fosters deeper thinking. With administrative support, they may review available resources and plan to dedicate 30 minutes at the end of weekly didactics for resident-led examination practice. After each session, residents may be invited to fill out evaluations on their satisfaction with the initiative within 72 hours.
Intermediate outcomes include satisfaction with the initiative, improved standardized evaluation metrics (eg, proportion of residents who score well on the national examination), and an increased number of learning or graduation competencies fulfilled. Long-term outcomes for learners include an improved knowledge base and better performance as residents and practicing physicians.52 Long-term outcomes for educators include increased use of conceptual frameworks in educational initiatives, which may translate to increased scholarly output and funding.3 Ultimately, achieving these outcomes will support the goal of increasing theory-based educational interventions throughout an educational system.
Finally, the initiative development process is iterative, and performance data may be routinely reviewed to inform future modifications. For example, residents may prefer more timed examination simulations; educators may then reexamine the initiative using another theoretical framework.53
Discussion
The primary purpose of this systematic review was to evaluate theoretical frameworks in subspecialty medical education, using ophthalmology as an example. We found that less than 20% of ophthalmic medical education studies published between 2016 and 2021 were informed by a theoretical framework. When included studies used frameworks, they often named the theory without describing how it framed the research question, informed the methods, or elucidated the results.6 Several studies incorporated previously designed theory-based courses or evaluation methods into their medical education initiative but did not further describe the theoretical framework.
Few studies have investigated the prevalence of conceptual frameworks in medicine and surgery. Schwartz et al reviewed the use of conceptual frameworks in the study of duty hours regulations for residents and found that several made contradictory predictions.54 Davis et al reviewed the conceptual underpinnings of pediatrics quality-of-life instruments and found that only 7.9% (3 of 38) were based in theory.55
Our findings are consistent with other studies investigating use of theoretical frameworks in medical education. A review by Bajpai et al on the use of learning theories in digital health professions education reported that 33.4% (81 of 242) were informed by theory.56 Similarly, a review by Hauer et al on behavior change curricula for medical trainees demonstrated that 35.7% (39 of 109) used a theoretical framework.57 In addition, a review by van Gaalen et al on gamification in health professions education found that only 15.9% (7 of 44) of studies employed a theoretical framework,58 and a review by Leslie et al on faculty development programs in medical education found that only 18.2% (4 of 22) of studies employed a theoretical framework.59 Of note, some studies employed different definitions of a theory-based approach,56,58 and others did not define theory or theoretical framework.57,59 These discrepancies obscure accurate prevalence data and highlight the need to adhere to a standardized set of definitions.1,2,53,60-62
The secondary purpose of this systematic review was to use our findings to construct a theory of change model to guide educators in creating theory-based initiatives. Given the complexity and heterogeneity of medical education systems across institutions, theory of change models are excellent tools to map large-scale initiatives, especially those with multiple outcomes.50 Our theory of change model illustrated the comprehensive process of selecting, integrating, and evaluating theory-based interventions, including the resources required and the underlying assumptions.
There are several limitations of this study. Our definition of a theoretical framework may differ from that of other studies; there is a need for researchers to adopt standardized definitions of the following terms: theory, theoretical framework, and conceptual framework.1 We used the definitions by Varpio et al, as they provide the most current model of these terms, informed by literature review, for health professions education research.1 We also limited our review to studies published between 2016 and 2021 in order to focus our search on current work in ophthalmic medical education6; however, this may have masked trends over time. Due to the heterogeneity in study initiatives and outcomes, we were unable to evaluate the efficacy of theoretical framework use and its effects on learner performance. Additionally, given the relatively poor overall quality of included studies and the heterogeneity in reporting use of theoretical frameworks, we were unable to assess the impact of theoretical frameworks on ophthalmic medical education. Moreover, it is possible that effective theoretical frameworks employed in certain study settings (eg, a classroom) may not translate to real-world practical settings (eg, an operating room).
We were also unable to fully determine applicability of many domains of the ACGME guidelines due to ambiguity in their wording; however, reviewers remained consistent in their application of this tool. Other studies have reported similar challenges in evaluating clinical assessment methods, such as evaluation tools for surgical skills, using the ACGME guidelines.15,20 Further investigation is needed into the efficacy of assessment and evaluation tools for surgical subspecialties. Finally, it is possible that some authors used theoretical frameworks without reporting them or without being consciously aware of using them53; for a study to be included, authors must have reported usage of a theoretical framework or employed a named intervention or methodology based in theory.
Educators interested in designing curricular interventions or longitudinal programs based on theoretical frameworks may benefit from examining questions and results through several “lenses” of theoretical frameworks and using standardized evaluation and assessment systems. In addition, medical educators may consider testing interventions in more than one study setting or institution. Future studies can be improved by transparently reporting theoretical frameworks, including the rationale for selecting a particular framework and how it informed study design and setting.
Conclusions
In summary, theoretical frameworks are underutilized in ophthalmic medical education research, and many studies that employ them do not do so transparently; in the few studies that integrated a theoretical framework, overall study rigor was low as assessed by GRADE, MERSQI, and ACGME guidelines. A theory of change model may guide educators in selecting, applying, and evaluating theory-based initiatives.
References
Author notes
Editor's Note: The online version of this article contains further data from the study.
Funding: The authors report no external funding source for this study.
Competing Interests
Conflict of interest: The authors declare they have no competing interests.
Disclaimer: The views expressed here are those of the authors and do not necessarily reflect the positions or policies of the US Department of Veterans Affairs or the US Government.