Background Although Clinical Competency Committees (CCCs) were introduced to facilitate the goals of competency-based medical education, implementation has been variable, and we do not know whether and how these committees have affected programs and assessment in graduate medical education (GME).
Objective To explore the roles CCCs fulfill in GME and their effect on trainees, faculty, and programs.
Methods We conducted a narrative review of CCC primary research with the following inclusion criteria: articles had to report original research, focus on GME and specifically study CCCs, and be published in English-language journals from January 2013 to November 2022.
Results The main results are as follows: (1) The primary role of the CCC (decision-making on trainee progress) is mostly described in “snapshots” (ie, focusing on a single aspect of this role at a single point in time); (2) CCCs are taking on secondary roles, some of which were anticipated (eg, remediation, feedback) whereas others were “unanticipated” (eg, use of CCC data to validate trainee self-assessment, predict trainee performance in other settings such as certifying examinations, investigate gender bias in assessment); and (3) Articles briefly mentioned short-term outcomes of CCCs at the level of the trainees, faculty, and programs. However, most studies described interventions to aid CCC work and did not specifically aim at investigating short-term (eg, curriculum changes) or long-term outcomes (eg, improved patient outcomes).
Conclusions CCCs fulfill a range of roles in assessment beyond their intended purpose. A more systematic approach is needed to investigate the outcomes of CCC implementation on GME.
Introduction
The primary role of Clinical Competency Committees (CCCs) in graduate medical education (GME) is to render judgments about trainees’ performance against a set of criteria and standards (eg, competency Milestones in the United States) and to make recommendations to program directors about advancement.1 In addition to this primary role, there is growing recognition of evolving secondary roles that these committees can play in residency training.2 Although not mandated, and rarely made explicit, these secondary roles result from CCCs being privy to “seeing” and synthesizing all their trainees’ assessment data and may include remediating trainees, critiquing the quantity and quality of assessment data, providing faculty development, identifying curricular gaps and redundancies, and serving a quality improvement function for the assessment system.2 To date, however, there has been no synthesis of the existing literature to describe the roles actually fulfilled by CCCs and their effect on GME and assessment systems in particular. Without such exploration, we risk not providing CCC members with the time allotment, faculty development, and administrative support needed, thus limiting their ability to ensure that trainees are ready for graduation and capable of providing safe and effective care. Therefore, the purpose of this narrative review is to provide an overview of the published research on the roles CCCs fulfill in GME and their effect on residency training and assessment.
Methods
We chose to conduct a narrative review because this approach is particularly useful for exploring topics that are complex and under-studied, and it allows the inclusion of a wide variety of studies.3 Our approach was informed by the recommendations published by Sukhera, which offer guidance on key decisions in the review process.3 According to Sukhera, narrative reviews are (1) “flexible,” in that the initial scope may change through the review process; (2) well suited for topics that are “under-researched”; (3) suited to topics that “require a meaningful synthesis of research evidence that may be complex or broad”; and (4) suited to topics that “require detailed, nuanced description and interpretation.”3 Our study met each of these criteria. First, although we started the review broadly (eg, including perspectives and policy documents), we realized that we needed to focus on research articles to gather data about how CCCs are actually being implemented and studied. Second, our focus on the roles of CCCs and their associated outcomes is indeed an “under-researched” area in GME. Third, the CCC literature to date is complex (ie, represented by varying implementation approaches).4 Fourth, we provided a “nuanced”3 analysis of the descriptions offered in the literature of CCCs, as detailed below.
Search Strategy
To our knowledge, the first mandate for CCCs was put forth in January 2013, when the Next Accreditation System went into effect. Thus, we searched the PubMed, Scopus, ERIC, and Google Scholar databases from January 1, 2013 to November 1, 2022. During the study, the search was updated twice to account for a delay in our process caused by the COVID-19 pandemic and for newly emerging CCC literature. We used “Clinical Competency Committee” as a search term to identify publications in which the CCC was mentioned in the title and/or abstract. The search strategy was conducted under the guidance of a librarian at the lead author’s institution.
Publication Selection Process
The Figure summarizes the search process. One author (A.E.) performed a preliminary review of the search results. One hundred seventeen articles were initially identified and, after removal of duplicates, 111 were selected for title and abstract review. Two researchers (A.E. and either M.G. or S.H.) independently screened 60 abstracts against the inclusion criteria and resolved discrepancies through discussion. These discussions proved critical for developing definitions of key concepts in our work, such as “secondary roles” and “impacts.” A.E. subsequently reviewed all remaining abstracts. Following full-text review of selected abstracts, 84 articles were ultimately included for data extraction.5-88
Flowchart of the Literature Search and Study Selection Process
Abbreviation: CCC, Clinical Competency Committee.
Inclusion/Exclusion Criteria
Inclusion criteria were the following: articles had to report original research, focus on GME and specifically study CCCs (ie, the term CCC was used in the abstract and/or title), and be published in English-language journals from January 2013 to November 2022. Articles were excluded if they focused on other decision-making groups such as education committees, were written as perspectives, or were situated in undergraduate medical education. The authors discussed these criteria at length, opting to focus the review on GME only, as the process for using CCCs in undergraduate medical education is still evolving.
Data Extraction
A data extraction form was created in Microsoft Excel. The categories (codes) for data extraction included demographic program information, description of the primary and secondary role(s) of CCCs, and the effect of these roles at various levels of the assessment system and context (eg, on the trainee, training program, specialty). Using the preliminary coding framework, 3 authors (A.E., M.G., S.H.) independently reviewed 14 randomly selected articles. Preliminary findings were discussed in the entire research team (A.E., M.G., S.H., E.H.) to resolve any discrepancies and to further refine the coding framework. Using the final coding framework, A.E. subsequently reviewed the remaining articles, while the other authors (M.G., S.H., E.H.) “spot checked” the extracted data for accuracy and completeness. Any articles co-authored by one or more members of the research team were reviewed independently by a colleague (Lisa Conforti or Raghdah Al-Bualy) with content expertise in CCCs. Table 1 presents a taxonomy of themes capturing both primary and secondary CCC roles.
We were aware of the posited secondary roles mentioned in the CCC guidebook,2 but we did not know whether these would capture the breadth of what was published. Thus, in our taxonomy, we differentiated “anticipated” secondary roles (ie, those mentioned as theoretically possible in the CCC guidebook2) from those we had not already seen described (“unanticipated”). We used the competency-based medical education (CBME) logic model developed by Van Melle et al to define the effects of CCCs on GME training.89 A logic model is a visual representation of how components of a process interact to produce short- and long-term outcomes.89 Thus, changes in the assessment system (eg, new rotations, assessment tools) or in the actions and attitudes of stakeholders resulting from CCCs fulfilling primary and secondary roles were identified as effects or outcomes. We further characterized outcomes as “short-term” (changes measurable within 1 or 2 CCC cycles, eg, residents using CCC feedback to create individualized learning plans) or “long-term” (changes that take much longer to identify and measure, eg, linking trainee learning to improved patient outcomes or determining whether CCC ratings influence revisions to the Milestones).
Reflexivity
We had ongoing discussions to determine how our a priori definitions and assumptions of CCC roles and experiential knowledge might influence our interpretation of the data. All authors have published under the broad heading of assessment. A.E. works for the Accreditation Council for Graduate Medical Education (ACGME) and has contributed to the CCC literature. E.H. worked for the ACGME at the time of the study and has contributed to the Milestones and CCC literature. M.G. has been involved in design of assessments in which decision-making relies on CCCs. She also has experiential knowledge of how CCCs work and how they can affect assessment quality. S.H. has been involved in the design and execution of programs of assessment and curricula of undergraduate medical education in which decision-making is done by a committee, and she has experiential knowledge on quality assurance of assessment programs.
Results
Table 2 presents an overview of the demographic characteristics of the studies included in our review. Our evaluation of study quality found that most were original research published in peer-reviewed journals, focusing on specific elements of CCC implementation and decision-making. The majority were single-institution, single-specialty studies conducted in the United States involving CCC review of trainees at the residency level. See Table 1 for a brief description of each included study.
CCC Primary Role and Potential Associated Effect
The primary role of the CCC is to assess trainees and make recommendations to program directors regarding developmental progress.1 We found that most articles did not describe the primary role in its entirety (ie, from review of the assessment data to making a judgment and communicating it to programs and trainees). Rather, the focus was on various aspects such as trying to make sense of difficult data,5 the use of undocumented assessment data,6 role conceptualization,7 or implementation of new assessment tools.8 See Table 1 for themes and examples. Some articles focused on the process of Milestones ratings and patterns in performance scoring.9-16 An ethnography of CCCs furthermore revealed marked variability in the approaches CCCs use to conduct their work, such as the strategies used in decision-making and the types of assessment data used.4 We also found little inquiry into how decisions about trainee performance are actually made. The CCC process (before, during, and after meetings) as described by Al-Bualy and colleagues outlines multiple steps in CCC work.17 Our review reveals that very little is still known about how these steps are actually performed.
Although we did not find any studies that explicitly aimed to investigate the effect of CCC implementation on programs and assessment systems, there were multiple examples of “CCC-inspired changes” aimed to facilitate CCC work such as instituting new (1) curricula,18 (2) assessments,19-23 (3) ways of organizing the assessment data,24-29 and (4) assessors.30,31 Beyond these CCC-inspired changes, there were also some mentions of “short-term” outcomes involving the trainees, program, or CCC itself. For instance, at the level of the trainees, one program implemented a web-based direct observation assessment tool with 38% of residents subsequently reporting more immediate feedback on their operative skills.19 At the level of the program, Mamtani et al describe development and implementation of a curriculum and instructional strategies to support assessment of Patient Safety and Quality Improvement Milestones.18 Similarly, Pack and colleagues describe CCCs providing training programs with input for development of new learning experiences, based on what CCCs count as evidence for their decision-making and to fill gaps within the assessment system.32
Lastly, there were a number of instances of short-term outcomes on the CCC itself. In some cases, authors were able to report a resultant change to their process. For instance, the score cards used in one general surgery program to consolidate all relevant assessment data onto a single page reduced the length of the CCC meeting from 126 minutes to 106 minutes, while the time available to actually discuss each resident increased from 1 minute prior to using the score cards to 5 minutes after implementation.28 Nabors and colleagues described reduced deliberation time during CCC meetings following a change in the assessment process (ie, CCC members assigning their ratings prior to the CCC meeting).33 Other studies reported the effect of including allied health professionals as assessors on the robustness of CCC decision-making. For example, Bedy and colleagues surveyed emergency medicine CCC members and found that they perceived pharmacists’ assessments as useful to their judgment of resident performance.31 In another study, nurses’ end-of-rotation assessments of emergency medicine residents did not correlate with final CCC proficiency levels; however, their descriptive comments were thought to be “invaluable” for identifying areas for improvement.30 Pack and colleagues found that through CCCs’ struggle with problematic evidence, members unpacked their assumptions about the source of the data used and reflected on how they perceive certain types of data as useful evidence.5 All of these examples suggest the potential effect of CCCs at multiple levels in residency training and assessment, not only on trainees, curricula, and assessment programs but also on the CCC members themselves, for example through development of shared mental models around decision-making processes, including the usefulness of assessment data.
CCC Secondary Roles and Potential Associated Effect
We found only one study that was specifically designed to explore CCC secondary roles.32 In this study by Pack and colleagues, CCCs deliberately contributed to ongoing evaluation of the curriculum and assessment in residency training and provided meaningful feedback to program directors about program limitations, thereby documenting an impact on CCC members, other faculty, and the program.32 As the vast majority of articles included in our review were not designed to explore CCC secondary roles, we extrapolated the secondary role of the CCC based on the narrative provided by the authors. For instance, some CCCs went beyond identifying struggling residents to assume other responsibilities in the remediation process. Warburton and colleagues describe their CCC not only identifying struggling learners but also referring them to the Early Intervention Remediation Committee.34 In another setting, CCC members helped create individualized learning plans and monitored residents’ progress.35 In these descriptions of secondary roles, the associated outcomes were not reported.
In addition to these “anticipated” secondary roles, we also found several secondary roles that had not previously been ascribed to CCCs and denoted these as “unanticipated” secondary roles (see Table 1 for themes and examples). The very presence of a CCC, and the data generated by these committees, allowed for “CCC data facilitated research” (ie, use of CCC data to investigate a variety of assessment-related topics). Several studies, for example, investigated the correlation between resident self-assessment and CCC scores.36-42 In various specialties and settings, CCC data were used to check for gender-based differences in Milestones ratings43-47 (ie, for evidence of gender bias in assessment). In those studies that did not find any significant gender-based differences,43-46 some authors hypothesized that CCCs may serve to neutralize biases in the assessments they receive.46 CCC data were also used to investigate correlations between Milestones ratings and performance in other settings, such as certifying or in-training examinations.48,49 Lastly, CCC data were used to support the evaluation of curricular activities or changes. For instance, Frey-Vogel and colleagues compared pediatric interns’ performance scores in a simulated setting with CCC performance ratings.50 In all of the examples mentioned above, CCC data were seen as an instrumental resource for investigating a variety of assessment-related questions that arise in GME.
Discussion
To our knowledge, this is the first narrative review of the CCC literature focusing on CCC roles and the effect of these committees on residency training and assessment. Our work revealed 3 key findings: (1) most CCC studies address components of the primary role but do not explore the entire process; (2) CCCs fulfill secondary roles that we did not anticipate; and (3) despite the many short-term “CCC-inspired changes,” there is a lack of documented long-term outcomes.
By revealing the lack of a comprehensive approach to studying their primary and secondary roles, our review illustrates the complex nature of the work of CCCs. Not only are we still trying to understand how CCCs accomplish their intended role, but our findings also suggest their actual role includes a broad range of secondary roles, some of which are unanticipated. Thus, our review illuminates the breadth of roles CCCs play in GME assessment systems. With a clearer understanding of the roles these committees actually play, programs are in a better position to advocate for resources to support CCC efforts. A more systematic approach to investigating CCC processes may not only assist CCCs in developing their expertise but also provide meaningful input into how assessment systems can be improved, ensuring fair and transparent assessment of learning and enhancing assessment for learning (feedback and learning opportunities for trainees), thereby contributing to the overarching goals of CBME.
CCC performance ratings have been used as a source of validity evidence in the context of self-assessments, exploration of bias in assessment decisions, and prediction of performance in other settings—an unanticipated role. This seems to suggest that CCC ratings are considered to be a “gold standard.” Although findings from these studies may help critically analyze and improve assessment practices in specific settings, more work is needed to justify this approach. Further research into both anticipated and unanticipated secondary CCC roles can help training programs leverage the data they are already collecting to evaluate their practices and make improvements to their education and assessment systems. Therefore, we want to emphasize that even though secondary CCC roles (both anticipated and unanticipated) were not the original intent, they should not be minimized. More work is needed to determine the full scope of these roles and their effects on assessment systems, and we call for a collaborative national research agenda centered on studying specific CCC roles and their associated short- and long-term outcomes.
We also need to determine how implementation of CCCs contributes to achievement of intended outcomes in CBME (ie, graduating trainees who can provide safe and effective care). Although we found multiple examples of “CCC-inspired changes” or “outputs” such as new assessment tools, learning experiences, and data organization platforms, we did not find any studies that focused on investigation of long-term outcomes (eg, a change in feedback culture). To better understand if and how implementation of CCCs contributes to CBME goals, we therefore recommend that future studies consider using the logic model to study CCC impact, both short- and long-term, and intended as well as unintended outcomes.
There are a number of limitations to this work. Our review is limited to English-language studies. Since studying the roles and associated outcomes of CCCs was not the explicit aim of most of the articles we reviewed, it is possible that we made inferences without fully understanding the contexts described. Although our taxonomy can serve as a first step for conceptualizing CCC processes and effects, it will need to be revisited and revised as future work is performed. Most of the studies were conducted in the United States, and our findings and conclusions may not be generalizable to other contexts. Thus, future studies should investigate CCC roles and associated outcomes in other settings.
Conclusions
Although this narrative review identified broader CCC roles than anticipated, there were significant gaps in the literature regarding descriptions of these roles and their associated effects. The lack of articles specifically focused on investigating the outcomes of implementing CCCs in GME is a key finding and a launching point for future work.
The authors would like to thank Lisa Conforti, Raghdah Al-Bualy, Laura Edgar, and Lorelei Lingard.
References
Editor’s Note
The online supplementary data contains a visual abstract.
Author Notes
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.
Disclaimer: Andem Ekpenyong, MD, MHPE, works part-time for the Accreditation Council for Graduate Medical Education (ACGME), and at the time of writing, Eric S. Holmboe, MD, worked full-time for the ACGME.