Background
The Clinical Competency Committee (CCC) provides accountability to the general public that physicians completing a training program have achieved competence. The CCC processes and features that best identify resident outcomes along a developmental spectrum are not well described.
Objective
This study sought to describe features associated with effective and efficient CCC performance.
Methods
The study was conducted as part of the 2022 Council of Academic Family Medicine Educational Research Alliance survey of family medicine residency program directors. The survey assessed CCC methods, policies, faculty development, structure, and overall CCC time required. The outcomes were identification of residents along a spectrum of development, from failing to exceeding expectations. Ordinal logistic regressions were used to explore the relationship between CCC characteristics and CCC outcomes.
Results
The response rate was 43.3% (291 of 672). Eighty-nine percent (258 of 291) of program directors reported their CCC is successful in identifying residents not meeting expectations; 69.3% (201 of 290) agree their CCC identifies residents who are exceeding expectations. Programs with written policies for synthesizing data (OR=2.53; 95% CI 1.22-5.22; P=.012) and written policies for resident feedback (OR=19.91; 95% CI 3.72-106.44; P<.001) were more likely to report successfully identifying residents below expectations. Programs whose members spent fewer than 3 hours per 6-month interval on CCC meetings were less likely to report being able to identify failing residents (OR=0.37; 95% CI 0.19-0.72; P=.004).
Conclusions
This survey of family medicine program directors suggests that formal policies, faculty development, and adequate time for CCC faculty are associated with an effective CCC, especially if goals beyond “identifying failure” are desired.
Introduction
The Clinical Competency Committee (CCC) is an Accreditation Council for Graduate Medical Education (ACGME) requirement for accreditation and serves a complex set of functions at the system, program, faculty, and resident levels.1,2 It is expected to identify failing, struggling, and advanced residents to tailor educational opportunities to meet their needs, and to synthesize datapoints to assign milestones.3
A 2015 study found that most CCCs used a problem identification model to complete their work, with fewer using a developmental model (Table 1).4 The problem identification model assumed residents would become competent during training and focused on identifying struggling residents. The developmental model focused on identifying each resident’s stage of competence. In this model, residents were assumed to have a range of skills, and CCC processes were better defined, more transparent, and focused on feedback to residents. The extent to which this model has been incorporated into graduate medical education is not well studied.
Our study sought to explore whether family medicine program directors’ report of features that are consistent with a developmental approach to the CCC correlated with increased identification of residents along a spectrum of development. Specifically, we hypothesized that CCCs with specific policies and procedures for the acquisition and synthesis of data, as well as standards for faculty development of CCC members, may correlate with identification of residents who are struggling but not failing, as well as residents who are excelling.
KEY POINTS
Clinical Competency Committees (CCCs) have a high-stakes role in ensuring residents graduate as safe physicians; however, given the current lack of consistent best practices, they risk inefficiency and an overly narrow focus on identifying residents who are struggling.
This survey of family medicine program directors found that certain features, such as the presence of formal policies, were associated with an improved ability to identify both struggling residents and those exceeding expectations.
Program directors interested in improving the efficiency and nuance of their CCC outcomes could consider adding structured faculty development, formal policies, and adequate time for their CCC members.
Methods
Participants
Between April 13 and May 16, 2022, family medicine program directors (N=672) who had not previously opted out were invited to participate in the online Council of Academic Family Medicine Educational Research Alliance (CERA) program director survey.5
Survey Development
Items were developed by members of the research team after a literature review (see online supplementary data for survey). The CERA steering committee independently vetted the questions based on evidence presented, and a sample of family medicine educators pretested the questions.
The items were developed to assess factors associated with the program director’s determination of their CCC’s ability to identify residents who are struggling, excelling, or at risk of failing. Items asked about data management, formal and informal policies, faculty development for CCC members, structure, and time.
Analysis
Survey items were summarized using descriptive statistics. Ordinal logistic regressions were used to explore the relationship between various CCC characteristics and CCC outcomes. These models estimate proportional odds ratios (ORs) for each predictor (CCC characteristics) when shifting to higher levels of CCC efficiency/outcomes. All statistical analyses were performed using SPSS for Windows Version 28 (IBM Corp, Armonk, NY). Statistical significance was assessed using an alpha level of .05.
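As an illustrative sketch only (not the authors’ analysis code), the proportional odds ratios reported below come from back-transforming ordinal logistic regression coefficients, which are estimated on the log-odds scale: OR = exp(beta), with the confidence interval computed on the log scale and then exponentiated. The coefficient and standard error values here are hypothetical, chosen to roughly reproduce one reported result (OR=2.53; 95% CI 1.22-5.22).

```python
import math

def odds_ratio(beta: float) -> float:
    """Proportional odds ratio for a one-unit increase in a predictor:
    the ordinal logistic coefficient exponentiated off the log-odds scale."""
    return math.exp(beta)

def odds_ratio_ci(beta: float, se: float, z: float = 1.96) -> tuple[float, float]:
    """95% CI for the OR: compute the Wald interval on the log-odds
    scale, then back-transform each bound with exp()."""
    return math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical beta/SE, picked so exp(beta) matches the reported OR of 2.53
beta = math.log(2.53)
se = 0.37
print(round(odds_ratio(beta), 2))  # 2.53
lo, hi = odds_ratio_ci(beta, se)
print(round(lo, 2), round(hi, 2))
```

Because the interval is symmetric on the log scale, the back-transformed CI is asymmetric around the OR, which is why reported intervals such as 1.22-5.22 are wider above the point estimate than below it.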
The study was approved by the Institutional Review Board of the American Academy of Family Physicians.
Results
The overall response rate was 44.3% (298 of 672); 43.3% (291 of 672) went on to answer the first item about their CCC. Table 2 provides demographic and program characteristic data for respondents.
Eighty-nine percent of respondents (258 of 291) strongly agree/agree that their CCC is successful at identifying residents not meeting expectations. A similar proportion strongly agree/agree that their CCC is able to identify residents who are below expectations but not failing (88.7%, 258 of 291). Fewer (69.1%, 201 of 291) strongly agree/agree that their CCC identifies residents who are exceeding expectations and may benefit from individualized education to achieve their full potential (Table 3). The full analysis is available in the online supplementary data.
Identifying Failing Residents
Programs were more likely to report that their CCC successfully identifies failing residents when all CCC members receive formal faculty development about the CCC (OR=3.62; 95% CI 1.02-12.90; P=.047). CCCs whose members spent fewer than 3 hours per 6-month milestone reporting period on CCC meetings were less likely to report being able to identify failing residents (OR=0.37; 95% CI 0.19-0.72; P=.004).
Identifying Residents Requiring Remediation
Programs with a written policy describing a standardized way for residents to receive feedback generated from the CCC were 14 times more likely to successfully identify residents who require remediation but are not failing (OR=14.14; 95% CI 2.64-75.63; P=.002). Use of assessment data from multiple sources, compared with reliance primarily on a single source, was also associated with greater success (OR=4.3; 95% CI 1.52-12.21; P=.006).
Identifying Residents Exceeding Expectations
Programs with a formal written policy or procedure for how to include different kinds of data were 5.3 times more likely to report successfully identifying residents exceeding expectations (OR=5.34; 95% CI 2.62-10.90; P<.001). Presence of a formal policy for residents to receive feedback was also associated with greater success in identifying residents exceeding expectations (OR=12.65; 95% CI 2.42-66.16; P=.003).
Discussion
A model that allows for placement of residents along a spectrum, rather than a binary “failing/not failing” distinction, is more compatible with the competency-based milestone approach to resident development. This competency-based developmental model4 is not only more compatible with most program curricula, it is also more closely adherent to ACGME requirements.4,6
Our study found that written CCC policies correlated with better CCC operations and better resident feedback. Formal policies may provide accountability and clear expectations for communication.
An effective CCC requires substantial faculty time. Programs whose members spent fewer than 3 hours per 6-month reporting period on CCC meetings were less likely to report being able to identify failing residents. Previous literature suggested that faculty who spent more time reviewing resident files, and who were responsible for providing feedback to residents, were more likely to assign lower ratings.1 One previous study found that only 10% of CCC members had protected time for CCC work, although the annual time requirement exceeded 9 hours for nearly 40% of programs.7 Despite this outlay of time, the typical resident was discussed for only 10 minutes. In this and other studies, not investing adequate time was associated with worse outcomes. Without adequate time for this complex task, CCCs may default to identifying only residents at risk of failing rather than supporting the development of all residents.4
Our study suggests that faculty development is associated with better identification of residents who are not meeting expectations. Additional faculty development in the role and process of the CCC is another investment of time that may be needed to adequately synthesize data and provide effective feedback. This is consistent with previous literature.8-10
Limitations
The survey response rate was 44.3%, and we have no information on nonresponders. Program director self-report may not reflect the opinions of the CCC chair or other committee members, and it may be subject to recall bias and social desirability bias. The cross-sectional design provides insight into only a single point in time. Most programs reported being able to identify residents who were failing or struggling, leaving a smaller pool for analysis among programs that did not report being able to do so. This study was limited to family medicine program directors; however, CCC requirements are common to all ACGME-accredited programs,2 so we expect many of the findings to be relevant to CCCs in other specialties as well.
Conclusions
Formal written policies for CCC procedures and increasing faculty time for CCC activities appear to be associated with a developmental rather than a problem identification approach to CCC activities.
References
Editor’s Note
The online version of this article contains the survey used in the study and results of the logistic regression analysis.
Author Notes
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.