Abstract
A national task force identified domains and developmental milestones from the national competencies for resident training. Cultural Consensus Analysis (CCA) is a standard anthropological technique that can identify value conflicts. We created a CCA based on the internal medicine milestones (M-CCA) in 3 steps: we converted the 38 domains into active statements, reduced the total to 12 by summarizing and combining similar statements, and simplified the wording. The M-CCA needs further validation, after which it may be useful for assessing the 6-competency model.
Background
In September 1999, the Accreditation Council for Graduate Medical Education (ACGME) unveiled the Outcome Project. By July 2002, all graduate medical education programs in the United States began a transition from process measures to 6 outcome measures (patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice). The details about how these general competencies were to be taught, or how they were to be measured as “learned” by the residents, were left unspecified.
A decade later, little progress has been made in measuring these 6 competencies.1 Measurement is difficult because some of the competencies are vaguely defined. It is also possible that the competencies do not correspond to important domains of performance. Low interrater reliability occurs because meaning, especially for professionalism, is culture dependent.2 Moreover, there is scant literature describing how the general competencies are valued in the context of care delivery.
Difficulty in operationalizing the ACGME competencies has resulted in idiosyncratic evaluation of educational effectiveness and uncertainty about the 6-competency model. To address this problem, a developmental set of 142 behavioral milestones for internal medicine has been created.3 However, the independent assessment of 142 milestones is unmanageable and risks reliance on detailed checklists. Bundling selected milestones into meaningful assessment tools requires greater understanding of the psychometric problems outlined above.
Cultural Consensus Analysis (CCA) is a standard anthropological technique that determines whether, and to what degree, groups hold shared knowledge and whether preferences and values conflict among groups.4 Our research team has developed a CCA that assesses operational problems in resident teaching clinics.5 This CCA consists of a set of 16 cards, each describing 1 element of competent performance (such as “Let the patient know about laboratory results”). Subjects sort the cards in order of importance. The CCA indicates the degree to which each group shares a coherent set of beliefs. Large value differences among groups on the CCA are associated with major operational problems as described by blinded interdisciplinary focus groups.6 Overall, CCA performance can statistically validate or refute the underlying conceptual model on which the statements were based.7
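To illustrate how shared beliefs are typically quantified in CCA (a sketch of the general Romney-Weller-Batchelder approach, not the authors' instrument or software): respondents' card sorts are correlated with one another, and the ratio of the first to second eigenvalue of the respondent agreement matrix is compared against the conventional 3:1 rule of thumb for a single shared culture. All data below are hypothetical.

```python
# Minimal sketch of the consensus check used in Cultural Consensus
# Analysis; illustrative only, not the M-CCA described in this article.
import numpy as np
from scipy.stats import spearmanr

def consensus_ratio(card_sorts):
    """card_sorts: one row per respondent, one column per card, holding
    the importance each respondent assigned that card. Returns the ratio
    of the first to second eigenvalue of the respondent agreement matrix;
    a ratio above 3 is the usual evidence of one shared set of beliefs."""
    agreement, _ = spearmanr(card_sorts, axis=1)   # respondent-by-respondent
    eigenvalues = np.linalg.eigvalsh(agreement)[::-1]  # descending order
    return eigenvalues[0] / eigenvalues[1]

# Hypothetical data: 10 respondents sorting 16 cards, all drawing on the
# same underlying ordering with individual noise, so consensus is high.
rng = np.random.default_rng(0)
shared_view = np.arange(16, dtype=float)
card_sorts = np.vstack([shared_view + rng.normal(0, 2, 16) for _ in range(10)])
print(consensus_ratio(card_sorts))  # well above the 3:1 threshold
```

Comparing this ratio, and the estimated "answer key," between groups such as residents and faculty is what allows CCA to surface the value conflicts described above.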
We propose a CCA, based on the internal medicine milestones, as an effective and pragmatic way to assess the 6-competency model. This M-CCA can provide 3 relevant measurements. First, it can measure the shared knowledge of a group, such as residents, and thus tell us whether that group recognizes a competency as important. Second, comparing M-CCA results among groups, such as faculty and residents, can tell us whether educational intentions match educational results. Third, statistical analysis of M-CCA performance across groups and sites can tell us whether the 6-competency model is universally valid or culturally relative.
Project Description
We describe here the initial creation of the M-CCA for use in an ambulatory clinic. It must still be validated before widespread use. The M-CCA was created in 3 steps. Each step was carried out by the core research team (C.S.S., W.H., C.F., M.M., F.L-W.) and then confirmed or modified by leaders from the Milestone Project (K.C., W.I.).
We began with the 38 bullets described in the internal medicine milestones,3 which are linked to the Residency Review Committee requirements for internal medicine programs.8 These bullets were converted into resident behaviors suitable for a competency-based CCA. For instance, the milestone domain “Manage patients using clinical skills of interviewing and physical examination” under the patient care competency was converted to “Use interviewing and physical examination to manage patient care.”
During step 2, the 38 statements were reduced in 3 iterations to 12 statements by summarizing, prioritizing, and combining similar statements. For instance, the statements “Use interviewing and physical examination to manage patient care,” “Manage patients across the spectrum of clinical disease,” and “Manage patients in a variety of health care settings” were combined into the statement “Competently manage patients with a variety of diseases in continuity clinic, a skilled nursing facility, and the hospital.” We did not begin with a target number of statements but wanted at least 12 statements because the statistical method for model testing (convergent discriminant analysis) requires 2 or more statements per node. We checked frequently with leaders from the Milestone Project to make sure that no critical nuance had been lost in reduction.
During the final step, the statements were simplified and reworded to achieve a Flesch-Kincaid reading level of 8th grade or less. For example, the statement “Competently manage patients with a variety of diseases in continuity clinic, a skilled nursing facility, and the hospital” was simplified to “Care for patients with different diseases in the clinic, nursing home, and hospital.” The final M-CCA statements are shown in the table.
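The readability target in this final step can be checked automatically. The sketch below (an illustration, not the tool the authors used) applies the published Flesch-Kincaid grade-level formula with a crude vowel-group syllable heuristic, and shows that the reworded example statement scores several grade levels below the original.

```python
# Illustrative Flesch-Kincaid grade-level check; the syllable counter is
# a rough heuristic, not the scoring tool used by the authors.
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, dropping a trailing silent 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

before = ("Competently manage patients with a variety of diseases in "
          "continuity clinic, a skilled nursing facility, and the hospital.")
after = ("Care for patients with different diseases in the clinic, "
         "nursing home, and hospital.")
print(round(fk_grade(before), 1), round(fk_grade(after), 1))
```

Even with this rough syllable count, the simplified statement lands near the 8th-grade target while the original scores well above it.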
Discussion
We have developed a CCA based on the internal medicine milestones (M-CCA). It is not known whether this M-CCA adequately represents the complexity of the milestone construct, or whether the simplified statements have lost important nuance. The M-CCA instrument should be validated in further studies.
If valid, the M-CCA may be used to demonstrate important differences among groups in a continuity clinic. For instance, one can easily imagine that the statement “Work with the team to get quality patient care and make the best system” might be ranked differently by different stakeholders when they are forced to choose between it and other value-laden statements. Understanding how such statements are interpreted by these groups is critically important to the evaluation process.
The next step in the development of the M-CCA is a pilot study testing the instrument and methodology in a variety of clinical environments. The results of this pilot will indicate whether the instrument can be used in other populations and whether the 6-competency model is coherent and complete.
References
Author notes
C. Scott Smith, MD, is Professor of Medicine in Medical Education and Evaluation at the University of Washington; William Hill, PhD, MBA, is Research Associate at the VA Medical Center; Chris Francovich, EdD, is Assistant Professor in Leadership Studies at Gonzaga University; Magdalena Morris, RN, MSN, is Faculty in Nursing at Carrington College; Francine Langlois-Winkle, BS, is Research Coordinator at Saint Luke's Regional Medical Center; Kelly Caverzagie, MD, is Associate Program Director in the Department of Internal Medicine in the Division of Hospitalist Medicine at Henry Ford Hospital; and William Iobst, MD, is Vice President in Academic Affairs at the American Board of Internal Medicine.
Presented in part at the Association for Medical Educators in Europe meeting, September 6, 2010, in Glasgow, Scotland.
Funding: The authors report no external funding source.