Abstract
Many internal medicine (IM) programs have reorganized their resident continuity clinics to improve trainees' ambulatory experience. Downstream effects on continuity of care and other clinical and educational metrics are unclear.
This multi-institutional, cross-sectional study included 713 IM residents from 12 programs. Continuity was measured using the usual provider of care method (UPC) and the continuity for physician method (PHY). Three clinic models (traditional, block, and combination) were compared using analysis of covariance. Multivariable linear regression analysis was used to analyze the effect of practice metrics and clinic model on continuity.
UPC, reflecting continuity from the patient perspective, differed significantly across the 3 models: it was highest in block model programs, midrange in combination model programs, and lowest in traditional model programs. PHY, reflecting continuity from the perspective of the resident provider, was significantly lower in the block model than in the combination and traditional models. Panel size, ambulatory workload, utilization, number of clinics attended in the study period, and clinic model together accounted for 62% of the variation in UPC and 26% of the variation in PHY.
Clinic model appeared to have a significant effect on continuity measured from both the patient and resident perspectives. Continuity requires a balance between provider availability and demand for services. Optimizing this balance to maximize resident education and the health of the population served will require consideration of relevant local factors and priorities in addition to the clinic model.
What was known and gap: Internal medicine programs look for ways to enhance the ambulatory care experience for residents, but the ideal model to optimize patient and learner continuity remains elusive.
What is new: A multi-institutional study assessed continuity of care in 3 different ambulatory care models.
Limitations: Lack of randomization and multiple local factors affecting continuity reduce generalizability.
Bottom line: Continuity of care differed both among the 3 models and between the patient and physician perspectives within each model. The optimal approach requires balancing patient and learner considerations.
Introduction
Continuity between patients and providers is an important tenet of primary care. Recognized as a key mechanism for improved quality of care,1 enhanced continuity is associated with improved patient and provider satisfaction, improved adherence to recommended preventive care, and decreased utilization of the emergency department and hospital.2–5
Governing bodies for graduate medical education recognize the importance of providing an ambulatory continuity experience for trainees.6 However, achieving continuity of care in these settings remains a challenge.7 There is variation in resident continuity clinic structure and size, and many trainees feel stressed in the clinic environment.8
Continuity metrics vary widely among programs, suggesting that structural differences may be important for promoting continuity of care. Previous studies have demonstrated that clinic time and frequency, as well as patient panel size, affect continuity.9–11 Several structural models have been described and evaluated by internal medicine (IM) residencies throughout the United States.12–16 Reports from single institutions with innovative education models show conflicting results in patient-provider continuity.12,17,18 In addition, comparisons between programs are lacking. In this study, we compared continuity of care metrics across programs with distinct structural characteristics.
Methods
Study Population and Design
Twelve programs participated in the Educational Innovations Project Ambulatory Collaborative (table 1).19–21 Of eligible residents, 98% consented to participate. Texas Tech University Health Sciences Center at El Paso provided oversight. Participating sites received approval from their local Institutional Review Board.
The primary aim of this multi-institutional, cross-sectional study was to assess the effect of clinic structure on continuity and other key practice metrics in IM resident continuity clinics. The secondary aim was to analyze determinants of continuity across all programs. The data collection period was September 2010 through May 2011. One institution implemented a long block ambulatory experience, so the time frame at this institution was correspondingly shifted.
Clinic Model
As described in prior studies, program leadership from each institution described their continuity clinic model as falling into 1 of 3 groups: (1) traditional weekly experience; (2) combination, with some weekly experiences plus additional ambulatory block rotations; and (3) block structure with discrete inpatient and ambulatory rotations.19,20
Key Practice Metrics
Continuity was measured using 2 methods: the usual provider of care method (UPC),22,23 the percentage of a patient's visits at which the patient was seen by his or her primary resident; and the continuity for physician method (PHY),10,24 the percentage of a resident's visits at which the resident saw his or her own patients. Panel size was defined as the number of patients followed by each resident in continuity clinic at the end of the data collection period. Ambulatory workload was defined, based on volume, as the total number of patient visits divided by the number of clinics attended for each resident during the study period. Utilization was defined as the average number of visits per patient during the study period.
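Stated as formulas (our notation; the averaging convention across patients and residents is not specified above, so the per-person form shown here is an assumption), the 2 continuity measures are:

UPC for patient p = (visits at which patient p saw his or her assigned primary resident) / (total visits by patient p)

PHY for resident r = (visits at which resident r saw patients from his or her own panel) / (total visits conducted by resident r)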
Statistical Analysis
In the primary analysis, the independent variable was clinic model. UPC, PHY, ambulatory workload, panel size, utilization, and number of clinics in the study period were dependent variables. We compared the 3 clinic models using analysis of covariance. The Tukey studentized range test was used to test for differences among groups.
Multivariable linear regression analysis was performed to analyze the effect of practice metrics and clinic model on continuity. In this analysis, UPC and PHY were dependent variables. Panel size, ambulatory workload, utilization, number of clinics in the study period, and clinic model were independent variables. P < .05 was considered statistically significant. We used SAS version 9.3 (SAS Institute Inc) for statistical analysis.
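The analyses were run in SAS; as an illustrative sketch only, the same comparisons could be reproduced with Python's statsmodels. The data file and column names used here (clinic_model, upc, panel_size, workload, utilization, n_clinics) are hypothetical placeholders, not the study's actual variables.

    # Sketch of the primary (ANCOVA + Tukey) and secondary (multivariable
    # regression) analyses; assumes one row per resident.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    df = pd.read_csv("resident_practice_metrics.csv")  # hypothetical file

    # Primary analysis: UPC compared across the 3 clinic models,
    # with utilization as a covariate.
    ancova = smf.ols("upc ~ C(clinic_model) + utilization", data=df).fit()
    print(sm.stats.anova_lm(ancova, typ=2))

    # Tukey studentized range test for pairwise differences among models.
    print(pairwise_tukeyhsd(df["upc"], df["clinic_model"], alpha=0.05))

    # Secondary analysis: regression of UPC (and, analogously, PHY) on
    # practice metrics and clinic model; the model R-squared gives the
    # share of variation explained.
    reg = smf.ols(
        "upc ~ panel_size + workload + utilization + n_clinics + C(clinic_model)",
        data=df,
    ).fit()
    print(reg.summary())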
Results
Practice data were available for 96% to 97% of the participating residents, varying slightly with the particular measure. Results by clinic model are displayed in table 2. UPC differed significantly across the 3 clinic models: it was highest in block model programs, midrange in combination model programs, and lowest in traditional model programs. PHY was significantly lower in the block model than in combination and traditional model programs. Because there was wide variation in utilization across groups, we repeated the analysis controlling for utilization; the differences in UPC and PHY across clinic models remained significant (data not shown). Ambulatory workload was significantly higher in the block model than in both traditional and combination model programs. Differences in panel size and utilization were significant across all 3 clinic models, as shown in table 2. The number of clinics in the 9-month study period was significantly higher in the traditional model than in both combination and block model programs.
Results of the secondary analysis evaluating associations between practice metrics, clinic model, and continuity are displayed in tables 3 and 4. As panel size and utilization increase, UPC decreases significantly but PHY increases significantly. As ambulatory workload and number of clinics in the study period increase, UPC increases significantly but PHY decreases significantly. Clinic model was a significant independent variable in the analysis of both UPC and PHY, even after controlling for the other confounding variables. Panel size, ambulatory workload, utilization, number of clinics attended in the study period, and clinic model together accounted for 62% of the variation found in UPC and 26% of the variation found in PHY.
Discussion
Our findings suggest that clinic model is indeed associated with continuity, ambulatory workload, and panel size in IM residency programs. Block model programs have the highest continuity from the patient perspective (UPC) but the lowest continuity from the provider perspective (PHY). Block scheduling typically requires residents to be part of a team and to cover team members' patients. The lower PHY may be explained in part by this team structure. Indeed, a single institution found a similar drop in continuity from the provider perspective after redesign to a block model, but also demonstrated that team continuity was preserved.12 Ambulatory workload and panel size are highest in block model programs, indicating that residents are seeing more patients per session on average and are handling larger panel sizes. It is important to note that, based on our prior research, this increase in workload and panel size appears to occur without detrimental effects on resident or patient satisfaction compared with the traditional model.19,20
Combination model programs maintain some outpatient availability of resident providers during inpatient rotations and add continuity experiences during ambulatory blocks. This resulted in higher continuity from the patient perspective compared with traditional model programs, although both were lower than block model programs. Despite the greater number of clinics during the study period in the traditional model, patients were seen by their primary resident provider only 22% of the time on average. Resident schedules in both the traditional and combination models still tend to require adjustment for call and other responsibilities, potentially shifting the day or time of clinic sessions from week to week. A prior study in the pediatric literature demonstrated that variable day scheduling for continuity clinic resulted in lower continuity from the patient perspective despite increased time in clinic, a finding consistent with our results.25
Continuity is a balance between supply and demand: between the educational needs of residents and the needs of their patients. Factors that increase demand for a fixed number of appointments with a resident provider, such as larger panel size and higher utilization, tend to decrease a given patient's chances of seeing his or her own resident, reflected in a lower UPC. Conversely, factors that increase the supply of appointments, such as greater ambulatory workload and a greater number of clinics in the study period, make it easier for a given patient to see his or her assigned resident, reflected in a higher UPC. These associations between panel size, number of clinics, and UPC are consistent with prior literature.9
PHY measures continuity from a different perspective. This measure reflects the percentage of time that residents see their own patients and has been suggested as the most appropriate measure of continuity for evaluating resident outpatient educational experiences.10 In our study, practice metrics affected PHY in a pattern opposite to their effect on UPC. As demand on the system increases because of a larger resident panel or higher utilization, residents are more likely to see their own patients, raising PHY. As the supply of appointments increases through more clinic sessions or a greater ambulatory workload (higher volume per session), PHY decreases, indicating that residents see a higher percentage of patients from outside their individual panels. In this situation, the supply of appointments exceeds the demand generated by the resident's individual panel. This extra capacity may be important for cross coverage as residents increasingly work together in teams. These findings contrast with prior pediatric literature, in which continuity for residents (PHY) increased significantly with an increasing number of clinics.10 The difference may be explained in part by differences in the patient populations: the majority of visits in the pediatric study were for sick care, whereas chronic illness generally predominates in IM.
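A worked example with invented numbers illustrates this opposing behavior. Suppose a resident's panel generates 30 visits during a period. If the resident attends enough clinics to offer 40 appointment slots, all 30 panel visits can be scheduled with that resident: the panel's UPC is 30/30 = 100%, but PHY is only 30/40 = 75%, because the remaining 10 slots are filled with other residents' patients. If the resident instead offers only 20 slots, at most 20 panel visits can stay in-panel, so UPC is at most 20/30 (about 67%), while PHY can reach 20/20 = 100%. Greater supply raises UPC and lowers PHY; greater demand relative to supply does the reverse.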
The outlined practice parameters explain a significant portion of the variation in UPC and PHY, but unidentified factors also play a substantial role. Local factors, such as the supervising attending physician, have been shown to influence continuity.9 Institutional culture and priorities, reflected in the training of scheduling staff, the timing and frequency of return visits, and no-show rates, are also likely contributors. Resident factors, such as communication skills, professionalism, and clinical ability, may play a role in resident-patient continuity as well; this is an area for future research.
The study has several limitations. Participating institutions chose their continuity clinic models and were not randomized. The participating programs may not be representative of all programs nationally, although both community and university programs of varying size and regional location were included. There are inherent variations within the categories we called block and combination models. Ambulatory workload was based on volume and was not adjusted for case mix or severity of illness. Finally, there were multiple factors that could not be controlled, such as institutional culture, level of staffing, staff training, clinic scheduling procedures, and use of an electronic health record.
Conclusion
Block model programs demonstrated higher continuity from the patient perspective, while traditional and combination model programs demonstrated higher continuity from the physician perspective. Clinic model, panel size, ambulatory workload, utilization, and number of clinics in the study period were significantly associated with continuity measured from both the patient and resident perspectives. Optimizing this balance to maximize resident education, as well as the health of the population served, is an important goal that will require consideration of relevant local factors and priorities in addition to the practice metrics and clinic models we describe.
Author notes
Maureen D. Francis, MD, FACP, is Assistant Dean for Medical Education and Associate Professor, Texas Tech University Health Sciences Center at El Paso; Mark L. Wieland, MD, MPH, is Assistant Professor of Medicine and Consultant, Division of Primary Care Internal Medicine, Mayo Clinic, Rochester; Sean Drake, MD, FACP, is Program Director, Internal Medicine Residency, Henry Ford Hospital, and Clinical Assistant Professor, Wayne State University; Keri Lyn Gwisdalla, MD, is Associate Program Director, Internal Medicine Residency, Banner Good Samaritan Medical Center/Phoenix VAHCS, and Assistant Professor of Clinical Medicine, University of Arizona College of Medicine–Phoenix; Katherine A. Julian, MD, is Professor of Clinical Medicine and Track Director, Primary Care General Internal Medicine Residency Program, University of California, San Francisco; Christopher Nabors, MD, is Assistant Professor and Associate Program Director, Internal Medicine Residency, New York Medical College at Westchester Medical Center; Anne Pereira, MD, MPH, FACP, is Assistant Dean for Clinical Education and Associate Professor of Medicine, University of Minnesota School of Medicine; Michael Rosenblum, MD, FACP, is Director, Baystate Internal Medicine Residency Programs, and Assistant Clinical Professor, Tufts University School of Medicine; Amy Smith, MS, is Instructor, School of Medicine and Public Health, University of Wisconsin-Madison; David Sweet, MD, FACP, is Program Director, Internal Medicine Residency, Summa Health System and Professor, Internal Medicine, Northeast Ohio Medical University; Kris Thomas, MD, is Associate Professor of Medicine, Consultant in the Division of Primary Care Internal Medicine, and Associate Program Director, Internal Medicine Residency, Mayo Clinic, Rochester; Andrew Varney, MD, is Professor of Clinical Medicine and Program Director, Internal Medicine Residency, Southern Illinois University School of Medicine; Eric Warm, MD, FACP, is Professor of Medicine and Program Director, Internal Medicine Residency, University of Cincinnati Academic Health Center; David Wininger, MD, is Program Director, Internal Medicine Residency and Associate Professor of Clinical Internal Medicine, The Ohio State University Wexner Medical Center; and Mark L. Francis, MD, MS, is Professor, Medical Education, Texas Tech University Health Sciences Center at El Paso.
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors report they have no competing interests.
The authors would like to thank the following people for their contributions to the design of the study and data management at the participating institutions: Jayne Peterson, MD, Banner Good Samaritan Medical Center; Reva Kleppel, MSW, MPH, Baystate Medical Center; Michael Langan, MD, Ohio State University Wexner Medical Center; Lynn Clough, PhD, Summa Health System/NEOMED; Rebecca Shunk, MD, Maya Dulay, MD, and Pat O'Sullivan, PhD, University of California, San Francisco; and Bennett Vogelman, MD, and Robert Holland, MD, University of Wisconsin. We would also like to thank Melchor Ortiz, PhD, Texas Tech University Health Sciences Center at El Paso, for his assistance in the initial management of the data. Lastly, we extend our gratitude to the Alliance for Academic Internal Medicine for providing administrative support for the project and meeting space for the EIP Ambulatory Collaborative.