Medical licensing boards use competence assessment and educational intervention programs as tools in disciplinary actions, yet few studies measure the impact of these remedial interventions on the quality of care participants subsequently provide. CPEP, the Center for Personalized Education for Professionals, provides clinical competence assessment/educational intervention services and practice monitoring, primarily for physicians complying with board orders due to substandard care. Depending on board requirements, some physicians complete an assessment/educational intervention and subsequently undergo practice monitoring (Intervention Group). Others participate in practice monitoring without first completing an assessment/educational intervention (Non-Intervention Group). CPEP conducted a retrospective study of chart reviews (n=2,073) performed as part of each group’s participation in the Practice Monitoring Program. Compared to charts from the Intervention Group, charts from the Non-Intervention Group were more than five times more likely to demonstrate care below standard (P < 0.0001) and almost four times more likely to have documentation issues that prevented the monitor from determining the quality of care (P < 0.0001). This study suggests that completion of a competence assessment/educational intervention program is an effective means of achieving acceptable quality of care that is sustained over time (an average of 19 months) after completion of the intervention.

Physician competence and the delivery of care that meets or exceeds the standard of care are essential to protecting public health and safety. Medical licensing boards are accountable to the public to protect, promote and maintain patient health and safety by investigating quality-of-care issues arising from substandard care provided by physicians in their jurisdictions.1,2,3 

As defined by the Federation of State Medical Boards (FSMB), “incompetence” is lacking the cognitive, non-cognitive and communicative skills to effectively perform one’s scope of practice. “Dyscompetence” is failing to maintain the standard of care in one or more areas of practice.1  Boards are charged with investigating physician misconduct and may require evaluation, education, or monitoring.4 

The 1993 report of the Office of the Inspector General, United States Department of Health and Human Services, on the role and approach of medical licensing boards indicated that boards make a serious ongoing effort to ensure competence and quality of care.5  Shortly after publication of this report, the FSMB adopted the policy of its Special Committee on Evaluation of Quality of Care and Maintenance of Competence. This FSMB policy recommends that medical licensing boards identify ways to evaluate physicians, and if necessary, require remediation.1  The range of potential sanctions is broad. Those who cannot be remediated may need to be removed from practice.6 

Clinical competence assessment, educational intervention and practice monitoring programs are tools that are now well-utilized by medical licensing boards and commonly used in disciplinary agreements related to substandard care. Clinical competence assessment and educational intervention programs conduct an in-depth evaluation to identify areas of weakness in a physician’s knowledge and skills and develop education plans to remediate those weaknesses and improve quality of care. Practice monitoring is a separate and unique process, not intended as an educational intervention, used to determine whether a physician is meeting minimal quality of care and documentation standards. Monitoring is often accomplished through a series of retrospective chart reviews of a specified number of charts from the physician’s practice.

CPEP, the Center for Personalized Education for Professionals, is a non-profit organization founded in 1990. The organization offers clinical competence assessments followed by structured educational interventions when appropriate. CPEP also offers practice monitoring and seminars for physicians and other health care professionals. Development of the CPEP program and the characteristics of its physician participants have been previously described in the literature.7,8,9,10,11 

The majority of physicians enrolled in CPEP’s Assessment and Educational Intervention Program (Assessment/Education Program) are referred by state licensing boards or hospitals due to concerns about substandard care. Practice Monitoring Program (PMP) participants are also primarily under disciplinary agreements due to substandard care. In some cases, participants are required to enroll in monitoring after completion of the Assessment/Education Program. In other cases, physicians are required to enroll in the PMP, without participation in the Assessment/Education Program. Through the Assessment/Education Program, participants engage in a competence assessment followed by a longitudinal education plan to address educational needs identified during the Assessment. The PMP involves chart reviews to determine if the care provided, as documented in the charts, meets minimally accepted standard of care. It is not a substitute for a competence assessment and provides no educational content for the participant.

The objective of this study is to assess the long-term impact of educational intervention on patient care by comparing the quality of patient care provided by physicians who were monitored through the PMP after successful completion of the Assessment/Education Program (Intervention Group) to that of physicians who were referred directly into the PMP, without completion of an assessment and educational intervention (Non-Intervention Group).

There are few studies of the impact of an intensive educational intervention on disciplined physicians’ quality of care after its completion. To the authors’ knowledge, this is the first study to use chart reviews to compare the quality of care provided by physicians for 12 months or more following successful completion of an educational intervention with the care of disciplined physicians who did not complete educational remediation.

More than 100 clinicians per year enroll in CPEP’s Assessment Program. Detailed descriptions of the CPEP Competence Assessment have been previously published.7,8,10  Each clinical competence assessment is an in-depth evaluation tailored to the physician’s practice area and specialty. A variety of assessment modalities are employed to evaluate the clinician’s medical knowledge, clinical reasoning, communication, documentation and other areas. The Assessment Report summarizes the physician’s performance, identifies educational needs and provides educational recommendations.

Following the Assessment, physicians may enroll in an educational intervention through CPEP. The Education Plan is tailored to address the educational needs identified in the Assessment. The intervention provides a 6- to 18-month educational experience addressing the participant’s educational needs and provides lifelong learning tools. The educational activities generally take place in the participant’s home community while the physician remains in active clinical practice. While the Education Plan is tailored to the individual’s practice specialty and educational needs, it employs a standardized approach for all participants, with common elements and tasks, including but not limited to establishment of a learning contract, participation in a preceptorship, didactic coursework and review courses, and feedback on patient charts and clinical performance. During the preceptorship, the participant meets regularly (usually twice per month) with a board-certified, specialty-matched local preceptor to review patient care as reflected in patient records, discuss clinical topics and focus on integrating new knowledge into actual practice. The preceptorship may also involve an initial period of supervised practice with gradually increasing levels of independence.

The Education Plan is considered complete when the preceptor and CPEP associate medical director agree that the participant has completed all required educational activities, achieved their learning goals, integrated improvements in documentation and patient care and demonstrated competence in the areas of identified need. Following completion of the Education Plan, an additional chart review or evaluation is conducted by an impartial physician reviewer (also board-certified and specialty-matched, but not involved in the remediation process) to confirm that the participant’s charts demonstrate patient care at or above standard.

The PMP is tailored to meet specific referring organization requirements for monitoring the care provided by individual physicians through regular retrospective chart reviews by qualified physician monitors. CPEP identifies the monitors, employs a standard protocol for chart selection and uses standardized chart review tools. Reviews are conducted by a specialty-matched, specialty-board-certified physician with a similar scope of practice as the participant. While monitors are informed of the participant’s specialty, scope of practice and practice circumstances (e.g., rural vs. urban, group vs. solo) as they relate to the scope of the review and to provide understanding of the physician’s practice resources, they are not given information about medical school, residency training, board certification status or disciplinary history. In addition, the PMP monitors are not informed about whether the physician has participated in CPEP’s Assessment/Education Program prior to monitoring. This is done to help monitors maintain objectivity and remove potential biases. Charts are rated for patient care, documentation, and overall quality of care.

By comparing ratings of overall quality of care of the Intervention Group after completion of the educational intervention to the Non-Intervention Group, CPEP hopes to understand the impact of longitudinal, personalized education on participant quality of care following successful completion of the remediation program.

CPEP staff reviewed the overall quality of care ratings for physicians (n=34) who participated in the PMP between January 2010 and June 2019. Participants were referred by medical boards (n=32) and hospitals (n=2). A total of 2,073 charts were reviewed: 1,215 charts from the Intervention Group and 858 charts from the Non-Intervention Group. Upon enrollment in the PMP, CPEP staff collected information on the participants’ gender, degree (MD or DO) and specialty.

Patient charts were rated using the following scoring system: “1” met generally accepted standard of care, “2” met standard of care with qualifications, “3” reviewer was unable to determine the quality of care due to documentation deficiencies (“Unable to Determine Quality of Care”) and “4” failed to meet generally accepted standard of care (“Failed to Meet Standard of Care”). All charts receiving a rating of “4” were sent to a second reviewer to determine whether there was consensus in this rating. In cases of disagreement between the first and second reviewer, the Medical Director made a final determination by reviewing the rationale for each reviewer’s findings, as well as the actual chart if needed.
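For illustration only, the rating scale and second-review step can be expressed as a short sketch. The Python below uses a hypothetical structure and function names; it is not CPEP’s actual review workflow.

```python
# A minimal sketch (hypothetical structure, not CPEP's review system) of the
# four-point rating scale and the second-review step for charts initially rated "4".
RATING_LABELS = {
    1: "Met generally accepted standard of care",
    2: "Met standard of care with qualifications",
    3: "Unable to Determine Quality of Care (documentation deficiencies)",
    4: "Failed to Meet Standard of Care",
}

def final_rating(first_review, second_review=None, medical_director_rating=None):
    """Charts rated 4 go to a second reviewer; on disagreement, the Medical
    Director makes the final determination."""
    if first_review != 4:
        return first_review
    if second_review == 4:
        return 4                          # consensus on the below-standard rating
    return medical_director_rating        # final call when the reviewers disagree
```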

The participants in the Intervention Group completed a minimum of 18 months of monitoring following successful completion of the Assessment/Education Program, with the exception of one participant who completed 12 months of monitoring. Three Intervention Group participants completed additional months of monitoring due to board concerns about the quality-of-care ratings they received during the monitoring process. The Intervention Group completed an average of 19 months of monitoring (range, 12 to 26 months). For the Non-Intervention Group, the duration of monitoring completed during the study period ranged from 2 to 26 months, with an average of 10 months.

Statistical testing was conducted using SAS Enterprise Guide version 7.15 HF3 2017. Descriptive analysis of gender, specialty and degree was performed. The analysis included tests for outliers, as well as tests of whether physician degree (MD or DO) affected chart ratings. The results did not change significantly when outliers were removed, so all observations were used for the final results. Since the only difference between the two groups, aside from the number of reviews, was the physician’s degree (MD or DO), that variable was also introduced into the model as a potential adjustment. The adjustment was not statistically significant, so the analysis was performed without it.

Three variables were calculated and evaluated: (1) the proportion of charts rated as unable to determine quality of care due to documentation deficiencies (Unable to Determine Quality of Care) out of all charts reviewed in the study, (2) the proportion of charts rated as failing to meet generally accepted standard of care (Failed to Meet Standard of Care) out of all charts, and (3) the proportion of charts rated as Failed to Meet Standard of Care after excluding charts rated as Unable to Determine Quality of Care from the denominator.
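The three proportions follow directly from the per-chart ratings. The Python sketch below, with hypothetical variable and function names, simply restates the definitions above.

```python
# A minimal sketch of the three study variables, computed from one group's
# per-chart ratings on the 1-4 scale described above.
from collections import Counter

def study_proportions(ratings):
    """ratings: list of per-chart ratings (1-4) for one group."""
    counts = Counter(ratings)
    total = len(ratings)
    unable = counts[3]   # Unable to Determine Quality of Care
    failed = counts[4]   # Failed to Meet Standard of Care
    return {
        "unable_of_all_charts": unable / total,                # variable (1)
        "failed_of_all_charts": failed / total,                # variable (2)
        "failed_excluding_unable": failed / (total - unable),  # variable (3)
    }
```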

The results for the Intervention Group and the Non-Intervention Group were compared using a t-test analyzing the difference in findings at the physician level. The results were then tested using logistic regression, with a binomial distribution to account for the number of chart reviews conducted per physician and the resulting proportion of coded responses.
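As an illustration only (the analysis itself was run in SAS), the following Python sketch with invented example data shows the general form of the two comparisons: a physician-level t-test on per-physician proportions and a binomial logistic regression on per-physician event counts, which reflects how many charts each physician contributed.

```python
# A minimal sketch (not the authors' SAS code) of the two analyses described above.
import numpy as np
import statsmodels.api as sm
from scipy import stats

# One row per physician; all values are illustrative only.
group    = np.array([1, 1, 1, 0, 0, 0])        # 1 = Non-Intervention, 0 = Intervention
n_charts = np.array([40, 35, 50, 90, 100, 95])  # charts reviewed per physician
n_failed = np.array([6, 5, 8, 3, 2, 3])         # charts rated "Failed to Meet Standard of Care"

# (1) t-test on the per-physician proportion of below-standard charts
prop = n_failed / n_charts
t_stat, p_val = stats.ttest_ind(prop[group == 1], prop[group == 0])

# (2) logistic regression with a binomial response given as (events, non-events),
#     so the model accounts for the number of chart reviews per physician
endog = np.column_stack([n_failed, n_charts - n_failed])
exog = sm.add_constant(group)
result = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()

print(t_stat, p_val)
print(np.exp(result.params[1]))   # odds ratio for the Non-Intervention group
```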

Participant demographics are described in Table 1. There were 21 physicians in the Non-Intervention Group, with a total of 858 charts reviewed, and 13 participants in the Intervention Group, with 1,215 charts reviewed. Gender and specialty did not significantly differ between the two groups. The only significant demographic difference between the two groups was degree (MD or DO), with 69% of the Intervention Group holding an MD degree compared to 100% of the Non-Intervention Group. This characteristic was introduced into the logistic regression model as a potential adjustment; because it was not statistically significant, the analysis was performed without that adjustment.

Table 1. Participant Demographics

Of the 858 charts reviewed from the Non-Intervention Group, the monitors rated 173 (20%) as Unable to Determine Quality and 118 (14%) as Failed to Meet Standard of Care. Of the 1,215 charts reviewed in the Intervention Group, the monitors rated 67 (6%) charts as Unable to Determine Quality and 36 (3%) charts as Failed to Meet Standard of Care. (See Tables 2 and 3.)

Table 2. Proportion of All Charts Rated as ‘Unable to Determine Quality of Care’ Due to Documentation Deficiencies

Table 3. Proportion of All Charts Rated as ‘Failed to Meet Standard of Care’

As shown in Tables 2 and 3, charts of physicians who did not complete a structured educational intervention (Non-Intervention Group) were 4.64 times more likely to be identified as Failed to Meet Standard of Care (P < 0.0001) and 3.66 times more likely to be rated as Unable to Determine Quality of Care than the charts from physicians who completed the educational intervention (Intervention Group) (P < 0.0001).

When the charts from which quality of care could not be determined due to documentation deficiencies were removed from the groups, 1,833 charts remained. For the Non-Intervention Group, 118 out of 685 charts (17%) were rated as Failed to Meet Standard of Care, while 36 out of 1,148 Intervention Group charts (3%) received this rating (Table 4). As shown in Table 4, when charts for which the quality of care could not be determined were removed from the analysis, the Non-Intervention Group was 5.49 times more likely than the Intervention Group to have charts rated as Failed to Meet Standard of Care (P < 0.0001).
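The relative risks reported in Tables 2 through 4 follow directly from the chart counts stated above; a brief arithmetic check (Python, using only the counts given in the text):

```python
# Worked check of the proportions and relative risks reported above.
non_int_total, int_total = 858, 1215
unable_non, unable_int = 173, 67      # "Unable to Determine Quality of Care"
failed_non, failed_int = 118, 36      # "Failed to Meet Standard of Care"

rr_unable = (unable_non / non_int_total) / (unable_int / int_total)   # Table 2
rr_failed = (failed_non / non_int_total) / (failed_int / int_total)   # Table 3
rr_failed_excl = (failed_non / (non_int_total - unable_non)) / (
    failed_int / (int_total - unable_int))                            # Table 4

print(round(rr_unable, 2), round(rr_failed, 2), round(rr_failed_excl, 2))
# 3.66 4.64 5.49
```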

Table 4. Proportion of Charts Rated as ‘Failed to Meet Standard of Care’ After Removal of Charts Rated as ‘Unable to Determine Quality of Care’

There are a number of studies that estimate quality-of-care deficiencies among practicing physicians in the United States and Canada. One study estimated that 6% to 12% of physicians practicing in the United States are dyscompetent.12  A study of randomly selected physicians in Ontario, Canada showed that approximately 15% of general practitioners and family physicians and 2% of specialists had considerable practice deficiencies.13  The licensing board’s role is to protect the public from physicians who are found to have engaged in “unprofessional, improper or incompetent medical practice.”14 

Licensing boards are in a position to direct the specific type, quality, and quantity of remediation prescribed for physicians identified as having quality-of-care concerns (incompetent or dyscompetent physicians) and whom they believe to be remediable. According to the FSMB’s 2018 “U.S. Medical Regulatory Trends and Actions” report, 4,081 physicians were disciplined by licensing boards in 2017. Of the disciplined physicians, 1,343 received a restricted license and 1,147 received a reprimand.14  These disciplinary measures could also include a range of other conditions and additional requirements, such as traditional continuing medical education (CME) or specialized assessment and education programs. The reasons for the discipline were not specified.

As shown by Rosner et al. in their 1994 review of remedial training in Canada and the United States, education needs to be personalized and tailored to meet the specific needs of each physician.15  CME courses that are designed for typical, highly functioning physicians may not be sufficient for physicians requiring significant remediation.16 

In addition, traditional CME activities do not typically incorporate ongoing evaluation and formative feedback to document whether the educational activity increased knowledge and whether that knowledge is being applied to patient care. George Miller, MD, provided a frequently referenced framework for evaluating the success of physician education programs, with levels progressing from knows, to knows how, to shows how, to does.17  The highest level of evaluation of a physician’s quality of care is to demonstrate that they are doing what they have been trained to do. According to Miller, one of the best means to measure what physicians “do” is evaluation of practice performance through review of their actual patient care. This study uses chart reviews to evaluate whether the participants are “doing” what they were trained to do.

In a 2009 international survey of assessment and remediation programs, Humphrey and colleagues noted that while participant progress through remediation was monitored, systematic follow-up to determine long-term effectiveness was not done.18  In a 2018 article examining the state of the art in the remediation of practicing physicians, Bourgeois-Law and colleagues summarized the published research on assessment programs by stating that while there are a number of studies addressing the assessment portion of the remediation process, there was “an absence of research — educational or otherwise — published on the remediation of practicing physicians.” The authors concluded that this suggests that “addressing physicians in need of remediation is more difficult than determining the physicians’ competence in the first place.”19 

A handful of studies of remediation programs for practicing physicians identified as having poor clinical performance report mostly positive outcomes at the end of the remediation process.13,20,21,22,23,24  The primary studies on the long-term effectiveness of remediation have come from the College of Physicians and Surgeons of Ontario (CPSO) and the Collège des Médecins du Québec (CMQ).13,25,26,27 

A 1998 study by Norton and colleagues using CPSO data followed 81 physicians identified as in need of help to improve their records and/or their quality of care. The physicians, who had completed an assessment (which included a site visit and a review of 20 to 30 randomly selected charts) and simple educational interventions, underwent a reassessment an average of six years later. Their performance at reassessment was compared to three matched physicians undergoing their initial assessment. The study found that all 81 of the physicians were practicing as well as their peers.25 

The CMQ remedial retraining program offers a variety of retraining activities that are individualized to the participant. A study by Goulet and colleagues evaluated 94 physicians who completed the remedial program between 1993 and 2004. Study participants had undergone a peer assessment during the two-year period before or after the remedial program. The peer assessment included a review of 30 to 50 patient charts and a records-stimulated recall interview. As in the present study, ratings were assigned for quality of care and record keeping. The study found that 30% to 40% of poorly performing physicians who completed the remedial education improved their performance.26 

To the authors’ knowledge, no studies have been conducted in the United States on the long-term impact of structured remedial education programs on quality of care. This study provides a chart-based, objective evaluation of the quality of care provided by participants over a minimum of 12 months (average, 19 months) following successful completion of the intervention. In the Intervention Group, only 6% of charts (67) were rated as Unable to Determine Quality and only 3% of charts (36) were rated as Failed to Meet Standard of Care. Given that all of the participants had entered the competence assessment program under a board stipulation based on quality-of-care concerns, the authors opine that this is a significant indicator of the long-term success of the intervention.

A unique feature of the CPEP study is that it compared the performance of two groups of physicians who had been identified by licensing boards as having quality-of-care concerns: those who were required to complete a structured remedial educational intervention prior to a period of monitoring and those who were required to participate in monitoring without the educational intervention. The difference between the two groups in the percentage of charts found to be below standard of care was statistically significant (3% vs. 14%). Given that all of the participants had entered the competence assessment program under a board stipulation based on quality-of-care concerns, the authors opine that this is a significant indicator of the long-term impact of the intervention on overall quality of care.

Poor record keeping is problematic in and of itself, in that it can place a patient at increased risk for medical error and may inhibit continuity of care as other treating physicians will not have complete and/or accurate information regarding the patient. Published studies, mostly in the residency setting, have reported mixed results in efforts to improve physician documentation, with some showing improvement and some not.28,29,30,31,32  In a 2018 study by the University of California San Diego (UCSD) Physician Assessment and Clinical Education (PACE) Program, Wenghofer and colleagues evaluated the charting skills of physicians who were under a disciplinary order to participate in a practice monitoring program.33  Evaluators visited the participants’ practice sites to conduct medical record audits. For the 77 participants who completed between six and 24 months of the program, there was significant improvement in their charting. The authors suggest that further study is needed to determine if the improvement in charting was sustained after the program was completed.

A core component of the CPEP Assessment/Education Program is to evaluate and remediate documentation deficiencies over the course of the educational intervention. The data in this study reveal that the number of charts with documentation deficiencies that prevented the monitor from determining the quality of care was low in the Intervention Group (67 out of 1,215 [6%]); the Non-Intervention Group, however, was 3.66 times more likely (P < 0.0001) to have charts receive this rating. Thus, we feel that the lower proportion of charts with such severe medical record deficiencies is an additional affirmation of the benefits of the longitudinal, practice-based education model used in the Intervention Group.

Poor documentation can obscure the reviewers’ ability to evaluate the quality of care provided. When the charts that were rated as Unable to Determine Quality were removed from the analysis, the Intervention Group’s performance compared even more favorably to that of the Non-Intervention Group (the risk ratio increased from 4.64 to 5.49). While it is impossible to say with certainty in the current study, one could presume that at least some of the cases rated as Unable to Determine Quality were actual cases of substandard care. Thus, our calculations may underestimate the difference in quality of care between the two study groups.

We believe that these data indicate that the longitudinal, practice-based education model that CPEP uses is effective, as demonstrated by comparing the care ratings in these two groups. This is even more impressive given that medical boards may be more likely to require physicians who are perceived as having greater deficits to complete assessment and educational intervention rather than simply monitoring them. Given the large percentage of charts demonstrating below-standard care in the monitoring (Non-Intervention) group, the study findings raise the question of whether some of these physicians may have benefited from formal assessment and structured remediation services prior to entering monitoring.

This study has limitations. The number of physicians included in the study is relatively small. The study did not take into consideration whether or what educational activities, if any, were undertaken by physicians in the Non-Intervention Group. The results could have been affected by the lack of a consistent duration of required or completed monitoring. CPEP did not have access to data comparing the relative quality of care at entry into each process; such data could have addressed whether baseline deficits, prior to the Assessment/Education Program, differed between the two groups. CPEP used pooled data for the physicians in the Intervention and Non-Intervention Groups and did not evaluate the data by individual physician, nor did we analyze change in individual performance over time in monitoring. CPEP did not track how often initial ratings of “4” for Failed to Meet Standard of Care were not upheld on second review; thus, we are not able to comment on inter-rater reliability in this cohort. In more recent data from the Practice Monitoring Program, preliminary findings indicate that inter-rater reliability is approximately 87% for reviews of charts receiving a “4” rating. Finally, while there are similarities in the assessment components of entities conducting post-licensure assessment, approaches to educational remediation vary. The model of assessment and structured individualized education used by CPEP may not be generalizable to other programs, since each program uses a variety of educational tools and processes.

This study’s findings contribute to the body of knowledge regarding the effectiveness of remediation, in particular longitudinal, practice-based remediation. This is one of very few studies that have captured data on the actual quality of patient care (i.e., what the participants are “doing”) for an extended period after completion of an educational intervention. More importantly, it documents a statistically significant increase in the relative risk of problematic documentation and substandard care among physicians who did not complete an educational intervention while undergoing a disciplinary process.

The risk of documentation deficiencies significant enough to prevent the reviewer from determining whether the care provided was appropriate was almost four times greater for those who did not complete the intervention. Once the study adjusted for care that could not be evaluated due to these significant documentation deficiencies, the risk of below-standard quality of care was more than five times greater for those who did not complete the intervention.

Medical licensing boards play a critical role in directing the disciplinary process for incompetent and dyscompetent physicians to ensure improvement and safe practice. Assessment and structured educational programs are important tools that medical boards can use for the rehabilitation of poorly performing physicians. This study suggests that a comprehensive competence assessment followed by a structured, longitudinal and practice-based remedial educational process is an effective means of improving physician performance and that the improvement is sustained after completion of the Education Program. Plans for future study include expanding the cohorts to include physicians who have more recently completed the programs and examining data at the individual physician level, the latter of which would allow analysis of whether the seriousness of the offense and/or the severity of the action corresponds with performance.

In addition, the large proportion of charts identified as reflecting substandard care in the Non-Intervention Group raises the question of whether physicians currently referred directly into a monitoring process might benefit from an assessment and remediation process prior to entering monitoring. Further study may help elucidate ways to determine which physicians would benefit from competence assessment and targeted, structured remedial education.

1. Federation of State Medical Boards. The Special Committee on Evaluation of Quality of Care and Maintenance of Competence.
2. Gallagher CT, Hussain K, White JDP, Chaar B. The Legal Underpinnings of Medical Discipline in Common Law Jurisdictions. J Leg Med. 2019 Jan–Mar;39(1):15–34.
3. Thompson JN, Robin LA. State Medical Boards. Future Challenges for Regulation and Quality Enhancement of Medical Care. J Leg Med. 2012 Jan;33(1):93–114.
4. Horowitz R. In the Public Interest: Medical Licensing and the Disciplinary Process. New Brunswick, NJ: Rutgers University Press; 2013.
5. Department of Health and Human Services, Office of the Inspector General. State Medical Boards and Quality-Of-Care Cases: Promising Approaches. https://oig.hhs.gov/oei/reports/oei-01-92-00050.pdf. Published February 1993. Accessed January 25, 2021.
6. Grant D, Alfred KC. Sanctions and Recidivism: An Evaluation of Physician Discipline and State Medical Boards. J Health Polit Policy Law. 2007 Oct;32(5):867–885.
7. Bunnell KP, Kahn A, Kasunic LB, Radcliff S. CPEPP: Development of a Model for Personalized Continuing Education. J Contin Educ Health Prof. 1991;11:19–27.
8. Korinek LL, Thompson LL, McCrae C, Korinek E. Do Physicians Referred for Competency Evaluations Have Underlying Cognitive Problems? Acad Med. 2009 Aug;84(8):1015–21.
9. Grace ES, Korinek EJ, Weitzel LB, Wentz DK. Physicians Reentering Clinical Practice: Characteristics and Clinical Abilities. J Contin Educ Health Prof. 2011 Winter;31(1):49–55.
10. Grace ES, Korinek EJ, Tran ZV. Characteristics of Physicians Referred for a Competence Assessment: A Comparison of State Medical Board and Hospital Referred Physicians. J Med Regul. 2010;96(3):8–15.
11. Grace ES, Wenghofer EF, Korinek EJ. Predictors of Physician Performance on Competence Assessment: Findings from CPEP, the Center for Personalized Education for Physicians. Acad Med. 2014 Jun;89(6):912–9.
12. Williams BW. The Prevalence and Special Educational Requirements of Dyscompetent Physicians. J Contin Educ Health Prof. 2006 Summer;26(3):173–91.
13. McAuley RG, Paul WM, Morrison GH, Beckett R, Goldsmith CH. Five-Year Results of the Peer Assessment Program of the College of Physicians and Surgeons of Ontario. CMAJ. 1990 Dec 1;143(11):1193–9.
14. Federation of State Medical Boards of the United States. U.S. Medical Regulatory Trends and Actions 2018.
15. Rosner F, Balint JA, Stein RM. Remedial Medical Education. Arch Intern Med. 1994 Feb;154(3):274–9.
16. Williams BW. The Prevalence and Special Educational Requirements of Dyscompetent Physicians. J Contin Educ Health Prof. 2006 Summer;26(3):173–91.
17. Miller GE. The Assessment of Clinical Skills/Competence/Performance. Acad Med. 1990 Sep;65(9 Suppl):S63–7.
18. Humphrey C. Assessment and Remediation for Physicians with Suspected Performance Problems: An International Survey. J Contin Educ Health Prof. 2010 Winter;30(1):26–36.
19. Bourgeois-Law G, Teunissen PW, Regehr G. Remediation in Practicing Physicians: Current and Alternative Conceptualizations. Acad Med. 2018 Nov;93(11):1638–1644.
20. Hanna E, Premi J, Turnbull J. Results of Remedial Continuing Medical Education in Dyscompetent Physicians. Acad Med. 2000 Feb;75(2):174–6.
21. Goulet F, Jacques A, Gagnon R. An Innovative Approach to Remedial Continuing Medical Education, 1992–2002. Acad Med. 2005 Jun;80(6):533–540.
22. Moran JA, Kirk P, Kopelow M. Measuring the Effectiveness of a Pilot Continuing Medical Education Program. Can Fam Physician. 1996 Feb;42:272–276.
23. Cosman BC, Alverson AD, Boal PA, Owens EL, Norcross WA. Assessment and Remedial Clinical Education of Surgeons in California. Arch Surg. 2011 Dec;146(12):1411–5.
24. Lillis S, Takai N, Francis S. Long-Term Outcomes of a Remedial Education Program for Doctors with Clinical Performance Deficits. J Contin Educ Health Prof. 2014 Spring;34(2):96–101.
25. Norton PG, Dunn EV, Beckett R, Faulkner D. Long-Term Follow-Up in the Peer Assessment Program for Nonspecialist Physicians in Ontario, Canada. Jt Comm J Qual Improv. 1998 Jun;24(6):334–341.
26. Goulet F, Gagnon R, Gingras ME. Influence of Remedial Professional Development Programs for Poorly Performing Physicians. J Contin Educ Health Prof. 2007 Winter;27(1):42–48.
27. Norton PG, Ginsburg LS, Dunn E, Beckett R, Faulkner D. Educational Interventions to Improve Practice of Nonspecialty Physicians Who Are Identified in Need by Peer Review. J Contin Educ Health Prof. 2004 Fall;24(4):244–252.
28. Comeau R, Craig C. Does Teaching of Documentation of Shoulder Dystocia Delivery through Simulation Result in Improved Documentation in Real Life? J Obstet Gynaecol Can. 2014 Mar;36(3):258–265.
29. Russo R, Fitzgerald SP, Eveland JD, Fuchs BD, Redmon DP. Improving Physician Clinical Documentation Quality: Evaluating Two Self-Efficacy-Based Training Programs. Health Care Manage Rev. 2013 Jan–Mar;38(1):29–39.
30. Farzandipour M, Meidani Z, Rangraz Jeddi F, et al. A Pilot Study of the Impact of an Educational Intervention Aimed at Improving Medical Record Documentation. J R Coll Physicians Edinb. 2013;43(1):29–34.
31. Tinsley JA. An Educational Intervention to Improve Residents’ Inpatient Charting. Acad Psychiatry. 2004 Summer;28(2):136–9.
32. Lorenzetti DL, Quan H, Lucyk K, et al. Strategies for Improving Physician Documentation in the Emergency Department: A Systematic Review. BMC Emerg Med. 2018 Oct 25;18(1):36. http://doi.org/10.1186/s12873-018-0188-z. Accessed February 2, 2021.
33. Wenghofer E, Boal P, Floyd N, Lee J, Woodard R, Norcross W. Improving Charting Skills of Physicians in Monitored Practice. J Contin Educ Health Prof. 2018 Fall;38(4):244–9.