Objective

The objective of this study was to investigate the association of a clinical documentation quality improvement program using audit–feedback with clinical compliance to indicators of quality chart documentation.

Methods

This was an analysis of differences in adherence to quality indicators of chiropractic record documentation under two audit–feedback interventions (feedback report only vs. feedback report with one-on-one educational consultation) delivered at different campuses. Comparisons among groups were analyzed using analysis of variance (ANOVA), Tukey or Dunnett post hoc tests, and Cohen's d effect size estimates.

Results

There was a significant increase in mean percentage compliance in 2 of 5 compliance areas and 1 of 11 compliance objectives. Campus B demonstrated significantly higher levels of compliance relative to campus A and/or campus C in 5 of 5 compliance areas and 7 of 11 compliance objectives. Across-campus comparisons indicated that the compliance area Review (Non-Medicare) Treatment Plan [F(2,18) = 17.537, p < .001] and the compliance objective Treatment Plan Goals [F(2,26) = 5.653, p < .001] exhibited the greatest practical importance for clinical compliance practice.

Conclusions

Feedback of performance improved compliance to indicators of quality health record documentation, especially when baseline adherence was relatively low. Required educational consultations with clinicians combined with audit–feedback were no more effective at increasing compliance to indicators of quality health record documentation than audit–feedback alone.

Background

Patient health records play a critical role in the management and treatment of patients and serve as important documentation for reimbursement, risk management, quality assurance, performance evaluations, and research.1,2 In light of the recently enacted Patient Protection and Affordable Care Act (PPACA), increased efforts to reduce costs and improve the quality of health care have put a renewed emphasis on the quality of record documentation. The use of standardized quality of care assessment instruments that abstract information from the health record is the most commonly accepted method of measuring the quality of patient care.3 Despite the importance of the health record, there is a paucity of published research evaluating the quality of, and interventions to improve, documentation contained within the chiropractic patient record. In the allied health professions, the overall quality of record documentation has been found to be poor, often underestimating the quality of the care provided.4–10 Interventions to improve several of the elements of medical record documentation in this setting have varied in outcome.11–16

To the authors' knowledge, this is the first report describing the use of an audit with feedback intervention to improve the quality of chiropractic record documentation. The aim of the study was twofold: (1) to assess the effects of a quarterly audit with feedback intervention on adherence to quality indicators of chiropractic record documentation in the outpatient clinics of three chiropractic college campuses and (2) to describe each campus's overall compliance rate over 3 years relative to the other campuses.

Questions

The overarching question of the study is, did adding an individual clinician consultation to the audit–feedback process on one campus produce greater increases in patient record-keeping compliance than audit–feedback alone? More specifically, are there significant differences in the percentage of reported compliance across the academic years 2008 through 2010 with regard to both areas of compliance and their corresponding objectives? Answering this question at a macro level will provide some evidence to assess whether additions to the audit–feedback system were successful in increasing compliance from 2008 to 2010.

Lastly, are there significant differences in the reported percentage of compliance among the 3 campuses with regard to demarcated areas of compliance and objectives? Because the college clinic system under study comprises 3 distinct campuses with outpatient clinics, an assessment of differences among campuses could signal whether quality improvements to the audit–feedback system had a greater impact at any one campus. Knowledge of such differences could spark further investigation of that campus for improvement processes to implement at one or both of the other campuses.

Clinical Integrity Program

The Clinical Integrity Program (CIP) was developed and implemented to ensure statutory and regulatory compliance and to advance the quality of health care delivery and the intern education experience. One objective of this program entails having complete, timely, and correct documentation of clinical and treatment decisions. A formal mechanism of recording and reporting adherence to quality indicators of documentation by trained reviewers who systematically reviewed chiropractic patient records forms the basis of the dataset used in the current study. The Institutional Review Board at Palmer College of Chiropractic designated this study exempt from review.

Data Collection

All health records of patients visiting any one of the campuses' outpatient clinics during the respective academic quarter audit window from 2008 to 2010 were eligible for inclusion in that quarter's record audit procedure. At the conclusion of each academic quarter, a trained reviewer at each campus randomly selected a sample of patient records, stratified by attending clinician, to assess for adherence to quality indicators using an electronic chart assessment instrument. The chart assessment tool included 11 compliance objectives (Figure 1) across 5 compliance areas. Reviewers assigned yes, no, or not applicable ratings to each of the quality indicators based on their trained clinical judgments. The Office of Strategic Development routinely audited the database to ensure validation and quality assurance of the data. Following the audit of selected records each academic quarter, an individualized clinician feedback report with anonymized peer-comparison statistics and performance relative to an 80% benchmark was available for review on the shared drive of the college's computer network. Additionally, each campus chief of staff or dean of clinics received an e-mailed feedback report of the overall performance of each clinic and clinician and was encouraged to provide a multifaceted intervention to improve adherence to quality indicators as necessary. The outpatient clinics at campus A required a one-on-one educational consultation with each clinician regarding his or her performance in addition to the feedback report. These consultations complied with the “Faculty Duties” section outlined in the collective bargaining agreement at campus A and were conducted during the 2009–2010 academic year.
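To make the audit workflow concrete, the following sketch illustrates the kind of stratified sampling and benchmark comparison described above. The record structure, sample size per clinician, and helper names are hypothetical illustrations; the study's actual electronic chart assessment instrument and database are not reproduced here.

```python
import random
from collections import defaultdict

BENCHMARK = 0.80  # compliance benchmark referenced in the feedback reports

def stratified_sample(records, per_clinician=10, seed=42):
    """Randomly select up to `per_clinician` records for each attending clinician."""
    random.seed(seed)
    by_clinician = defaultdict(list)
    for record in records:
        by_clinician[record["clinician"]].append(record)
    sample = []
    for clinician, recs in by_clinician.items():
        sample.extend(random.sample(recs, min(per_clinician, len(recs))))
    return sample

def compliance_rate(ratings):
    """Percentage of 'yes' ratings, ignoring 'not applicable' items."""
    applicable = [r for r in ratings if r in ("yes", "no")]
    return sum(r == "yes" for r in applicable) / len(applicable) if applicable else None

# Example: one audited record rated on the 11 compliance objectives
ratings = ["yes"] * 9 + ["no", "not applicable"]
rate = compliance_rate(ratings)
print(f"Compliance: {rate:.0%}, meets benchmark: {rate >= BENCHMARK}")
```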

Figure 1.

Documentation objectives extracted from record-keeping audit. Tx, treatment; Hx, history; CC, chief complaint; NMS, neuromusculoskeletal; CMT, chiropractic manipulative therapy; Dx, diagnosis.

Variables

The dependent variable used within the current study was the percentage of compliance within either an area of compliance or a corresponding compliance objective. Independent variables consisted of calendar years and college campus.

Data Analysis

Data analysis for the current study was quantitative. In addition to reporting mean percentage data across all variables, analysis of variance (ANOVA) was used to assess differences in the dependent variables across levels of the independent variables. Although the ANOVA procedure is robust to violations of normality and equality of variances, it has little tolerance for violations of the assumption of independence. Given the nature of the random audits conducted within the current study, all observations conform to the assumption of independence. The data were not normally distributed; however, as noted, ANOVA is robust to violations of normality, and outliers were not an issue. Where significant findings emerged, post hoc analyses were used to identify within-variable differences. Levene's statistic was used to assess equality of variances; when variances were unequal, the Brown-Forsythe adjusted F test was interpreted. The Tukey HSD post hoc test was used when variances were assumed equal, and the Dunnett T3 post hoc test was used when variances were assumed unequal across groups.
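As a minimal sketch of this analytic sequence, the example below (hypothetical compliance percentages, SciPy in place of SPSS) runs Levene's test for equality of variances, a one-way ANOVA, and a Tukey HSD follow-up when variances appear equal. The Brown-Forsythe adjusted F test and the Dunnett T3 procedure interpreted by the authors are SPSS options without direct SciPy equivalents, so they are noted only in comments.

```python
from scipy import stats

# Hypothetical quarterly compliance percentages for three campuses
campus_a = [62, 58, 65, 61, 60, 63, 59]
campus_b = [88, 91, 85, 90, 87, 89, 92]
campus_c = [74, 70, 77, 72, 75, 73, 71]

# 1. Assess equality of variances with Levene's statistic.
levene_stat, levene_p = stats.levene(campus_a, campus_b, campus_c)

# 2. One-way ANOVA across campuses. (The authors interpreted the
#    Brown-Forsythe adjusted F when variances were unequal; that
#    adjustment is an SPSS option not shown here.)
f_stat, anova_p = stats.f_oneway(campus_a, campus_b, campus_c)
print(f"Levene p = {levene_p:.3f}, ANOVA F = {f_stat:.3f}, p = {anova_p:.3f}")

# 3. Post hoc comparisons: Tukey HSD when variances are assumed equal.
#    (With unequal variances the authors used the Dunnett T3 test,
#    which SciPy does not implement directly.)
if anova_p < .05 and levene_p >= .05:
    print(stats.tukey_hsd(campus_a, campus_b, campus_c))
```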

Assessment of the magnitude, or size, of a significant difference between groups was made using Cohen's d effect size. Because the calculation of d relies on the pooled standard deviation, the size of the difference between groups is expressed in standard deviation units, typically falling within ±3. In addition to estimates of probability (i.e., p values), effect size estimates describe the magnitude of results and thereby convey the practical importance of the research findings. In the absence of other similar studies to help guide the interpretation of Cohen's d within the current study, the standard interpretation17 of d was used: d ≤ 0.19 indicates little to no effect, 0.20 to 0.49 a small effect, 0.50 to 0.79 a moderate effect, and ≥ 0.80 a large effect. Given that the analysis within the current study was not directional, the authors are primarily concerned with, and report, the absolute value of Cohen's d. Statistical analyses were performed using IBM SPSS Statistics for Windows (Version 19.0, IBM Corp, Armonk, NY).
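For illustration, here is a small worked sketch of Cohen's d as described above, computed from hypothetical group values with the pooled standard deviation in the denominator and interpreted against the standard bands cited in the text.

```python
import math

def cohens_d(group1, group2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    var1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def interpret(d):
    """Standard interpretation bands (Cohen, 1988)."""
    d = abs(d)  # the study reports absolute values of d
    if d < 0.2:
        return "little to no effect"
    if d < 0.5:
        return "small effect"
    if d < 0.8:
        return "moderate effect"
    return "large effect"

# Hypothetical 2008 vs. 2010 compliance percentages for one objective
d = cohens_d([70, 72, 68, 74, 71], [82, 85, 80, 84, 83])
print(f"d = {d:.2f} ({interpret(d)})")
```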

Compliance Area by Calendar Year

Table 1 reports mean percentage data and all statistical analyses for compliance area and objective by calendar year. ANOVA indicated that the compliance areas Documented Prognosis and Treatment Goals (p < .01) and Medicare Documentation Elements (p < .05) were significantly different across the 2008 to 2010 calendar years.

Table 1.

ANOVA Statistics for Compliance Percentages for Compliance Area/Objective by Calendar Year

A post hoc analysis indicated that for the Documented Prognosis and Treatment Goals compliance area, compliance was significantly lower in 2008 when compared to both 2009 (p = .049, d = 0.767) and 2010 (p = .018, d = 0.203). In addition, for the Medicare Documentation Elements compliance area, compliance in 2010 (p = .025, d = 0.500) was significantly higher when compared to compliance in the 2008 calendar year. Cohen's d effect sizes for these two areas indicated small to moderate practical significance, with the first area approaching high practical importance.

The compliance areas Review (Non-Medicare) History, Review (Non-Medicare) Treatment Plan, and Review (Non-Medicare) Exam differed only by chance. Examination of their mean scores across calendar years indicates relatively high and somewhat stable compliance.

Compliance Objective by Calendar Year

Compliance objectives were also assessed by calendar year; Table 1 reports all statistical data and analysis for compliance area and objective by calendar year. Of the 11 objectives, only Treatment Plan Prognosis differed significantly between calendar years (p < .05). Post hoc analysis indicated that the 2010 compliance score was significantly higher than the 2008 compliance score (p = .014, d = 1.122). The Cohen's d effect size indicates high practical importance for this objective.

The compliance objective Recovery/Improvement Expectation approached significance (p = .06) and might have achieved significance with a larger number of total file audits (n). The remaining 7 objectives displayed only chance differences between calendar years. Given that these objectives started with relatively high compliance in 2008, it was unsurprising that none emerged as significantly different.

Compliance Area by Campus

Table 2 reports all statistical data and analysis for compliance area and objective by campus. Significant differences were found between campuses for all 5 compliance areas. For the compliance area Documented Prognosis and Treatment Goals (p < .001), post hoc analysis indicated that campus B's compliance rate was significantly higher than that of both campus A (p < .001, d = 2.142) and campus C (p = .027, d = 0.688). Post hoc analysis for the significant differences within the area Review (Non-Medicare) History (p < .05) indicated that campus B's compliance was significantly higher than campus A's (p = .043, d = 1.185). Cohen's d indicated high practical importance for both of these areas.

Table 2.

ANOVA Statistics for Compliance Percentages for Compliance Area/Objective by Campus

Significant differences were also found for the compliance area Medicare Documentation Elements (p < .001). Post hoc analysis indicated that campus B's compliance was significantly higher than that of campuses A (p < .001, d = 0.887) and C (p = .009, d = 0.625). Similar differences were exhibited for the compliance area Review (Non-Medicare) Treatment Plan (p < .001), wherein compliance for campuses B (p < .001, d = 3.0) and C (p = .003, d = 1.477) was significantly higher than that of campus A. Finally, significant differences were found for the compliance area Review (Non-Medicare) Exam (p < .001): campus B's compliance was significantly greater than that of campuses A (p < .001, d = 1.103) and C (p = .009, d = 1.040). Cohen's d indicated moderate to high practical importance for these areas.

Compliance Objective by Campus

The final area of inquiry concerned differences between campuses on the compliance objectives. Seven of the 11 compliance objectives significantly differed across campuses (see Table 2). Post hoc analysis for the first objective, Treatment Plan Prognosis (p < .01), indicated that campus B's compliance was significantly higher than that of campus A (p = .001, d = 2.002). For the second compliance objective, Treatment Plan Goals (p < .05), post hoc analysis indicated that campus B's compliance was significantly higher than that of campus A (p < .001, d = 2.745). Cohen's d indicated high practical importance for both of these objectives.

Significant differences between campuses were also illustrated by the third compliance objective, Treatment Plan Current (p < .001), wherein campus A's compliance was significantly lower than that of both campus B (p < .001, d = 3.0) and campus C (p = .004, d = 1.477). Similarly, post hoc analysis for compliance objective 4, History with Chief Complaint Review (p < .05), indicated that campus B's compliance was significantly higher than campus A's (p = .043, d = 1.185). Cohen's d indicated high practical importance for these objectives.

Compliance objective 6, Patient Status Assessment (p < .01), was also significantly different across campuses, and post hoc analysis indicated that campus B's compliance was significantly higher than that of campus C (p = .003, d = 1.948). Significant differences were also found for compliance objective 7, Neuromusculoskeletal Condition Documented (p < .01); follow-up analyses indicated that campus B's compliance was significantly higher than campus A's (p = .001, d = 2.50). Lastly, post hoc analysis for compliance objective 10, Demonstrated Subluxation (p < .001), revealed that campus B's compliance was significantly higher than that of both campus A (p = .002, d = 2.212) and campus C (p = .033, d = 1.554). Cohen's d indicated high practical importance for these objectives. All other compliance objectives differed by chance only.

Discussion

Our findings suggest that intentionally designed audit and feedback systems may improve compliance to some indicators of quality health record documentation in a chiropractic educational clinic system over and above the compliance expected merely from the passage of time. Significant differences and larger effect sizes occurred more frequently between campuses than between time periods. Generally, increased compliance was observed for indicators whose baseline adherence was relatively low, suggesting that a formal audit–feedback system may significantly raise compliance when compliance is lower than desired. Although most comparisons across calendar years showed only chance differences, the authors believe the quality assurance and compliance gains observed over time are meaningful: in many cases compliance was already relatively high, and where it was relatively lower, it appears to have increased significantly. Such an increase is noteworthy for any compliance program.

In addition, our study suggests that administratively demanding audit–feedback systems that require clinician consultations, as was the case at campus A, may be less effective than systems in which the onus of quality assurance and compliance falls primarily on clinicians. In light of the findings of Hysong et al that higher functioning facilities use individualized feedback,18 we were surprised that campus B, whose process appears less individualized than campus A's, exhibited both a significantly higher degree of quality assurance and compliance and some of the largest effect sizes. It is also likely that differences in the feedback characteristics at each campus influenced the outcomes of this study. Future research should take these characteristics into consideration when designing intervention studies that include audit with feedback.

Our study has several limitations that must be considered when interpreting these results. First, our findings were drawn from a retrospective analysis of routinely collected data from audits of chart documentation not intended for research purposes, and there are limitations inherent in this methodological approach. For example, the electronic chart assessment instrument used for data abstraction, as well as the lack of rigorous and systematized protocols and guidelines for trained reviewers, may have jeopardized the accuracy and reliability of the collected data. Furthermore, site-specific data abstractors may have introduced reviewer bias into the data. Finally, the small number of sampled charts, drawn from a single clinic system, may have led to type II statistical error and reduced the overall generalizability of the findings of the study.

The purpose of this study was to investigate the effects of an audit–feedback system on documentation compliance. Although the differences found across calendar years were statistically marginal, the noteworthy increases in compliance were encouraging. Additionally, the differences found between the compliance of the three campuses call into question the structure and processes of the audit–feedback systems at those campuses. While each campus's audit–feedback system has advantages and disadvantages, understanding the successes and failures of each system will allow compliance officers to institute quality assurance improvements where needed. Further studies are required.

The authors have no conflicts of interest to declare.

References

1. American Chiropractic Association, Clinical Documentation Committee. Clinical Documentation Manual: Clinical Documentation Essentials for Doctors of Chiropractic. 2nd ed. Arlington, VA: American Chiropractic Foundation Press; 2005:84.
2. OUM Chiropractor Program. Documentation 101. Franklin, TN: OUM Practice Management University; 2010:21.
3. McDonald CJ, Overhage JM, Dexter P, Takesue BY, Dwyer DM. A framework for capturing clinical data sets from computerized sources. Ann Intern Med. 1997;127(8 pt 2):675-682.
4. Luck J, Peabody JW, Dresselhaus TR, Lee M, Glassman P. How well does chart abstraction measure quality? A prospective comparison of standardized patients with the medical record. Am J Med. 2000;108(8):642-649.
5. Abernethy AP, Herndon JE, Wheeler JL, Rowe K, Marcello J, Patwardhan M. Poor documentation prevents adequate assessment of quality metrics in colorectal cancer. J Oncol Pract. 2009;5(4):167-174.
6. Burnett SJ, Deelchand V, Franklin BD, Moorthy K, Vincent C. Missing clinical information in NHS hospital outpatient clinics: prevalence, causes and effects on patient care. BMC Health Serv Res. 2011;11:114.
7. Smith PC, Araya-Guerra R, Bublitz C, et al. Missing clinical information during primary care visits. JAMA. 2005;293(5):565-571.
8. Cox JL, Zitner D, Courtney KD, et al. Undocumented patient information: an impediment to quality of care. Am J Med. 2003;114(3):211-216.
9. Vainiomaki S, Kuusela M, Vainiomaki P, Rautava P. The quality of electronic patient records in Finnish primary healthcare needs to be improved. Scand J Prim Health Care. 2008;26(2):117-122.
10. Soto CM, Kleinman KP, Simon SR. Quality and correlates of medical record documentation in the ambulatory care setting. BMC Health Serv Res. 2002;2(1):22.
11. Tinsley JA. An educational intervention to improve residents' inpatient charting. Acad Psychiatry. 2004;28(2):136-139.
12. Brami J, Doumenc M. Improving general practitioner records in France by a two-round medical audit. J Eval Clin Pract. 2002;8(2):175-181.
13. Glisson JK, Morton ME, Bond AH, Griswold M. Does an education intervention improve physician signature legibility? Pilot study of a prospective chart review. Perspect Health Inf Manag. 2011;8:1e.
14. Opila DA. The impact of feedback to medical housestaff on chart documentation and quality of care in the outpatient setting. J Gen Intern Med. 1997;12(6):352-356.
15. Skelly KS, Bergus GR. Does structured audit and feedback improve the accuracy of residents' CPT E&M coding of clinic visits? Fam Med. 2010;42(9):648-652.
16. Dexter SC, Hayashi D, Tysome JR. The ANKLe score: an audit of otolaryngology emergency clinic record keeping. Ann R Coll Surg Engl. 2008;90(3):231-234.
17. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum; 1988.
18. Hysong SJ, Best RG, Pugh JA. Audit and feedback and clinical practice guideline adherence: making feedback actionable. Implement Sci. 2006;1:9.

Author notes

Nicole Homb is a clinical research fellow, Shayan Sheybani is the clinic affairs operations administrator, Dustin Derby is the senior director for institutional planning and research, and Kurt Wood is the vice chancellor for clinic affairs, all with the Palmer College of Chiropractic. Address correspondence to Nicole Homb, 741 Brady Street, Davenport, IA 52803; DrNicoleHomb@hotmail.com. This article was received September 23, 2013, revised January 30, 2014, and accepted February 1, 2014.