The assessment of healthcare facility quality using business excellence models provides valuable information about performance gaps, which can be used to improve performance. Within the excellence framework, the "result" domain presents the greatest challenge in terms of improvement over time. Using the European and American business excellence models (EFQM and Baldrige, respectively), this review aims to highlight the impact of quality assessment on the improvement of healthcare performance results. A literature search was performed using the PubMed, SCOPUS, and CINAHL databases, following PRISMA guidelines. All articles were evaluated using the Critical Appraisal Skills Programme (CASP) tool. Thematic analysis was conducted following Thomas and Harden's approach, and confidence levels were determined using the GRADE-CERQual method. Nine studies were included. Two main themes emerged: 1) the assessment highlighted improvement in some results; and 2) the assessment highlighted areas that need improvement. The assessments focused most on customer-based results and least on society-based results. Six of the nine included studies did not show improvement in the desired results after a one-time assessment; moreover, no recommendations to improve quality were given to the facilities after the assessments. Unless there is continuity in the assessment process, the desired results may not improve.

Patients continue to expect excellence from their healthcare facilities.[1] Although excellence is often treated as synonymous with quality, the differences between facilities that pursue quality and those that pursue excellence are far-reaching.[2] According to the American Society for Quality, "Excellence is a measure of consistently superior performance that surpasses requirements and expectations without demonstrating significant flaws or waste".[3] Organizational excellence, in turn, is defined as the ongoing effort to establish an internal framework of standards and processes intended to engage and motivate employees to deliver products and services that fulfill customer requirements within business expectations.[3]

To ensure excellence in healthcare, healthcare institutions should continually assess their quality [4] and take the actions necessary to enhance it.[5] According to Furnival et al., quality improvements can be initiated by collecting data and applying appropriate analytical techniques.[6] Evidence of a performance gap encourages decision makers to improve the current situation.[5,7] An assessment provides a detailed review of strengths and weaknesses, activates quality initiatives, empowers employees, and compares current performance with that of competitors.[8] Developing a quality assessment tool using the European Foundation for Quality Management (EFQM) model opens the door to indispensable performance discussions.[9] A self-assessment identifies areas for improvement and highlights defective systems and processes.[10] A facility's compliance with accreditation standards, such as the United States Joint Commission standards, may also be assessed and measured to determine how well it adheres to each system or chapter, whether organizational or patient centered.[11]

For example, performance measurement is included in the Joint Commission chapter standards, which include a section related to the Baldrige excellence model.[11] In addition, the EFQM excellence model is more result-oriented than accreditation: results account for 40.2% of the weighting, whereas processes represent 64.4%. This orientation improves quality by concentrating on outputs and performance.[12] EFQM is internationally recognized for measuring organizational performance, although no single tool can cover everything.[10,13,14]

A systematic review by Furnival et al. found 70 instruments and frameworks for evaluating quality, including the Baldrige Award questionnaire.[13] The European model comprises nine domains, four of which are result domains (society, people, customer, and key performance results) [15]; these are harder to improve than the enablers, which are structures and processes.[14] Yousefinezhadi et al. found that the EFQM excellence model improved service efficiency by reducing length of stay and waiting time and improving patient satisfaction.[16] The impact of these assessments on healthcare performance has not yet been studied using a recognized excellence model. The purpose of this study was to highlight the role of quality assessment using a business excellence-based model in improving healthcare performance.[17] This study focuses on only two business models: the European Foundation for Quality Management (EFQM) model and the American Malcolm Baldrige National Quality Award (also known as the Baldrige model). The research question is, "Does the quality assessment of healthcare services using business excellence models have an impact on improving key performance indicators?"

The 'Population, Intervention, Comparison, Outcome and Time' (PICOT) template was used to map keywords and their alternative terms (Table S1).[18] In the current study, the population (P) refers to healthcare facilities. The intervention (I) is assessment using a business excellence model. The outcome (O) is improved performance; alternative outcome terms were improved quality and improved results of the service performance assessment. No time frame (T) or comparison (C) was specified.

A systematic search was conducted using selected keywords, relevant medical subject headings (MeSH) terms, and suitable alternatives, considering differences in indexation among databases. The inclusion criteria were (1) descriptive and cross-sectional studies published in English over a 10-year period (2011-2021), and (2) studies focused on the standards or criteria of assessment based on awards or certification by a national or international body (Table 1). Furthermore, studies based on an assessment of the quality of services, departments, and facilities in healthcare facilities were included, as were studies that used an excellence model to assess or track changes in performance or outcomes in national or international healthcare facilities. The search was restricted to studies in which the evaluation data, notably the result domain, were measured or stated in numerical values (see Table S3 for details on the key terms and search strategy). Articles without assessment data or those not relevant to healthcare facilities were excluded. Studies were assessed regardless of funding. Three databases (PubMed, Scopus, and CINAHL) were extensively searched via the EBSCOhost platform according to these inclusion and exclusion criteria (as of February 19, 2021). Mendeley was used for reference management and duplicate removal (Table S3). The web-based application Rayyan[19] was used to facilitate an independent blind review of the selected studies.
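The complete key terms and database-specific syntax appear in Table S3. As an illustration only, the sketch below shows how PICOT-derived keyword groups of the kind named above might be combined into a Boolean query string; the terms and grouping are assumptions for illustration, not the exact strategy used in this review.

```python
# Illustrative only: assembles a Boolean search string from PICOT-style
# keyword groups. The actual terms and database syntax used in this
# review are documented in Table S3.
population = ["hospital", "healthcare facility", "health services"]
intervention = ["EFQM", "Baldrige", "business excellence model", "self-assessment"]
outcome = ["performance", "quality improvement", "key performance indicator"]

def or_block(terms):
    """Join synonyms with OR and quote multi-word phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(or_block(group) for group in (population, intervention, outcome))
print(query)
# (hospital OR "healthcare facility" OR ...) AND (EFQM OR Baldrige OR ...) AND (...)
```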

Table 1

Characteristics of the included studies.


Reporting Bias Assessment

Among the included studies, different strategies were used to mitigate the risk of bias associated with the assessment data. The selected criteria included calculations of intra- and inter-rater reliability, Cohen's Kappa (0.86 to 0.97), external validation, Cronbach's alpha coefficient, standard deviation, and average values.
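The kappa values above were reported by the included studies for their own assessment data. As a minimal sketch only, with hypothetical ratings, inter-rater agreement between two assessors scoring the same items could be computed as follows.

```python
# Minimal sketch: Cohen's kappa for two raters scoring the same items.
# The ratings below are hypothetical; the included studies reported
# kappa values between 0.86 and 0.97 for their own assessment data.
from sklearn.metrics import cohen_kappa_score

rater_a = [3, 4, 4, 2, 5, 3, 4, 5, 2, 4]  # criterion scores given by assessor A
rater_b = [3, 4, 5, 2, 5, 3, 4, 5, 2, 4]  # the same items scored by assessor B

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1.0 indicate strong agreement
```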

The confidence level of each theme was determined individually, considering coherence, relevance, adequacy, methodological limitations, and risk of bias. On this basis, findings were classified as having low, medium, or high confidence.

Selection of Studies for Inclusion

A total of 173 articles were retrieved using the search strategy, and 34 duplicate entries were identified and removed using Mendeley; 139 articles were independently screened, and 117 were excluded based on the exclusion criteria (Tables S2 and S3). During the initial screening, systematic reviews and expert commentaries (n = 21), commentaries (n = 7), and studies published in languages other than English (n = 4) were excluded.

Additionally, studies based on a service-specific excellence model (n = 14), applications of the excellence model to sectors other than healthcare (e.g., education; n = 21), and evaluations of professional staff (n = 26) were excluded. Following independent screening, the search results were subjected to a blind review using the Rayyan web server.[19] Figure 1 presents the PRISMA flowchart illustrating this process, which resulted in nine studies being included in the analysis.[20-28]
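As a simple arithmetic check using only the counts reported above (the number carried forward to full-text assessment is obtained by subtraction and is not stated explicitly in the text; the full flow is shown in Figure 1):

```python
# Arithmetic check of the screening flow, using only the counts reported
# in the text; see Figure 1 for the full PRISMA diagram.
identified = 173
duplicates_removed = 34
screened = identified - duplicates_removed            # 139 records screened
excluded_at_screening = 117
carried_forward = screened - excluded_at_screening    # 22 records (by subtraction)
included = 9                                          # studies included in the review

print(f"screened: {screened}, carried forward: {carried_forward}, included: {included}")
```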

Figure 1

PRISMA flow diagram depicting steps of literature search and selection for included studies.


The excellence model yields a documented score for the desired results (the 'result domain'), measured either once or multiple times according to the number of quality assessments performed within the scope of the intervention. In the included studies, a variety of quality metrics related to society, customers, and performance were reported either collectively or specifically. For further analysis, text was extracted and tabulated under two themes: (1) improving some aspects of the results, and (2) assessing the need for improvement. The overall results (outcome or criteria) were reported as numerical data, calculated as the average of the four result criteria.[21-26] For Berssaneti et al., the numerical data were converted to percentages.[20] Shields et al. analyzed and tabulated only customer results.[27]
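As a minimal sketch of these two calculations (the scores, scale, and maximum points below are hypothetical placeholders, assuming a points-based scale, and are not taken from any included study):

```python
# Minimal sketch: overall 'result domain' value as the average of the four
# result criteria, plus conversion of a raw score to a percentage.
# All figures below are hypothetical placeholders.
result_scores = {
    "customer results": 62.0,
    "people results": 48.0,
    "society results": 41.0,
    "key performance results": 55.0,
}

overall_result = sum(result_scores.values()) / len(result_scores)

raw_score, max_points = 325, 500   # hypothetical raw score and scale maximum
as_percentage = 100 * raw_score / max_points

print(f"Overall result (mean of four criteria): {overall_result:.1f}")
print(f"Raw score as a percentage of the maximum: {as_percentage:.1f}%")
```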

When the assessment was performed more than once,[22,26,28] data were recorded separately for each assessment. Pattanaik and Aurolipy reported the highest scores among the facilities but did not mention any strategies for managing the risk of bias.[28]

Appraisal of Evidence

The Critical Appraisal Skills Programme (CASP) framework[29] was used to appraise the qualitative studies. Pattanaik and Aurolipy's study was appraised using the critical appraisal checklist for analytical cross-sectional studies.[28,30] Table 2 presents the criteria for the numerical assessment and quality level. Three articles were of high quality, five were of moderate quality, and one was of low quality.
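Table 2 lists the criteria actually used; purely as a hedged illustration (the point values and banding thresholds below are assumptions, not the cut-offs applied in this review), checklist answers can be tallied into a score and mapped to a quality level.

```python
# Illustrative sketch: tallying CASP checklist answers into a quality level.
# The point values and thresholds are assumed for illustration; the criteria
# actually used for the numerical assessment appear in Table 2.
def casp_quality(answers):
    """answers: list of 'yes', 'cannot tell', or 'no' for the CASP questions."""
    points = {"yes": 1.0, "cannot tell": 0.5, "no": 0.0}
    score = sum(points[a] for a in answers)
    fraction = score / len(answers)
    if fraction >= 0.8:
        return "high"
    if fraction >= 0.6:
        return "moderate"
    return "low"

example = ["yes", "yes", "yes", "cannot tell", "yes",
           "no", "yes", "yes", "yes", "cannot tell"]
print(casp_quality(example))  # -> 'high' under these assumed thresholds
```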

Table 2

Comparative evaluation of included articles using the Critical Appraisal Skills Programme (CASP) tool Questions.


As the EFQM and Baldrige frameworks do not require data saturation when collecting information, the CASP criterion on data saturation was not applied.[6,15] The CASP tool also evaluates the researcher-participant relationship. Nasab et al. translated the EFQM questionnaire and then modified four questions based on their experience.[25] Dehnavieh et al. and Mishra et al. adjusted the questionnaire to incorporate the facility environment.[21,24] Favaretti et al. used a version modified for the Italian healthcare system.[22] Furthermore, Shields et al. added one open-ended question, and Berssaneti et al. added a 'not applicable' scoring option.[20,27]

The evaluation of ethics in four articles was challenging because these studies did not mention ethical approval or confidentiality.[21,22,24,26] Two of these articles[22,24] were published by Emerald, which is a partner of the Committee on Publication Ethics (COPE). One study indicated ethical committee approval in the cover letter, but the article did not contain a satisfactory statement regarding approval or exemption.[26] Similarly, the study by Dehnavieh et al. did not mention approval by an ethics committee.[21] Due to incomplete or missing ethical statements in these four papers, this element could not be evaluated.[21,22,24,26]

Confidence levels were determined based on coherence, relevance, adequacy, and methodological limitations in each study. Additionally, bias was assessed for each subtheme.

Data Extraction and Thematic Analysis

This review used meticulous thematic analysis to produce findings from the collective results of the nine included articles. There is a pre-existing concept of an association between quality assessment using powerful tools, such as the excellence model, and performance improvement.[31] Thus, the comparison between categories of performance results was made based on the collective results of the four result categories of the EFQM model [15,32] or on one category of the Baldrige model. The required elements of the 'enhancing transparency in reporting the synthesis of qualitative research' (ENTREQ) statement were considered.[31] Table 2 illustrates the characteristics of the selected studies, which were tabulated using an inductive approach.[33]

Key themes were developed using Thomas and Harden's methodology[34] and tabulated from the results (including available numerical data), discussions, and conclusions of the included qualitative studies (Table S4). The confidence level for each subtheme was determined using the GRADE-CERQual approach.[35] Pattanaik and Aurolipy's study was excluded from thematic analysis due to its quantitative and descriptive design.[28]
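GRADE-CERQual is a structured judgement across four components rather than an algorithm; as a minimal sketch only (the downgrading rule below is an assumption for illustration, not the procedure mandated by CERQual), the component assessments can be summarized into an overall confidence level roughly as follows.

```python
# Illustrative sketch only: GRADE-CERQual is a structured judgement across
# four components, not a formula; this simply downgrades a starting level
# of 'high' once for each component with serious concerns.
LEVELS = ["very low", "low", "moderate", "high"]

def cerqual_confidence(concerns):
    """concerns: dict of component -> True if serious concerns were identified."""
    level = len(LEVELS) - 1  # start at 'high'
    for component in ("methodological limitations", "coherence",
                      "adequacy", "relevance"):
        if concerns.get(component, False):
            level = max(level - 1, 0)
    return LEVELS[level]

subtheme = {"methodological limitations": True, "coherence": False,
            "adequacy": True, "relevance": False}
print(cerqual_confidence(subtheme))  # -> 'low' under this assumed rule
```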

Evidence Synthesis

This review includes nine articles from seven countries: Brazil, Iran (n = 2), Italy, Portugal, India (n = 2), Spain, and the United States (Table 2). The total sample size (n = 1045) was calculated from the number of questionnaire respondents in all studies except those of Nasab et al. and Berssaneti et al.[20,25] Nasab et al. considered the sample to be five emergency departments, and Berssaneti et al. did not report the sample size.

In all the included articles, the study population was either the staff or selected categories, except for Nasab et al., in which the study population was the selected emergency departments.[25] The study designs included a hybrid approach (literature review and multiple-case study), cross-sectional study, methodological triangulation, cross-sectional descriptive study, and case studies. The design could not be determined for the studies by Favaretti et al. and Shields et al., and Pattanaik and Aurolipy used a descriptive, quantitative approach.[22,27,28]

The eight qualitative studies followed an interpretivist (naturalistic) paradigm, except for Nasab et al., which employed a constructivist approach in which the researcher guided the participants and helped collect data.[25]

Shields et al.[27] used the Baldrige model, and all other studies used the EFQM model. All studies adopted the ready-made tool, although some modified it for language or setting (see the Appraisal of Evidence section). Chinda modified the EFQM questionnaire to assess occupational safety and collected data by distributing the questionnaire.[36] Pattanaik and Aurolipy distributed an electronic version of the scale.[28] The others used interviews. Mishra et al. and Nasab et al. conducted document reviews and interviews.[24,25] Favaretti et al. organized a training workshop along with interviews.[22]

Evaluation of Risk of Bias

The risk of bias was analyzed for each qualitative theme. The findings of Dehnavieh et al. and Favaretti et al. appeared biased toward their funders.[21,22] The detailed results of Shields et al. were not accessible for review, introducing a risk of publication bias.[27] Finally, Mishra et al. placed different emphasis on weighting the criteria of excellence (Table S4).[24]

Theme 1 – Assessment highlighted improvement in some dimensions

The first key theme comprises four subthemes. The first subtheme to emerge was that results associated with employee-related domains do not usually improve.[22,23,26] Different but non-conflicting primary results were found, suggesting mixed evidence. Both Favaretti et al. and Rodríguez-González et al. reported improvements in their results,[22,26] although Rodríguez-González et al. noted that the staff results showed the smallest change over time.[26] These findings align with those of Matthies-Baraibar et al., who observed higher employee satisfaction in EFQM award-winning organizations than in non-award winners over three successive periods.[37] The authors noted that the percentage increase in satisfaction decreased over time; these facilities are mature in their associated processes, and a degree of relaxation is acquired over time. For example, the status of working conditions was significantly different in 2007-2008 (32%), whereas there was no significant difference in the status of 'technical and material resources' in 2009-2010 (44%). Rodríguez-González et al. reported some improvements, but nurses did not find them satisfactory.[26] These results are in line with those of a previous study conducted in the Democratic Republic of the Congo, which highlighted that although staff satisfaction increased from 11% in 2005 to 57% in 2010, the nurses' dissatisfaction rating (3.0) was the worst compared with that of the biologists (3.312) in 2010.[38] The people results exceeded those of the benchmarked European Quality Association (EQA) but, at the same time, such domains were urged to improve to raise overall quality.[28] However, Marques et al. highlighted that program coordinators did not measure any aspects associated with staff motivation or commitment.[23]

The second subtheme was that society-based results showed the least improvement. Although an improvement was observed in three studies, this change was the smallest compared with the other results.[22,23,25] This subtheme is consistent with the findings of Chinda,[36] who assessed the safety function based on the EFQM model and found that the society result criterion scored the lowest (45%). Society activities include social image, social cooperation, social cost reduction, and public safety. There is moderate confidence in this subtheme (Table S4b).

The third subtheme concerned the increasing focus on improving customer results. There was a strong emphasis on improving customer results in four studies.[21-23,27] The primary findings include complaint measures and patient satisfaction with the quality of service, such as access to care. Patient satisfaction improved over time.[22,23,27] Yousefinezhadi et al. found that implementation of the EFQM framework improved patient satisfaction.[16] Improvement in patient complaint indicators was reported in two studies.[22,23] Marques et al. and Dehnavieh et al. found that the customer domain is an area of strength in healthcare facilities.[21,23] In addition, in Chinda's assessment of safety using the EFQM model, the customer result scored 56%,[36] an aggregate of factors and attributes such as customer satisfaction, perception, and expectation. The weight assigned to the customer results (14.5) was much higher than that of the people and society results, yielding higher final customer results.[24] This subtheme has consistent evidence, high confidence, and additional supportive data (Table S4b).

The fourth subtheme was that results associated with key performance indicators do not usually show improvement. Improvement in key performance indicators, such as efficiency or finances, was limited and, in some cases, contradictory,[22,23,26] and there was an apparent inconsistency within this subtheme. When applying the Baldrige model, performance was based on both non-financial and financial measures.[2] Financial performance was higher than non-financial performance in India, and the opposite was observed in Iran. Positive progress was reported in terms of efficiency, productivity, and medication safety.[26] Griffith studied the performance of 14 highly reliable organizations recognized as Baldrige award recipients and found that their performance was above average for safety and infection rates but below average for financial performance and readmission rates.[39] However, Marques et al. mentioned inappropriate reporting of efficiency and finance.[23] Spohn highlighted that the overall business performance result for education facilities based on the Baldrige model was 36%, making it the area most in need of improvement.[8] The confidence level for this subtheme is moderate. The key performance result was the lowest in Marques et al. and the highest in Mishra et al., according to the weight given by experts because of its importance (Table S5).[23,24]
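Because the final figures depend on the weights assigned to each result criterion, a small sketch may help illustrate the calculation; only the customer-result weight (14.5) mentioned above is taken from the text, and all other weights and scores are hypothetical placeholders.

```python
# Sketch of a weighted result calculation. Only the customer-result weight
# (14.5) comes from the text; the remaining weights and all scores are
# hypothetical placeholders showing how weighting shifts the final figures.
scores = {"customer": 70.0, "people": 65.0, "society": 60.0, "key performance": 68.0}
weights = {"customer": 14.5, "people": 9.0, "society": 6.0, "key performance": 15.0}

weighted = {k: scores[k] * weights[k] / 100 for k in scores}
total = sum(weighted.values())

for criterion, value in sorted(weighted.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{criterion:>15}: {value:.2f}")
print(f"{'total':>15}: {total:.2f}")
```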

Theme 2 – Assessment highlights the need for improvement

The second key theme comprises two subthemes. The first is that the assessment identified general areas for improvement and supportive actions but did not identify specific areas of improvement.[22-24] Specifically, Marques et al. noted that the assessment data would help in the planning phase of the following year.[23] According to Marques et al., a self-assessment activity elucidates areas for improvement and the underlying systems and processes that need to be changed to improve an organization's performance.[10] Moreover, Dehnavieh et al. added that the assessment also highlighted strengths.[21] Six of the nine studies (66.6%) did not show any evidence of improvement in this subtheme (Table S4a). This subtheme has consistent evidence and moderate confidence.

The second subtheme was that feedback from the assessment provides a 'prescription' of needed actions. Ultimately, the assessment of quality produces workable recommendations for quality improvement, such as strategies to upgrade specific functions or guidance for each quality dimension.[21,23,27] Shields et al. identified actions and indicators that require improvement.[27] Marques et al. specified the domain requiring improvement (i.e., society), and Dehnavieh et al. pointed to the financial system.[21,23] Griffith specified that the areas needing improvement to ensure high reliability in organizations adopting the excellence model are finance and readmission indicators.[39] Findings in this subtheme are characterized by low confidence. Variation in the scope of assessment, nature of the setting, and identified gaps demonstrates that this subtheme has mixed evidence.

Additional Findings

The impact on improving performance can only be observed when the assessment is repeated.[22,26,27] In six of the nine included studies (66.6%), the assessment was performed only once and there was no evidence of performance improvement, regardless of the type of assessment or model. Thus, continuity over a long period is the key to improving performance.

Adoption of business excellence models for quality improvement was workable at the level of governance[22,28] and at the country level, allowing for comparison with foreign countries.[20,21] Such assessments can support broad application in healthcare systems.

This review highlights the potential impact of adopting a business model of excellence for quality assessment purposes in the healthcare field.[40] Business excellence models can also guide service leaders at the government, facility, or departmental level and practitioners of quality management.

The assessment of quality using business excellence models highlights key performance indicator domains that can be improved, albeit with variations in confidence and quality of evidence. Such assessments indicate that the greatest focus is placed on customer-based results and the least on society-based results. Focusing on customers and key performance indicators does not consistently translate into improvement. Furthermore, no recommendations or 'prescriptions' to improve performance were given to the facilities at the end of the assessments. There is inadequate evidence to support the idea that performance can be improved by a one-time quality assessment. Repeated quality assessment is key to achieving the desired performance improvement, regardless of the result domain. Business excellence models can be applied for quality assessment in different healthcare settings, ranging from a single department to multiple facilities.

This study has some limitations. First, this systematic review included only the European and American business excellence models (EFQM and Baldrige); other countries may use different models. Only one of the included studies followed the American model, and the rest applied the European model. There was no evidence supporting adoption of the excellence model in Africa or the Middle East; however, adopting the Saudi national model of organizational excellence, the 'King AbdulAziz Quality Award',[41] in healthcare could be considered. Second, although this review focuses on the 'result' domain, it was challenging to find studies that include actual data on performance metrics for the other domains of excellence. A related limitation is the lack of published data on actual assessment results from award-winning facilities. Third, the assessment may be performed only once if the facility lacks a policy for applying the model or submitting for awards. Fourth, the quality level of six of the nine studies was medium or low, which reduced the confidence level of the findings.

Evidence-based recommendations from this systematic review include:

  • Expand the quality audit policy to utilize recognized excellence models as quality assessment tools (i.e., EFQM or Baldrige).

  • These models can be used for quality assessment in various healthcare systems or departments and in low- or high-resource settings.

  • Ensure continuity when adopting the excellence model to monitor changes in each result over time by assigning well-determined objectives and monitoring associated key performance indicators.

  • In addition to patient-focused results, these models can support quality assessment associated with society-based and performance-based results.

  • Act on the assessment results to improve the identified areas, either as a function or dimension of quality, by continuously addressing the findings and sustaining the achievements.

  • The local context and policies play a crucial role in determining the applicability of the final recommendation.

Future research should focus on presentation of evidence of improvement along with the assessment scores and results, including detailed data analysis of performance results from at least two assessments, specific value-additions in terms of quality improvement activities, and minimizing the risk of bias in handling and publishing results of the performance assessment.

The author acknowledges support from Ms. Michelle Melvin (Western Care Association, Mayo, Ireland) and Drs. Patricia Kennedy and Dymphna O'Connell (St. Angela's College, Sligo, Ireland).

1. Berwick DM. What "patient-centered" should mean: confessions of an extremist. Health Aff. 2009;28:w555-65.
2. Gorji AM, Farooquie JA. A comparative study of total quality management of health care system in India and Iran. BMC Res Notes. 2011;4:566.
3. What is Organizational Excellence? Organizational Excellence Model.
4. Access to Justice for Business and Inclusive Growth in Latvia. OECD; 2018.
5. Bassett S, Westmore K. Systems and processes that ensure high quality care. Nurs Manag. 2012;19:18-20.
6. How Baldrige works. NIST.
7. Madden D. Building a culture of patient safety: report of the commission on patient safety and quality assurance. 2008. Department of Health and Children. 2018;3.
8. Spohn R. The self-assessment process and impacts on the health information management program performance: a case study. Perspect Health Inf Manag. 2015;12:1e.
9. Sampietro-Colom L, Lach K, Pasternack I, et al. Guiding principles for good practices in hospital-based health technology assessment units. Int J Technol Assess Health Care. 2015;31:457-465.
10. Marques AI, Santos L, Soares P, et al. A proposed adaptation of the European Foundation for Quality Management Excellence Model to physical activity programmes for the elderly - development of a quality self-assessment tool using a modified Delphi process. Int J Behav Nutr Phys Act. 2011;8:104.
11. Comparison Between Joint Commission Standards, Malcolm Baldrige National Quality Award Criteria, and Magnet Recognition Program Components. The Joint Commission. 2013:1-14.
12. Malekzadeh R, Mahmoodi G, Abedi G. A comparison of three models of hospital performance assessment using IPOCC approach. Ethiop J Health Sci. 2019;29:543-550.
13. Furnival J, Boaden R, Walshe K. Conceptualizing and assessing improvement capability: a review. Int J Qual Health Care. 2017;29:604-611.
14. Abdallah A. Implementing quality initiatives in healthcare organizations: drivers and challenges. Int J Health Care Qual Assur. 2014;27:166-181.
15. Organisational change management. Accessed May 2, 2022. efqm.org/index.php/efqm-model-2013/
16. Yousefinezhadi T, Mohamadi E, Safari Palangi H, Akbari Sari A. The effect of ISO 9001 and the EFQM model on improving hospital performance: a systematic review. Iran Red Crescent Med J. 2015;17:e23010.
17. Dobbins M. Rapid review guidebook. Natl Collab Cent Method Tools. 2017;13:25.
18. Riva JJ, Malik KMP, Burnie SJ, et al. What is your research question? An introduction to the PICOT format for clinicians. J Can Chiropr Assoc. 2012;56:167-171.
19. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5:210.
20. Berssaneti FT, Saut AM, Barakat MF, Calarge FA. Is there any link between accreditation programs and the models of organizational excellence? Rev Esc Enferm USP. 2016;50:650-657.
21. Dehnavieh R, Ebrahimipour H, Hekmat SN, et al. EFQM-based self-assessment of quality management in hospitals affiliated to Kerman University of Medical Sciences. Int J Hosp Res. 2012;1:57-63.
22. Favaretti C, De Pieri P, Torri E, et al. An EFQM excellence model for integrated healthcare governance. Int J Health Care Qual Assur. 2015;28:156-172.
23. Marques AI, Rosa MJ, Soares P, et al. Evaluation of physical activity programmes for elderly people - a descriptive study using the EFQM criteria. BMC Public Health. 2011;11:123.
24. Mishra V, Samuel C, Sharma SK. Supply chain partnership assessment of a diabetes clinic. Int J Health Care Qual Assur. 2018;31:646-658.
25. Nasab MH, Mohaghegh B, Khalesi N, Jaafaripooyan E. Parallel quality assessment of emergency departments by European Foundation for Quality Management model and Iranian national program for hospital evaluation. Iran J Public Health. 2013;42:610-619.
26. Rodríguez-González CG, Sarobe-González C, Durán-García ME, et al. Use of the EFQM excellence model to improve hospital pharmacy performance. Res Social Adm Pharm. 2020;16:710-716.
27. Shields JA, Jennings JL. Using the Malcolm Baldrige "are we making progress" survey for organizational self-assessment and performance improvement. J Healthc Qual. 2013;35:5-15.
28. Pattanaik B, Aurolipy. Quality assessment using EFQM model for overall excellence of Indian health care sector. Indian J Public Health Res Develop. 2020;11:822-825.
29. Brice R. CASP checklists. CASP - Critical Appraisal Skills Programme. Published October 12, 2020. Accessed May 2, 2022. casp-uk.net/casp-tools-checklists
30. Moola S, Munn Z, Tufanaru C, et al. Critical appraisal checklist for analytical cross-sectional studies. The Joanna Briggs Institute; 2017.
31. Tong A, Flemming K, McInnes E, et al. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol. 2012;12:181.
32. Gemoets P. EFQM transition guide: how to upgrade to the EFQM Excellence Model 2010. EFQM; 2009.
33. Brunton G, Harden A, Oakley A, Brunton G. Evidence for policy and practice information and co-ordinating centre.
34. Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8:45.
35. Lewin S, Booth A, Glenton C, et al. Applying GRADE-CERQual to qualitative evidence synthesis findings: introduction to the series. Implement Sci. 2018;13(Suppl 1):2.
36. Chinda T. A safety assessment approach using safety enablers and results. Int J Occup Saf Ergon. 2012;18:343-361.
37. Matthies-Baraibar C, Arcelay-Salazar A, Cantero-González D, et al. Is organizational progress in the EFQM model related to employee satisfaction? BMC Health Serv Res. 2014;14:468.
38. Mundongo TH, Ditend YG, VanCaillie D, Malonga KF. The assessment of job satisfaction for the healthcare providers in university clinics of Lubumbashi, Democratic Republic of Congo. Pan Afr Med J. 2014;19:265.
39. Griffith JR. Understanding high-reliability organizations: are Baldrige recipients models? J Healthc Manag. 2015;60:44-61.
40. Barends E, Plum K, Mawson A. The added value of rapid evidence assessments for managers and organizations. In Search of Evidence. Published online 2015.
41. Excellence model.

Source of support: None. Conflict of interest: None.

This work is published under a CC-BY-NC-ND 4.0 International License.