Context.—

Little is known regarding the reporting quality of meta-analyses in diagnostic pathology.

Objective.—

To compare reporting quality of meta-analyses in diagnostic pathology and medicine and to examine factors associated with reporting quality of diagnostic pathology meta-analyses.

Design.—

Meta-analyses were identified through PubMed searches of 12 major diagnostic pathology journals (all available years) and 4 major medicine journals (2006 and 2011). Reporting quality was evaluated using the 27-item checklist of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement published in 2009. A higher PRISMA score indicates higher reporting quality.

Results.—

Forty-one diagnostic pathology meta-analyses and 118 medicine meta-analyses were included. Overall, the reporting quality of meta-analyses in diagnostic pathology was lower than that in medicine (median [interquartile range] = 22 [15, 25] versus 27 [23, 28], P < .001). Compared with medicine meta-analyses, diagnostic pathology meta-analyses were less likely to report 23 of the 27 items (85.2%) on the PRISMA checklist, but more likely to report the data items. Higher reporting quality of diagnostic pathology meta-analyses was associated with more recent publication year (later than 2009 versus 2009 or earlier, P = .002) and non–North American first authors (versus North American, P = .001), but not with journal publisher's location (P = .11). Interestingly, reporting quality was not associated with the adjusted citation ratio for meta-analyses in either diagnostic pathology or medicine (P = .40 and P = .09, respectively).

Conclusions.—

Meta-analyses in diagnostic pathology had lower reporting quality than those in medicine. The reporting quality of diagnostic pathology meta-analyses was linked to publication year and first author's location, but not to journal publisher's location or the articles' adjusted citation ratios. More research and education on meta-analysis methodology may improve the reporting quality of diagnostic pathology meta-analyses.

Systematic reviews and meta-analyses have become increasingly important in health care research.1–3 A systematic review is a review of a clearly formulated question that uses systematic and explicit methods to identify, select, and critically appraise relevant research and to collect and analyze data from the included studies.4 Meta-analyses differ from systematic reviews in their use of statistical techniques to analyze the included studies.4 Clinical researchers and clinicians often rely on meta-analyses to become familiar with current knowledge, and its gaps, regarding a specific question, because meta-analyses summarize available evidence by combining data from a series of well-qualified primary studies and hence increase the overall sample size and statistical power. They are therefore expected to provide more accurate and reliable evidence than any individual study alone.5,6

However, the potential strengths of meta-analyses depend heavily on their reporting quality. Meta-analyses should be reported fully and transparently to allow readers to assess the strengths and weaknesses of the included studies and of the analysis itself. Inadequate reporting of key information limits researchers' and clinicians' ability to critically appraise and independently interpret meta-analysis results, and therefore diminishes their clinical value.7,8 To improve the clarity, completeness, and transparency of meta-analysis reporting, many works have focused on reporting quality and reporting guidelines since as early as 1994.7,9–13 Most recently, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement was developed and published in 2009 by an international group4 and has been endorsed by numerous medical journals.14 The PRISMA checklist includes 27 items deemed essential for transparent reporting of systematic reviews and meta-analyses and is used as an indicator of their reporting quality.4,15

Although meta-analyses have been increasingly used to summarize evidence and update readers on specific topics, their reporting quality varies considerably.7,9 The reporting quality of meta-analyses in various areas remains suboptimal despite some improvement associated with the publication of the PRISMA statement.16–18 The reporting quality of meta-analyses in diagnostic pathology is largely unknown, probably in part because of the difficulty in conducting, and the underutilization of, meta-analyses in diagnostic pathology.19–21 Therefore, in this study, we quantitatively compared the reporting quality of meta-analyses in diagnostic pathology and medicine based on the PRISMA checklist and examined factors potentially associated with the reporting quality of diagnostic pathology meta-analyses. The goal of the study is to provide information that may help improve the reporting quality of meta-analyses in diagnostic pathology.

MATERIALS AND METHODS

As described previously,19  167 meta-analysis articles were identified from 12 major diagnostic pathology journals and 4 major medicine journals through a literature search in PubMed. The searched journals included American Journal of Clinical Pathology, American Journal of Surgical Pathology, Modern Pathology, Human Pathology, Laboratory Investigation, Archives of Pathology & Laboratory Medicine, International Journal of Clinical and Experimental Pathology, Journal of Clinical Pathology, Pathology, Histopathology, Virchows Archiv, Histology and Histopathology, New England Journal of Medicine, JAMA, Lancet, and BMJ. According to the publisher locations, 7 of these 16 journals were classified as non–North American journals. These 167 meta-analyses were also categorized into those by North American authors and by non–North American authors according to their first author's affiliation.19 

We assessed the quality of each meta-analysis article by scoring it against the 27-item checklist of the PRISMA statement. According to the PRISMA checklist,4 the 27 items fall into 7 domains: title; abstract; introduction (rationale, objectives); methods (protocol and registration, eligibility criteria, information sources, search, study selection, data collection process, data items, risk of bias in individual studies, summary measures, synthesis of results, risk of bias across studies, additional analyses); results (study selection, study characteristics, risk of bias within studies, results of individual studies, synthesis of results, risk of bias across studies, additional analyses); discussion (summary of evidence, limitations, conclusions); and funding.4,15 Item 2, abstract, was given 2 points if data sources, study eligibility criteria, and study appraisal and synthesis methods were all presented in the abstract in addition to objectives, results, and conclusions; 1 point if 1 or 2 of these 3 elements were presented; and 0 points if none of the 3 was presented. We subdivided item 5, protocol and registration, into 3 subitems: whether a review protocol existed (1 or 0 points); if so, whether and where it could be accessed (1 or 0 points); and if accessible, whether registration information was included (1 or 0 points). Item 14, synthesis of results in the methods domain, was given 2 points if measures of consistency for each meta-analysis were described in addition to other methods, 1 point otherwise, and 0 points if no methods were mentioned. Item 21, synthesis of results in the results domain, was given 2 points if both confidence intervals and measures of consistency were presented in addition to other results, 1 point otherwise, and 0 points if neither was presented. All other items were given 1 or 0 points based on the presence or absence of the respective item. The final PRISMA score of each publication was the sum of the points for these 27 items, ranging from 0 to 32. The PRISMA scores of the articles were used to assess their reporting quality.
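For illustration only, the scoring scheme can be expressed as a short computation. The following Python sketch (hypothetical function and variable names; not the instrument actually used in the study) tallies a PRISMA score under the point caps described above: items 2, 14, and 21 are capped at 2 points, item 5 (with its 3 subitems) at 3 points, and all other items at 1 point, for a 32-point maximum.

```python
# Minimal sketch of the scoring scheme described above; illustrative only.
def prisma_score(item_points: dict) -> int:
    """Sum the per-item points after clipping each item to its maximum."""
    caps = {item: 1 for item in range(1, 28)}   # default cap: 1 point per item
    caps.update({2: 2, 5: 3, 14: 2, 21: 2})     # multi-point items
    return sum(min(item_points.get(item, 0), cap) for item, cap in caps.items())

# A hypothetical article reporting every item fully reaches the 32-point maximum.
full_report = {item: 3 if item == 5 else 2 if item in (2, 14, 21) else 1
               for item in range(1, 28)}
assert prisma_score(full_report) == 32
```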

Descriptive statistics (frequencies and percentages) were used to describe adherence to each item of the PRISMA checklist. Wilcoxon rank sum tests were performed to compare PRISMA scores between groups. Pearson χ2 tests were performed to examine potential associations between categorical variables. Spearman correlation coefficients were calculated to test the association between the PRISMA score and the adjusted citation ratio (the citations received by a meta-analysis article divided by the mean citations of all original research, review, and meta-analysis articles combined for the same journal in the same year).19 The data were analyzed using SAS 9.4 for Windows (SAS Institute, Cary, North Carolina). A 2-sided P value < .05 was considered statistically significant.
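As a rough guide to the analyses described above, the following sketch shows approximately equivalent tests in Python with SciPy; the study itself was analyzed in SAS 9.4, and all data and variable names below are illustrative.

```python
# Illustrative SciPy equivalents of the analyses described above, on made-up data:
# Wilcoxon rank sum (Mann-Whitney U) test for PRISMA scores, Pearson chi-square
# test for a categorical association, and Spearman correlation between PRISMA
# scores and adjusted citation ratios.
from scipy import stats

pathology_scores = [22, 15, 25, 24, 16]           # hypothetical PRISMA scores
medicine_scores = [27, 23, 28, 26, 29, 27]

# Wilcoxon rank sum test comparing PRISMA scores between two groups
_, p_rank_sum = stats.mannwhitneyu(pathology_scores, medicine_scores,
                                   alternative="two-sided")

# Pearson chi-square test on a 2 x 2 table, eg, structured abstract (yes/no)
# by first author's location (North American vs non-North American)
table = [[51, 13], [86, 8]]
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)

# Adjusted citation ratio: an article's citations divided by the mean citations
# of all original research, review, and meta-analysis articles in the same
# journal and year, then correlated with PRISMA scores (Spearman).
citations = [40, 5, 12, 7, 20]
journal_year_means = [25.0, 10.0, 15.0, 14.0, 22.0]
adjusted_ratios = [c / m for c, m in zip(citations, journal_year_means)]
rho, p_spearman = stats.spearmanr(pathology_scores, adjusted_ratios)
```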

RESULTS

Forty-one diagnostic pathology meta-analyses were identified and included in the study. Among the 126 identified medicine articles, 3 were original research articles that merely referred to meta-analyses (2.4%) and 5 were systematic reviews without meta-analyses (4.0%); therefore, 118 medicine meta-analyses were analyzed, including 49 articles (41.5%) published in 2006 and 69 articles (58.5%) published in 2011.

Compared with medicine meta-analyses, diagnostic pathology meta-analyses were less likely to report 23 of the 27 items (85.2%) of the PRISMA checklist, including the title domain (item 1), abstract domain (item 2), methods domain (items 5–10 and 12–16), results domain (items 17–25), and funding domain (item 27). For items 2, 14, and 21, a smaller percentage of diagnostic pathology meta-analyses achieved the full score of 2 points than medicine meta-analyses. However, diagnostic pathology meta-analyses had better adherence to data items (item 11) in the methods domain than medicine meta-analyses (36 of 41; 88%; versus 85 of 118; 72%; P = .04). Of note, the data items item is defined as, “List and define all variables for which data were sought (eg, PICOS [participants, interventions, comparisons, outcomes, and study design], funding sources) and any assumptions and simplifications made.”4 Adherence to risk of bias across studies in the methods domain (item 15) and the results domain (item 22) was relatively low in both diagnostic pathology and medicine meta-analyses (diagnostic pathology, 12 of 41; 29%; and 15 of 41; 37%; medicine, 57 of 118; 48%; and 63 of 118; 53%, respectively). Neither diagnostic pathology nor medicine meta-analyses had acceptable adherence to protocol and registration (item 5) in the methods domain: an indication of whether a review protocol existed was present in only 7% of diagnostic pathology meta-analyses (3 of 41) and 29% of medicine meta-analyses (34 of 118) (Table 1).

Table 1. 

Reporting of Preferred Reporting Items for Systematic Reviews and Meta-Analyses Statement Items in Major Medicine and Diagnostic Pathology Journals


The PRISMA scores of the meta-analyses in diagnostic pathology were lower than those in medicine both before or in 2009 (median [interquartile range (IQR)] = 15 [10, 19] versus 26 [23, 28], P < .001) and after 2009 (median [IQR] = 24 [16.5, 26] versus 27 [23, 29], P < .001). In medicine meta-analyses, there was no difference in PRISMA scores between 2006 and 2011 (median [IQR] = 26 [23, 28] versus 27 [23, 29], P = .26). In diagnostic pathology meta-analyses, however, PRISMA scores improved significantly after 2009 (median [IQR] = 15 [10, 19] versus 24 [16.5, 26], P = .002) (Table 2).

Table 2. 

Comparison of the Reporting Quality of Meta-Analyses Between Major Medicine Journals and Diagnostic Pathology Journals: Before or in 2009 and After 2009


As Table 3 shows, the first author's location (North America versus non–North America) was associated with the reporting quality of both medicine and diagnostic pathology meta-analyses, but in opposite directions. Specifically, the PRISMA scores of medicine meta-analyses by North American first authors were higher than those by non–North American first authors (median [IQR] = 27.5 [25, 29] versus 26 [23, 28], P = .03), whereas the PRISMA scores of diagnostic pathology meta-analyses by North American first authors were lower than those by non–North American first authors (median [IQR] = 16 [11.5, 23] versus 25 [22, 26], P = .001). It is therefore understandable that, overall, the first author's location was not associated with the reporting quality of meta-analyses.

Table 3. 

Comparison of the Reporting Quality of Meta-Analyses Between North American and Non–North American First Authors: Before or in 2009 and After 2009


In articles by North American first authors, the PRISMA scores of diagnostic pathology meta-analyses were significantly lower than those of medicine meta-analyses (median [IQR] = 16 [11.5, 23] versus 27.5 [25, 29], P < .001). In articles by non–North American first authors, however, there was no difference in PRISMA scores between diagnostic pathology and medicine meta-analyses. More meta-analyses by non–North American first authors had a structured abstract compared with those by North American first authors (86 of 94 articles; 91.49%; versus 51 of 64 articles; 79.69%; P = .03), and meta-analyses with a structured abstract had higher PRISMA scores than those with an unstructured abstract (median [IQR] = 26 [23, 28] versus 22 [15, 24], P < .001), as shown in Table 4. We also found that diagnostic pathology meta-analyses had fewer structured abstracts than medicine meta-analyses (22 of 41; 53.66%; versus 115 of 117; 98.29%; P < .001). Similar findings were present in the meta-analyses in North American and non–North American journals, as shown in Table 5.

Table 4. 

Comparison Between the Meta-Analyses With a Structured Abstract and With an Unstructured Abstract

Table 5. 

Comparison of the Reporting Quality of Meta-Analyses Between North American and Non–North American Journals: Before or in 2009 and After 2009


We then examined the potential association between PRISMA scores of meta-analyses and articles' adjusted citation ratios. Overall, the PRISMA scores of diagnostic pathology and medicine meta-analyses did not correlate with their adjusted citation ratios (correlation coefficient = 0.131, P = .10). Subgroup analyses also showed no correlations between PRISMA scores and adjusted citation ratios in medicine meta-analyses (correlation coefficient = 0.158, P = .09), diagnostic pathology meta-analyses (correlation coefficient = −0.134, P = .40), meta-analyses published before or in 2009 (correlation coefficient = 0.124, P = .34), and meta-analyses published after 2009 (correlation coefficient = 0.156, P = .13) (Table 6).

Table 6. 

Association Between the Reporting Quality of Meta-Analyses and Adjusted Citation Ratios


DISCUSSION

We show here that the reporting quality of the meta-analyses in diagnostic pathology was lower than that in medicine. We also found that the reporting quality of diagnostic pathology meta-analyses was associated with publication year (higher after 2009) and first author's location (higher in non–North Americans), but not journal publisher's location or article's adjusted citation ratio.

One likely reason for the lower reporting quality of diagnostic pathology meta-analyses is a lack of interest, education, and knowledge regarding meta-analysis among pathologists and/or diagnostic pathology journal editors and reviewers, and the associated underutilization of meta-analysis in diagnostic pathology.19–21 Indeed, only one diagnostic pathology journal had officially endorsed the PRISMA statement as of May 2016, whereas numerous medicine journals had endorsed it.14

Author location may be another factor associated with the lower reporting quality of diagnostic pathology meta-analyses. We found that North American first authors were more likely than non–North American first authors to produce meta-analyses of lower reporting quality, and they were also more likely to use unstructured abstracts. Despite a considerably high concordance between the locations of the first authors and the journal publishers (72%; 115 of 159), journal publisher's location was not associated with the reporting quality of diagnostic pathology meta-analyses, suggesting that the location of the first author (and perhaps of all authors) was independently associated with reporting quality. Therefore, more and/or better education of North American authors (mostly pathologists) on meta-analysis methodology may improve the reporting quality of diagnostic pathology meta-analyses. A recommendation for more education on meta-analysis methodology is indeed supported by prior works.19–21

The reporting quality of meta-analyses in diagnostic pathology improved significantly after 2009. This improvement was possibly linked to earlier works on the reporting quality of meta-analyses,7,9–12 continuous guideline development,11–13 and ultimately the publication of the PRISMA statement in 2009, despite manuscript preparation time and publication latency.15

This is one of the first studies to show that there was no association between the reporting quality of meta-analyses and their adjusted citation ratios, regardless of medical specialty and publication period. An earlier study found that meta-analyses in higher-impact journals were of higher quality,22 but it did not further stratify the meta-analyses according to their individual numbers of citations as we did. In our view, the adjusted citation ratio well represents the usefulness and impact of a meta-analysis in its field.19 Our finding implies that a highly cited meta-analysis does not necessarily have higher reporting quality, and that the reliability and accuracy of a meta-analysis cannot be appropriately judged by its citations. Hence, highly cited yet lower-quality meta-analyses may be biased and misleading. Readers should therefore be cautious about the reporting quality of a meta-analysis article; they may even consider examining the reporting quality of meta-analysis articles based on the PRISMA statement or a similar instrument, rather than the articles' numbers of citations, before citing them.

There are some potential limitations to our study. First, the sample size of the meta-analyses in diagnostic pathology was relatively small because of the underutilization of meta-analyses in diagnostic pathology.19 Second, we used only one reporting quality measure; some associated biases may be present in our work compared with studies that used both the PRISMA statement and the Assessment of Multiple Systematic Reviews (AMSTAR).22 Finally, the PRISMA statement was developed as a reporting guideline for systematic reviews and meta-analyses assessing the benefits and harms of medical interventions.4,15 It may not be the best tool for evaluating meta-analyses in diagnostic pathology, which mainly concern etiology, diagnosis, and prognosis. Indeed, in our study, most of the medicine meta-analyses assessed medical interventions, but only 2 of the 41 diagnostic pathology meta-analyses evaluated medical interventions (M.K., L.Z., unpublished data, February 2015). However, the reporting quality of all 41 diagnostic pathology meta-analyses was characterized using the 27 items of the PRISMA statement without any modification; therefore, their PRISMA scores might not accurately reflect their true reporting quality and might be biased to some degree. Modification of the PRISMA statement based on the characteristics of diagnostic pathology seems interesting and perhaps even necessary; more work is needed.

In conclusion, the reporting quality of the meta-analyses in diagnostic pathology was lower than that in medicine. In diagnostic pathology, higher reporting quality of meta-analyses was associated with recent publications (after 2009) and non–North American first authors. There is still room to improve the reporting quality of meta-analyses in both medicine and diagnostic pathology, especially for protocol and registration (item 5) and risk of bias across studies in the methods domain (item 15) and the results domain (item 22). The reporting quality of meta-analyses did not correlate with their adjusted citation ratios, which suggests that readers of published meta-analyses in medicine and diagnostic pathology should consider evaluating the reporting quality of these articles irrespective of their numbers of citations (ie, how popular they are among authors). The findings seem to call for more attention, education, and research on the methodology and reporting guidelines of diagnostic pathology meta-analyses.

References
1. Booth A, Clarke M, Ghersi D, Moher D, Petticrew M, Stewart L. An international registry of systematic-review protocols. Lancet. 2011;377(9760):108–109.
2. Straus S, Moher D. Registering systematic reviews. CMAJ. 2010;182(1):13–14.
3. Zhang L, Chen Z, Fukuma M, Lee LY, Wu M. Prognostic significance of race and tumor size in carcinosarcoma of gallbladder: a meta-analysis of 68 cases. Int J Clin Exp Pathol. 2008;1(1):75–83.
4. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6(7):e1000097.
5. Uman LS. Systematic reviews and meta-analyses. J Can Acad Child Adolesc Psychiatry. 2011;20(1):57–59.
6. Garg AX, Hackam D, Tonelli M. Systematic review and meta-analysis: when one study is just not enough. Clin J Am Soc Nephrol. 2008;3(1):253–260.
7. Wen J, Ren Y, Wang L, et al. The reporting quality of meta-analyses improves: a random sampling study. J Clin Epidemiol. 2008;61(8):770–775.
8. Moher D, Simera I, Schulz KF, Hoey J, Altman DG. Helping editors, peer reviewers and authors improve the clarity, completeness and transparency of reporting health research. BMC Med. 2008;6:13.
9. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.
10. Lau J, Ioannidis JP, Schmid CH. Quantitative synthesis in systematic reviews. Ann Intern Med. 1997;127(9):820–826.
11. Shea BJ, Grimshaw JM, Wells GA, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.
12. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev. 2007;(2):MR000016.
13. Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement: Quality of Reporting of Meta-analyses. Lancet. 1999;354(9193):1896–1900.
14. PRISMA endorsers. PRISMA Web site. http://www.prisma-statement.org/Endorsement/PRISMAEndorsers.aspx. Published 2015. Accessed May 23, 2016.
15. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. PLoS Med. 2009;6(7):e1000100.
16. Hopewell S, Boutron I, Altman DG, Ravaud P. Deficiencies in the publication and reporting of the results of systematic reviews presented at scientific medical conferences. J Clin Epidemiol. 2015;68(12):1488–1495.
17. Tan WK, Wigley J, Shantikumar S. The reporting quality of systematic reviews and meta-analyses in vascular surgery needs improvement: a systematic review. Int J Surg. 2014;12(12):1262–1265.
18. Ge L, Wang JC, Li JL, et al. The assessment of the quality of reporting of systematic reviews/meta-analyses in diagnostic tests published by authors in China. PLoS One. 2014;9(1):e85908.
19. Kinzler M, Zhang L. Underutilization of meta-analysis in diagnostic pathology. Arch Pathol Lab Med. 2015;139(10):1302–1307.
20. Mayo E, Kinzler M, Zhang L. Considerations for conducting meta-analysis in diagnostic pathology. Arch Pathol Lab Med. 2015;139(11):1331.
21. Marchevsky AM, Wick MR. Evidence-based pathology: systematic literature reviews as the basis for guidelines and best practices. Arch Pathol Lab Med. 2015;139(3):394–399.
22. Fleming PS, Koletsi D, Seehra J, Pandis N. Systematic reviews published in higher impact clinical journals were of higher quality. J Clin Epidemiol. 2014;67(7):754–759.

Author notes

The authors have no relevant financial interest in the products or companies described in this article.