The purpose of this editorial is to address ongoing concerns about research design, statistical analysis, and the conclusions drawn in manuscripts and published articles. Editorial dilemmas arise when editors recognize important research ideas, but the manuscripts presenting those ideas reflect poorly argued studies. When such dilemmas occur, journal editors are generally limited to one of the following options: 1) rejecting a paper that simply is not tenable; 2) encouraging the author(s) to rewrite and resubmit the paper; or 3) becoming ghost writers for the author(s) to bring the paper to an acceptable level of quality.

Literature Review

Nuijten1  reported evidence that statistical reporting errors are distorting the scientific literature. Nuijten identified publication bias as the most prominent cause and provided the following concrete solutions to correct the problem:

  • Sharing data. The author makes the collected raw data available for independent analysis.

  • Preregistration. The author writes a detailed research and analysis plan, posts this plan in a repository, and then proceeds to collect data (eg, https://www.clinicaltrialsregister.eu). The plan also may be submitted to a targeted journal for an “in principle acceptance” of the work prior to the investigation.

  • Technology. The author uses automated procedures such as Statcheck (http://statcheck.io) to check for statistical reporting errors.
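Tools like Statcheck work by recomputing a p-value from the reported test statistic and flagging reports where the recomputed and published values disagree. The following is a minimal, standard-library-only sketch of that kind of consistency check for a two-tailed z test; the function names and tolerance are illustrative and do not reflect Statcheck's actual implementation:

```python
import math

def two_tailed_p_from_z(z: float) -> float:
    """Recompute the two-tailed p-value for a standard normal test statistic.

    For a z statistic, p = 2 * (1 - Phi(|z|)) = erfc(|z| / sqrt(2)).
    """
    return math.erfc(abs(z) / math.sqrt(2.0))

def is_consistent(z: float, reported_p: float, tol: float = 0.005) -> bool:
    """Flag a reported p-value that disagrees with the recomputed one.

    `tol` is an illustrative rounding tolerance, since published p-values
    are typically rounded to two or three decimal places.
    """
    return abs(two_tailed_p_from_z(z) - reported_p) <= tol

# "z = 1.96, p = .05" is internally consistent; "z = 1.96, p = .08" is not.
print(is_consistent(1.96, 0.05))  # True
print(is_consistent(1.96, 0.08))  # False
```

The same idea extends to t, F, and chi-square statistics, where the reported degrees of freedom are also needed to recompute the p-value.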

Allison et al.2  wrote that procedural and analytic mistakes in peer-reviewed papers are discoverable but sometimes impossible to fix. Allison et al. further attempted to address more than 25 procedural and analytic errors. Journal editors, they reported, were sincere in their reviews; however, the manuscripts were generally not well prepared. Allison et al. cited ten editorial problems, several of which are listed below:

  • Editors were powerless or reluctant to take speedy and appropriate action with published articles that had methodological and/or analysis errors.

  • Journals did not typically provide a person to contact when invalidating errors were uncovered; therefore, the complaint was too frequently lost within a journal's editorial hierarchy.

  • Journals that acknowledged invalidating methodological and/or analytical problems were reluctant to print retractions. The Committee on Publication Ethics (COPE; http://publicationethics.org) has advocated that readers should not have to pay to read retractions.

  • A standard process for requesting raw data for analysis has not existed.

  • An online platform (PubMed) has been available to process comments about published papers; however, the comments have not been incorporated into the literature. Concerns have neither been clearly displayed on journal websites nor cross-referenced in any convenient way.

Noorden3  addressed the problem of inaccurate use of statistics in published papers and discussed Science as an example of a journal that is taking action to correct such inaccuracies. Noorden reported that Science: 1) retains seven statisticians on a Statistical Board of Reviewing Editors (SBoRE) and 2) strongly recommends all papers reporting potential new biomarkers be evaluated by an independent statistician before submission for review (see http://www.sciencemag.org/authors/science-editorial-policies).

Kass et al.4  recommended ten simple rules for effectively addressing statistical arguments in science papers. Kass et al. presented the rules in a list format; several of them, with editorial comments, follow:

  1. Statistical methods should enable the data to answer scientific questions. One of the most consistent errors in submitted manuscripts is the absence of a clearly written research question(s).

  2. Plan ahead. Consult an independent statistician to help plan a data-driven research project, guide the data acquisition process, assist with the analysis and interpretation of the findings, and inform the writing and summarizing of the study's findings.

  3. Statistical analysis is far more than a set of computations. Authors very rarely submit data-based analytic papers that are mathematically well founded and argued, which demonstrates that statisticians are critically needed to assist authors from the initial research planning stage through the writing of the paper submitted for editorial review.

  4. Keep it simple. Authors who do not work effectively with statisticians frequently use statistical tools that are poorly conceived, are beyond the authors' ability to use convincingly, do not address viable research questions, and produce results that are mathematically untenable.

Actions Taken by Journal of Oral Implantology

In response to these concerns about research design, statistical analysis, and research study conclusions, the Journal of Oral Implantology (JOI) has taken the following steps to improve the quality of the manuscripts submitted for review:

  1. A sequenced research design and statistical analysis rubric has been created to guide authors, as well as the editors who review submitted papers, with the aims of improving interrater reliability and helping both authors and reviewers achieve more consistently recognized outcomes. An added benefit of the rubric is that readers will see improved science reporting, with more consistency and continuity in the placement of key empirical elements within and between published articles. The rubric is being used by contributing authors and reviewers and is available online at the JOI website.

  2. JOI is beginning to require that all data-based scientific papers submitted for publication be reviewed by an independent statistician, and that the statistician's identity and contact information be cited in the author's paper.

  3. JOI is beginning to require that all data-based scientific papers submitted for publication be reviewed and approved by a JOI editorial biostatistician prior to acceptance.

Summary

Journal editors have responsibilities to ensure the accuracy of published scientific papers, to protect the patients whom practitioners treat based upon the findings of those papers, and to ensure that published papers remain tenable as advancing technology is used to test their scientific merits. The position of JOI is that these responsibilities need to be shared more fully with the authors of scientific papers vying for publication, rather than resting solely with journal editorial boards.

References

1. Nuijten MB. Preventing statistical errors in scientific journals. Eur Sci Edit. 2016;42:8–10.
2. Allison DB, Brown AW, George BJ, Kaiser KA. Reproducibility: a tragedy of errors. Nature. 2016;530:27–29.
3. Noorden RV. Science joins push to screen statistics in papers. Nature. July 3, 2014.
4. Kass RE, Caffo BS, Davidian M, Meng X-L, Yu B, Reid N. Ten simple rules for effective statistical practice. PLoS Comput Biol. 2016;12:e1004961.