Historians have associated the word enigma (puzzle) with the Enigma cipher device developed in the early 20th century and used extensively by the Axis powers during World War II. The Enigma machines (a family of machines) were broken by the Allied powers with substantial help from Polish mathematicians Marian Rejewski, Henryk Zygalski, and Jerzy Różycki, who had mastered and applied the analytical tools needed to break the code used in these machines. Breaking the code of the Enigma machines used during World War II had a major impact on how that war ended.1

What is the meaningful and applicable information cloaked in the enigma of dental implantology research journal articles? The Enigma machines used by the Axis powers obscured protected information by breaking the expected rules of language. Researchers who are new to, or have only marginal knowledge of and experience with, research methodology and inferential statistics may have the best research intentions; however, inconsistent and incorrectly used language defies and confounds the rules of inferential statistics.

Using descriptive and inferential statistical language to communicate plausible answers may be confusing for many who read research reports. Descriptive statistics is used to aggregate and summarize data by using quantitative expressions, graphs, and tables. Inferential statistics uses systematically sampled data to make inferences about the population from which the sample was drawn. A common overgeneralized research conclusion is to conveniently sample subjects/patients from a unique (biased) group and then generalize the results to populations unlike the unique sample used for the study.
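To make the distinction concrete, below is a minimal sketch in Python. The measurement values are hypothetical and invented purely for illustration; the sketch contrasts a descriptive summary of a sample with an inferential statement (an approximate 95% confidence interval) about the population the sample was drawn from.

```python
# Minimal sketch: descriptive vs. inferential statistics.
# The values below are hypothetical illustration data, not real measurements.
import math
import statistics

sample = [2.1, 1.8, 2.4, 2.0, 1.9, 2.3, 2.2, 1.7, 2.5, 2.0]

# Descriptive statistics: aggregate and summarize the sample itself.
mean = statistics.mean(sample)
sd = statistics.stdev(sample)
print(f"n = {len(sample)}, sample mean = {mean:.2f}, sample SD = {sd:.2f}")

# Inferential statistics: use the sample to make a qualified statement about
# the population it was drawn from -- here, an approximate 95% confidence
# interval for the population mean (normal critical value, for simplicity).
se = sd / math.sqrt(len(sample))
z = statistics.NormalDist().inv_cdf(0.975)  # about 1.96
print(f"Approximate 95% CI for the population mean: "
      f"({mean - z * se:.2f}, {mean + z * se:.2f})")
```

The confidence interval says something only about the population that was actually sampled; carrying it over to populations unlike that sample is exactly the overgeneralization described above.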

Additionally, inferential statistical results frequently use qualifiers. For example, “if” qualifiers are commonly used by statisticians. Qualifiers like “if” may be confusing to many implantology practitioners because answers derived with inferential statistical tools do not always produce the “yes” or “no” answers commonly expected in social languages. It is not uncommon to read a published research paper in which the author argues the illusion that the results “prove” something. “Proofs exist only in mathematics [proof theory] and logic, not in science.”2,3 Additionally, “The replication (replicability, reproducibility) crisis in … clinical medicine [implant dentistry] arises from the fact that many ‘well-confirmed’ experimental results are subsequently overturned by studies that aim to replicate the original study.”4

Journalists are also frequently confused by the language enigma of science. The proverb “a little knowledge is a dangerous thing” is very true for observers of natural sciences. For example, Kyle Cranmer, a physicist at New York University, reminded reporters about the relationship between the Higgs boson (elementary particle in particle physics) and five-sigma (a statistical construct): “This is not the probability that the Higgs boson does or doesn't exist; rather, it is the probability that if the particle does not exist, the data that CERN scientists collected in Geneva, Switzerland would be at least as extreme as what they observed.”5 
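As a worked illustration of that statement, the short Python sketch below converts a five-sigma result into the tail probability Cranmer is describing; the standard normal model of the test statistic is an assumption made only for this example.

```python
# Minimal sketch of what "five sigma" means as a tail probability (a p-value):
# assuming the effect does not exist, how probable is data at least this extreme?
from statistics import NormalDist

sigma = 5.0
# One-sided tail probability beyond 5 standard deviations under a standard
# normal model of the test statistic (an assumption made for illustration).
p_value = 1 - NormalDist().cdf(sigma)
print(f"Five-sigma one-sided p-value: about {p_value:.1e}")  # about 2.9e-07
# Note: this is a statement about the data given a "no effect" assumption,
# not the probability that the effect (or particle) does or does not exist.
```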

Effectively and successfully communicating useful research information to dental implant practitioners has 2 main challenges: 1) dental implant practitioners frequently do not understand, nor are they expected to understand, the complex nuances of research methodology and statistical language, and 2) dental implant practitioners incorrectly trust that professionally presented and published research has been effectively vetted for procedural, analytical, and interpretation accuracy.6 While research publications help research colleagues advance dental implantology research, the primary audience for published research reports is dental implant practitioners. Therefore, research publications need to be written primarily for consumers of information and secondarily for producers/researchers of information.

Hardwicke and Goodman asked the question: “How often do leading biomedical journals use statistical experts to evaluate statistical methods?” Their results compared recent findings with reports from 1998.7 Journal of Oral Implantology (JOI) provides author submission guidelines to address the Hardwicke and Goodman survey concerns:

Scientific claims in biomedical research are typically derived from statistical analyses. However, misuse or misunderstanding of statistical procedures and results permeate the biomedical literature, affecting the validity.... [there have been few improvements] since 1998.7 

Realizing that statistical analysis is an enigma to many authors, researchers, and readers, JOI has a biostatistician review all manuscripts presenting conclusions based on data collection and analyses. This is done to help ensure that the data presented are tenable and the conclusions are empirically argued. The conclusions often determine how clinicians practice and how patients ultimately benefit. Science paper submissions are reviewed by JOI; in addition, JOI requires authors to confirm all empirical arguments by referencing the name, title, and affiliation of an independent statistician who vets the author's work.

In the words of William James, “We must be careful not to confuse data with the abstractions we use to analyze them.”8 Also, George Udny Yule, a British statistician (Yule distribution), wrote: “In our lust for measurement, we frequently measure that which we can rather than that which we wish to measure ... and forget that there is a difference.”8

These quotes highlight a few of the many abstractions associated with empirical research arguments. They are reminders that 1) measurement reliability and validity are only tenable and plausible when a researcher accurately measures what is intended to be measured, 2) a statistician cannot fix by analysis the shortcomings of untenable data,9 and 3) published and presented science research papers are owned and used by a working society to be eternally reviewed, eternally analyzed, eternally judged, and eternally discussed.

So, how was the Enigma machine code broken by the Polish mathematicians?1  The simple answer is by using basic tenable science applications.

“Good operating procedures, properly enforced, would have made the plugboard Enigma machine unbreakable. However, …poor operating procedures… allowed the Enigma machines to be reverse-engineered and the ciphers to be read.”1

As Paul Harvey said, “And now you know the rest of the story!”10 

1. Cryptanalysis of the Enigma. Wikipedia. 2021.
2. Hayes AF, Gee TL, Price IR, Cooksy R, Patrech A. Probability and “proof” in statistics. In: Research Methods and Statistics PESS202 Lecture and Commentary Notes. Armidale, Australia: University of New England; 2000. https://webstat.une.edu.au/unit_materials/c5_inferential_statistics/probability_and_proof.html. Accessed June 16, 2021.
3. Kanazawa S. Common misconceptions about science I: “scientific proof”. Psychol Today. 2008. Accessed 2021.
4. Bird A. Understanding the replication crisis as a base rate fallacy. British J Philos Sci. Published online August 13, 2018.
5. Lamb E. 5 sigma what's that? Scientific American. 2021.
6. Turone F. The trouble with health statistics. CancerWorld Plus. 2021.
7. Hardwicke TE, Goodman SN. How often do leading biomedical journals use statistical experts to evaluate statistical methods? The results of a survey. PLOS ONE. 2020;15:e0239598.
8. Famous quotes about statistics. Maharashtra State, India: Savitribai Phule Pune University. 2021.
9. Shafrin J. You can't fix by analysis what you bungled by design. Healthcare Economist. 2021.
10. The rest of the story. Wikipedia. 2021.