It is with considerable trepidation that I approach a commentary on publication guidelines for qualitative research. But the sheer volume of qualitative manuscripts being submitted to Intellectual and Developmental Disabilities (IDD) and their comparatively high rates of rejection suggest the need for, and the timeliness of, editorial comment.

My guess is that most readers of this journal entered the professional ranks well after the “paradigm wars” of the 1980s. It was during this time that increased interest in alternative methodologies in educational research and the social and health sciences led to vigorous debate over the merits of “quantitative” and “qualitative” methodologies (see, for example, Gage, 1989). While the quantitative/qualitative distinction is a crude one that encompasses many vastly different traditions of inquiry in a simple binary classification, it will suffice for the purposes of this commentary.

I am not a qualitative researcher, hence my trepidation in making recommendations. Indeed, for an edited book on community integration (Hayden & Abery, 1994), I was asked to craft a chapter on experimental research methodology as a counterpoint to another chapter on qualitative approaches. I surmised my role was to play the traditionalist and to extol the virtues of positivism in community integration research (Fujiura, 1994), while Steve Taylor and Robert Bogdan would represent the forward-thinking progressives in their discussion of qualitative methods (Taylor & Bogdan, 1994). I have far too much respect for the qualitative tradition to play the foil; instead, my chapter emphasized the importance of drawing from many different perspectives in order to more comprehensively represent the complexity of community living. Different questions require different approaches: a good perspective, and one to which I still very much adhere. My expertise, however, clearly lies on the quantitative end of the continuum.

An expansive view of research paradigms was not a new proposition then, though it was often met with limited or grudging acceptance across the research community. When Steve Taylor assumed the editorship of IDD in 1994, he felt compelled to reveal his epistemological bent (he denied the existence of objectivity!) and, in similar fashion, emphasized the importance of diversity of methodology and research perspectives (Taylor, 1994). Intellectual and Developmental Disabilities is richer for his inclusiveness.

While the publication rate for qualitative papers has increased in IDD and across many other journals in the social and health sciences, the more traditional quantitative paper still dominates as a share of all published articles (Alise & Teddlie, 2010; Rennie, 2012). Whether this is good, bad, or appropriate depends on the discipline, the research problem, and the types of research questions asked. One likely consequence, however, is that publication conventions for the qualitative manuscript are still emerging and evolving. The absence of consensus conventions has led to considerable variability in how qualitative manuscripts are evaluated. One reviewer's "compelling quotation" strikes another reviewer as a "cherry-picked" anecdote. While it is certainly true that quantitative articles also elicit reviews at variance with one another, an editor can at least fall back upon a common core of consensus features. Readers who were attentive during their methodology classes should recall at least some of these features: statements about the parameters of sampling; measurement validity and reliability; statistical validity; and, in causal-comparative studies, adequacy of design in establishing internal validity. Of course, there is no shortage of quality indicators for qualitative research, but, because of the diversity of methods and the limitations of the short-form journal article, how these should be communicated in a manuscript is considerably less clear.

Standards for Qualitative Research

A number of standards for qualitative research have been proposed, such as the Consolidated Criteria for Reporting Qualitative Research (COREQ; Tong, Sainsbury, & Craig, 2007), the Standards for Reporting Qualitative Research (SRQR; O'Brien, Harris, Beckman, Reed, & Cook, 2014), and the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ; Tong, Flemming, McInnes, Oliver, & Craig, 2012) for summarizing a qualitatively based literature, as well as a number of commentaries directed to specific disciplines (Burns, 1989; Duran et al., 2006; Elliott, Fischer, & Rennie, 1999; Stenius, Makela, Miovsky, & Gabrhelik, 2004; Tracy, 2010; Trainor & Graue, 2013). There are many others. All are excellent and tend to converge on similar features and themes. The COREQ and SRQR are notable because their development was based on systematic syntheses of the published literature. The COREQ (Tong et al., 2007) is a 32-item checklist of qualitative research standards drawn from a systematic review of the literature, reviewer guidelines from academic journals, and published commentaries (though only four of the 32 items focus on manuscript preparation). The SRQR (O'Brien et al., 2014) focuses exclusively on publication standards, featuring 19 guidelines along with specific manuscript elements associated with each standard. For example, the standard of "trustworthiness" can be communicated via the reporting of member checking, audit trails, triangulation, and so on. I very much encourage readers to refer to the SRQR or any of the other rich sources on conducting and reporting qualitative research.

I am sympathetic, however, with critics who believe that checklists are problematic when applied to qualitative research. Much of the criticism is based on differences in epistemology, that is, assumptions about the nature of knowledge (Barbour, 2001; Madill, Jordan, & Shirley, 2000; Rennie, 1995, 2012). If one were to deny or minimize the existence of objective facts because reality is constructed through individual experience, then is the checklist an artifice of an empirical bias? This is a question well beyond the intent of this commentary (and my conceptual skills). It is enough to suggest that not all within the qualitative camp are on board with the notion of a common core of standards.

I have a more prosaic concern with the checklist approach to manuscript development: "checklist paralysis." Or, as Barbour (2001) noted, having the tail (guidelines) wag the dog (study methods). The secret sauce in using any form of quality checklist is to apply the elements thoughtfully rather than mechanistically, in the context of one's study and the purpose in conducting it. The primary reason is that, for disability researchers, qualitative research is not a singular concept. It is not a "thing." For example, rather than presenting a single list of criteria, Trainor and Graue (2013) consider different touchstones for quality within each of the forms of qualitative research: action research, autoethnography, case studies, critical discourse, ethnography, and so on. Across the many disciplines involved in disability research, the qualitative label has been applied to a veritable forest of methodologies based on the notion of constructed realities. The phenomena used as data are similarly diverse.

A General Approach

The essential value of a qualitative study emerges from the core assumptions of the constructivist world view upon which qualitative methodologies are built. Knowledge is constructed (recall Steve Taylor's rejection of objectivity) and, thus, the study of any phenomenon requires a careful rendering of perspectives, subjective experiences, rich contextual detail, and personal meanings. Interpretations emerge from extended and often iterative interaction between the investigator and the source material.

Fine. But how is this represented in the short-form journal article? Twenty-odd pages, including references, tables, and figures, double-spaced in a 12-point font, is not a format amenable to communicating exhaustive detail. Something has to move to the forefront of the narrative; a great deal must be pushed to the background. In a list of manuscript standards, a specific item may be more or less important depending on the approach and the problem. Rather than offer such a list, let me describe what I see as two general guidelines from the literature on qualitative research that the author needs to communicate given limited space: coherence and transparency.

Coherence

I will discuss coherence only generally because I consider it an essential element of any study regardless of methodology or research tradition. Though the term is employed widely in the literature on the evaluation of qualitative research, its specific meaning and application vary. Coherence has been used to describe the connection between question and method (Morse, Barrett, Mayan, Olson, & Spiers, 2008) and, most commonly, as a feature of interpretation (Elliott et al., 1999; Fossey, Harvey, McDermott, & Davidson, 2002; Krefting, 1991). I prefer a broader notion of coherence: the general internal consistency of the manuscript (Tracy, 2010). Internal consistency is more than a checklist of criteria; it is the connectedness of elements across the checklist. For example, more than just clearly stating a research problem, we can ask the extent to which the problem is contextualized and shaped by an extant literature and theoretical framework. Is the selected methodology optimal given the character of the research problem? Do the results, and how they are framed in the discussion, inform the literature and theory presented in the introductory narrative? A more direct way of addressing coherence is to ask three questions of the study being summarized in the manuscript:

1. So what?
2. Why was it done this way?
3. What does the field know that wasn't known before?

While any study from any research tradition should address all three questions, the second question is especially pertinent to the purposes of this commentary.

The question of "why was it done this way?" should be addressed at two different levels of specificity. At the more general level is the central rationale of qualitative strategies: to describe phenomena on their own terms. Coherence requires a rationale that emphasizes the importance of viewing the topic through the eyes of the participants and describing constructed meanings. The qualitative study of personal meanings and subjective experiences is likely not the optimal approach if a cause-and-effect evaluation of program impact is the primary intent of the study.

But the notion of coherence extends beyond a gross qualitative-quantitative distinction. Again, qualitative research represents many different approaches, using different methods and emphasizing different types of data across varying forms of analysis. The purpose served by a content analysis of open-ended responses to a survey question is vastly different from that served by a deeply observational ethnographic study. Thus, I leave which issues need to be communicated, and the specifics of how quality is communicated, to the author's judgment and the unique circumstances of the study problem and methodology. Methods should be driven by the problem within the larger context of the study rationale. Everything is connected to everything else.

Transparency

I have a favorite cartoon showing two mathematicians pondering a chalkboard that extends across the length of a wall. The board is filled with abstruse equations. In the middle of the sequence of equations is the phrase ". . . and then a miracle occurs."

A common mistake in qualitative manuscripts submitted to IDD is the lack of information on how raw data are transformed into interpretation. Often the steps are generically described in some pro forma fashion (e.g., "first we conducted interviews, then the tapes were transcribed, and finally the transcriptions were analyzed using grounded theory") without any supporting narrative or illustrations describing the investigator's reasoning in producing the results. From the reader's perspective, how the author moved from raw data to interpretation is largely an act of faith (the "miracle").

Like the term coherence, "transparency" is a commonly cited but inconsistently applied descriptor of qualitative research. Stenius et al. (2004), for example, discuss transparency in terms of the analysis tasks, while Hiles (2008) employs it as an overarching characteristic of the qualitative study, encompassing everything from research problem through concluding statements. For the purposes of this discussion, I split the difference and use a somewhat narrower conception that refers to the reader's ability to follow and directly evaluate study processes and, especially, the interpretation and conclusions.

The ingredients for establishing transparency would seem reasonably straightforward: first, conduct an exemplary study, and, second, employ as many pages of narrative as needed to communicate all the details. With respect to the former ingredient, there are many excellent sources of advice on how to execute the good qualitative study, many of which are cited in this commentary. The second ingredient is problematic. If manuscript length were not a constraint, the inductive processes involved in data reduction could unfold in a leisurely fashion, beginning with the raw data and illustrating each of the iterative stages as the investigator interrogated the data, identified emerging themes, and reasoned through the final extraction of themes.

But IDD imposes page limits, and the author is confronted with the challenge of what to exclude. Unfortunately, it is transparency that is usually sacrificed, often supplanted by extensive quotations presented as "evidence" supporting the identified themes. Of far greater value would be an illustration of the steps involved in the analysis, perhaps with intermediate results, and a summary of the types of decisions made along the way and the rules or guidelines employed in the data reduction. Summary tables could be used for a condensed presentation. However managed and presented, the point is to make available to the reader as many of the critical operations of data reduction and interpretation as possible. This is the "chain of reasoning" (Stenius et al., 2004, p. 86). For example, one of the most common analytical approaches in submitted papers is the content analysis of narrative units. Rather than leaping from methods to a summary of the final themes, intermediate codes leading to the final set of themes could be summarized along with the rules and rationale for reduction. In constructing interpretations from themes or other narratives, a competing account offered by the data could be interrogated and the author's thinking in drawing an alternative interpretation described. Any of the published guidelines describes the essential elements of a quality analysis and interpretation; the writer's challenge is to strategically select the element(s) that most optimally communicate the core finding and the reasoning behind it.

On the Use of Quotations

The treatment of quotations in the qualitative study is of special importance in light of the conflicting demands of limited journal space and the significance given to participant "voice" in the qualitative epistemology. Quotations are nearly universally used in qualitative reports. In the constructivist tradition, the words of the participant are a direct line of sight into the phenomenon of interest. And yet, the quotation is often the weak link of submitted manuscripts. The handling of quotes can be powerful and illuminating or a tedious waste of precious page space. While there are few guidelines on their use, a good strategy is to use them sparingly in the short journal article, and only when their appearance deepens the reader's understanding of the narrative point (White, Woodfield, & Ormston, 2013). Each quote represents the unique perspective of a participant; thus, the choice of words and how they are used matter, and each nuance should be subject to interpretation.

In a consensus study of qualitative researchers (conducted qualitatively, of course), Corden and Sainsbury (2006) found variability in how authors employed quotations, but the common theme was the clarity of purpose in using them. Rather than inserting quotes simply to illustrate a theme or demonstrate concurrence with an interpretation drawn in the article (the most common approach in submissions to IDD), quotations were used strategically: when the choice of words or the way they are used would deepen the reader's understanding, or if the quotation was necessary to explain or illustrate the unique perspective of the participant. Thus, the appearance of quotations that happen to be consistent with the article's core themes provides no comfort or evidence for the skeptic, nor should it. There are no controls for “cherry-picking” quotations. On the other hand, the carefully selected quotation that serves an analytical or illustrative purpose within the network of connected ideas in the article can profoundly deepen the reader's understanding of the importance of the participant's voice.

The use of direct quotations also engages unique values regarding power relationships in the conduct of research. Again, the quote should give voice to those whose lives are being represented in the article. The quotation can be seen as a form of empowerment for those traditionally treated as "passive" objects of research rather than active participants. But such use also suggests a unique responsibility of the author in selecting and vetting the quotation (Corden & Sainsbury, 2006). Use quotations judiciously.

Concluding Thoughts

Rennie (2012) likened the publication of qualitative research in the more traditionally oriented social science journal to navigating a "hazardous strait": reporting results based on methods vastly different from the journal's conventions in a manner consistent with those conventions (p. 394). Thus, the constructivist researcher may object to the application of empirical-like concepts in qualitative research guidelines, such as the notions of dependability and trustworthiness proposed by Guba and Lincoln (1989). While we may agree that (some) objects of study may not have a reality independent of the investigator's observation and interpretation, it is not unreasonable to expect readers to question the credibility of an observation, personal anecdote, or thematic interpretation. This is the pedigree of IDD, and we ask submitting authors to accommodate it.

These accommodations should not be framed as a binary feature that a manuscript either possesses or fails to possess but, rather, as a continuum along which the author can influence the reader's (and reviewer's) confidence. I encourage potential authors to seek out the many excellent resources on conducting and reporting qualitative research. Again, there are no simple checklist criteria, but, then, why should one expect simplicity when the essence of qualitative research is to communicate complexity and nuance in the lives of those with a disability?

References

Alise, M. A., & Teddlie, C. (2010). A continuation of the paradigm wars? Prevalence rates of methodological approaches across the social/behavioral sciences. Journal of Mixed Methods Research, 4(2), 103–126.

Barbour, R. S. (2001). Checklists for improving rigour in qualitative research: A case of the tail wagging the dog? BMJ, 322(7294), 1115.

Burns, N. (1989). Standards for qualitative research. Nursing Science Quarterly, 2(1), 44–52.

Corden, A., & Sainsbury, R. (2006). Using verbatim quotations in reporting qualitative social research: Researchers' views. York, UK: University of York.

Duran, R. P., Eisenhart, M. A., Erickson, F. D., Grant, C. A., Green, J. L., Hedges, L. V., & Schneider, B. L. (2006). Standards for reporting on empirical social science research in AERA publications: American Educational Research Association. Educational Researcher, 35(6), 33–40.

Elliott, R., Fischer, C. T., & Rennie, D. L. (1999). Evolving guidelines for publication of qualitative research studies in psychology and related fields. British Journal of Clinical Psychology, 38(3), 215–229.

Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36(6), 717–732.

Fujiura, G. T. (1994). Research perspectives and the community living experience. In M. F. Hayden & B. H. Abery (Eds.), Challenges for a service system in transition (pp. 23–42). Baltimore, MD: Paul H. Brookes Publishing Co.

Gage, N. L. (1989). The paradigm wars and their aftermath: A "historical" sketch of research on teaching since 1989. Educational Researcher, 18(7), 4–10.

Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Beverly Hills, CA: Sage Publications.

Hayden, M. F., & Abery, B. H. (Eds.). (1994). Challenges for a service system in transition. Baltimore, MD: Paul H. Brookes Publishing Co.

Hiles, D. R. (2008). Transparency. In L. M. Given (Ed.), The Sage encyclopedia of qualitative research methods (pp. 891–893). Thousand Oaks, CA: Sage Publications, Inc.

Krefting, L. (1991). Rigor in qualitative research: The assessment of trustworthiness. American Journal of Occupational Therapy, 45(3), 214–222.

Madill, A., Jordan, A., & Shirley, C. (2000). Objectivity and reliability in qualitative analysis: Realist, contextualist and radical. British Journal of Psychology, 91(1), 1–20.

Morse, J. M., Barrett, M., Mayan, M., Olson, K., & Spiers, J. (2008). Verification strategies for establishing reliability and validity in qualitative research. International Journal of Qualitative Methods, 1(2), 13–22.

O'Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for Reporting Qualitative Research: A synthesis of recommendations. Academic Medicine, 89(9), 1245–1251.

Rennie, D. L. (1995). On the rhetorics of social science: Let's not conflate natural science and human science. The Humanistic Psychologist, 23(3), 321–332.

Rennie, D. L. (2012). Qualitative research as methodical hermeneutics. Psychological Methods, 17(3), 385–398.

Stenius, K., Makela, K., Miovsky, M., & Gabrhelik, R. (2004). How to write publishable qualitative research. In T. F. Babor, K. Stenius, S. Savva, & J. O'Reilly (Eds.), Publishing addiction science: A guide for the perplexed (2nd ed., pp. 82–97). Brentwood, UK: Multi-Science Publishing Company, Ltd.

Taylor, S. J. (1994). The editor's perspective. Mental Retardation, 32(1), 60–61.

Taylor, S. J., & Bogdan, R. (1994). Qualitative research methods and community living. In M. F. Hayden & B. H. Abery (Eds.), Challenges for a service system in transition (pp. 43–64). Baltimore, MD: Paul H. Brookes Publishing Co.

Tong, A., Flemming, K., McInnes, E., Oliver, S., & Craig, J. (2012). Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Medical Research Methodology, 12(1), 181.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19(6), 349–357.

Tracy, S. J. (2010). Qualitative quality: Eight "big-tent" criteria for excellent qualitative research. Qualitative Inquiry, 16(10), 837–851.

Trainor, A. A., & Graue, E. (2013). Reviewing qualitative research in the social sciences. New York, NY: Routledge.

White, C., Woodfield, K., & Ormston, R. (2013). Writing up qualitative research. In J. Ritchie, J. Lewis, C. M. Nicholls, & R. Ormston (Eds.), Qualitative research practice: A guide for social science students and researchers (pp. 367–400). London, UK: Sage.

Author notes

Glenn T. Fujiura, University of Illinois at Chicago.