Reviewing the Walsh and Kastner (2006) manuscript was difficult. The difficulty came less from technical issues than from trying to organize a frame of reference for understanding the implications of the critique of the Conroy, Spreat, Yuskauskas, and Elks (2003) publication. In addition to methodological concerns, there were subtexts involving integrity, science, and the “zeitgeist” of community living. The lack of commentary from Conroy and his coauthors only complicates the process of distilling meaning from this episode. Faulty, indeterminate, and sometimes blatantly bad research has appeared before and most certainly will continue to slip through the screen of peer review. Although disability researchers with a postmodernist inclination might argue the point (Danforth, 2004; Smith, 2001; Wilson, 2000), I cling to a belief that the logic of methodology and systematic inquiry will, in the long run, transcend the human biases that invariably enter into the research enterprise. My original intent for the commentary was to review the details of my re-analysis. A more interesting discussion, however, emerged from the lessons we might draw regarding the character of research and scientific review in the intellectual disabilities field. Let me first describe the general contours of my evaluation of the Walsh and Kastner (2006) critique.
The Walsh and Kastner Review
There is little question that the Conroy et al. (2003) publication is rife with errors, measurement concerns, and as yet unexplained sample and data discrepancies. The Walsh and Kastner (2006) critique is thorough, and their identification of methodological issues is largely accurate. Some of the issues reflect imprecision: sample specification, computation errors, analytic treatment of ordinal and skewed data, use of instrumentation of questionable validity, and interpretation of statistical results. In these instances I considered the omission of methodological narrative the primary transgression. Conroy et al.'s analysis of the subset of residents (59.5%) assessed in 1996 is not an uncommon practice in field-based evaluations, but I find myself as concerned over the absence of explanation or acknowledgment as I am with the potentially biasing effect of subject attrition.
Measurement errors represent the second and most significant body of criticism and, in the aggregate, seriously compromise Conroy et al.'s (2003) study. For example, severity scores for challenging behavior encompassed a range of values that is not possible: a 100-point total on a 16-item assessment scored on a 4-point scale. Computation of service hours illustrated carelessness in Conroy et al.'s coding and scoring protocol. The pre-move 1990 data recoded instances of 100+ service hours to a 99-hour value, whereas the 1995 data had no such artificial ceiling. Inspection of the frequencies for habilitation services, one of the two statistically significant increases noted by Conroy et al., shows that 87 of the 254 subjects were coded at “99” hours in 1990, rendering the claim of “dramatic” increases questionable at best. Other examples of coding errors include improbable levels of family contact. There were multiple instances of skewed data that are more appropriately analyzed as ranks or proportions than as averaged values. Ultimately, I concur with Walsh and Kastner (2006) that the published article was a fundamentally flawed report.
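Because the ceiling artifact is easy to miss in prose, a minimal sketch of the mechanism may help. The numbers below are invented solely for illustration; they are not the Conroy et al. data, and the 1–4 item scoring assumed in the final range check is likewise an assumption.

```python
import statistics

# Hypothetical illustration only. These are NOT the Conroy et al. data.
# Suppose five people's true pre- and post-move weekly service hours,
# with a modest true gain of about 5 hours each:
true_1990 = [40, 120, 150, 60, 200]
true_1995 = [50, 125, 155, 65, 205]

# The 1990 wave recoded values of 100+ hours to a 99-hour ceiling;
# the 1995 wave had no such ceiling.
recoded_1990 = [min(h, 99) for h in true_1990]

print(statistics.mean(recoded_1990))  # 79.4  (baseline artificially deflated)
print(statistics.mean(true_1995))     # 120.0
# The capped comparison turns a roughly 5-hour true gain into an
# apparent gain of about 40 hours.

# A simple range check of the kind a reviewer rarely runs: assuming
# items are scored 1 to 4, a 16-item scale cannot yield a 100-point total.
n_items, item_max = 16, 4
assert n_items * item_max < 100  # maximum possible total is 64
```

Sanity checks of this sort (plausible value ranges, consistent ceilings across measurement waves) are exactly the kind of scrutiny that surfaced the errors Walsh and Kastner identified, and they are feasible only when raw data, not just the written summary, are examined.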
Shadish, Cook, and Campbell (2002) noted the irony of the public persona of the scientist as detached and eternally skeptical, given the reality of a researcher trusting “much more than he or she ever doubts” (p. 29). We develop and consume research largely predicated on trust. In the vast majority of peer-reviewed publications, reviewers do not examine data at this level of detail; rather, the system of reviewing and communicating research results largely depends on an implicit assumption of the goodness of the written summary and self-evaluation of procedure, analysis, and results. There are no perfect studies in our field, and for complex, systems-wide evaluations such as this, I would expect (and largely forgive) many methodological compromises. It is, however, the obligation of authors to highlight limitations and concerns; the absence of forthright communication regarding procedural compromises troubled me far more than the details of the methodological missteps. We may quibble over the details of method and analysis, or the degree to which reliability and validity have been addressed in the design, but veracity of reporting is always assumed.
The Community-Living Zeitgeist
Walsh and Kastner (2006) referred to the zeitgeist of the community living imperative (p. 368). Zeitgeist, a German word that describes a particular world view at a specific time, was one of the first new concepts I learned as an undergraduate studying disability in the 1970s. It is a very useful word for the disability field given our ferment and experimentation and always-evolving conceptions of disability. I initially reacted negatively to Walsh and Kastner's (2006) use of the word because the intent seemed to suggest temporality, something that would pass. On further reflection, I thought the use of zeitgeist may also serve as an admonition that there are additional obligations in this process of conducting and communicating research: an obligation of readers to exercise due diligence and question everything. In Budd's (2001) analysis of research retractions, he observed, “For correction to be possible there has to be some unwillingness to accept what is stated today as absolutely and inviolably true” (p. 312). Published literature is interpreted and accepted only in part according to theoretical or methodological standards, but also by the politics and ideology of the issue at hand. The field of intellectual disabilities is deeply infused with philosophies and is highly politicized. I would wish no other form because it is this tenacity that has secured so many victories and transformed the nation's systems of care and support. Dissidents to the prevailing values of the field, however, are few. Errors in the original study were found only as a consequence of intense scrutiny by Walsh and Kastner (2006). I am not revisiting the scientist versus advocate debates; all of us—researchers and consumers of research—are acutely aware of the conflicts and politics of the field; to expect neutrality is naïve. 
Embrace your biases, but recognize that our causes are best served if we seek out objective, quality knowledge in a disciplined manner. The onus lies with us all, both researchers and readers of research.
Limits of Peer Reviewing
I have always found it striking that so little attention is given to the task of peer reviewing in our graduate training. Presumably, reviewing is regarded as an intellectual by-product of other forms of training: Anyone who has exhibited some form of expertise in something can do it. Having been a reviewer, the subject of reviews, and a participant on many review panels, I have learned that not all reviews are created equal; and, yet, a major part of the professional duty of academics is to review their colleagues' work.
This episode should serve as a reminder to all consumers of research about the scope and limits of peer review. Serious missteps will go unnoticed. A fundamental question is the scope of faulty research that goes “undetected or unacknowledged” (Budd, Sievert, & Schultz, 1998). Among the cynical, this episode may suggest the hollowness of the claim of research as a privileged form of knowledge. Although there may be threads of insight in the charges, when claims exceed facts or data are used indiscriminately as a higher form of knowing, those who so argue miss the fundamental point that research is about the accountability of information. Bereiter (1994) referred to the process as simply another form of discourse distinguished by its commitment to method. The seriousness with which the editorial leadership of Mental Retardation approached the task reminds us that through the application of methodological standards and the review process, science is self-correcting in the long run. This is how it should work.
Concluding Remarks: Framing the Questions
I end this commentary with a final reflection on the implications of the critique for the substantive issue of community living. Walsh and Kastner (2006) concluded that “As the number of traditional ICF/MR congregate institutional settings decline and newer support models of community services arise, it is vitally important to objectively examine the impact that service system changes have for individuals with complex and profound disabilities” (p. 368). My caveat is to attend to how the facts are framed by the research question. How the question is asked is as important as the methods used to answer it.
In the Conroy et al. (2003) study, the core question asked was about the impact of deinstitutionalization, a not unreasonable question, and one that has been asked by hundreds of other investigators spanning multiple decades. Readers of the study and the Walsh and Kastner (2006) critique must understand that program evaluations are just that: an analysis of the impact of a program, however defined, that encompasses an enormous range of phenomena, some of which may profoundly influence outcomes and others of which are largely irrelevant. This is the nature of the beast. The comparison, however, is simply too ill-defined to be interpreted as an empirical test of community living (or institutional living, for that matter). Rarely do we draw comparisons between paragons of opposing service philosophies. The more likely contrast is one of two alternative systems sharing many common features, each attempting to work with limited resources under trying circumstances. Congregate-care living is not always human warehousing. Community programs are not always exemplars of inclusion, participation, and opportunities for choice. Framing the question in terms of a simple comparison between generically labeled options blinds us to what features of program and environment are responsible for observed outcomes (Fujiura, 1998). This is hardly a new observation; the inherent limitation of such comparisons was identified in reviews of the first generation of deinstitutionalization research (e.g., Intagliata, Willer, & Wicks, 1981; Landesman-Dwyer, 1981; Rutman, 1981; Sigelman, Novak, Heal, & Switzky, 1980).
Community living is not an independent variable. Where people live and are supported is a policy choice. It is a value. My bias is that all should be served in the community, but I also know the community must do much better. Yes, methodology is important, but progress is driven by the right questions, and the most pertinent question is not “Which is better?” but, rather, “How can we best implement our ideals?”
Author: Glenn T. Fujiura, PhD, Department of Disability and Human Development, University of Illinois at Chicago, 1640 W. Roosevelt Rd., Chicago, IL 60608. email@example.com