Abstract

The purpose of this article is to move the field of intellectual and closely related developmental disabilities (IDD) towards a better understanding of evidence and evidence-based practices. To that end, we discuss (a) different perspectives on and levels of evidence, (b) commonly used evidence-gathering strategies, (c) standards to evaluate evidence, (d) the distinction between internal and external validity, and (e) guidelines for establishing evidence-based practices. We also describe how the conceptualization and use of evidence and evidence-based practices are changing to accommodate recent trends in the field.

Since the 1960s, policy makers, researchers, and practitioners have focused considerable attention on the concept of evidence and evidence-based practices (EBPs). The term evidence-based (sometimes referred to as evidence-informed) has gained traction in a number of primary disciplines and human services, including health care, rehabilitation, and education. Recently, federal, state, and local governments have begun to increase their mandates for the use of EBPs, with the corresponding development of evidence-based registers (Burkhardt, Schroter, Magura, & Means, 2015; Means, Magura, Burkhardt, & Schroter, 2015).

Despite the emphasis on evidence and EBPs, exactly what constitutes credible evidence continues to be debated (Donaldson, Christie, & Mark, 2009; Drake, 2014; Kaiser & McIntyre, 2010). Discussions surrounding what is credible evidence have historically focused largely on the philosophical and ideological disagreements concerning taxonomies and hierarchies of methodological quality and robustness (Means et al., 2015). Increasingly, however, evidence that is derived outside the sphere of experiments is also accepted as both credible and relevant (Mitchell, 2011). Donaldson et al. (2009), for example, emphasized that what constitutes credible evidence should not be limited to its quality and robustness, but must also include the context of evidence, methodological appropriateness, and the reported experiences and personal well-being of individuals. Analogously, Archibald (2015) stressed that in addition to scientific evidence, other elements, such as cultural appropriateness and concerns about equity and human rights, play a role in decision making.

The emphasis on evidence and EBPs, and the debates concerning what constitutes credible evidence and the role evidence plays in establishing EBPs and in decision making, have significant implications for the field of intellectual and developmental disabilities (IDD) as the field moves increasingly towards participatory action research, utilization-focused evaluation, methodological pluralism, outcomes-driven policy formation, and outcomes evaluation (Claes, van Loon, Vandevelde, & Schalock, 2015; Schalock, Verdugo, & Gomez, 2011; Turnbull & Stowe, 2014). Within this context, the purpose of the present article is to move the field of IDD towards a better understanding of evidence and EBPs. Specifically, we discuss (a) different perspectives on and levels of evidence, (b) commonly used evidence-gathering strategies, (c) standards to evaluate evidence, (d) the distinction between internal and external validity, and (e) guidelines for establishing EBPs. Throughout the article, the term evidence is defined as the available body of facts or information indicating whether a belief or proposition is true or valid, and evidence-based practices are defined as practices for which there is a demonstrated relation between specific practices and measured outcomes.

Perspectives on and Levels of Evidence

Perspectives on Evidence

Three perspectives on evidence have emerged in the fields of IDD, health, rehabilitation, chemical dependency, and education (Biesta, 2010; Broekaert, Autrique, Vanderplasschen, & Colpaert, 2010; Claes et al., 2015; Goldman & Azrin, 2013; Schalock et al., 2011; Shogren & Turnbull, 2010). These three are empirical-analytical, phenomenological-existential, and poststructural. The primary focus of the empirical-analytical perspective is on experimental or scientific results obtained from data-gathering strategies including randomized trials, experimental/control designs, quasi-experimental designs, multiple baseline designs, and/or multivariate designs. The primary focus of the phenomenological-existential perspective is on reported experiences and enhanced human functioning, social participation, and/or personal well-being, with associated data-gathering strategies including self-reports, case studies, ethnographies, participatory action research, multivariate designs, and/or grounded theory. The primary focus of the poststructural perspective is on desired public policy outcomes assessed via mixed methods designs, multivariate designs, population surveys, meta-analyses, and/or data registers.

These different perspectives on evidence (a) reflect a number of philosophical assumptions on the nature of knowledge, practice, and reality; (b) frame one's approach to data collection, analysis, and interpretation; (c) determine one's sensitivity to different world views; (d) shape one's thinking; and (e) represent the intersection of inquiry with application (Archibald, 2015; Strober, 2006). Additionally, the different perspectives on evidence in the field of IDD are influenced by the significant changes in policies and practices that are occurring. Briefly, these changes involve (a) encouraging a participatory action research approach to research and evaluation; (b) embracing a social-ecological model of disability; (c) emphasizing the multidimensionality of human functioning; (d) implementing systems of supports based on personal goals and the pattern and intensity of support needs; (e) conducting outcomes evaluation that encompasses human functioning dimensions, social participation, and personal well-being; and (f) identifying the multiple predictors of personal outcomes (Reinders, 2008; Schalock & Verdugo, 2012).

Levels of Evidence

There are different levels of evidence. For example, Veerman and van Yperen (2007) and Brady, Canavan, and Redmond (2016) suggested the following three levels of evidence, with associated parameters and types of research:

  • Causal evidence: There is substantial evidence that the outcome is caused by the practice. Associated research designs include randomized control trials, quasi-experimental designs, and structural equation modeling (multivariate analyses).

  • Indicative evidence: It has been demonstrated that the intervention clearly leads to the desired outcomes. Associated research designs include baseline and follow-up measures and process studies.

  • Descriptive and theoretical evidence: The intervention has a plausible rationale to explain why it should work and with whom. Associated types of research include logic model, theoretical understanding, or monitoring of practice delivery.

Evidence-Gathering Strategies

Evidence-gathering strategies can be organized into two broad measurement approaches: quantitative and qualitative. Quantitative research designs include (a) experimental-control designs (e.g., equivalent groups, randomized control trials, repeated measures, multivariate), (b) quasi-experimental designs (e.g., time series designs, multiple baseline designs, pre-post comparisons, nonequivalent control group, counterbalanced), and (c) nonexperimental designs (e.g., descriptive research, meta-analysis, consumer surveys). Qualitative research designs include grounded theory, ethnography, participatory action research, and case studies. A detailed description of these designs and their use can be found in Neutens and Rubinson (2010) and Norwood (2010).
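
To make one of these strategies concrete, the sketch below (in Python, using hypothetical scores that are not drawn from any study cited here) illustrates a simple pre-post comparison, one of the quasi-experimental designs listed above, analyzed with a paired-samples t test.

```python
import numpy as np
from scipy import stats

# Hypothetical pre-post comparison: outcome scores for the same eight
# individuals before and after a support practice is introduced.
pre = np.array([54, 60, 57, 63, 59, 62, 55, 61])
post = np.array([59, 66, 60, 70, 63, 69, 58, 67])

t_stat, p_value = stats.ttest_rel(post, pre)  # paired-samples t test
print(f"mean change = {np.mean(post - pre):.1f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because a pre-post design lacks a control condition, such a result provides at best indicative rather than causal evidence in the levels described earlier.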

The specific evidence-gathering strategy employed is influenced primarily by the perspective on evidence taken, the practice(s) being evaluated, the statutory/regulatory environment, the constituents involved in the evidence-gathering strategy, the expertise of the researchers, and the receptivity of the consumers to the information provided. Regardless of the evidence-gathering strategy employed, establishing the relation between specific practices and measured outcomes (i.e., an evidence-based practice) requires demonstrating application fidelity of the practice(s) in question. As discussed by Hogue and Dauber (2013), fidelity consists of three related factors: adherence, competence, and differentiation. Adherence is the extent to which the practice is implemented using current best practices. Competence is the quality of the evidence-gathering process. Differentiation is the degree to which the practice employed is clearly differentiated from a potentially related practice (e.g., focusing on quality of life vs. emphasizing quality of care).

Evaluation Standards

Evaluating evidence is useful only within the context of the questions being asked, what is best for whom, and what is best for what (Biesta, 2010; Bouffard & Reid, 2012; Brantlinger, Jimenez, Klinger, Pugach, & Richardson, 2005). Thus, the evaluation of evidence and potentially establishing EBPs should be based on standards related not just to the quality and robustness of the evidence (which have historically been the criteria used), but also to its relevance (Claes et al., 2015; Schalock et al., 2011).

Quality of Evidence

The quality of evidence relates to the research design used. In reference to quantitative designs, the quality of evidence can be ranked from high to low with experimental designs ranked highest, followed by quasi-experimental and nonexperimental designs (Gugiu, 2015; Sackett, Richardson, Rosenberg, & Haynes, 2005). In reference to qualitative designs, the quality of evidence is evaluated based on its credibility, transferability, dependability, and confirmability (Cesario, Morin, & Santa-Donato, 2002).

Robustness of Evidence

The robustness of evidence relates to the magnitude of the observed effect(s). In quantitative research designs, robustness is determined from probability statements, the percent of variance in the dependent variable explained by variation in the independent variable, and/or a statistically derived effect size (Daly et al., 2007; Ferguson, 2009; Given, 2006; Soler, Trizio, Nickles, & Wimsatt, 2012). In qualitative evidence-gathering strategies, the robustness of evidence is determined by whether the practice is based on a validated conceptual framework, a diversified sample, data triangulation, a clear report of the analysis, and the generalizability of the findings (Cesario et al., 2002; Creswell, Hanson, Plano, & Morales, 2007; Daly et al., 2007).
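
For readers less familiar with these quantitative indices, the following sketch (in Python, with hypothetical scores; the function and variable names are ours rather than those of the studies cited) illustrates one of them, the standardized effect size known as Cohen's d.

```python
import numpy as np

def cohens_d(treatment, control):
    """Standardized mean difference between two groups, using the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    pooled_var = ((n1 - 1) * np.var(treatment, ddof=1) +
                  (n2 - 1) * np.var(control, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)

# Hypothetical personal outcome scores for a supported group and a comparison group
supported = np.array([72, 75, 70, 78, 74, 77])
comparison = np.array([65, 68, 66, 70, 64, 69])
print(f"Cohen's d = {cohens_d(supported, comparison):.2f}")
```

Larger values of d (and, analogously, a larger proportion of variance explained) indicate more robust quantitative evidence; the qualitative criteria listed above serve a parallel function.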

Relevance of Evidence

The multiple perspectives on evidence summarized earlier and the increased use of participatory action research, utilization-focused research, and outcomes evaluation necessitate a third criterion for evaluating evidence: its relevance. For those making managerial decisions in the field of IDD, relevant evidence identifies those practices that enhance human functioning, social participation, and personal well-being. For example, research reported by Beadle-Brown, Bigby, and Bould (2015); Claes, van Hove, Vandevelde, van Loon and Schalock (2012); Gomez, Pena, Arias, and Verdugo (2016); and Walsh et al. (2010) indicated that valued personal outcomes are significantly influenced by (a) organization-level factors such as support staff strategies (e.g., facilitative assistance, including communication supports and ensuring a sense of basic security), support staff characteristics (e.g., teamwork and job satisfaction), practice leadership, management practices that reduce staff turnover and job stress, and employment programs; and (b) systems-level factors such as participation opportunities (e.g., contact with family members, friends, and people in one's social network), normalized community living arrangements, and the availability of transportation.

Analogously, for those making disability-related policy decisions, relevant evidence is that which (a) enables organizations and systems to be effective, efficient, and sustainable; (b) influences public attitudes toward people with disabilities; (c) enhances long-term outcomes for service recipients; (d) changes education and training strategies; (e) encourages efficient resource allocation practices; and (f) aligns policy goals, supports, and policy-related outcomes. For example, recent research (e.g., Shogren, Luckasson, & Schalock, 2015; Ticha, Hewitt, Nord, & Larson, 2013; Verdonschot, De Witte, Reichrath, Buntinx, & Curfs, 2009) has identified a number of policy-related practices that significantly influence personal outcomes in education and adult rehabilitation. These practices include self-determination, full citizenship, education/life-long learning, productivity, inclusion in society and community life, and human relationships.

Internal and External Validity

It is widely accepted that validity is an essential element in evaluating evidence and determining its use. Campbell and Stanley (1983) proposed a validity model that involved two principal types of validity: internal and external. Internal validity asks whether a particular practice makes a difference; external validity asks whether a particular practice can be generalized to other populations, settings, or treatments. These two types of validity have been addressed historically by a top-down approach in which a series of evaluations begin by maximizing internal validity through efficacy evaluations, progressing to effectiveness evaluations aimed at strengthening external validity. Within this paradigm, efficacy evaluations/studies assess specific practices in an ideal, highly controlled, clinical research setting, with randomized controlled trials considered the gold standard. If the efficacy evaluations find the practice has the desired effect on a small, homogeneous sample, effectiveness evaluations then estimate treatment or intervention effects in real-world environments. According to Chen (2010), although this top-down approach is used widely, it does not encompass the previously described multiple perspectives on evidence and the multiple standards used to evaluate evidence.

As an alternative to this top-down approach, Chen (2010) proposed a bottom-up approach to validity. This approach expands Campbell and Stanley's (1983) model of internal and external validity into three types: internal, external, and viable. Internal validity is the extent to which evidence obtained from an evidence-gathering strategy provides objective evidence that a specific practice causally affects specific outcomes. External validity addresses the perspectives of different stakeholders and the need to meet political, organizational, and community requirements for valid evidence. Viable validity, which expands the concept of external validity, focuses on the extent to which results can be generalized from a research setting to a real-world setting, and from one real-world setting to another targeted setting. Such generalization incorporates contextual factors and stakeholder views and interests.

Guidelines for Establishing Evidence-Based Practices

In this section of the article, we propose two guidelines for establishing EBPs: using clear operational definitions of key terms, and employing a systematic approach.

Operational Definitions

Evidence-based practices are practices for which there is a demonstrated relation between specific practices and measured outcomes. Based on this definition, there are four terms that need to be understood and defined operationally: evidence-based, practices, demonstrated relation, and outcomes.

Evidence-based

As discussed previously, evidence (a) is defined as the available body of facts or information indicating whether a belief or proposition is true or valid; (b) can be viewed from different perspectives; and (c) is evaluated on the basis of its quality, robustness, and relevance.

Practices

Practices are interventions, services, strategies, supports, and policies that focus on the enhancement of human functioning, social participation, and well-being. Best practices can come from research-based knowledge, professional values and standards, and empirically based clinical judgment, or be derived from a rigorous process of peer review and evaluation indicating effectiveness in improving outcomes. For a practice to be used as an independent variable to establish EBPs, it needs to be defined operationally and measured.

Demonstrated relation

A demonstrated relation will depend on the quality and robustness of the evidence. A demonstrated relation can be inferred if (a) there is substantial evidence that the outcome is caused by the practice, (b) it has been demonstrated that the intervention clearly leads to the desired outcome, or (c) there is a significant correlation between a specific practice and the measured outcome.
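
As a minimal illustration of criterion (c), the sketch below (in Python, with hypothetical data not drawn from any cited study) tests the correlation between a measure of practice intensity and a measured outcome.

```python
import numpy as np
from scipy import stats

# Hypothetical data: intensity of a support practice (independent variable) and a
# measured personal outcome (dependent variable) for ten service recipients.
practice_intensity = np.array([1, 2, 2, 3, 3, 4, 4, 5, 5, 6])
outcome_score = np.array([58, 61, 60, 66, 64, 70, 69, 74, 73, 78])

r, p_value = stats.pearsonr(practice_intensity, outcome_score)
print(f"r = {r:.2f}, p = {p_value:.4f}, variance explained = {r**2:.2f}")
```

A significant, substantial correlation of this kind supports criterion (c), but it does not by itself establish the causal evidence required by criterion (a).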

Outcomes

Outcomes are specific indicators of the benefits derived by program recipients that are the result of the practice(s) employed. Outcome indicators (a) are perceptions, behaviors, and/or conditions that give an indication of the person's status on the respective targeted outcome area and are obtained via self-report or the report of others; and (b) need to be based on a clearly articulated and valid conceptual and measurement model, assessed reliably, and have utility in that they are used to demonstrate the effectiveness of the aforementioned practices. Commonly used outcome areas in the field of IDD involve human functioning, social participation, and personal well-being (Gomez & Verdugo, 2016; Schalock & Luckasson, 2014).

A Systematic Approach

Figure 1 depicts six proposed components of a systematic approach to establishing EBPs. We propose further that these six components comprise the criteria for determining whether a practice in question is an evidence-based practice. By way of overview:

Figure 1. Components of a systematic approach to establishing evidence-based practices.


  • Agreeing on the perspective on evidence requires familiarity with the tenets of the empirical-analytical, phenomenological-existential, or poststructural perspective.

  • Defining the practice in question involves quantifying and assessing the practice and viewing it as an independent variable.

  • Selecting outcome areas and outcome indicators involves (a) incorporating commonly used outcome areas in the field of IDD (e.g., human functioning, social participation, and/or personal well-being), (b) relating specific outcome areas to specific practices, (c) operationally defining outcomes and outcome indicators, (d) assessing the indicators in a reliable way, and (e) viewing outcomes as dependent variables.

  • Gathering evidence requires (a) employing an evidence-gathering strategy that is consistent with the question(s) asked, the practices being evaluated, statutory/regulatory parameters, the constituents involved, and the available expertise; (b) operationally defining the strategies; and (c) demonstrating application fidelity.

  • Establishing the credibility of the evidence involves evaluating the obtained evidence in terms of its quality, robustness, and relevance.

  • Evaluating the relation between practice(s) and outcome(s) requires determining whether (a) there is substantial evidence that the outcome was caused by the practice, (b) it has been demonstrated that the intervention clearly leads to the outcome, and/or (c) the intervention has a plausible rationale to explain why it should work and with whom.

Conclusion

So, are we there yet in terms of agreeing on what is credible evidence and establishing evidence-based practices? Probably not. We are getting closer, however, in (a) understanding the different perspectives on and levels of evidence; (b) employing relevant evidence-gathering strategies; (c) evaluating evidence based on its quality, robustness, and relevance; (d) using a systematic approach to establishing EBPs; and (e) recognizing that evidence is useful only within the context of the questions being asked and what is best for whom.

Although not discussed in this article, once EBPs are established, they need to be implemented. There is an extensive body of literature regarding the translation of EBPs into practice. Key aspects of this process include (a) being sensitive to the organization or system's receptivity and culture; (b) considering EBPs within the social-ecological model of disability, which allows for a broader range of practices and encourages the design of minimally intrusive interventions; and (c) implementing the practice(s) in question via consultation with learning teams and within resource constraints (Biesta, 2010; Cook, Tankersley, & Landrum, 2009; Farrington, Clare, Holland, Barrett, & Oborn, 2015; Mitchell, 2011; Newman & Page, 2010; Pronovost, Berenholtz, & Needham, 2008; Raghavan, Bright, & Shadoin, 2008; Steinfeld et al., 2015).

In addition to the implementation strategies just mentioned, the systematic approach to establishing EBPs outlined in Figure 1 suggests the need for partnerships within the IDD field among policy makers, practitioners, researchers, and customers to establish EBPs and maximize their implementation. In this partnership, policy makers should incorporate the concept of outcomes-driven policy to formulate policies that specify policy-related goals and desired outcomes. Practitioners (including systems-level personnel) should transform their policies and practices to align with outcomes-driven policy and translate EBPs into practice. Researchers should use evidence-gathering strategies to evaluate the quality, robustness, and relevance of the evidence. Customers need to be involved in participatory action research that puts them in the position of evaluating the relevance of evidence and the degree to which the practices in question enhance their functioning, social participation, and personal well-being.

References

Archibald, T. (2015). "They just know": The epistemological politics of "evidence-based" non-formal education. Evaluation and Program Planning, 48, 137–148.

Beadle-Brown, J., Bigby, C., & Bould, E. (2015). Observing practice leadership in intellectual and developmental disability services. Journal of Intellectual Disability Research, 59, 1081–1093.

Biesta, G. J. J. (2010). Why "what works" still won't work: From evidence-based education to value-based education. Studies in Philosophy and Education, 29, 491–503.

Bouffard, M., & Reid, G. (2012). The good, the bad, and the ugly of evidence-based practice. Adapted Physical Activity Quarterly, 29, 1–24.

Brady, B., Canavan, J., & Redmond, S. (2016). Bridging the gap: Using Veerman and van Yperen's (2007) framework to conceptualize and develop evidence informed practice in an Irish youth work organization. Evaluation and Program Planning, 55, 128–133.

Brantlinger, E., Jimenez, R., Klinger, J., Pugach, M., & Richardson, V. (2005). Qualitative studies in special education. Exceptional Children, 71, 195–207.

Broekaert, E., Autrique, M., Vanderplasschen, W., & Colpaert, K. (2010). 'The human prerogative': A critical analysis of evidence-based and other paradigms of care in substance abuse treatment. Psychiatric Quarterly, 81, 227–238.

Burkhardt, J. T., Schroter, D. C., Magura, S., & Means, S. N. (2015). An overview of evidence-based program registers (EBPRs) for behavioral health. Evaluation and Program Planning, 48, 92–99.

Campbell, D. T., & Stanley, J. (1983). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.

Cesario, S., Morin, K., & Santa-Donato, A. (2002). Evaluating the level of evidence of qualitative research. Journal of Obstetric, Gynecologic, and Neonatal Nursing, 31, 708–714.

Chen, H. T. (2010). The bottom-up approach to integrative validity: A new perspective for program evaluation. Evaluation and Program Planning, 33, 205–214.

Claes, C., van Hove, G., Vandevelde, S., van Loon, J., & Schalock, R. L. (2012). The influence of support strategies, environmental factors, and client characteristics on quality of life-related outcomes. Research in Developmental Disabilities, 33, 96–103.

Claes, C., van Loon, J., Vandevelde, S., & Schalock, R. L. (2015). An integrative approach to evidence based practices. Evaluation and Program Planning, 48, 132–136.

Cook, B. G., Tankersley, M., & Landrum, T. (2009). Determining evidence-based practices in special education. Exceptional Children, 75, 365–383.

Creswell, J. W., Hanson, W. E., Plano, V. L., & Morales, A. (2007). Qualitative research designs: Selection and implementation. The Counseling Psychologist, 35, 236–264.

Daly, J., Willis, K., Small, R., Green, J., Welch, N., Kealy, M., & Hughes, E. (2007). A hierarchy of evidence for assessing qualitative health research. Journal of Clinical Epidemiology, 60, 43–49.

Donaldson, S. I., Christie, C. A., & Mark, M. M. (2009). What counts as credible evidence in applied research and evaluation practice? Thousand Oaks, CA: Sage.

Drake, R. (2014). Current perspectives on evidence-based practice. Psychiatric Services, 65, 1–19.

Farrington, C., Clare, I. C. H., Holland, A. J., Barrett, M., & Oborn, E. (2015). Knowledge exchange and integrated services: Experiences from an integrated community intellectual (learning) disability service for adults. Journal of Intellectual Disability Research, 59, 238–247.

Ferguson, C. F. (2009). An effect size primer: A guide for clinicians and researchers. Professional Psychology: Research and Practice, 40, 532–538.

Given, I. (2006). Qualitative research in evidence-based practice: A valuable partnership. Library Hi Tech, 24, 376–386.

Goldman, H. H., & Azrin, S. T. (2013). Public policy and evidence-based practices. Psychiatric Clinics of North America, 26, 899–910.

Gomez, L. E., Pena, E., Arias, B., & Verdugo, M. A. (2016). Impact of individual and organizational variables on quality of life. Social Indicators Research, 125, 649–664.

Gomez, L. E., & Verdugo, M. A. (2016). Outcomes evaluation. In R. L. Schalock & K. D. Keith (Eds.), Cross-cultural quality of life: Enhancing the lives of persons with intellectual disability (2nd ed., pp. 71–80). Washington, DC: American Association on Intellectual and Developmental Disabilities.

Gugiu, P. C. (2015). Hierarchy of evidence and appraisal of limitations (HEAL) grading system. Evaluation and Program Planning, 48, 149–159.

Hogue, A., & Dauber, S. (2013). Assessing fidelity to evidence-based practices in usual care: The example of family therapy for adolescent behavior problems. Evaluation and Program Planning, 37, 21–30.

Kaiser, A. P., & McIntyre, L. L. (2010). Introduction to special section on evidence-based practices for persons with intellectual and developmental disabilities. American Journal on Intellectual and Developmental Disabilities, 115, 357–363.

Means, S. N., Magura, S., Burkhardt, J. T., & Schroter, D. C. (2015). Comparing rating paradigms for evidence-based program registers in behavioral health: Evidentiary criteria and implications for assessing programs. Evaluation and Program Planning, 48, 100–116.

Mitchell, V. (2011). Value based health and social care: Beyond evidence-based practice. Nursing Ethics, 18, 865–890.

Neutens, J. J., & Rubinson, L. (2010). Research techniques for the health sciences. San Francisco, CA: Benjamin Cummings.

Newman, E. A., & Page, A. C. (2010). Bridging the gap between best evidence and best practice in mental health. Clinical Psychology Review, 30, 127–142.

Norwood, S. L. (2010). Research essentials: Foundations for evidence-based practices. Boston, MA: Pearson Electronic.

Pronovost, P., Berenholtz, S., & Needham, D. (2008). Translating evidence into practice: A model for large scale knowledge translation. British Medical Journal, 337, 76–90.

Raghavan, R., Bright, C., & Shadoin, A. (2008). Toward a policy ecology of implementation of evidence-based practices in public mental health settings. Implementation Science, 3, 3–26.

Reinders, H. (2008). The transformation of human services. Journal of Intellectual Disability Research, 52, 564–571.

Sackett, D. L., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2005). Evidence-based medicine: How to practice and teach EBM. Philadelphia, PA: Churchill-Livingstone.

Schalock, R. L., & Luckasson, R. (2014). Clinical judgment (2nd ed.). Washington, DC: American Association on Intellectual and Developmental Disabilities.

Schalock, R. L., & Verdugo, M. A. (2012). A leadership guide for today's disabilities organizations: Overcoming challenges and making change happen. Baltimore, MD: Brookes.

Schalock, R. L., Verdugo, M. A., & Gomez, L. E. (2011). Evidence-based practices in the field of intellectual and developmental disabilities. Evaluation and Program Planning, 34, 273–282.

Shogren, K., & Turnbull, H. R. (2010). Public policy and outcomes for persons with intellectual disability: Extending and expanding the public policy framework of AAIDD's 11th edition of Intellectual disability: Definition, classification, and systems of supports. Intellectual and Developmental Disabilities, 48, 375–386.

Shogren, K. A., Luckasson, R., & Schalock, R. L. (2015). Using context as an integrative framework to align policy goals, supports, and outcomes in intellectual disability. Intellectual and Developmental Disabilities, 53, 367–376.

Soler, L., Trizio, E., Nickles, T., & Wimsatt, W. C. (2012). Characterizing the robustness of science. Dordrecht/Heidelberg/London: Springer.

Steinfeld, B., Scott, J., Vilander, G., Marx, L., Quirk, M., Lindberg, J., & Koerner, K. (2015). The role of lean process improvement in implementation of evidence-based practices in behavioral health care. Journal of Behavioral Health Services and Research, 42, 504–518.

Strober, M. H. (2006). Habits of mind: Challenges for multidisciplinarity. Social Epistemology, 20, 315–331.

Ticha, R., Hewitt, A., Nord, D., & Larson, S. (2013). System and individual outcomes and their predictors in services and supports for people with IDD. Intellectual and Developmental Disabilities, 51, 298–315.

Turnbull, H. R., & Stowe, M. J. (2014). Elaborating on the AAIDD public policy framework. Intellectual and Developmental Disabilities, 52, 1–12.

Veerman, J. W., & van Yperen, T. A. (2007). Degrees of freedom and degrees of certainty: A developmental model for the establishment of evidence-based youth care. Evaluation and Program Planning, 30, 212–221.

Verdonschot, M. M. L., De Witte, L. P., Reichrath, E., Buntinx, W. H. E., & Curfs, L. M. G. (2009). Impact of environmental factors on community participation of persons with an intellectual disability: A systematic review. Journal of Intellectual Disability Research, 53, 54–64.

Walsh, P. N., Emerson, E., Lobb, C., Hatton, C., Bradley, V., Schalock, R. L., & Moseley, C. (2010). Supported accommodation for people with intellectual disabilities and quality of life: An overview. Journal of Policy and Practice in Intellectual Disabilities, 115, 218–233.