The purpose of this article is to move the field of intellectual and closely related developmental disabilities (IDD) towards a better understanding of evidence and evidence-based practices. To that end, we discuss (a) different perspectives on and levels of evidence, (b) commonly used evidence-gathering strategies, (c) standards to evaluate evidence, (d) the distinction between internal and external validity, and (e) guidelines for establishing evidence-based practices. We also describe how the conceptualization and use of evidence and evidence-based practices are changing to accommodate recent trends in the field.
Since the 1960s, policy makers, researchers, and practitioners have focused considerable attention on the concept of evidence and evidence-based practices (EBPs). The term evidence-based (sometimes referred to as evidence-informed) has gained traction in a number of primary disciplines and human services, including health care, rehabilitation, and education. Recently, federal, state, and local governments have begun to increase their mandates for the use of EBPs, with the corresponding development of evidence-based registers (Burkhardt, Schroter, Magura, & Means, 2015; Means, Magura, Burkhardt, & Schroter, 2015).
Despite the emphasis on evidence and EBPs, exactly what constitutes credible evidence continues to be debated (Donaldson, Christie, & Mark, 2009; Drake, 2014; Kaiser & McIntyre, 2010). Discussions surrounding what is credible evidence have historically focused largely on the philosophical and ideological disagreements concerning taxonomies and hierarchies of methodological quality and robustness (Means et al., 2015). Increasingly, however, evidence that is derived outside the sphere of experiments is also accepted as both credible and relevant (Mitchell, 2011). Donaldson et al. (2009), for example, emphasized that what constitutes credible evidence should not be limited to its quality and robustness, but must also include the context of evidence, methodological appropriateness, and the reported experiences and personal well-being of individuals. Analogously, Archibald (2015) stressed that in addition to scientific evidence, other elements, such as cultural appropriateness and concerns about equity and human rights, play a role in decision making.
The emphasis on evidence and EBPs, the debates concerning what constitutes credible evidence, and the role evidence plays in establishing EBPs and decision making have significant implications for the field of intellectual and developmental disabilities (IDD) as the field moves increasingly towards participatory action research, utilization-focused evaluation, methodological pluralism, outcomes-driven policy formation, and outcomes evaluation (Claes, van Loon, Vandevelde, & Schalock, 2015; Schalock, Verdugo, & Gomez, 2011; Turnbull & Stowe, 2014). Within this context, the purpose of the present article is to move the field of IDD towards a better understanding of evidence and EBPs. Specifically, we discuss (a) different perspectives on and levels of evidence, (b) commonly used evidence-gathering strategies, (c) standards to evaluate evidence, (d) the distinction between internal and external validity, and (e) guidelines for establishing EBPs. Throughout the article, the term evidence is defined as the available body of facts or information indicating whether a belief or proposition is true or valid, and evidence-based practices as practices for which there is a demonstrated relation between specific practices and measured outcomes.
Perspectives on and Levels of Evidence
Perspectives on Evidence
Three perspectives on evidence have emerged in the fields of IDD, health, rehabilitation, chemical dependency, and education (Biesta, 2010; Broekaert, Autrique, Vanderplasschen, & Colpaert, 2010; Claes et al., 2015; Goldman & Azrin, 2013; Schalock et al., 2011; Shogren & Turnbull, 2010). These three are empirical-analytical, phenomenological-existential, and poststructural. The primary focus of the empirical-analytical perspective is on experimental or scientific results obtained from data gathering strategies including random trials, experimental/control designs, quasi-experimental designs, multiple baseline designs, and/or multivariate designs. The primary focus of the phenomenological-existential perspective is on reported experiences and enhanced human functioning, social participation, and/or personal well-being, with associated data gathering strategies including self-reports, case studies, ethnographies, participatory action research, multivariate designs, and/or grounded theory. The primary focus of the poststructural perspective is on desired public policy outcomes assessed via mixed methods designs, multivariate designs, population surveys, meta-analyses, and/or data registers.
These different perspectives on evidence (a) reflect a number of philosophical assumptions on the nature of knowledge, practice, and reality; (b) frame one's approach to data collection, analysis, and interpretation; (c) determine one's sensitivity to different world views; (d) shape one's thinking; and (e) represent the intersection of inquiry with application (Archibald, 2015; Strober, 2006). Additionally, the different perspectives on evidence in the field of IDD are influenced by the significant changes in policies and practices that are occurring. Briefly, these changes involve (a) encouraging a participatory action research approach to research and evaluation; (b) embracing a social-ecological model of disability; (c) emphasizing the multidimensionality of human functioning; (d) implementing systems of supports based on personal goals and the pattern and intensity of support needs; (e) conducting outcomes evaluation that encompasses human functioning dimensions, social participation, and personal well-being; and (f) identifying the multiple predictors of personal outcomes (Reinders, 2008; Schalock & Verdugo, 2012).
Levels of Evidence
There are different levels of evidence. For example, Veerman and van Yperen (2007) and Brady, Canavan, and Redmond (2016) suggested the following three levels of evidence, with associated parameters and types of research.
Causal evidence: There is substantial evidence that the outcome is caused by the practice. Associated research designs include randomized control trials, quasi-experimental designs, and structural equation modeling (multivariate analyses).
Indicative evidence: It has been demonstrated that the intervention clearly leads to the desired outcomes. Associated research designs include baseline and follow-up measures and process studies.
Descriptive and theoretical evidence: The intervention has a plausible rationale to explain why it should work and with whom. Associated types of research include logic models, theoretical understanding, or monitoring of practice delivery.
Evidence-Gathering Strategies
Evidence-gathering strategies can be organized into two broad measurement approaches: quantitative and qualitative. Quantitative research designs include (a) experimental-control designs (e.g., equivalent groups, randomized control trials, repeated measures, multivariate), (b) quasi-experimental designs (e.g., time series designs, multiple baseline designs, pre-post comparisons, nonequivalent control group, counterbalanced), and (c) nonexperimental designs (e.g., descriptive research, meta-analysis, consumer surveys). Qualitative research designs include grounded theory, ethnography, participatory action research, and case studies. A detailed description of these designs and their use can be found in Neutens and Rubinson (2010) and Norwood (2010).
The specific evidence-gathering strategy employed is influenced primarily by the perspective on evidence taken, the practice(s) being evaluated, the statutory/regulatory environment, the constituents involved in the evidence-gathering strategy, the expertise of the researchers, and the receptivity of the consumers to the information provided. Regardless of the evidence-gathering strategy employed, establishing the relation between specific practices and measured outcomes (i.e., an evidence-based practice) requires demonstrating application fidelity of the practice(s) in question. As discussed by Hogue and Dauber (2013), fidelity consists of three related factors: adherence, competence, and differentiation. Adherence is the extent to which the practice is implemented using current best practices. Competence is the quality of the evidence-gathering process. Differentiation is the degree to which the practice employed is clearly differentiated from a potentially related practice (e.g., focusing on quality of life vs. emphasizing quality of care).
Standards to Evaluate Evidence
Evaluating evidence is useful only within the context of the questions being asked: what is best for whom, and what is best for what (Biesta, 2010; Bouffard & Reid, 2012; Brantlinger, Jimenez, Klinger, Pugach, & Richardson, 2005). Thus, the evaluation of evidence, and potentially the establishment of EBPs, should be based on standards related not just to the quality and robustness of the evidence (which historically have been the criteria used), but also to its relevance (Claes et al., 2015; Schalock et al., 2011).
Quality of Evidence
The quality of evidence relates to the research design used. In reference to quantitative designs, the quality of evidence can be ranked from high to low with experimental designs ranked highest, followed by quasi-experimental and nonexperimental designs (Gugiu, 2015; Sackett, Richardson, Rosenberg, & Haynes, 2005). In reference to qualitative designs, the quality of evidence is evaluated based on its credibility, transferability, dependability, and confirmability (Cesario, Morin, & Santa-Donato, 2002).
Robustness of Evidence
The robustness of evidence relates to the magnitude of the observed effect(s). In quantitative research designs, robustness is determined from probability statements, the percent of variance in the dependent variable explained by variation in the independent variable, and/or the statistically derived effect size (Daly et al., 2007; Ferguson, 2009; Given, 2006; Soler, Trizio, Nickles, & Wimsatt, 2012). In qualitative evidence-gathering strategies, the robustness of evidence is determined by whether the practice is based on a validated conceptual framework, a diversified sample, data triangulation, a clear report of the analysis, and the generalizability of the findings (Cesario et al., 2002; Creswell, Hanson, Plano Clark, & Morales, 2007; Daly et al., 2007).
Relevance of Evidence
The multiple perspectives on evidence summarized earlier and the increased use of participatory action research, utilization-focused research, and outcomes evaluation necessitate a third criterion for evaluating evidence: its relevance. For those making managerial decisions in the field of IDD, relevant evidence identifies those practices that enhance human functioning, social participation, and personal well-being. For example, research reported by Beadle-Brown, Bigby, and Bould (2015); Claes, van Hove, Vandevelde, van Loon and Schalock (2012); Gomez, Pena, Arias, and Verdugo (2016); and Walsh et al. (2010) indicated that valued personal outcomes are significantly influenced by (a) organization-level factors such as support staff strategies (e.g., facilitative assistance, including communication supports and ensuring a sense of basic security), support staff characteristics (e.g., teamwork and job satisfaction), practice leadership, management practices that reduce staff turnover and job stress, and employment programs; and (b) systems-level factors such as participation opportunities (e.g., contact with family members, friends, and people in one's social network), normalized community living arrangements, and the availability of transportation.
Analogously, for those making disability-related policy decisions, relevant evidence is that which (a) enables organizations and systems to be effective, efficient, and sustainable; (b) influences public attitudes toward people with disabilities; (c) enhances long-term outcomes for service recipients; (d) changes education and training strategies; (e) encourages efficient resource allocation practices; and (f) aligns policy goals, supports, and policy-related outcomes. For example, recent research (e.g., Shogren, Luckasson, & Schalock, 2015; Ticha, Hewitt, Nord, & Larson, 2013; Verdonschot, De Witte, Reichrath, Buntinx, & Curfs, 2009) has identified a number of policy-related practices that significantly influence personal outcomes in education and adult rehabilitation. These practices include self-determination, full citizenship, education/life-long learning, productivity, inclusion in society and community life, and human relationships.
Internal and External Validity
It is widely accepted that validity is an essential element in evaluating evidence and determining its use. Campbell and Stanley (1983) proposed a validity model that involved two principal types of validity: internal and external. Internal validity asks whether a particular practice makes a difference; external validity asks whether a particular practice can be generalized to other populations, settings, or treatments. These two types of validity have historically been addressed through a top-down approach, in which a series of evaluations begins by maximizing internal validity through efficacy evaluations and then progresses to effectiveness evaluations aimed at strengthening external validity. Within this paradigm, efficacy evaluations/studies assess specific practices in an ideal, highly controlled, clinical research setting, with randomized controlled trials considered the gold standard. If the efficacy evaluations find the practice has the desired effect on a small, homogeneous sample, effectiveness evaluations then estimate treatment or intervention effects in real-world environments. According to Chen (2010), although this top-down approach is used widely, it does not encompass the previously described multiple perspectives on evidence and the multiple standards used to evaluate evidence.
As an alternative to this top-down approach, Chen (2010) proposed a bottom-up approach to validity. This approach expands Campbell and Stanley's (1983) model of internal and external validity into three types: internal, external, and viable. Internal validity is the extent to which evidence obtained from an evidence-gathering strategy provides objective evidence that a specific practice causally affects specific outcomes. External validity addresses the perspectives of different stakeholders, and the need to meet political, organizational, and community requirements for valid evidence. Viable validity, which expands the concept of external validity, focuses on the extent to which results can be generalized from a research setting to a real-world setting, and from one real-world setting to another targeted setting. Such generalization incorporates contextual factors and stakeholder views and interests.
Guidelines for Establishing Evidence-Based Practices
In this section of the article, we propose two guidelines for establishing EBPs: using clear operational definitions of key terms, and employing a systematic approach.
Clear Operational Definitions of Key Terms
Evidence-based practices are practices for which there is a demonstrated relation between specific practices and measured outcomes. Based on this definition, there are four terms that need to be understood and defined operationally: evidence-based, practices, demonstrated relation, and outcomes.
As discussed previously, evidence (a) is defined as the available body of facts or information indicating whether a belief or proposition is true or valid; (b) can be viewed from different perspectives; and (c) is evaluated on the basis of its quality, robustness, and relevance.
Practices are interventions, services, strategies, supports, and policies that focus on the enhancement of human functioning, social participation, and well-being. Best practices can come from research-based knowledge, professional values and standards, and empirically-based clinical judgment, or be derived from a rigorous process of peer review and evaluation indicating effectiveness in improving outcomes. For a practice to be used as an independent variable to establish EBPs, it needs to be defined operationally and measured.
A demonstrated relation will depend on the quality and robustness of the evidence. A demonstrated relation can be inferred if (a) there is substantial evidence that the outcome is caused by the practice, (b) it has been demonstrated that the intervention clearly leads to the desired outcome, or (c) there is a significant correlation between a specific practice and the measured outcome.
Outcomes are specific indicators of the benefits derived by program recipients that are the result of the practice(s) employed. Outcome indicators (a) are perceptions, behaviors, and/or conditions that give an indication of the person's status on the respective targeted outcome area and are obtained via self-report or the report of others; and (b) need to be based on a clearly articulated and valid conceptual and measurement model, assessed reliably, and have utility in that they are used to demonstrate the effectiveness of the aforementioned practices. Commonly used outcome areas in the field of IDD involve human functioning, social participation, and personal well-being (Gomez & Verdugo, 2016; Schalock & Luckasson, 2014).
A Systematic Approach
Figure 1 depicts six proposed components of a systematic approach to establishing EBPs. We propose further that these six components comprise the criteria for determining whether a practice in question is an evidence-based practice. By way of overview:
Agreeing on the perspective on evidence requires familiarity with the tenets of the empirical-analytical, phenomenological-existential, or poststructural perspective.
Defining the practice in question involves quantifying and assessing the practice and viewing it as an independent variable.
Selecting outcome areas and outcome indicators involves (a) incorporating commonly used outcome areas in the field of IDD (e.g., human functioning, social participation, and/or personal well-being), (b) relating specific outcome areas to specific practices, (c) defining operationally outcomes and outcome indicators, (d) assessing the indicators in a reliable way, and (e) viewing outcomes as dependent variables.
Gathering evidence requires (a) employing an evidence-gathering strategy that is consistent with the question(s) asked, the practices being evaluated, statutory/regulatory parameters, the constituents involved, and the available expertise; (b) defining operationally the strategies; and (c) demonstrating application fidelity.
Establishing the credibility of the evidence involves evaluating the obtained evidence in terms of its quality, robustness, and relevance.
Evaluating the relation between practice(s) and outcome(s) requires determining whether (a) there is substantial evidence that the outcome was caused by the practice, (b) it has been demonstrated that the intervention clearly leads to the outcome, and/or (c) the intervention has a plausible rationale to explain why it should work and with whom.
So, are we there yet in terms of agreeing on what is credible evidence and establishing evidence-based practices? Probably not. We are getting closer, however, in (a) our understanding of the different perspectives on and levels of evidence; (b) employing relevant evidence-gathering strategies; (c) evaluating evidence based on its quality, robustness, and relevance; (d) using a systematic approach to establishing EBPs; and (e) recognizing that evidence is useful only within the context of the questions being asked: what is best for whom, and what is best for what.
Although not discussed in this article, once EBPs are established, they need to be implemented. There is an extensive body of literature regarding the translation of EBPs into practice. Key aspects of this process include (a) being sensitive to the organization or system's receptivity and culture; (b) considering EBPs within the social-ecological model of disability, which allows for a broader range of practices and encourages the design of minimally intrusive interventions; and (c) implementing the practice(s) in question via consultation with learning teams and within resource constraints (Biesta, 2010; Cook, Tankersley, & Landrum, 2009; Farrington, Clare, Holland, Barrett, & Oborn, 2015; Mitchell, 2011; Newman & Page, 2010; Pronovost, Berenholtz, & Needham, 2008; Raghavan, Bright, & Shadoin, 2008; Steinfeld et al., 2015).
In addition to the implementation strategies just mentioned, the systematic approach to establishing EBPs outlined in Figure 1 suggests the need for partnerships within the IDD field among policy makers, practitioners, researchers, and customers to establish EBPs and maximize their implementation. In this partnership, policy makers should incorporate the concept of outcomes-driven policy to formulate policies that specify policy-related goals and desired outcomes. Practitioners (including systems-level personnel) should align their policies and practices with outcomes-driven policy and translate EBPs into practice. Researchers should use evidence-gathering strategies to evaluate the quality, robustness, and relevance of the evidence. Customers need to be involved in participatory action research that puts them in the position of evaluating the relevance of evidence and the degree to which the practices in question enhance their functioning, social participation, and personal well-being.