Abstract

Policy evaluation focuses on the assessment of policy-related personal, family, and societal changes or benefits that follow as a result of the interventions, services, and supports provided to those persons to whom the policy is directed. This article describes a systematic approach to policy evaluation based on an evaluation framework and an evaluation process that combine the use of logic models and systems thinking. The article also includes an example of how the framework and process have recently been used in policy development and evaluation in Flanders (Belgium), as well as four policy evaluation guidelines based on relevant published literature.

Introduction and Overview

Policy evaluation focuses on the assessment of policy-related personal, family, and societal changes or benefits that follow as a result of the interventions and supports provided to those persons to whom the policy is directed. Policy evaluation logically follows policy development and implementation. As discussed in preceding articles, policy development involves the decision process by which individuals, groups, or institutions establish policies that align basic concepts, principles, procedures, or protocols, and policy-specific goals and associated outcomes. In contrast, policy implementation is based on a contextual analysis, employs a value-based approach, aligns the service delivery system both horizontally and vertically, and is implemented through a partnership.

Policy evaluation is a complex process that is influenced by numerous contextual issues and challenges associated with operationalizing measurable outcome indicators, deciding on what constitutes credible evidence, developing the approach taken to outcome evaluation, enhancing the capability of organizations and systems to assess policy-related outcomes, and using the evaluation results for multiple purposes. The intent of this article is to address these issues and challenges by describing a policy evaluation framework and a policy evaluation process based on the use of logic models and systems thinking. In addition, the article presents an example of how the framework and process have recently been used in policy development and evaluation in Flanders (Belgium), and discusses four policy evaluation guidelines based on relevant published literature.

Policy Evaluation Framework

Logic models are used widely in policy evaluation because of their utility in articulating the operative relations among policy goals, program services, and desired outcomes; enabling policy makers and provider organizations to understand what must be done to achieve policy outcomes; identifying critical factors that can influence policy outcomes; and clarifying for policy implementers the sequence of policy-related inputs, throughput, outputs, and outcomes (Donaldson, 2007; Funnell & Rogers, 2011; Schalock & Verdugo, 2012; Schalock, Verdugo & Gomez, 2011; van Loon et al., 2013). Figure 1 summarizes the four components of a logic model applied to policy evaluation.

Figure 1

Policy evaluation framework.


The input component involves a value-based policy that leads to the development and implementation of interventions, services, and supports to enhance personal, family, and/or societal valued outcomes. Values are characterized by their ideological origin, resistance to change over time, goal-oriented nature, ability to affect one's choice and interest, and subjectivity (Shams, Akbari Sari & Yazdani, 2016).

The throughput component involves a system of supports that encompasses interventions, services, and individualized support strategies that aim to promote the development, independence, interests, and well-being of a person, and to enhance the individual's functioning, participation within society, and engagement in life activities. A system of supports is the planned and integrated use of an array of strategies and resources that include professionally based interventions, agency-provided services, and individually focused support strategies. These support strategies encompass natural supports, technology, prosthetics, education across the lifespan, reasonable accommodations, dignity and respect, personal strengths/assets, and professional services (Chiu, Lombardi, Claes, & Schalock, 2017). A system of supports provides a structure for enhancing elements of human performance that are interdependent and cumulative, and is built around the individual's needs and aspirations.

The output component of the evaluation framework includes the structures and environments that provide opportunities and support a person's participation, involvement, and development, and enhance personal, family, or societal well-being. The outcome component involves personal, family, or societal changes or benefits that follow as a result or consequence of some activity, intervention, support, or service. These outcomes are reflected in measures of personal well-being such as enhanced quality of life and socio-economic status and are in line with the basic principles and articles of the United Nations Convention on the Rights of Persons With Disabilities (UNCRPD; United Nations, 2006).
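To make these four components concrete, the following minimal Python sketch (an illustration only; neither the framework nor the cited sources prescribe a data format, and all names and example content are hypothetical) shows how the input, throughput, output, and outcome elements of a logic model might be recorded side by side for later analysis.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PolicyLogicModel:
    """Illustrative record linking the four logic-model components."""
    inputs: List[str] = field(default_factory=list)       # value-based policy goals
    throughputs: List[str] = field(default_factory=list)  # system-of-supports strategies
    outputs: List[str] = field(default_factory=list)      # structures/environments providing opportunities
    outcomes: List[str] = field(default_factory=list)     # personal/family/societal changes or benefits

# Hypothetical skeleton for a personal-budgets policy (example content only).
model = PolicyLogicModel(
    inputs=["Enhance control over one's own life (UNCRPD-aligned goal)"],
    throughputs=["Personal budgets", "Natural supports", "Assistive technology"],
    outputs=["Individualized support plans in place", "Inclusive community settings"],
    outcomes=["Improved quality of life scores", "Increased community participation"],
)
print(model.outcomes)
```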

Policy Evaluation Process

The described policy evaluation framework is a way of integrating theoretical components of a logic model applied to policy evaluation. This section of the article discusses the six steps that are involved in a systematic approach to policy evaluation. These six steps are summarized in Figure 2.

Figure 2

Policy evaluation process.


Step 1: Identify Policy-Related Goals and/or Objectives

The first step in the policy evaluation process involves identifying policy-related goals and/or objectives. In this step, policy rules and regulations are analyzed according to their intended value-based outcomes. This is an important step, as it indicates the extent to which the policy focuses on long-term, sustainable quality of life improvement (Costanza et al., 2008). The role of the government is not “to make people happier,” but to create the conditions needed to meet basic human needs related to a valued life of quality (Nussbaum, 2015). Improvement of quality of life is the result of the extent to which basic needs are met (objective) in relation to personal or group perceptions (subjective; Costanza et al., 2008; Hagerty et al., 2001).
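Read abstractly, and purely as an illustrative sketch rather than a formula given by Costanza et al. (2008) or Hagerty et al. (2001), this relation between objective need fulfillment and subjective perception can be written as

$$\mathrm{QOL} \approx \sum_{k} w_{k}\, f\!\left(O_{k}, S_{k}\right),$$

where $O_{k}$ denotes the objectively measured fulfillment of basic need $k$, $S_{k}$ the subjective perception or valuation of that need, $w_{k}$ a person- or policy-specific weight, and $f$ a combining function. The point of the sketch is simply that neither the objective nor the subjective term alone determines quality of life.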

Step 2: Operationalize Goals/Objectives Into Outcome Areas

The second step involves operationalizing goals and objectives into outcome areas associated with personal, family, or societal changes. In this phase, the alignment between value-based goals and outcome areas is made explicit (Leichsenring, 2004). Table 1 lists common outcome areas associated with these changes.

Table 1

Outcome Areas Associated With Personal, Family, or Societal Well-Being


Step 3: Select Measurable Indicators

Step three involves selecting measurable outcome indicators for each outcome area. The selection of measurable indicators is not an easy exercise. Indicators should be valid (actually measure what they are intended to measure), reliable (provide the same information when measured by different persons), sensitive (able to measure change), and specific (reflect changes only in the situation concerned; Bowen & Kreidler, 2008). The biggest challenge is to find indicators that ask the right questions, rather than simply using indicators that are already available. Therefore, indicator selection and development should be a collaborative process that incorporates relevant contextual information and the expertise of different stakeholders. Commonly used categories of indicators are structure, process, and outcome (Hung & Jerng, 2014). Structure indicators reflect the capacities available for interventions, whereas process indicators provide information on how well the intervention has been established. Outcome indicators are essential in policy evaluation because they allow one to assess the effect(s) of the policy. They also represent the validity of the process as defined and the adequacy of the structure as put forward (Deerberg-Wittram, Guth, & Porter, 2013).
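As a purely illustrative sketch (the field names and example are our own and are not drawn from the cited sources), an indicator register could capture both the indicator category and the four quality criteria as explicit checks:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    category: str        # "structure", "process", or "outcome"
    valid: bool          # measures what it is intended to measure
    reliable: bool       # yields the same information across raters
    sensitive: bool      # able to detect change
    specific: bool       # reflects change only in the situation concerned

    def meets_quality_criteria(self) -> bool:
        """An indicator is retained only if all four criteria hold."""
        return all([self.valid, self.reliable, self.sensitive, self.specific])

# Hypothetical example for a personal-budgets policy.
qol_indicator = Indicator(
    name="Self-reported quality of life (standardized scale score)",
    category="outcome",
    valid=True, reliable=True, sensitive=True, specific=False,
)
print(qol_indicator.meets_quality_criteria())  # False -> revisit the indicator
```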

Step 4: Gather Evidence

In previous work, we elaborated on evidence-gathering strategies that can be organized into two broad measurement approaches: quantitative or qualitative. Quantitative research designs include experimental-control designs (e.g., equivalent groups, randomized control trials, repeated measures, multivariate), quasi-experimental designs (e.g., time series designs, multiple baseline designs, pre-post comparisons, nonequivalent control group, counterbalanced), and nonexperimental designs (e.g., descriptive research, meta-analysis, consumer surveys; Claes, van Loon, Vandevelde, & Schalock, 2015). Qualitative research designs include grounded theory, ethnography, participatory research, and case studies. A detailed description of these designs and their use is published in Neutens and Rubinson (2010) and Norwood (2010).
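By way of a hedged illustration of one of the simplest quantitative designs named above (a pre-post comparison), the following sketch uses hypothetical quality-of-life scores; the data, sample size, and variable names are invented for the example and do not come from the Flemish evaluation.

```python
import numpy as np
from scipy import stats

# Hypothetical quality-of-life scores for the same 8 budget holders,
# measured before and one year after receiving a personal budget.
pre = np.array([62, 55, 70, 48, 66, 59, 73, 51], dtype=float)
post = np.array([68, 61, 72, 55, 71, 60, 80, 58], dtype=float)

# Paired t-test: did scores change, on average, within persons?
t_stat, p_value = stats.ttest_rel(post, pre)
mean_change = (post - pre).mean()

print(f"mean change = {mean_change:.1f} points, t = {t_stat:.2f}, p = {p_value:.3f}")
```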

The specific evidence-gathering strategy employed is influenced primarily by the perspective on evidence taken, the practice(s) being evaluated, the statutory/regulatory environment, the constituents involved in the evidence-gathering strategy, the expertise of the researchers, and the receptivity of the consumers to the information provided (Schalock, Gomez, Verdugo, & Claes, in press). Regardless of the evidence-gathering strategy employed, establishing the relation between specific practices and measured outcomes (i.e., an evidence-based practice) requires demonstrating application fidelity of the practice(s) in question. As discussed by Hogue and Dauber (2013), fidelity consists of three related factors: adherence, competence, and differentiation. Adherence is the extent to which the practice is implemented using current best practices. Competence is the quality of the evidence-gathering process. Differentiation is the degree to which the practice employed is clearly differentiated from a potentially related practice (e.g., focusing on quality of life vs. emphasizing quality of care).

Step 5: Establish the Credibility of the Evidence

Establishing the credibility of the evidence involves being sensitive to three different perspectives on the credibility of evidence: the empirical-analytical, the phenomenological-existential, and the post-structural (Broekaert, Autrique, Vanderplasschen, & Colpaert, 2010; Claes et al., 2015). These three perspectives involve different approaches to gathering evidence and thereby shape how disability-related policy is evaluated. The empirical-analytical perspective focuses on experimental or scientific evidence (Blayney, Kalyuga, & Sweller, 2010; Brailsford & Williams, 2001; Cohen, Stavri, & Hersh, 2004). By contrast, the phenomenological-existential perspective emphasizes evidence based on reported experiences of well-being (Kinash & Hoffman, 2009; Mesibov & Shea, 2010; Parker, 2005). From a post-structural perspective, the credibility of evidence is based on public policy principles such as inclusion, self-determination, participation, and empowerment (Broekaert, Van Hove, Bayliss, & D'Oosterlinck, 2004; Goldman & Azrin, 2003; Shogren & Turnbull, 2010).

Regardless of the perspective taken, establishing the credibility of evidence is based on its quality, its robustness, and its relevance (Claes et al., 2015). The quality of evidence is related to the methodology or type of research design. Based on the methodology used, the quality of evidence can be ranked from high to low as follows (Sackett, Richardson, Rosenberg, & Haynes, 2005): randomized trials and experimental/control designs, quasi-experimental designs, pre-post comparisons, correlational studies, case studies, and surveys. The robustness of evidence refers to the magnitude of the observed effect. The magnitude of the observed effect(s) can be determined from: (a) probability statements (e.g., the probability that the results are due to chance is less than 1 time in 100, p < .01); (b) the percent of variance explained in the dependent variable by variation in the independent variable; and/or (c) a statistically derived effect size. When qualitative research methods are used, other standards can be employed to evaluate the robustness of the evidence (cf. Brantlinger, Jimenez, Klingner, Pugach, & Richardson, 2005; Claes et al., 2015). The relevance of evidence is related to purpose. Major purposes involve clinical, managerial, and policy decision making. Evaluating the relevance of evidence needs to be done within the context of the questions being asked: what is best for whom, and what is best for what (Biesta, 2010; Brantlinger et al., 2005; Bouffard & Reid, 2012).
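The three ways of expressing the magnitude of an effect can be illustrated with a short, hypothetical calculation (the data and group labels are invented for the example; this is not a procedure prescribed by the cited sources):

```python
import numpy as np
from scipy import stats

# Hypothetical outcome scores for an intervention and a comparison group.
intervention = np.array([71, 68, 75, 66, 73, 70, 77, 69], dtype=float)
comparison = np.array([64, 62, 69, 60, 67, 63, 70, 65], dtype=float)

# (a) Probability statement: two-sample t-test.
t_stat, p_value = stats.ttest_ind(intervention, comparison)

# (c) Statistically derived effect size: Cohen's d (pooled standard deviation).
n1, n2 = len(intervention), len(comparison)
pooled_sd = np.sqrt(((n1 - 1) * intervention.var(ddof=1) +
                     (n2 - 1) * comparison.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (intervention.mean() - comparison.mean()) / pooled_sd

# (b) Percent of variance explained: eta-squared derived from the t statistic.
eta_squared = t_stat**2 / (t_stat**2 + n1 + n2 - 2)

print(f"p = {p_value:.4f}, d = {cohens_d:.2f}, "
      f"variance explained = {eta_squared:.0%}")
```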

Step 6: Use the Evidence/Outcomes for Multiple Purposes

Policy-related evaluation is defined as assessing personal, family, or societal changes or benefits that follow as a result or consequence of some activity, intervention, service, or support. These outcomes can be used for multiple purposes, including summative evaluation, formative evaluation, and research. Table 2 provides examples of each of these uses. The material presented in Table 2 is based on the published work of Azzam and Levine (2015), Claes et al. (2015), Cullen et al. (2016), Deerberg-Wittram et al. (2013), and Gugiu and Rodriguez-Campos (2007).

Table 2

Exemplary Uses of Assessed Policy Outcomes


Example From Flanders

In 2014, the Flemish government approved the law on personal budgets. The goal of this law is to give people with a disability more control over their lives. Within a new system of supports, personal budgets are seen as a vehicle for change that aims to empower people with disabilities. The implementation of personal budgets is one part of an outcome-driven social policy that strives to enhance quality of life in line with the UNCRPD (Claes, Vandenbussche, & Lombardi, 2016; Schalock & Keith, 2016; Vlaams Parlement, 2013-2014).

In terms of evidence-based policy, the Flemish government seeks an answer to one main question: “What is the impact of personal budgets on the quality of life of persons with disabilities?” We used the six-step policy evaluation process depicted in Figure 2 to determine potential outcomes for each policy subgoal. Table 3 summarizes these potential outcomes, which are based on document analyses, case studies, expert panels, and an international Delphi study.

Table 3

Examples of Data Collection in Terms of Policy Evaluation


Policy Evaluation Guidelines

Policy evaluation is not done in a vacuum. In addition to the structured approach to the policy evaluation framework and process reflected in Figures 1 and 2, there are at least four factors that significantly influence policy evaluation and the use of policy evaluation results. These four factors involve: (a) contextual variables that influence disability policy at the micro-, meso-, and macro-system levels; (b) different perspectives on evidence; (c) the fidelity of the policy's implementation; and (d) the evaluation capability of the organization or system involved in the implementation and evaluation of policy.

Be Sensitive to Contextual Variables

Contextual variables can influence policy evaluation at the micro-, meso-, and macro-system level. At the microsystem level, for example, consumer empowerment, self-advocacy, and personal and family-centered planning have brought about changes in the focus of interventions, services, and supports; self-directed funding and personal budgets; and the criteria by which policy outcomes are evaluated (Shogren, Luckasson, & Schalock, 2015; Shogren, Schalock, & Luckasson, in press).

At the mesosystem level, organizations and systems are changing their policies and practices to conform to the transformation era, whose characteristics include being more person/family centered, streamlined and horizontally structured, and performance based (Schalock & Verdugo, 2014). Concurrently, we are seeing the emergence of new public management that views the market as the prime regulatory instrument in the public domain, with an associated emphasis on decentralization, quality control, effectiveness, and efficiency (DiRita, Parmenter, & Stancliffe, 2008; Schalock & Verdugo, 2012).

At the macrosystem level, both human service organizations and larger service delivery systems are being challenged by changes in the social-political-fiscal environments within which people with disabilities and their families live and service/support delivery systems operate. These challenges and changes are reflected in an increased emphasis on continuous quality improvement, demonstrated policy accountability, a focus on organization and system sustainability, and multiple performance-based perspectives (Schalock, Verdugo, & Lee, 2016).

Agree on Perspective on Evidence

One of the major results of these contextual variables has been the emergence of different perspectives on evidence. The perspective one takes on evidence will influence not only how one evaluates the credibility of policy-related outcome data, but also its potential use (Archibald, 2015; Biesta, 2010; Mertens, 2016; Morrow & Nkwake, 2016). As discussed previously, the primary focus of the empirical-analytical perspective is on experimental or scientific results obtained from data-gathering strategies such as randomized trials, experimental/control designs, quasi-experimental designs, multiple baseline designs, and/or multivariate designs. The primary focus of the phenomenological-existential perspective is on reported experiences and enhanced human functioning, social participation, and/or personal well-being, with associated data-gathering strategies such as self-reports, case studies, ethnographies, participatory action research, multivariate designs, and/or grounded theory. The primary focus of the post-structural perspective is on desired public policy outcomes assessed via mixed methods designs, multivariate designs, population surveys, meta-analyses, and/or data registers.

These different perspectives reflect a number of philosophical assumptions on the nature of knowledge, practice, and reality; frame one's approach to data collection, analysis, and interpretation; determine one's sensitivity to different world views; shape one's thinking; and represent the intersection of evaluation and application (Schalock et al., in press). As an important policy evaluation guideline, stakeholders need to be familiar with the different perspectives on evidence and frame policy evaluation to be aligned with the emphasized perspective.

Ensure Application Fidelity

The effectiveness of a given policy is related in large part to whether it is implemented in reference to three application fidelity criteria: adherence, competence, and differentiation. As discussed by Claes et al. (2015) and Hogue and Dauber (2013), adherence refers to the quality or extent to which the policy is actually implemented within the organization's or system's policies and practices. Competence refers to the quality of skill delivery and whether the policy was implemented by organization- and systems-level personnel who have the attitudes, skills, and knowledge required for knowledge transfer and effective implementation. Differentiation refers to the degree to which organization- and systems-level policies and practices reflect the logic model parameters depicted in Figure 1, rather than previous service/support delivery approaches. As an important policy evaluation guideline, unless a policy is implemented consistent with its stated parameters and meets these three application fidelity criteria, its intended outcomes cannot be evaluated accurately.

Build Evaluation Capacity

Disability policy is implemented largely through service/support provider organizations and the large systems that provide statutory rules, regulations, and funding. With the increasing focus on outcomes-driven policy formulation and outcomes evaluation, a critical issue that emerges is the level of evaluation capability (i.e., capacity) of those organizations and systems that are expected to provide outcome information. The term “evaluation capacity” refers to developing in organizations and systems the necessary skills to conduct ongoing, rigorous evaluation (Cousins, Goh, Elliott, Aubry, & Gilbert, 2014). A recent analysis (Norton, Milat, Edwards, & Giffin, 2016) identified those factors associated with successful capacity building. These factors were: training and professional development as an element of evaluation capacity building, participatory approaches to evaluation, linking training with practical application, partnerships among evaluators and key stakeholders, embedding evaluation into routine practices, and tailoring the evaluation capacity building strategy to the organization or system's context.

The strong connection between successful capacity building and practical application underscores the distinction between the capacity to do (i.e., building) and the capacity to use (i.e., utilization). As discussed by Bourgeois, Whynot, and Theriault (2015), Cousins et al. (2014), and Schalock et al. (2016), integrating the results of policy evaluation into organization and system routines and cultures is closely associated with a commitment to quality improvement that involves a continuous process of enhancing valued outcomes through a quality improvement loop consisting of assessing, planning, doing, and evaluating.

These four policy evaluation guidelines will help overcome many of the barriers to policy evaluation reported in the literature (cf. Flitcroft, Gillespie, Salkeid, Carter, & Trevena, 2011; Trochim, 2009). In a recent analysis of specific policy evaluation barriers from a systems perspective, Schneider, Milat, and Moore (2016) reported that: (a) at the macrosystem level, barriers involve political influence/sensitivity, limited funding, and time constraints; (b) at the mesosystem level, barriers involve staff retention/turnover, approval processes, culture of evaluation, tools, training, intellectual property regulation, and changing liaisons; and (c) at the microsystem level, barriers involve the skill level of staff, confidence, staff trust, career priorities, and motivation.

Conclusion

This article has stressed the need to use a structured approach to policy evaluation that is based on a clearly described and operationalized evaluation framework (Figure 1) and evaluation process (Figure 2). Logic models provide the framework for specifying the theoretical relations among the input, throughput, output, and outcome components of public policy. This framework incorporates values; policy-related interventions, services, and supports; structures and environments that facilitate growth, development, and enhancement; and the personal, family, and societal changes resulting from these inputs, throughputs, and outputs. Evaluators also need to be sensitive to the six-step policy evaluation process, to evaluation capacity, and to the policy evaluation guidelines discussed above. A structured approach such as the one described in this article provides a policy evaluation framework and also brings together the necessary triad of policy, practice, and research.

References

Archibald, T. (2015). They just know: The epistemological politics of 'evidence-based' non-formal education. Evaluation and Program Planning, 48, 137–148.
Azzam, T., & Levine, B. (2015). Politics in evaluation: Politically responsive evaluation in high stakes environments. Evaluation and Program Planning, 53, 44–56.
Biesta, G. J. J. (2010). Why "what works" still won't work: From evidence-based education to values-based education. Studies in Philosophy and Education, 29(5), 491–503.
Blayney, P., Kalyuga, S., & Sweller, J. (2010). Interactions between the isolated-interactive elements effect and levels of learner expertise: Experimental evidence from an accountancy class. Instructional Science, 38(3), 277–287.
Bouffard, M., & Reid, G. (2012). The good, the bad, and the ugly of evidence-based practice. Adapted Physical Activity Quarterly, 29(1), 1–24.
Bourgeois, I., Whynot, J., & Theriault, E. (2015). Application of an organizational evaluation capacity self-assessment instrument to different organizations: Similarities and lessons learned. Evaluation and Program Planning, 50, 47–55.
Bowen, S., & Kreidler, S. (2008). Indicator madness: A cautionary reflection on the use of indicators in healthcare. Health Policy, 3(4), 41–48.
Brailsford, E., & Williams, P. L. (2001). Evidence-based practices: An experimental study to determine how different working practice affects eye radiation dose during cardiac catheterization. Radiography, 7(1), 21–30.
Brantlinger, E., Jimenez, R., Klingner, J., Pugach, M., & Richardson, V. (2005). Qualitative studies in special education. Exceptional Children, 71(2), 195–207.
Broekaert, E., Autrique, M., Vanderplasschen, W., & Colpaert, K. (2010). 'The human prerogative': A critical analysis of evidence-based and other paradigms of care in substance abuse treatment. Psychiatric Quarterly, 81(3), 227–238.
Broekaert, E., Van Hove, G., Bayliss, P., & D'Oosterlinck, F. (2004). The search for an integrated paradigm of care models for people with handicaps, disabilities and behavioural disorders at the Department of Orthopedagogy of Ghent University. Education and Training in Developmental Disabilities, 39(3), 206–216.
Chiu, C., Lombardi, M., Claes, C., & Schalock, R. L. (2017). Aligning UNCRPD articles, QOL domains, and supports: An international Delphi study. Unpublished manuscript.
Claes, C., van Loon, J., Vandevelde, S., & Schalock, R. L. (2015). An integrative approach to evidence based practices. Evaluation and Program Planning, 48, 132–136.
Claes, C., Vandenbussche, H., & Lombardi, M. (2016). Human rights and quality of life domains: Identifying cross-cultural indicators. In R. L. Schalock & K. D. Keith (Eds.), Cross-cultural quality of life: Enhancing the lives of people with intellectual disability (2nd ed., pp. 167–174). Washington, DC: American Association on Intellectual and Developmental Disabilities.
Cohen, A. M., Stavri, P. Z., & Hersh, W. R. (2004). A categorization and analysis of the criticisms of evidence-based medicine. International Journal of Medical Informatics, 73(1), 35–43.
Costanza, R., Fisher, B., Ali, S., Beer, C., Bond, L., Boumans, R., . . . Farley, J. (2008). An integrative approach to quality of life measurement, research, and policy. Surveys and Perspectives Integrating Environment and Society, 1(1), 11–15.
Cousins, J. B., Goh, S. C., Elliott, C., Aubry, T., & Gilbert, N. (2014). Government and voluntary sector differences in organizational capacity to do and use evaluation. Evaluation and Program Planning, 44, 1–13.
Cullen, P., Clapham, K., Byrne, J., Hunter, K., Senserrick, L. K., & Ivers, R. (2016). The importance of context in logic model construction for a multi-site community-based Aboriginal driver licensing program. Evaluation and Program Planning, 57, 8–15.
Decr. Vl. 25 April 2014 houdende de Persoonsvolgende Financiering voor personen met een handicap en tot hervorming van de wijze van financiering van de zorg en ondersteuning van personen met een handicap [Flemish decree of 25 April 2014 concerning person-following funding for persons with a disability and reforming the funding of care and support for persons with a disability]. (2014). Belgisch Staatsblad, 2014035693.
Deerberg-Wittram, J., Guth, C., & Porter, M. E. (2013). Value-based competition: The role of outcome measurement. Public Health Forum, 21(4), 12.e1–12.e3.
DiRita, P. A., Parmenter, T. R., & Stancliffe, R. J. (2008). Utility, economic rationalism and the circumscription of agency. Journal of Intellectual Disability Research, 52(7), 618–625.
Donaldson, S. I. (2007). The emergence of program theory-driven evaluation science. New York, NY: Taylor & Francis Group.
Ferket, N., Claes, C., & De Maeyer, J. (2016). Het ontwikkelen van een theoretisch model over de relatie tussen Persoonsvolgende Financiering (PVF) en de Quality of Life (QOL) [Developing a theoretical model of the relation between person-following funding (PVF) and quality of life (QOL)]. Brussels, Belgium: VAPH.
Flitcroft, K., Gillespie, J., Salkeid, G., Carter, S., & Trevena, L. (2011). Getting evidence into policy: The need for deliberative strategies. Social Science & Medicine, 72(7), 1039–1046.
Funnell, S. C., & Rogers, P. J. (2011). Purposeful program theory: Effective use of theories of change and logic models. San Francisco, CA: Sage.
Goldman, H. H., & Azrin, S. T. (2003). Public policy and evidence-based practice. Psychiatric Clinics of North America, 26(4), 899–917.
Gugiu, P. C., & Rodriguez-Campos, L. (2007). Semi-structured interview protocol for constructing logic models. Evaluation and Program Planning, 30, 339–350.
Hagerty, M. R., Cummins, R. A., Ferriss, A. L., Land, K., Michalos, A. C., Peterson, M., . . . Vogel, J. (2001). Quality of life indexes for national policy: Review and agenda for research. Social Indicators Research, 55(1), 1–96.
Hogue, A., & Dauber, S. (2013). Assessing fidelity to evidence-based practices in usual care: The example of family therapy for adolescent behavior problems. Evaluation and Program Planning, 37, 21–30.
Hung, K. Y., & Jerng, J. S. (2014). Time to have a paradigm shift in health care quality measurement. Journal of the Formosan Medical Association, 113(10), 673–679.
Kinash, S., & Hoffman, M. (2009). Children's wonder-initiated phenomenological research: A rural primary school case study. Evaluation, 6(3), 1–14.
Leichsenring, F. (2004). Randomized controlled versus naturalistic studies: A new research agenda. Bulletin of the Menninger Clinic, 68(2), 137–151.
Mertens, D. M. (2016). Assumptions at the philosophical and programmatic levels in evaluation. Evaluation and Program Planning, 59, 102–108.
Mesibov, G. B., & Shea, V. (2010). The TEACCH Program in the era of evidence-based practice. Journal of Autism and Developmental Disorders, 40(5), 570–579.
Morrow, N., & Nkwake, A. M. (2016). Conclusion: Agency in the face of complexity and the future of assumption-aware evaluation practice. Evaluation and Program Planning, 59, 154–160.
Neutens, J. J., & Rubinson, L. (2010). Research techniques for the health sciences. San Francisco, CA: Benjamin Cummings.
Norton, S., Milat, A., Edwards, B., & Giffin, M. (2016). Narrative review of strategies by organizations for building evaluation capacity. Evaluation and Program Planning, 58, 1–19.
Norwood, S. L. (2010). Research essentials: Foundations for evidence-based practices. Boston, MA: Pearson Electronic.
Nussbaum, M. C. (2015). Philosophy and economics in the capabilities approach: An essential dialogue. Journal of Human Development and Capabilities, 16(1), 1–14.
Parker, M. (2005). False dichotomies: EBM, clinical freedom, and the art of medicine. Medical Humanities, 31(1), 23–30.
Sackett, D. L., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (2005). Evidence-based medicine: How to practice and teach EBM. London, England: Churchill Livingstone.
Schalock, R. L., Gomez, L. E., Verdugo, M. A., & Claes, C. (in press). Evidence and evidence-based practices: Are we there yet? Intellectual and Developmental Disabilities.
Schalock, R. L., & Keith, K. D. (Eds.). (2016). Cross-cultural quality of life: Enhancing the lives of people with intellectual disability (2nd ed.). Washington, DC: American Association on Intellectual and Developmental Disabilities.
Schalock, R. L., & Verdugo, M. A. (2012). A conceptual and measurement framework to guide policy development and systems change. Journal of Policy and Practice in Intellectual Disabilities, 9(1), 63–72.
Schalock, R. L., & Verdugo, M. A. (2014). Quality of life as a change agent. International Public Health Journal, 6(2), 105–116.
Schalock, R. L., Verdugo, M. A., & Gomez, L. E. (2011). Evidence-based practices in the field of intellectual and developmental disabilities: An international consensus approach. Evaluation and Program Planning, 34(3), 273–282.
Schalock, R. L., Verdugo, M. A., & Lee, T. (2016). A systematic approach to an organization's sustainability. Evaluation and Program Planning, 56, 56–63.
Schneider, C. H., Milat, A. J., & Moore, G. (2016). Barriers and facilitators to evaluation of health policies and programs: Policymaker and research perspectives. Evaluation and Program Planning, 58, 208–215.
Shams, L., Akbari Sari, A., & Yazdani, S. (2016). Values in health policy: A concept analysis. International Journal of Health Policy and Management, 5(11), 623–630.
Shogren, K. A., Luckasson, R., & Schalock, R. L. (2015). Using context as an integrative framework to align policy goals, supports, and outcomes in intellectual disability. Intellectual and Developmental Disabilities, 53, 367–376.
Shogren, K. A., Schalock, R. L., & Luckasson, R. (in press). The use of a context-based change model to unfreeze the status quo and drive valued outcomes. Journal of Policy and Practice in Intellectual Disabilities.
Shogren, K. A., & Turnbull, H. R. (2010). Public policy and outcomes for persons with intellectual disability: Extending and expanding the public policy framework of AAIDD's 11th edition of Intellectual Disability: Definition, Classification, and Systems of Support. Intellectual and Developmental Disabilities, 48(5), 375–386.
Trochim, W. M. K. (2009). Evaluation policy and evaluation practice. New Directions for Evaluation, 2009(123), 13–32.
United Nations. (2006). Convention on the rights of persons with disabilities.
van Loon, J., Bonham, G., Peterson, D., Schalock, R. L., Claes, C., & Decramer, A. (2013). The use of evidence-based outcomes in systems and organizations providing services and supports to persons with intellectual disabilities. Evaluation and Program Planning, 36(1), 80–88.
Vlaams Parlement. (2013-2014). Ontwerp van decreet houdende de persoonsvolgende financiering voor personen met een handicap en tot hervorming van de wijze van financiering van de zorg en de ondersteuning voor personen met een handicap [Draft decree concerning person-following funding for persons with a disability and reforming the funding of care and support for persons with a disability]. Brussels, Belgium: Vlaams Parlement.

Author notes

The authors gratefully acknowledge the collaboration with the Flemish Government, more specifically the Flemish Agency for Persons with a Disability (VAPH).