Abstract

Researchers have evaluated active support in agencies serving persons with developmental disabilities as a means of increasing staff assistance and service user engagement. A systematic review identified two studies in which researchers reported three experimental evaluations of active support. Only one experiment showed a clear functional relationship between active support and the dependent variables, with “ineffective” to “questionable” percentage of nonoverlapping data points effect sizes and acceptable percentage of all nonoverlapping data points effect sizes. The other two experiments did not show experimental control; however, there was evidence that the investigators in these studies did not sufficiently manipulate the independent variable. Based on these data, active support meets Chambless and Hollon's (1998) criterion for a “promising treatment” but not for an evidence-based practice. Future researchers evaluating active support should demonstrate that the experimenter manipulated the independent variable and should report data on individual participants.

Active support is a training system used to increase staff assistance and service user engagement (Jones et al., 1999). The implementation of active support in community residences for people with developmental disabilities has increased over the past decade (Jones et al., 1999; Koritsas, Iacono, Hamilton, & Leighton, 2008; Stancliffe, Harman, Toogood, & McVilly, 2008; Toogood, 2008). Many investigators have attempted to establish the efficacy of active support, with varied results (Beadle-Brown, Hutchison, & Whelton, 2008; Bradshaw et al., 2004; Jones et al., 1999; Jones, Felce, Lowe, Bowley, Pagler, & Gallagher, 2001; Jones, Felce, Lowe, Bowley, Pagler, Strong, et al., 2001; Mansell, Beadle-Brown, Whelton, Beckett, & Hutchison, 2008; Mansell, Elliott, Beadle-Brown, Ashman, & MacDonald, 2002; Mansell, McGill, & Emerson, 2001; Stancliffe, Harman, Toogood, & McVilly, 2007; Toogood, 2008); however, some researchers have argued that its efficacy is well proven and that future studies should instead focus on refining its effects and applying it more widely (Stancliffe, Jones, & Mansell, 2008). Unfortunately, there have been no systematic reviews of the active support literature using evidence-based practice standards (Chambless & Hollon, 1998). Therefore, in this study, we conducted a systematic review of experimental evaluations of active support using those evidence-based practice criteria.

Method

The first author searched CINAHL, ERIC, MEDLINE, and PsycINFO using the search terms active support in the abstract and intellectual, developmental, or learning and disabilities in the text. This initial search yielded 21 potentially relevant articles. The first author examined the reference section of each article and identified 4 additional articles. He then screened all 25 articles to determine which met the following criteria: (a) published in an English-language peer-reviewed journal; (b) had a clearly outlined participant and method section in which the researcher(s) described participant characteristics, service environment, and all steps involved in the application of active support; (c) used a small-N or group experimental design with implementation of active support in an agency serving clients with intellectual or other developmental disabilities; and (d) used measures of staff assistance and service user engagement. Ten studies met these inclusion criteria (Beadle-Brown et al., 2008; Bradshaw et al., 2004; Jones, Felce, Lowe, Bowley, Pagler, & Gallagher, 2001; Jones, Felce, Lowe, Bowley, Pagler, Strong, et al., 2001; Jones et al., 1999; Mansell et al., 2008; Mansell et al., 2002; Mansell et al., 2001; Stancliffe et al., 2007; Toogood, 2008). The authors then coded these studies against the established evidence-based practice criteria (Chambless & Hollon, 1998) as to (a) whether they were experiments and, if so, what experimental design was used; (b) whether they reported using treatment manuals; (c) whether they described participant characteristics; (d) which outcome measures were used; and (e) which researchers conducted the study. For a treatment to be considered well established, the intervention must be evaluated in well-designed randomized controlled trials or small-N experiments in which researchers use treatment manuals and appropriate outcome measures, and its effects must be demonstrated by two or more distinct research groups (Chambless & Hollon, 1998).

Inclusion Criteria Decision Process

Only Jones et al. (1999) and Stancliffe et al. (2007) reported experiments and met the other inclusion criteria; the remaining investigators did not use experimental designs as defined by the Chambless and Hollon (1998) criteria. Bradshaw et al. (2004) and Mansell et al. (2002) used pre–posttest comparison group designs that did not involve individual randomization of participants, and only Mansell et al. employed a control group. Because these designs do not meet the more conservative evidence-based practice standards, we excluded these two studies from further analysis despite their positive findings with respect to active support.

Researchers reported experiments in two studies. Jones et al. (1999) and Stancliffe et al. (2007) both used nonconcurrent multiple baseline designs across settings (houses). Stancliffe et al. reported two experiments: the first used a multiple baseline design across three nongovernment-run group homes and the second, a multiple baseline design across two government-run group homes (Stancliffe et al., 2007, Figures 1 and 2, respectively). Thus, there were a total of three experiments evaluating active support.

Data Extraction and Analysis

We identified three experiments in which researchers used multiple baseline designs across settings. Therefore, we first calculated effect sizes using the percentage of nonoverlapping data points and the percentage of all nonoverlapping data points for each outcome variable (i.e., staff assistance and client engagement) in each experiment. Because researchers in all three experiments also reported follow-up data, we calculated effect sizes for both intervention and follow-up. Scruggs and Mastropieri (1998) suggested that researchers consider a percentage of nonoverlapping data points greater than 90% as very effective; 70% to 90%, effective; 50% to 70%, questionable; and below 50%, ineffective (Scruggs & Mastropieri, 1998, p. 224). Although these percentage of nonoverlapping data points criteria are valid and helpful in the evaluation of single-subject research, the percentage of all nonoverlapping data points is favored because it correlates more strongly with other indices of intervention effect and takes all data points into account rather than depending on a single extreme baseline point (Parker, Hagan-Burke, & Vannest, 2007). Because there were no randomized controlled trials, we could not calculate Cohen's d or related effect size statistics. We then carefully examined the figures in each study to determine whether the researchers demonstrated experimental control over each variable.
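To make the two overlap indices concrete, the following minimal Python sketch computes both statistics for a single baseline-to-intervention comparison and applies Scruggs and Mastropieri's (1998) interpretive bands. The sketch is our illustration rather than code from any of the reviewed studies, the data values are hypothetical, and it assumes the intervention is expected to increase the measure, as is the case for staff assistance and client engagement.

```python
def pnd(baseline, intervention):
    """Percentage of nonoverlapping data points (Scruggs & Mastropieri, 1998):
    the share of intervention points exceeding the highest baseline point."""
    ceiling = max(baseline)
    return 100.0 * sum(x > ceiling for x in intervention) / len(intervention)


def pand(baseline, intervention):
    """Percentage of all nonoverlapping data (Parker, Hagan-Burke, & Vannest,
    2007): the share of all points remaining after removing the fewest points
    needed to eliminate all overlap between the two phases."""
    n = len(baseline) + len(intervention)
    # For each candidate cutoff, removing baseline points at or above it and
    # intervention points below it leaves no overlap; keep the cheapest option.
    cutoffs = sorted(set(baseline) | set(intervention))
    cutoffs.append(max(cutoffs) + 1)  # covers removing every intervention point
    fewest_removed = min(
        sum(b >= c for b in baseline) + sum(t < c for t in intervention)
        for c in cutoffs
    )
    return 100.0 * (n - fewest_removed) / n


def classify_pnd(value):
    """Scruggs and Mastropieri's (1998) interpretive bands for PND."""
    if value > 90:
        return "very effective"
    if value >= 70:
        return "effective"
    if value >= 50:
        return "questionable"
    return "ineffective"


# Hypothetical engagement percentages for one house.
baseline = [20, 25, 22, 30, 28]
intervention = [35, 40, 28, 45, 50, 42]
print(pnd(baseline, intervention), classify_pnd(pnd(baseline, intervention)))
# -> 83.3 "effective"
print(pand(baseline, intervention))
# -> 90.9; only the single overlapping intervention point (28) is removed
```

Because the percentage of all nonoverlapping data points uses every data point in both phases, it is less sensitive than the percentage of nonoverlapping data points to a single extreme baseline value, which is one reason Parker et al. (2007) favored it.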

Results and Discussion

Table 1 shows that the percentage of nonoverlapping data points for staff assistance varied from 27.7% to 66% (ineffective to questionable) during treatment and from 30% to 50% (ineffective) at follow-up. The percentage of nonoverlapping data points for client engagement ranged from 16.7% to 54.0% (ineffective to questionable) during treatment and from 0% to 50% (ineffective) during follow-up. The percentage of all nonoverlapping data points calculations yielded greater values for staff assistance, ranging from 60.6% to 85% during the application of active support and from 81% to 90% at follow-up. The percentage of all nonoverlapping data points for client engagement varied from 66.7% to 78% during treatment and from 81% to 93.3% at follow-up.

Table 1

Percentage of Nonoverlapping Data Points for the Dependent Variables by Study

Review of the figures revealed that only Jones et al. (1999) demonstrated experimental control of both staff assistance and client engagement. These authors also reported highly statistically significant intervention effects for total engagement and staff assistance, ps < .00005, based on ANOVA, although they did not report F statistics or degrees of freedom.

Stancliffe et al. (2007) reported standardized mean effect sizes for engagement of .32 from pre- to posttest and .51 from pretest to follow-up. The corresponding effect sizes were larger for measures of staff assistance, .61 and .94, respectively. Neither of Stancliffe et al.'s experiments, however, demonstrated experimental control of either dependent variable. In Experiment 1 there was no experimental control over client engagement because intervention data overlapped almost completely with baseline data for both houses in the study. There was also no experimental control over staff assistance because there was substantial overlap between baseline and intervention data for all three houses; staff assistance in House 1 was, in fact, lower and had a descending trend during intervention in comparison to baseline. Experiment 2 likewise did not show experimental control over client engagement because the intervention data overlapped substantially with baseline data for House 2 and because House 2 baseline data had an accelerating trend. Similarly, in Experiment 2 Stancliffe et al. did not demonstrate experimental control over staff assistance because of substantial overlap between baseline and intervention for House 1. (Note that in both figures Stancliffe et al. used nonlinear x-axes that did not line up with each other, making inferences concerning experimental control difficult.)

Both studies met the other criteria for well-established treatments; however, neither Jones et al. (1999) nor Stancliffe et al. (2007) collected treatment integrity data. Stancliffe et al. also did not provide a thorough description of the staff members receiving training or of the houses where training occurred; they reported only the sample size and staff members' type of position, along with staffing levels, turnover, and average length of employment in the houses. They provided no pertinent information, such as staff education levels and experience working with people with developmental disabilities, and no information on the roles staff members served, programming, or management structure.

In our systematic review we identified only one experiment demonstrating that active support caused a change in client engagement. Thus, active support meets Chambless and Hollon's (1998) criterion for a promising treatment but not for an evidence-based practice. Moreover, the percentage of nonoverlapping data points effect sizes for both staff assistance and client engagement were modest and met only Scruggs and Mastropieri's (1998) criteria for ineffective to questionable. The percentage of all nonoverlapping data points calculations broadly paralleled the percentage of nonoverlapping data points values for the intervention, although the absolute numbers differed; the percentage of all nonoverlapping data points effect sizes were greater, especially for the follow-up data. Although there are no established criteria for judging percentage of all nonoverlapping data points effect sizes, as there are for the percentage of nonoverlapping data points, the former may be a more useful and accurate effect size for evaluating the two studies (Parker et al., 2007).

Although active support specifically targets the promotion of staff assistance, the effect sizes for the experimental studies varied, and Stancliffe et al. (2007) failed to demonstrate experimental control of this variable. Given the strong correlation between staff support and client engagement (Felce, 1998; Felce, Lowe, Beecham, & Hallam, 2000), these findings suggest that the researchers did not manipulate the independent variable sufficiently to produce large effects on client engagement. This raises the question of whether active support is an ineffective intervention or an effective intervention that has not been applied optimally. Perhaps future researchers should conduct parametric analyses to investigate what rate of staff support produces what duration of client engagement. Staff support includes both antecedent interventions, such as various forms of assistance, and consequences, such as praise; future researchers could also investigate which of these components are effective and what might be done to enhance their effectiveness. The measures of staff support used in these studies simply recorded whether staff provided assistance; whether staff correctly used systematic prompting procedures, such as most-to-least and least-to-most prompting, was not recorded. Likewise, these studies did not positively demonstrate that staff praise reinforced client engagement. Recently, researchers have found that different forms of attention, such as reprimands, tickles, eye contact, and praise, may function as reinforcers for different people (Kodak, Northup, & Kelley, 2007) and that attention from preferred, but not nonpreferred, staff may function as a reinforcer (Jerome & Sturmey, 2008). Thus, the measure of staff assistance used in these studies may be insensitive to important functional properties of staff assistance. Researchers could address this issue by parsing measures of staff assistance into their components, such as least-to-most and most-to-least prompting and contingent staff interaction.

Treatment integrity data were not reported in either study. This issue raises the broader question of what precisely constitutes the independent variable in active support. For example, if a staff member attends all training sessions and accurately completes all workshop exercises, but their behavior does not change on the job, has the experimenter manipulated the independent variable? How much of what kind of change in staff behavior is required to say that active support has indeed been implemented? Future researchers should attend to this issue by operationally defining the independent variable. Such a definition would help practitioners determine whether active support has been implemented and help researchers determine the integrity of the independent variable.

In the process of evaluating studies against the evidence-based practice criteria, we excluded from further analysis two studies that used pre–posttest comparison group designs (Bradshaw et al., 2004; Mansell et al., 2002). The evidence-based practice criteria require that investigators use randomized controlled trials or small-N experimental designs. With an intervention like active support, which is carried out in pre-established service settings such as group homes, the individual randomization of participants is simply impossible. Perhaps the evidence-based practice criteria need to be evaluated in light of the research to which they are applied. In cases such as those identified here, a balance could be struck between well-controlled studies and the logistics of the intervention or research setting. The evaluation of other applied interventions such as active support may bear this out.

The effect size calculations in the present review involved both percentage of nonoverlapping data points and percentage of all nonoverlapping data points indices. Although both Jones et al. (1999) and Stancliffe et al. (2007) used small-N designs, each graph represented group data aggregated across the staff and clients within a home. It is questionable whether percentage of nonoverlapping data points and percentage of all nonoverlapping data points calculations are valid in such situations, which may represent a limitation of the present review. Neither group of authors reported individual data for staff members and clients within the homes. This is understandable given the focus of active support as a home-wide intervention involving all staff and clients. Future researchers who examine the individual effects of active support within these settings may help clarify the specific intervention components and client characteristics that affect the outcomes of active support training.

References

Beadle-Brown, J., A. Hutchison, and B. Whelton. 2008. A better life: The implementation and effect of person-centred active support in the Avenues trust. Tizard Learning Disability Review 13: 15–24.

Bradshaw, J., P. McGill, R. Stretton, A. Kelly-Pike, J. Moore, S. Macdonald, et al. 2004. Implementation and evaluation of active support. Journal of Applied Research in Intellectual Disabilities 17: 139–148.

Chambless, D. L., and S. D. Hollon. 1998. Defining empirically supported therapies. Journal of Consulting and Clinical Psychology 66: 7–18.

Felce, D. 1998. The determinants of staff and resident activity in residential services for people with severe intellectual disability: Moving beyond size, building design, location and number of staff. Journal of Intellectual and Developmental Disability 23: 103–119.

Felce, D., K. Lowe, J. Beecham, and A. Hallam. 2000. Exploring the relationships between costs and quality of services for adults with severe intellectual disabilities and the most severe challenging behaviours in Wales: A multivariate regression analysis. Journal of Intellectual and Developmental Disability 25: 307–326.

Jerome, J., and P. Sturmey. 2008. Reinforcing efficacy of interactions with preferred and nonpreferred staff under progressive-ratio schedules. Journal of Applied Behavior Analysis 41: 221–225.

Jones, E., D. Felce, K. Lowe, C. Bowley, J. Pagler, B. Gallagher, and A. Roper. 2001. Evaluation of the dissemination of active support training in staffed community residences. American Journal on Mental Retardation 106: 344–358.

Jones, E., D. Felce, K. Lowe, C. Bowley, J. Pagler, G. Strong, et al. 2001. Evaluation of the dissemination of active support training and training trainers. Journal of Applied Research in Intellectual Disabilities 14: 79–99.

Jones, E., J. Perry, K. Lowe, D. Felce, S. Toogood, F. Dunstan, et al. 1999. Opportunity and the promotion of activity among adults with severe intellectual disability living in community residences: The impact of training staff in active support. Journal of Intellectual Disability Research 43: 164–178.

Kodak, T., J. Northup, and M. E. Kelley. 2007. An evaluation of the types of attention that maintain problem behavior. Journal of Applied Behavior Analysis 40: 167–171.

Koritsas, S., T. Iacono, D. Hamilton, and D. Leighton. 2008. The effect of active support training on engagement, opportunities for choice, challenging behavior and support needs. Journal of Intellectual and Developmental Disability 33: 247–256.

Mansell, J., J. Beadle-Brown, S. MacDonald, and B. Ashman. 2003. Resident involvement in activity in small community homes for people with learning disabilities. Journal of Applied Research in Intellectual Disabilities 16: 63–74.

Mansell, J., J. Beadle-Brown, B. Whelton, C. Beckett, and A. Hutchison. 2008. Effect of service structure and organization on staff care practices in small community homes for people with intellectual disabilities. Journal of Applied Research in Intellectual Disabilities 21: 398–413.

Mansell, J., T. Elliott, J. Beadle-Brown, B. Ashman, and S. MacDonald. 2002. Engagement in meaningful activity and “active support” of people with intellectual disabilities in residential care. Research in Developmental Disabilities 23: 342–352.

Mansell, J., P. McGill, and E. Emerson. 2001. Development and evaluation of innovative residential services for people with severe intellectual disability and serious challenging behavior. International Review of Research in Mental Retardation 24: 245–298.

Parker, R. I., S. Hagan-Burke, and K. Vannest. 2007. Percentage of all non-overlapping data (PAND): An alternative to percentage of nonoverlapping data points. Journal of Special Education 40: 194–204.

Scruggs, T., and M. Mastropieri. 1998. Summarizing single-subject research: Issues and applications. Behavior Modification 22: 221–242.

Stancliffe, R. J., A. D. Harman, S. Toogood, and K. R. McVilly. 2007. Australian implementation and evaluation of active support. Journal of Applied Research in Intellectual Disabilities 20: 211–227.

Stancliffe, R. J., A. D. Harman, S. Toogood, and K. R. McVilly. 2008. Staff behavior and resident engagement before and after active support training. Journal of Intellectual and Developmental Disability 33: 257–270.

Stancliffe, R. J., E. Jones, and J. Mansell. 2008. Research in active support. Journal of Intellectual and Developmental Disability 33: 194–195.

Toogood, S. 2008. Interactive training. Journal of Intellectual and Developmental Disability 33: 215–224.

Author notes

Editor-in-Charge: Steven J. Taylor

Jeffery P. Hamelin, MA (e-mail: JHamelin@gc.cuny.edu), Graduate Assistant, The Graduate Center, Psychology, The City University of New York, New York, NY 10016. Peter Sturmey, PhD, Professor, Queens College, CUNY, 65-30 Kissena Blvd., Flushing, NY, 11367.