The Supports Intensity Scale – Adult Version (SIS—A) has been widely adopted throughout North America and the world since its publication a little over a decade ago. Many organizations and jurisdictions operate under regulations that require an annual assessment of people who receive services and supports that are financed through public funds. The time and energy devoted to an annual SIS—A reassessment has become a concern in cases where the resulting information is largely redundant with information from a prior assessment. This article presents findings from an investigation of two approaches to creating a protocol to assist SIS—A users in distinguishing situations where there is a high likelihood that support needs have not changed in meaningful ways from situations where there is a reasonable possibility that support needs have changed. The SIS—A Annual Review Protocol was created based on these analyses as well as consideration of conceptual issues associated with support needs assessment. Ways in which this protocol might be used, as well as data that need to be collected to evaluate its usefulness, are discussed.
The 9th edition of the terminology and classification manual (Luckasson et al., 1992), published by the American Association on Intellectual and Developmental Disabilities (AAIDD) was groundbreaking with regard to how intellectual disability was conceptualized. Instead of an internal trait characterized by defects in mental aptitude (i.e., the medical model), the authors of the 9th edition proposed that intellectual disability was better understood as a state of functioning characterized by a significant and chronic mismatch between a person's competencies and the demands of settings and activities associated with participating in contemporary society (i.e., the social-ecological model). A social-ecological conceptualization does not deny that people with intellectual disability experience limitations to body functions and structure that impact activity and participation and that result in “limitations in both intellectual functioning and in adaptive behavior” (Schalock et al., 2010, p.1). Such limitations, however, are not considered to be the most salient characteristic of people with intellectual disability and therefore should not be the primary target of professional efforts. Rather, the focus should be on modifying typical contexts to maximize participation and identifying and arranging the supports people need to properly negotiate the demands of the settings and activities in which they wish to participate.
Apart from medical professionals whose work is focused on discovering and applying interventions that cure, prevent, or lessen the severity of central nervous system impairments (e.g., the PKU test to determine whether an infant has the enzyme needed to use phenylalanine in his or her body), the vast majority of professionals working with people with intellectual disability and their families should find the social-ecological conceptualization to be far more relevant to their everyday work. When people with intellectual disability are understood in terms of their support needs, professionals prioritize their time and energy on modifying environments, adapting activities, and providing individualized supports that lead to fuller participation and enhanced opportunities in contemporary society.
The social-ecological conceptualization of intellectual disability presented in the 9th edition was a significant departure from prior manuals and was generally applauded on philosophical and ideological grounds. There were, however, significant concerns and criticisms regarding a variety of issues, especially in regard to operational criteria for diagnosis and classification. A key concern was that the 9th edition manual replaced the concept of “adaptive behavior” with that of “adaptive skills,” and a classification system based on intensity of support needs was introduced that involved using a 4-point scale to rate the extent of assistance people required in regard to 10 adaptive skill areas (Luckasson et al., 1992). Critics pointed out that although adaptive behavior was a multidimensional construct, there was no empirical support for the 10 distinct adaptive skill areas the 9th edition authors had identified (e.g., see MacMillan, Gresham, & Siperstein, 1993; Thompson, McGrew, & Bruininks, 1999). Moreover, classifying people by intensity of support needed in response to the 10 adaptive skill areas was criticized as a subjective and unreliable process (e.g., see Greenspan, 1997; MacMillan et al., 1993). These criticisms, along with the increasing interest in social-ecological models of disability and an awareness that progress in any field is often linked to the ability to measure key constructs of interest, provided the impetus for AAIDD to organize and charge a task force with developing a psychometrically valid tool to measure the support needs of people with intellectual disability.
The Supports Intensity Scale – Adult Version (SIS—A)
The SIS—A (Thompson et al., 2015), originally known as the SIS (Thompson et al., 2004), was the outcome of the task force's work, and it has been available to professionals in the field of intellectual disability for more than a decade. It is currently used widely throughout North America (28 U.S. states and Canadian provinces), has been translated into 13 languages, and is used in 23 countries across Europe, Asia, Australia, and South America (AAIDD, 2015). The psychometric indicators of the translated versions have been consistent with those of the original English-language version (Thompson & DeSpain, in press; Verdugo, Arias, Ibàñez, & Schalock, 2010).
From its inception, the SIS—A was perceived as a tool that would provide useful information to professionals working at the individual, organizational, and jurisdictional levels. At the individual level, it was envisioned that planning teams would use SIS—A results with information from person-centered planning processes to better understand a person's support needs and inform the development of individualized support plans. At the organizational level, aggregate SIS—A results could be used by support providers to better understand the support needs profiles of the people whom they serve, which could help guide decision-making regarding a variety of topics, including setting priorities for staff development and assessing the extent to which organizational services were aligned with people's support needs. Finally, at the jurisdictional level, aggregate SIS—A data could be used to create more equitable approaches to resource allocation and enable jurisdictional leaders to identify people who might be over- or under-supported (Thompson et al., 2004). Examples of how the SIS—A has proven useful at all three levels have been documented in the professional literature (e.g., see Baily & Nixon, 2014; Chou, Lee, Chang, & Yu, 2013; Thompson, Schalock, Agosta, Teninty, & Fortune, 2014; van Loon, Claes, Vandevelde, Van Hove, & Schalock, 2010; Wehmeyer et al., 2009; Wehmeyer, Tassé, Davies, & Stock, 2012).
A semi-structured interview involving at least two respondents who have a good knowledge of the individual who is being assessed is used to administer the SIS—A. Information is collected on 32 items related to extra support needed because of medical conditions or challenging behaviors, and 57 items that are focused on support needed to be successful in a wide range of activities that are a part of daily life in contemporary society. These items are organized into six subscales and a supplemental scale. As is the case with any meaningful assessment of a person's strengths, limitations, and needs, administering the SIS—A requires sufficient time to gather information from respondents, which includes setting appointments and, if necessary, traveling to interview locations. Although administration time varies, it is not unusual for the process to take two hours and sometimes more.
The time and expense involved in a SIS—A reassessment are necessary and justifiable when people's support needs have changed in important ways since the time of their prior assessment. In instances where it can be reasonably assumed that a person's support needs have not changed, however, a SIS—A reassessment is not necessary. Public policies concerning time between reassessments (using the SIS—A or any other tool) vary by jurisdiction, but in the U.S. it is common for individuals on whom public funds are expended to be reassessed on an annual basis. This is done to ensure that decisions are made with accurate and current information.
Because support needs are conceptualized as being relatively stable over time, in the case of the SIS—A it is likely that there are a high percentage of people whose support needs have not changed in any meaningful way over the course of a single year. It would be helpful, therefore, to have a method to distinguish people whose support needs have changed from people whose support needs have not changed. Such a method would enable finite resources to be spent more efficiently. In this article we present findings from an investigation of two approaches to creating a protocol that would help differentiate people for whom a full SIS—A reassessment is warranted from those for whom it is not. Based on the results of these analyses, as well as consideration of critical conceptual issues associated with the assessment and measurement of support needs, we describe the content and structure of a new protocol and discuss future research efforts that will be needed to evaluate its usefulness.
SIS—A Annual Review Protocol
We examined extant data from AAIDD SISOnline, a web-based system designed to assist large SIS user groups as well as small organizations and individuals in entering, scoring, and retrieving SIS—A data. SISOnline users have access to immediate results that summarize each participant's raw scores and corresponding standard scores (AAIDD, 2013). At the time of analysis, the SISOnline database contained protocols for 129,864 youth and adults with intellectual and developmental disabilities (ages 16 to 64 years). Because of agreements with SISOnline users regarding secondary data analysis, the only demographic information available was age and gender. The mean age of participants was 38 (SD = 13.3). Females constituted 41% of the sample (n = 53,205), and males made up 59% (n = 76,635).
The research questions driving our analyses were:
Could a “screener” be created that would indicate cases where a SIS—A reassessment was needed?
Could a subset of SIS—A items be identified to create a “short form” of the SIS—A that adequately captured what is measured by the full set of items?
Investigating the Feasibility of a Screener
Identifying one candidate dimension
One option considered was developing a “screener” version in which all items would be administered, but data would be collected on only one of the three SIS—A response dimensions (frequency, daily support time, or type of support). The assumption was that scores on the screener would be correlated with scores on the complete scale and could be used to identify changes in scores demonstrating a need for reassessment. We used two analytic methods to identify the optimal dimension [frequency (FREQ), daily support time (DST), or type of support (TYPE)] for a screener: (a) running multi-trait multi-method (MTMM) analyses and (b) examining mean squared errors. First, based on the results of MTMM analyses that decomposed the trait variances (i.e., variances of the focal support-need constructs) and method variances (i.e., variances of the three dimensions), we decided to select the dimension that had the lowest average absolute loadings within and across the focal constructs (i.e., support needs) and that correlated least with the other two dimensions. Based on the results presented in Table 1, the FREQ and DST dimensions were candidate dimensions for the screener version.
Next, we identified the dimension that had the lowest mean squared error (MSE). In this context, the MSE was used to quantify the “distance” between two covariance matrices in the latent space (i.e., the covariance matrix when fitting the model with the averages across three dimensions vs. the covariance matrix when fitting the model with scores from each dimension). Here, smaller MSE values are better as they indicate less distance between the two covariance matrices. As provided in Table 2, the FREQ dimension was the most desirable dimension for the screener version based on the MSE values. Given results from MTMM analyses and MSEs together, we determined to move forward using the FREQ dimension.
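The MSE comparison described above can be sketched as follows. This is a minimal illustration of the computation only: the matrices below are hypothetical 2 x 2 examples, not actual SIS—A results, whereas the analyses reported here compared full model-implied latent covariance matrices.

```python
import numpy as np

def covariance_mse(cov_a: np.ndarray, cov_b: np.ndarray) -> float:
    """Mean squared error between two latent covariance matrices.

    Smaller values indicate less "distance" between the matrices,
    i.e., the single-dimension model reproduces the latent structure
    of the averaged (full) model more faithfully.
    """
    diff = cov_a - cov_b
    return float(np.mean(diff ** 2))

# Hypothetical covariance matrices for illustration only.
cov_full = np.array([[1.00, 0.62],
                     [0.62, 1.00]])   # model fit with averages across dimensions
cov_freq = np.array([[0.97, 0.58],
                     [0.58, 0.99]])   # model fit with FREQ scores only
cov_dst  = np.array([[0.90, 0.50],
                     [0.50, 0.95]])   # model fit with DST scores only

# The dimension whose model-implied matrix is closest to the full
# model's matrix (lowest MSE) is the preferred screener candidate.
mse_freq = covariance_mse(cov_full, cov_freq)
mse_dst = covariance_mse(cov_full, cov_dst)
```

With these illustrative values, FREQ would be preferred over DST because its covariance matrix sits closer to the full model's, mirroring the pattern reported in Table 2.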
Examining the factorial integrity
The next step was to examine whether the FREQ dimension maintained the factorial integrity of the construct: “ensuring that the construct embodied in the short form maintains the same position in the nomological network of relations among constructs (cf. Cronbach & Meehl, 1955) as did the full, longer scale” (Widaman, Little, Preacher, & Sawalani, 2011, p. 53). To examine factorial integrity, measurement invariance testing was conducted between two groups at both the item and parcel levels: (a) Group 1 - average scores across FREQ, DST, and TYPE (full version) and (b) Group 2 - scores of the FREQ dimension only (screener version). Parcels, which are known to improve model estimation and fit characteristics, were created based on the item-to-construct balancing technique (Little, Rhemtulla, Gibson, & Schoemann, 2013). Measurement invariance consists of three sequential tests: the configural model, the weak factorial invariance model, and the strong factorial invariance model (for more information on measurement invariance, see Brown, 2015). First, configural model fit was evaluated with conventional fit statistics (i.e., an absolute fit index of root mean square error of approximation < .08, comparative fit index and Tucker-Lewis index > .90, standardized root mean square residual < .08). Next, both weak and strong invariance models were tested against a criterion of changes in comparative fit index less than .01 (Cheung & Rensvold, 2002).
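The decision rules implied by these cutoffs can be expressed compactly. The function names below are ours; the thresholds are the conventional ones cited in the text (RMSEA < .08, CFI and TLI > .90, SRMR < .08, and the Cheung & Rensvold ΔCFI < .01 criterion).

```python
def configural_fit_ok(rmsea: float, cfi: float, tli: float, srmr: float) -> bool:
    """Conventional cutoffs for an acceptable configural model fit."""
    return rmsea < 0.08 and cfi > 0.90 and tli > 0.90 and srmr < 0.08

def invariance_holds(cfi_constrained: float, cfi_baseline: float) -> bool:
    """Cheung & Rensvold (2002) criterion: invariance is retained when
    CFI worsens by less than .01 under the added equality constraints."""
    return (cfi_baseline - cfi_constrained) < 0.01
```

For example, the six-factor short-form fit reported later (RMSEA = .077, CFI = .943, TLI = .928, SRMR = .033) passes these configural cutoffs, whereas the invariance comparisons between the full and FREQ-only versions did not survive the ΔCFI criterion.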
As shown in Table 3, measurement invariance was not established at either the item or parcel levels, indicating that the construct comparability was not supported between Groups 1 and 2 (average scores across FREQ, DST, and TYPE vs. FREQ only). Even partial invariance (i.e., freeing several restrictions on parameters) was not supported, suggesting a need for an alternative way of identifying a subset of items that represent the support-need constructs, which is further addressed in the next section.
Investigating the Feasibility of a Short Form
Identifying the subset of items
The second approach was to identify the subset of items (i.e., three items per construct) that best maintained the factorial integrity of the long form of the SIS—A. Unlike the first approach, all three dimensions were retained; only the number of items per support-need domain differed. As a starting point to identify three representative items per construct, we ran a confirmatory factor analysis to determine strong indicators of each support-need construct. After determining the representative indicators that had the strongest factor loadings, we ran measurement invariance tests between two groups: (a) Group 1 - three parcel scores based on all items (full version) and (b) Group 2 - three representative item scores (short version). Measurement invariance was not supported for any reasonable combination of strong indicators. The short version of an assessment should maintain statistically similar means, variances, and associations among constructs when compared with those from the full version, or these should change only minimally (Widaman et al., 2011). In our case, however, even the fundamental prerequisite for examining latent means, variances, and covariances (i.e., establishing measurement invariance) failed with the three items per construct that had the strongest loadings.
As the next step, we identified three representative items from each subscale that produced the smallest differences in the latent variance/covariance and mean matrices (i.e., the items that produced the lowest MSE) when compared with the full version. For each of the seven support-need constructs, each possible subset of three items was used to indicate the construct while the remaining six constructs were fixed at their parceled versions. We then computed MSEs comparing the latent variances/covariances and means of these partially item-level models to those of the fully parceled version to identify the subset of items that best maintained the latent parameters. This approach entailed fitting 420 partially item-level models to test the variance/covariance matrices and 420 models to test the mean matrices. Tables 4 and 5 provide the subset of items for each construct that produced the smallest mean squared errors. When we modeled the data with the representative items provided in Table 5, the model fits were satisfactory with and without the inclusion of the Protection and Advocacy domain: (a) six-factor model: χ2 (120) = 92963.929, RMSEA = .077, CFI = .943, TLI = .928, and SRMR = .033; and (b) seven-factor model: χ2 (168) = 128786.969, RMSEA = .077, CFI = .934, TLI = .917, and SRMR = .036. The model fits were consistently good when compared with the model fits of the full version (Table 6).
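The exhaustive search over 3-item subsets can be sketched as follows. An 8-item subscale yields C(8,3) = 56 candidate subsets and a 9-item subscale yields C(9,3) = 84, which is consistent with the 420 models reported if six subscales contribute 8 items and one contributes 9 (an assumption on our part). The scoring function below is a toy stand-in for fitting the structural model and computing the MSE, which is not shown.

```python
from itertools import combinations

def best_three_item_subset(item_ids, mse_for_subset):
    """Search every 3-item subset of one subscale and keep the subset
    whose partially item-level model is closest (lowest MSE) to the
    fully parceled model. `mse_for_subset` is a hypothetical stand-in
    for the SEM-fitting-and-MSE step."""
    return min(combinations(item_ids, 3), key=mse_for_subset)

# Toy scoring function for illustration only: pretend lower-numbered
# items happen to reproduce the latent parameters best.
toy_mse = lambda subset: sum(subset)
best = best_three_item_subset(range(1, 9), toy_mse)  # items 1..8
```

In the actual analyses, this search was repeated per construct for both the variance/covariance matrices and the mean matrices, with the winning subsets reported in Tables 4 and 5.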
Comparing latent means, variances, and correlations
After determining the “representative” items (three items per construct; see Table 5), we compared the latent means, variances, and correlations of the constructs based on the parceled model (full scale) with those based on the selected items (short form). We used nested model chi-squared difference tests to check for differences in the latent parameters between the long and short forms. As seen in Table 7, only the Community Living construct had different latent means when we compared the parceled model (i.e., full version) and the item model (i.e., short version). The mean difference in the Community Living construct between the full and short versions was small (i.e., .004; as provided in the right side of Table 7).
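A nested-model chi-squared difference test for a single equality constraint (df = 1) can be sketched with the standard library alone, because a chi-squared variate with 1 degree of freedom is the square of a standard normal, so its survival function reduces to erfc(sqrt(x/2)). The function name and the example values in the test are illustrative, not figures from this study.

```python
import math

def chi_square_diff_test(chisq_constrained: float,
                         chisq_free: float,
                         alpha: float = 0.05):
    """Likelihood-ratio test for one equality constraint (df = 1).

    Constraining a single latent parameter (e.g., one latent mean) to be
    equal across the full and short forms adds one degree of freedom.
    A significant increase in chi-square means the parameter differs
    between the two forms.
    """
    delta = chisq_constrained - chisq_free
    # Survival function of chi2 with df = 1: P(X > delta) = erfc(sqrt(delta/2))
    p = math.erfc(math.sqrt(delta / 2.0))
    return p < alpha, p
```

Testing a block of parameters at once (e.g., all latent means) works the same way with the general chi-squared distribution and df equal to the number of constraints.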
The latent variances and correlations, however, differed between the parceled model and the selected item model. As seen in Table 8, two constructs (i.e., Lifelong Learning & Social) had the same variances whereas the remaining five constructs (Home Living, Community Living, Employment, Health & Safety, and Protection & Advocacy) had different variances. In addition, the full and short versions had different correlations except for one correlation between Employment and Social domains (Table 9). These findings suggested that while this subset of representative items produces the least differences in latent mean matrices, there are still significant differences, particularly in the latent variances and correlations across the short and long set of items, suggesting that the subset of items is limited in its representation of the constructs on the complete SIS—A.
The results from the two analytic approaches suggested that no single dimension or subset of items fully represents the full version of the SIS—A. Therefore, a valid assessment of the intensity and pattern of a person's support needs can only be completed by administering the full SIS—A. The 21 items in Table 5, derived from efforts to identify a short form, can nonetheless provide a reasonable estimate of support needs, and therefore would be appropriate to use within an approach (i.e., a protocol) intended to determine whether a full SIS—A reassessment is necessary for someone who has been assessed within the past three years (the maximum time between SIS—A assessments for results to be considered current).
As addressed earlier, users must be very cautious about employing these representative items because the variances and associations between support-need constructs (i.e., correlations) differed significantly between the short and full versions. The subset of items, however, has latent parameters that are as close as possible to those of the long form in terms of a well-understood and trustworthy distance measure (i.e., the MSE). Therefore, although the subset cannot fully replace the long form (because the larger number of items in the long form is necessary to ensure adequate measurement properties), the subset of items could potentially be used to inform a decision on whether or not to re-administer the full version of the SIS—A.
SIS—A Annual Review Protocol
A protocol designed to determine whether or not a SIS—A reassessment is necessary would need to have several features. First and foremost, the protocol would need to make it clear that it is only applicable in cases where a SIS—A assessment has been completed within the past three years. Although in one respect the 3-year time frame is arbitrary, a consensus has emerged that results over 3 years old lack currency (Commonwealth of Pennsylvania Department of Public Welfare Office of Developmental Programs, n.d.; New Mexico Department of Health, 2013; State of Maine Aging and Disability Services, 2012). Even in instances where a person's support needs and SIS—A scores have not changed since a prior assessment completed more than 3 years earlier, policymakers and professionals have a responsibility to base decisions on current information. Therefore, when SIS—A results are over 3 years old, verifying that a person's support needs have not changed is just as compelling a reason to update assessment information as documenting any changes in support needs.
The time and effort required for administration is another critical feature of a protocol designed to inform users whether or not a SIS—A reassessment is needed. The longer a protocol takes to complete, the less value it has to users. For a protocol to be useful, the overall time and effort saved by not completing reassessments for people whose annual reassessment would yield only redundant information must outweigh the time and effort invested in completing the protocol for people for whom it indicates a reassessment is needed (people who would have been reassessed as a matter of policy had the protocol not been introduced).
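This cost-benefit condition can be expressed as a simple break-even check. All numbers in the example are hypothetical, chosen only to illustrate the arithmetic; actual administration times vary by jurisdiction.

```python
def protocol_worthwhile(n_people: int,
                        p_flagged: float,
                        t_protocol_hours: float,
                        t_reassess_hours: float) -> bool:
    """Break-even sketch: the protocol pays off when the reassessment
    time avoided for people it screens out exceeds the time spent
    administering the protocol to everyone.

    p_flagged: proportion of people the protocol flags for a full
    reassessment (these people would have been reassessed anyway).
    """
    time_spent = n_people * t_protocol_hours
    time_saved = n_people * (1.0 - p_flagged) * t_reassess_hours
    return time_saved > time_spent

# Hypothetical: 100 people, 30% flagged, 15-minute protocol vs. a
# 2-hour full SIS-A administration.
worth_it = protocol_worthwhile(100, 0.30, 0.25, 2.0)
```

Intuitively, because a full SIS—A administration often takes two hours or more, even a brief protocol that screens out a modest share of annual reassessments saves time overall; the balance tips only if nearly everyone is flagged or the protocol itself becomes lengthy.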
A third feature of a desirable protocol is that it is sufficiently comprehensive in scope so that critical factors affecting support needs are not overlooked or dismissed. There is an obvious and unavoidable tension between comprehensiveness of scope and the brevity of administration called for in the prior paragraph. The 21 items included in Table 5 are the best candidates for inclusion in the protocol because they best reflect the overall construct measured by the full SIS—A. As part of a protocol to determine the need for reassessment, however, the items would not need to be scored on the 3 dimensions of support needs (type, time, frequency). Rather, all that is needed is for users to consider a straightforward question: Has there been a change in the person's support needs in any of these 21 life activities since the previous assessment? Even a single yes answer would require the planning team to consider and discuss the ways in which the person's support needs have changed since the prior assessment, and the conclusion could very well be that a full SIS—A assessment is needed to obtain an accurate, current profile of the individual's support needs.
In addition to the subset of 21 SIS—A items, there are two other sets of indicators that are critical to consider in order to evaluate whether a person's support needs may have changed in important ways and warrant a reassessment. The first involves identification of any significant life experiences/events since the prior assessment that might meaningfully change a person's support needs. These include: (a) loss of a parent, spouse, or other close loved one; (b) personal injury or illness; (c) change in financial status; (d) change in residential status; (e) change in employment status; (f) involvement with the criminal justice system; (g) change in social and/or recreational activities; (h) changes in access to or regular use of technologies; (i) retirement; and (j) birth of a child. These life experiences have the potential to disrupt support networks, skill sets, and coping strategies, all of which can result in a person's support needs changing in important ways. The second set of indicators calls for considering any changes in health status and/or the emergence of new problem behaviors that affect a person's support needs. Once again, straightforward questions provide a sufficient means to uncover potential influences on support needs. Namely, “Has the person experienced any of the life events (listed above), new health problems or medical issues, or displayed any challenging behaviors that impact his or her support needs since the last assessment?” If the answer is yes, then a discussion should occur that is focused on how specific changes have impacted the person's daily life and support needs. Unless there is a good reason to believe the impact has been negligible, a SIS—A reassessment should be undertaken.
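As we read it, the protocol's trigger logic reduces to a simple disjunction over the three sets of indicators just described. This is a sketch of that logic only; the actual protocol relies on team discussion and judgment about whether the impact is negligible, not on automatic scoring, and the function name is ours.

```python
def reassessment_discussion_needed(changed_items: set,
                                   life_events: set,
                                   health_or_behavior_change: bool) -> bool:
    """Any reported change on one of the 21 representative SIS-A items,
    any significant life event since the prior assessment, or any new
    health problem or challenging behavior triggers a team discussion.
    Unless the team judges the impact negligible, a full SIS-A
    reassessment follows."""
    return bool(changed_items) or bool(life_events) or bool(health_or_behavior_change)

# Hypothetical case: one life event reported, no item-level changes.
needs_discussion = reassessment_discussion_needed(
    changed_items=set(),
    life_events={"retirement"},
    health_or_behavior_change=False,
)
```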
SIS—A Annual Review Protocol
Based on the data analyses and the conceptual considerations discussed above, a SIS—A Annual Review Protocol was created that includes sections devoted to (a) consideration of major life events, (b) changes in medical conditions or health status, (c) changes in challenging behavior, and (d) changes in support needed in relation to the 21 SIS—A items shown in Table 5. Future research is needed to field test the protocol and determine whether it can reliably distinguish people whose support needs have changed, and who therefore need to be reassessed with the SIS—A to collect information that properly reflects their current support needs, from people whose support needs are highly unlikely to have changed since the prior assessment and for whom a reassessment would not provide enough new information to justify the expenditure of time and energy.
To evaluate this protocol, a sample of people would need to have been assessed with the SIS—A within the past 3 years, be assessed using the SIS—A Annual Review Protocol with the reviewers' conclusion (i.e., to reassess or not) recorded, and finally be reassessed using the SIS—A. Four potential outcomes from such a study are shown in Figure 1. Quadrants 1 and 4 would consist of cases where the protocol worked as intended. The cases in the first quadrant are those where upon completing the Protocol the reviewers concluded a new SIS—A assessment was needed, and the new SIS—A assessment revealed that people's support needs had changed in important ways. In terms of Quadrant 4, no reassessment was called for upon completing the Protocol and the subsequent administration of SIS—A confirmed that people's support needs had not changed since the previous administration.
In contrast, Quadrants 2 and 3 would reflect instances where the SIS—A Annual Review Protocol did not work as intended. Quadrant 3 cases are false positives. Here, the Protocol reviewers indicated a need for reassessment based on the belief that the person's support needs had changed in meaningful ways since the previous SIS—A assessment. Upon comparing scores from the two administrations of the SIS—A, however, the pre- and post-review scores were similar, and there was no basis to suggest that the pattern or intensity of the person's support needs had changed. Although false positives are not desirable, a certain number can be tolerated because the only cost of a false positive is the time and expense involved in re-administering the SIS—A; as mentioned previously, this is something many jurisdictions are already doing on an annual basis. Cases in Quadrant 2, the false negatives, are far more concerning. In this quadrant the Protocol would lead reviewers to conclude that there is no need for reassessment because people's support needs have not changed, while subsequent SIS—A reassessments would reveal that their support needs have in fact changed in important ways.
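The four outcomes of the proposed field test can be captured in a small classifier, which makes the asymmetry between the quadrants explicit. The function name is ours; the quadrant numbering follows the description of Figure 1 above.

```python
def evaluation_quadrant(protocol_says_reassess: bool,
                        sis_a_scores_changed: bool) -> int:
    """Classify one field-test case into the four outcomes of Figure 1.

    Quadrant 1: flagged for reassessment, and scores did change (correct).
    Quadrant 4: not flagged, and scores did not change (correct).
    Quadrant 3: flagged, but scores did not change (false positive;
                tolerable, costing only one extra administration).
    Quadrant 2: not flagged, but scores did change (false negative;
                the most concerning outcome).
    """
    if protocol_says_reassess:
        return 1 if sis_a_scores_changed else 3
    return 2 if sis_a_scores_changed else 4
```

Aggregating these quadrant counts across a validation sample would yield the sensitivity and specificity estimates needed to judge whether the Protocol works as intended.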
The widespread use of the SIS—A, together with the emerging consensus that completing the full SIS—A assessment every year is not always necessary given the relative stability of support needs over time, has created a need for a protocol that can be completed on a yearly basis to determine whether changes have occurred (e.g., major life events or changes in support needs) that create a need for reassessment using the full SIS—A. This paper described the steps taken to develop the SIS—A Annual Review Protocol, which can be administered to determine whether reassessment with the full SIS—A is needed. It is recommended that the full SIS—A be administered every three years to ensure that decisions are made with accurate and current information. The SIS—A Annual Review Protocol provides a viable tool for use in the intervening years to ensure that changes have not occurred that should be considered in the supports planning process and that may necessitate a SIS—A reassessment. Further research is needed to establish the reliability and validity of the Annual Review Protocol in determining the need for reassessment.