In the 1990s, there was a significant change in how governments viewed publicly provided services. In the area of disability services, it has been suggested that providers could demonstrate their effectiveness with reference to the quality of life of their clients. One instrument often used in quality of life research for people with intellectual disabilities is the Schalock and Keith (1993) Quality of Life Questionnaire; however, before this instrument can be used with confidence, the reliability of its scores must be demonstrated. We investigated the stability of the four Quality of Life Questionnaire factors across various populations. Three of the four factors were found to be stable; the instability of the fourth raises concern over the use of the Quality of Life Questionnaire in assessing service providers' effectiveness.
Editor in charge: Kevin Walsh
Schalock (1999) noted that in the 1990s, there was a significant change in how the public viewed education, health care, and social service programs. Of particular interest has been the change in focus from inputs to outputs, the redefinition of clients as consumers or customers, and the empowerment of citizens. As a result, agencies that provide public services must measure their outputs and their consumers' outcomes in order to demonstrate their efficiency and effectiveness to funders and other stakeholders. McVilly and Rawlinson (1998) and Schalock (1999) noted that both society as a whole and individual consumers prefer the measurement of agencies' achievements to be person-referenced. Specifically, in Australia the Commonwealth Disability Services Act (Commonwealth of Australia, 1986) prescribed principles, objectives, and standards for agencies to use when assessing their services in terms of the quality of life of people with disabilities. This, along with similar requirements in other countries, has resulted in the use of quality of life measures to assess the effectiveness of service provision (e.g., Eggleton, Robertson, Ryan, & Kober, 1999; Schalock & Lilley, 1986; Stancliffe & Keane, 1999).
Schalock and Keith's (1993) Quality of Life Questionnaire is one of many instruments used to measure the quality of life of an individual with an intellectual disability. The Quality of Life Questionnaire was selected for the present research because the data presented here are from a larger data set resulting from an investigation of the effect of employment on people with intellectual disabilities. Consequently, instruments such as the Lifestyle Satisfaction Scale (Heal & Chadsey-Rusch, 1985) and the Multifaceted Lifestyle Satisfaction Scale (Harner & Heal, 1993) were not selected due to their focus on assessing satisfaction within residential and community settings. Cummins (1997a), in his review of quality of life scales for people with intellectual disabilities, noted that the Quality of Life Questionnaire and the Comprehensive Quality of Life Scale for people with cognitive impairment (Cummins, 1997b) both meet the basic requirements for useful instruments to measure quality of life. The decision to utilize the Quality of Life Questionnaire rather than the Comprehensive Quality of Life Scale for this research was based on the larger volume of previous research in which the Quality of Life Questionnaire has been utilized and a preference by some of the employment agencies, from which participants were selected, for this instrument.
Cummins (2000) noted that there had been relatively little research on the psychometric properties of quality of life instruments specific to people with intellectual disabilities; in this respect, the Quality of Life Questionnaire is no different from the other instruments. The few exceptions have been Eggleton et al. (1999), who reported the internal reliability of this questionnaire; Rapley, Ridgway, and Beyer (1998), who investigated the reliability of Quality of Life Questionnaire scores between two proxy respondents and between proxy respondents and clients; and Rapley and Lobley (1995), who conducted a replication of Schalock and Keith's (1993) factor analysis.
In this paper we investigated the Quality of Life Questionnaire's factor structure in greater depth, because Rapley and Lobley (1995) found some differences between their factors and those reported in Schalock and Keith (1993). Although Rapley and Lobley provided overall support for this instrument, a closer investigation of their results suggests that such a conclusion might not have been warranted. They found several items from the Quality of Life Questionnaire that did not load as reported by Schalock and Keith (1993), but explained these differences by stating that they were probably due to sampling from a different population than had been assessed by Schalock and Keith (1993). However, for an instrument to be valid, it is essential that its factor structure remain consistent across different populations (Everett, 1983; Hair, Anderson, Tatham, & Black, 1995). Hence, if the explanation Rapley and Lobley advanced is correct, it casts doubt on the reliability of Quality of Life Questionnaire scores.
Consequently, we used Australian data to conduct a factor analysis of the Quality of Life Questionnaire and then calculated coefficients of congruence (Harman, 1967) to determine the factor stability of this instrument across heterogeneous subpopulations.
Schalock and Keith (1993) specifically developed their questionnaire to assess the quality of life of people with intellectual disabilities. It has 40 items, each of which relates to an aspect of a person's life, and is administered by an interviewer, who asks the interviewee each of the 40 questions. For each item the interviewer provides the interviewee with three possible responses, from which the interviewee selects the response most appropriate to their life situation. These responses are scored from 1 (low) to 3 (high), thus giving the overall quality of life score a theoretical range of 40 to 120. In addition to being able to compute an overall quality of life score, the Quality of Life Questionnaire was designed to allow the computation of four subdimensions (factors), which measure the following different aspects of quality of life: (a) personal life satisfaction, (b) individual competence and productivity at work, (c) feelings of empowerment and independence in the living environment, and (d) feelings of belonging and community integration (Schalock & Keith, 1993). Because each factor contains 10 questions, each factor has a theoretical range of 10 to 30.
The Quality of Life Questionnaire allows the interviewer to rephrase items if the interviewee does not understand the original question. It also allows for proxy respondents if the person is unable to complete an interview unassisted.
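The scoring scheme described above can be sketched in code. This is an illustrative sketch only: the mapping of items to factors below is hypothetical, since the actual item-to-factor key is part of the published instrument.

```python
# Illustrative scoring sketch for a 40-item, 3-point instrument such as the
# Quality of Life Questionnaire. The item-to-factor assignment below is
# hypothetical; the real scoring key belongs to the published instrument.

FACTORS = {
    "Satisfaction": range(0, 10),
    "Competence/Productivity": range(10, 20),
    "Empowerment/Independence": range(20, 30),
    "Social Belonging/Community Integration": range(30, 40),
}

def score(responses):
    """responses: list of 40 integers, each 1 (low) to 3 (high)."""
    if len(responses) != 40 or any(r not in (1, 2, 3) for r in responses):
        raise ValueError("expected 40 responses scored 1-3")
    # Each factor is the sum of its 10 items (theoretical range 10-30).
    factor_scores = {name: sum(responses[i] for i in items)
                     for name, items in FACTORS.items()}
    total = sum(factor_scores.values())  # theoretical range 40-120
    return total, factor_scores
```

With this sketch, a respondent answering every item at the midpoint (2) would score 80 overall and 20 on each factor, consistent with the theoretical ranges given above.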
The 172 people with intellectual disabilities who were interviewed for our study (63 females [37%], 109 males [63%]) were selected from the registers of six Western Australian metropolitan agencies that either provide employment for people with intellectual disabilities or place them into employment. All 172 people interviewed were able to respond to the Quality of Life Questionnaire questions, and, consequently, there was no need for proxy respondents.
The median age of these participants was 22 years (range = 17 to 61). Although there were some job seekers (n = 22, 13%), the majority of the sample were employed (n = 150, 87%); of those employed, 64 (43%) were in sheltered employment and 86 (57%) in open employment. Information on living arrangements was not available for all participants; of those with available information, 62% (n = 98) lived with their family, 22% (n = 35) lived independently, 12% (n = 20) lived in a group home, and 4% (n = 6) had a supported community placement.
Our sample differed substantially from Rapley and Lobley's (1995) United Kingdom sample but was similar to Schalock and Keith's (1993) United States sample. In terms of employment, 36% of Rapley and Lobley's (1995) sample was employed, with 5% in open employment. In contrast, 91% of Schalock and Keith's (1993) sample were in open employment. In terms of living arrangements, 12% of our sample lived in a group home, compared to nearly half (49%) of Rapley and Lobley's (1995) sample. Schalock and Keith's (1993) participants again differed substantially from Rapley and Lobley's, with 19% of individuals living independently and 42% semi-independently. This is broadly similar to our sample, in which 22% of participants lived independently and 62% lived with their family.
Tabachnick and Fidell (1989) stated that for factor analysis, sample sizes of over 200 are preferable. Although our sample was not that large, and it is acknowledged that this may be of concern, a principal axis factor analysis with varimax rotation was still conducted because Hair et al. (1995) noted that it is acceptable to conduct factor analysis with a sample of 100 or larger. The factors and factor loadings we obtained were then compared with those reported by Schalock and Keith (1993) and Rapley and Lobley (1995). The results of this analysis are presented in the first part of the Results section.
The factor loadings obtained by Rapley and Lobley (1995) differed slightly from those reported by Schalock and Keith (1993), which Rapley and Lobley attributed to the differing underlying nature of the two samples. They noted that the divergences in item loadings appeared to be a function of differences in human service provision and the life situations of persons with learning disabilities for the two samples. If this interpretation is correct, the different factor loadings between the two samples are of concern. Everett (1983) emphasized that it is important “that the same factor procedures applied to different sets of respondents … should provide factors which are stable between different sets of respondents” (p. 199). Therefore, if the different factor loadings reported by Rapley and Lobley were due to the different life situations of the individuals assessed, it brings the reliability of the Quality of Life Questionnaire scores into question. Consequently, the factor stability of the Quality of Life Questionnaire was investigated through dichotomizing the sample across various heterogeneous groupings and calculating the coefficient of congruence (Harman, 1967).
The coefficient of congruence is calculated based on the congruence of factor loadings across two samples. As explained by Everett and Entrekin (1980), if there were n items, where A and B represented the factor structure matrices for the two sets of respondents, then the factor congruence between the jth factor in one set and the kth factor in the other set would be given by:
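The formula itself does not appear in this copy. The standard expression given by Harman (1967), reconstructed here from the definitions above (with $a_{ij}$ the loading of item $i$ on factor $j$ in matrix A, and $b_{ik}$ the loading of item $i$ on factor $k$ in matrix B), is:

```latex
\phi_{jk} = \frac{\sum_{i=1}^{n} a_{ij}\, b_{ik}}
                 {\sqrt{\left(\sum_{i=1}^{n} a_{ij}^{2}\right)
                        \left(\sum_{i=1}^{n} b_{ik}^{2}\right)}}
```

Geometrically, this is the cosine of the angle between the two loading vectors, so proportional loadings across the two samples yield a coefficient of 1.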
The factor stability of the Quality of Life Questionnaire was investigated by (a) randomly splitting the sample in half, (b) splitting the sample on the basis of gender (female vs. male), and (c) splitting the sample by level of functional ability, as measured by the Functional Assessment Inventory (Crewe & Athelstan, 1984). This inventory consists of 30 behaviorally anchored rating items, each scored from 0 (no significant impairment) to 3 (severe impairment), and is used to assess people's vocational capabilities and deficiencies, thus giving a theoretical range of 0 to 90. The 30 items rate capacity in seven significant areas: cognition, vision, communication, motor functioning, physical condition, vocational qualifications, and adaptive behavior. The support workers of the people interviewed completed this inventory. A low functional ability was defined as a score greater than 22, and a high functional ability as a score less than or equal to 22. The median of 22 was selected as the cut-point because there are no generally accepted ranges that define people with intellectual disabilities as having either high or low functional ability. As would be expected, the majority of the people in the high functional ability group were in open employment and the majority of people with low functional ability were in sheltered employment. However, there were some people with high functional ability in sheltered employment and, conversely, some with low functional ability in open employment. Table 1 provides more details on the type of employment by level of functional ability. Note that there was a significant difference between the two subpopulations, U = 9.367, p < .001.
The investigation continued by (d) dichotomizing the sample into people in open employment versus people in sheltered employment plus job seekers, and (e) dichotomizing the sample into people in open employment versus those in sheltered employment.
It was not possible to conduct a factor stability analysis comparing employed respondents versus job seekers because only 22 participants were job seekers. On all occasions, principal axis factor analyses with varimax rotation were conducted to calculate the coefficients of congruence. The results of these analyses are presented later in the Results section.
Principal Axis Factor Analysis With Varimax Rotation
The Bartlett test of sphericity was significant, χ2(780, N = 172) = 2,438.68, p < .001, and, although marginal, the Kaiser-Meyer-Olkin measure of sampling adequacy was greater than .6, KMO = .762, which indicated that it was appropriate to conduct a factor analysis on the sample.
The scree test (Cattell, 1966) suggested that four factors were appropriate, explaining 32.4% of the variance. This result was similar to that of Rapley and Lobley (1995), who also indicated that four underlying factors were sufficient. However, the percentage of variance explained by their four factors was 45%. Rapley and Lobley's greater explained variance is interesting given that their sample was more homogeneous than the current sample. The explained variance we found was, interestingly, similar to the 37% reported by Schalock and Keith (1993), whose sample, too, was more heterogeneous than Rapley and Lobley's.
The factor structure that we obtained, shown in Table 2, was similar to Schalock and Keith's (1993), with only three items failing to load on the original Quality of Life Questionnaire factors. Both Item 9 of Empowerment/Independence (“Are there people living with you who sometimes hurt you, pester you, scare you, or make you angry?”) and Item 3 of Social Belonging/Community Integration (“Do you worry about what people expect of you?”) loaded on the factor that the original Quality of Life Questionnaire termed Satisfaction. Because of the addition of these two items, this factor was referred to in Table 2, using Rapley and Lobley's (1995) terminology, as Life Satisfaction and Domestic Contentment, because we felt this heading more accurately reflected the content of the items. It is not unreasonable to expect that these two items could affect the life satisfaction and domestic contentment of a person; therefore, their loading on this factor was considered rational.
The other item that was found not to load according to Schalock and Keith's (1993) original Quality of Life Questionnaire factors was Item 10 of Satisfaction (“What about your family members? Do they make you feel an important part of the family, sometimes a part of the family, or like an outsider?”). This item was found to load on the Social Belonging/Community Integration factor. It is easy to see how acceptance of one's family members could impact upon a feeling of belonging and integration, and, thus, the fact that this item loaded on this factor was considered meaningful.
It is interesting to note that Rapley and Lobley (1995) also found that Item 9 of Empowerment/Independence loaded on the Life Satisfaction and Domestic Contentment factor. However, the other divergences in loadings from those of the original Quality of Life Questionnaire (Item 3 of Social Belonging/Community Integration and Item 10 of Satisfaction) differ from the divergences found by Rapley and Lobley. Nevertheless, it is notable that 37 of the 40 items (92.5%) loaded according to Schalock and Keith's (1993) original analysis. This was a better result than that achieved by Rapley and Lobley (1995), who found only 32 items (80%) that loaded consistently with Schalock and Keith's (1993) analysis.
The internal reliability of the four factors was reasonable, with Cronbach's (1951) alpha coefficients for Life Satisfaction and Domestic Contentment, Competence/Productivity, Empowerment/Independence, and Social Belonging/Community Integration of .47, .91, .75, and .68, respectively. Although the low Cronbach alpha coefficient for the Satisfaction factor was of concern, no explanation can be offered for it, especially given that both Rapley and Lobley (1995) and Schalock and Keith (1993) reported acceptable coefficients (above .70; Hair et al., 1995). However, in another Australian study, Eggleton et al. (1999) also reported a Cronbach alpha coefficient below .70 (Cronbach α = .69) for the Satisfaction factor. The reliability of the overall instrument (all 40 items) scores was, however, very good, with a Cronbach alpha coefficient of .81.
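Cronbach's (1951) alpha, reported above for each factor and for the full 40-item scale, is computed from the item variances and the variance of respondents' total scores. A minimal sketch, using no external libraries:

```python
# Cronbach's (1951) alpha: a sketch of the internal-consistency statistic
# reported above, for a respondents x items matrix of scores.

def cronbach_alpha(data):
    """data: list of rows, one per respondent; each row a list of item scores."""
    k = len(data[0])                      # number of items
    n = len(data)                         # number of respondents (n >= 2)

    def variance(xs):                     # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in data]) for i in range(k)]
    total_var = variance([sum(row) for row in data])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Alpha rises toward 1 as items covary strongly relative to their individual variances; the .70 benchmark cited above (Hair et al., 1995) is a conventional acceptability threshold.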
The results of the factor stability analysis are presented in Table 3. The table shows the coefficients of congruence when the sample was randomly split into two and then when the sample was dichotomized based on the various heterogeneities of the population. For the subpopulation sample sizes for each analysis, refer to the Method section, under Participants.
The coefficient of congruence should be equal to or greater than .94 to show factor congruence (Harman, 1967). As can be seen in Table 3, the coefficients for the Competence/Productivity, Empowerment/Independence, and Social Belonging/Community Integration factors were above .94 for every dichotomization. As such, these factors can be said to be highly congruent. With this high degree of factor stability, researchers and practitioners can be confident that the scores they obtain from these factors reflect the same underlying construct.
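The coefficients in Table 3 are of the kind this minimal sketch computes: Harman's (1967) coefficient of congruence between one factor's loadings in each of two subsamples, which is the cosine of the angle between the two loading vectors.

```python
from math import sqrt

# Coefficient of congruence (Harman, 1967) between column j of loading
# matrix A and column k of loading matrix B, here passed as two vectors
# of loadings over the same n items.

def congruence(a, b):
    """a, b: loading vectors (one factor each) over the same n items."""
    num = sum(x * y for x, y in zip(a, b))
    den = sqrt(sum(x * x for x in a) * sum(y * y for y in b))
    return num / den
```

A factor whose loadings are proportional across the two subsamples yields a coefficient of 1.0; values at or above the .94 benchmark were taken here to indicate congruence.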
The Satisfaction factor, however, is of concern because in four of the five dichotomizations we found a coefficient of congruence below .94. Only on one occasion, when the sample was split by gender, was the coefficient of congruence acceptable. When the sample was dichotomized based on level of functional ability, we were not able to calculate a coefficient of congruence for the Satisfaction factor, as none of the 10 items associated with this factor loaded on the same factor across the two samples. Consequently, the Satisfaction factor cannot be considered congruent. This low level of stability, coupled with the low Cronbach alpha observed for the current sample and reported by Eggleton et al. (1999), brings the reliability of the Satisfaction factor scores into question. Perhaps this factor is not well-specified and needs to be amended. The implication of this for researchers and practitioners who use the Quality of Life Questionnaire is that they should interpret the satisfaction score with caution and may be well-advised not to place much emphasis on this score.
Discussion and Further Research
The factor analysis just reported validated the factor structure of the Quality of Life Questionnaire. The factor stability analysis, however, highlighted one major issue that should be of concern for people who administer and use the Quality of Life Questionnaire. The factor stability analysis showed that three of the four Quality of Life Questionnaire factors were congruent across heterogeneous subpopulations and, as such, the scores obtained for these factors can be relied upon. However, it also revealed that the Satisfaction factor was not congruent. In addition, the Cronbach alpha for this dimension was low, suggesting a lack of internal consistency. This means that it may be difficult to attribute reliability to the scores obtained for the Satisfaction factor. This would suggest the need for researchers and practitioners who use the Quality of Life Questionnaire to exercise considerable discretion in any interpretation of the Satisfaction score obtained on this questionnaire. It remains unclear how this may affect the reliability of the overall Quality of Life Questionnaire score. Although the overall questionnaire had an acceptable Cronbach alpha of .81, suggesting internal consistency, not all four factors are congruent. Whether this is of sufficient concern that it precludes use of the Quality of Life Questionnaire in its current form is something that researchers or practitioners must decide upon in light of the intended use of the Quality of Life Questionnaire scores.
The assessment of the effectiveness of service provision is one area where considerable thought must be given to the appropriateness of using the Quality of Life Questionnaire. As mentioned in the introduction, there has been increasing pressure from governments and the public alike for service providers to measure person-referenced outcomes, which has resulted in the use of quality of life instruments. The question that must now be answered is: given the lack of factor congruence on the Satisfaction factor, is the Quality of Life Questionnaire an appropriate tool to measure such outcomes? In answering this question, in addition to paying attention to the reliability of this questionnaire, researchers and practitioners should consider the other quality of life instruments. Do the scores obtained from other quality of life instruments have a higher degree of reliability? If the other instruments produce scores with lower degrees of reliability, then the Quality of Life Questionnaire may be the better choice. However, given the lack of research on the reliability of quality of life instrument scores, it is not possible to answer this question at present.
It is of some reassurance that the Quality of Life Questionnaire factored relatively consistently across samples drawn from the United States (Schalock & Keith, 1993), the United Kingdom (Rapley & Lobley, 1995), and Australia (current study). However, given that this study, like Schalock and Keith's and Rapley and Lobley's, only drew upon people from one country, it would be interesting to conduct a factor stability analysis across populations from different countries. Extending this idea further, it would be interesting to test the factor stability of the Quality of Life Questionnaire across non-Anglo-Saxon countries. Using Hofstede's (1984) cultural blocks, it can be seen that the United States, the United Kingdom, and Australia are all within the same cultural block. Hofstede argued that countries in different cultural blocks have different concepts of quality of life and that instruments derived in an Anglo-Saxon cultural block may not necessarily be applicable to other cultural blocks (e.g., to the Asian cultural block). Conducting a factor stability analysis across cultural blocks would prove very useful in establishing whether or not the Quality of Life Questionnaire is generalizable to other countries with different value systems.
NOTE: This paper is based on a doctoral thesis currently in process by the first author, under the direction of the second author, in fulfillment of the requirements for the doctorate degree in the Department of Accounting and Finance, The University of Western Australia. This research was supported, in part, by a grant provided by the Faculties of Economics and Commerce, Education, and Law, The University of Western Australia. The authors acknowledge the comments provided by David Woodliff and Juliana Ng, The University of Western Australia. We are also extremely grateful to the agencies that allowed us to interview their clients, the staff from those agencies who were always helpful, and, of course, to those people whom we interviewed.
Authors: Ralph Kober, BCom(Hons), Lecturer (email@example.com), and Ian R. C. Eggleton, PhD, Professor, Department of Accounting and Finance, The University of Western Australia, 35 Stirling Highway, Crawley, WA, 6009, Australia, and Director of Research, Edge Employment Solutions. Requests for reprints should be sent to the first author.