The concept of quality of life currently impacts program development, service delivery, management strategies, and outcome evaluation in the area of intellectual disabilities. Maryland uses peer interviewers to assess consumer-perceived quality of life among adult recipients of MR/DD services and supports. In this article we describe the survey instrument and procedures and discuss assessment issues of quality of responses, acquiescence, and proxy respondents. We present the psychometric properties for eight core quality of life domains among 923 people assessed in FY 2001. Results are summarized, and the development of a model for enhancing social inclusion, personal development, and self-determination is described. Service and personal characteristics relating to quality of life, as well as some ways the results can be used for program enhancement, are discussed.
The concept of quality of life is currently impacting program development, service delivery, management strategies, and outcome evaluation in a number of human service areas, including intellectual and developmental disabilities. Advances during the last decade in understanding the concept of quality of life have occurred simultaneously with the emergence of participatory action research, which involves the active role of consumers in evaluation and research activities.
Conceptually, we currently understand three important aspects about the quality of life construct. First, it is multidimensional and includes a number of domains of personal well-being. The present study is based on eight core domains that have been identified in the international quality of life literature (Schalock & Verdugo, 2002): Emotional Well-Being, Interpersonal Relations, Material Well-Being, Personal Development, Physical Well-Being, Self-Determination, Social Inclusion, and Rights. Many quality of life investigators suggest that the actual number of domains is perhaps less important than recognizing that any proposed structure must have a multi-element framework, realizing that people know what is important to them, and understanding that any set of domains must represent the complete quality of life construct. Second, a number of subjective and objective indicators of each of these core domains can be used for either quantitative or qualitative quality of life assessments (Schalock, 2001). Third, although one's quality of life has both subjective and objective components, it is primarily the perception of the individual that reflects the quality of life that he or she is experiencing (Schalock & Verdugo, 2002).
Participatory action research is an approach to research and evaluation that relies on both the involvement of stakeholders to identify and evaluate elements of their lives that warrant investigation and potential change and the participation of the consumer of the services in the evaluation and utilization process. Participatory action research is particularly suited to issues of quality of life because individuals with intellectual disabilities can contribute to our collective understanding of how quality of life can be conceptualized, what quality of life looks like, and determining ways to improve outcomes (Gardner, 2000; Gettings, 2001; Pennell, 2001; Whitney-Thomas, 1997).
The present study involved two quality of life and participatory action research-related premises: (a) people with intellectual disabilities should be asked directly about their own life, and (b) interviewers with intellectual disabilities are in the best position to elicit meaningful responses from their peers. In this study we report on the fourth year of a state-level project involving consumer-based quality of life assessment that identified factors influencing the perceived level of satisfaction among adult recipients of mental retardation/developmental disability (MR/DD) services and supports in the state of Maryland (the Ask Me!sm Project). In this project people with developmental disabilities were trained to survey other consumers' perceived quality of life as measured with an adaptation of the Schalock and Keith (1993) Quality of Life Questionnaire. In addition to identification of significant predictors of life satisfaction, results also confirmed the reliability of consumer-generated responses, the feasibility of using interviewers who are persons with intellectual disabilities, the utility of using simplified language and response formats, and the validity of the quality of life factor structure investigated.
We are aware that the two premises of the present study raise a number of issues regarding the potential for acquiescence (yes-saying), the reliability and validity of the data, and the use of proxies to obtain information from people with limited conceptual and/or language skills. In reference to acquiescence, Finlay and Lyons (2002) have summarized a number of studies and found that acquiescence can be caused by factors such as a desire to please, learned submissiveness, complex grammar, and questions requiring complex judgment. They suggested several strategies to reduce acquiescence: use “either/or” rather than “yes/no” questions, allow “don't know” responses, balance positively with negatively worded questions, simplify question wording, understand the types of judgments that are complex for respondents, and employ statistical corrections. Finlay and Lyons also suggested that questions be developed in pilot work with people who have mental retardation so that the content and format will be meaningful and understood. Matikka and Vesala (1997) suggested that acquiescence is affected more by situational or interaction factors that occur in an interview rather than by individual characteristics of respondents.
With regard to the use of proxies, Stancliffe (2000) recently reviewed the literature on proxy respondents and quality of life and found that (a) it remains an open question as to whether findings of agreement between proxies and self-reports from verbal individuals may validly be generalized to nonverbal people with more significant intellectual limitations who cannot respond for themselves, (b) use of proxies can be justified when the questionnaire used is known to possess empirically well-established consumer–proxy agreement, and (c) it is difficult to predict a priori the likely level of agreement between proxies and self-reports due to significant instrument and procedural variation. Generally, the quality of life literature suggests that when people cannot respond for themselves, two proxies should be interviewed and their responses averaged in order to increase the reliability and validity of information (Rapley & Beyer, 1997; Rapley & Hopgood, 1997; Schalock & Keith, 1993; Stancliffe, 2000).
The data presented in this article represent a continuation and extension of the multiyear state-level project described previously (Schalock, Bonham, & Marchand, 2000). The Fiscal Year (FY) 2001 Ask Me! Survey was conducted in the fourth and final year of the project with voluntary provider participation prior to statistical sampling of Maryland community providers with mandatory participation. The survey was designed to measure quality of life in eight domains rather than the previous five and included new procedures to increase and assess the quality of information. Our purposes in writing this article were to (a) describe the survey instrument (based on eight core quality of life domains) and survey procedures used in this consumer-based quality of life assessment; (b) discuss a number of consumer-based quality of life assessment issues, including the quality of responses, acquiescence, and proxy respondents; (c) summarize the psychometric properties of the new eight-domain quality of life instrument; and (d) summarize the results from 923 individuals with intellectual disabilities, develop a path model for enhancing system goals of social inclusion, personal development, and self-determination; and identify service and personal characteristics as they relate to assessed quality of life domains for provider use. Our conclusion section is a discussion of the study's strengths and limitations and a description of some of the ways the survey results are being used for program enhancement.
The FY 2001 Ask Me! Project included 923 people representing consumers of services from 33 community providers. These providers of MR/DD services volunteered to participate in the survey and did not represent a random sample of the 148 providers in Maryland. However, they were located in all parts of the state and served both urban and rural areas, with individual providers serving as few as 42 and as many as 1,176 people. Combined, they served 57% of the 11,500 people supported by the Maryland Developmental Disabilities Administration. (Beginning in FY 2002, a scientific sample of providers is included in Ask Me! each year.)
Each provider agency supplied the project researchers with a list of the people they served. The project coordinator selected an initial sample of 35 people by starting with a random number between 1 and n, and taking every nth name thereafter, where n = number of names on the list divided by 35. After receiving the sampled names, providers contacted the people or their legal guardians to tell them about the study, ask them to participate, and obtain signed permission forms from guardians prior to the interview. A few names (36) were subsequently removed from the sample because the people were no longer receiving services from the agency at the interview date, were less than 18 years of age and mistakenly included on the list, or were Ask Me! interviewers and, therefore, ineligible for the interview. The guardians for 2.6% of the remaining people refused to allow them to participate; 7.3% of the people did not agree to be interviewed after the survey was explained to them; 1.7% were too ill during the interview period to participate; and 7.1% failed to show up for the interview, could not be contacted, or had no other reason recorded for nonparticipation. (People willing to be part of the project, but who were unable to answer questions for themselves, had their interviews completed by proxies, as discussed later in the survey procedures.) The overall response rate was 81.2%, ranging from a low of 54.5% at one provider agency to a high of 100% at another. An additional 113 names were randomly selected for providers who appeared likely to have far fewer than the targeted 30 completed interviews (a maximum of 7 additional names per provider). Interviews took place between August 2000 and April 2001.
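The selection rule just described (a random start followed by every nth name) is a standard systematic sample. A minimal sketch in Python; the function name and list format are illustrative, not taken from the project:

```python
import random

def systematic_sample(names, target=35):
    """Systematic random sample: pick a random start within the first
    sampling interval, then take every nth name thereafter, where
    n = number of names on the list divided by the target sample size."""
    step = max(1, len(names) // target)
    start = random.randrange(step)          # random start within first interval
    return names[start::step][:target]      # cap at the target size
```

Applied to a provider list of 140 names, for example, the sampling interval is 4 and the routine returns 35 evenly spaced names.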
Providers completed background forms on the 923 people with completed survey information. These showed that 8% of the participants were 18 to 24 years of age (Code 1), 26% were 25 to 34 (Code 2), 28% were 35 to 44 (Code 3), 19% were 45 to 54 years of age (Code 4), 10% were 55 to 64 (Code 5), 6% were 65 years and over (Code 6), and 4% had no age reported. More than half (57%) were men. Provider records identified 17% with profound (Code 1), 18% with severe (Code 2), 20% with moderate (Code 3), 32% with mild (Code 4), and 10% with borderline mental retardation or average intelligence (Code 5); there were no records for 3%. One third (37%) had communication or speaking impairments, 32% had behavior or emotional difficulties, 23% had hearing or vision impairments, 22% used a wheelchair or had difficulty walking, 19% had seizures, and 8% were medically fragile. One sixth (15%) of the people lived alone or with housemates in their own home with fewer than 40 hours of residential services a week, 34% lived with their families, 31% lived in alternative living units of three people with disabilities and 40 or more hours of residential services per week, 14% lived in staffed group homes of four or more people with disabilities, 4% lived in foster homes or state institutions (with day services in the community), and 2% did not have their residential setting reported. A few of the people (6%) had competitive employment (Code 5), 27% had supported employment (Code 4), 37% were in vocational programs (Code 3), 21% were in other types of day programs (Code 2), and the other 8% went to senior centers or had no day programs reported (Code 1). Most of the people (82%) were transported by their providers three or more times a week (Code 4, with 1 or 2 times a week Code 3, less than 1 per week Code 2, coordinates other transportation Code 1, and none Code 0). 
Hours of residential services were grouped for later analysis into 0 hours (Code 1), 1 to 39 hours per week (Code 2), 40 to 79 hours (Code 3), 80 to 167 hours (Code 4), and 168 hours per week (Code 5). Other variables with no code indicated were Coded 1 if the situation applied, 0 otherwise.
The survey instrument includes six questions for each of the eight core domains plus three duplicated questions to measure inter-item agreement. The domain of Social Inclusion includes questions about being part of the community, doing things in it, having friends in the community, seeing friends on weekends, and relations with neighbors. The domain of Self-Determination includes questions about housemate choice, job or day activity choice, food choice, privacy when desired, buying things with one's own money, and being able to say what you think. Personal Development includes questions about the importance of the person's job, training for a better job, learning to be a better person, the chance to be what you want, receipt of needed services, and information on sexuality. The Rights domain includes questions about staff asking before entering your home or room, being able to lock the bathroom door, talking on the telephone in private, spending time by yourself, voicing complaints against staff, and voting in elections. Included in the Interpersonal Relations domain are questions about people who help you learn and reach goals, frequency of visiting or talking with family, having close friends, and doing things that impress others. Examples of issues in the Emotional Well-Being domain include being happy, liking yourself, feeling proud of where you live, feeling safe in the neighborhood, and being treated the same as other people. Physical Well-Being includes questions about how concerned others are about your health, perception of your own health, eating habits, getting needed sleep, receiving regular dental care, and freedom from others hitting or hurting you. In the final domain, Material Well-Being, the questions concern personal possessions, worrying about money, how well off you feel, earning good money, and having money to spend and save.
Because prior surveys have shown that the perceived availability of transportation has a strong relation with quality of life, this survey also contained five questions about the availability of transportation, which we considered to be a short-term outcome. It reflects people's perceptions similar to long-term quality of life measures but relates to specific service provisions.
Three fourths of the survey questions came from a list of questions generated by people receiving services (“People on the Go,” 1996). These individuals also helped develop the final form of the questions to be included in the survey by interviewing each other and suggesting alternative wording for the questions that they had a hard time reading or understanding. The FY 2000 survey pretested 7 questions for each domain, and the FY 2001 survey included the 6 questions with the highest psychometric contribution to the domain's scale. The 56 questions in the final survey contained an average of 9.6 words (16 at the most) that could be widely understood (83% were one-syllable words and 13% were two-syllable). This development process and final question structure involved many of the aspects that Finlay and Lyons (2002) recommended to reduce the likelihood of acquiescence.
Each question had three possible responses. The first was favorable (e.g., yes, very, lots, most, depending on the question), associated with a happy face and the numeral 1 on a flash card shown to the respondent, and scored as +1. The second response was neutral (sometimes, some, OK, not sure), associated with a neutral face and the numeral 2, and scored as 0. The third answer was unfavorable (e.g., no, none, not), associated with a sad face and the numeral 3 on the flash card, and scored a −1. This order was kept throughout, with the expectation that the simplicity of a constant format would overshadow any first or last response bias that might result.
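The three-point scoring scheme just described maps directly to a small lookup table; a sketch of the conversion, with function and variable names illustrative rather than from the project's own processing:

```python
# Flash-card responses: 1 = happy face (favorable), 2 = neutral face,
# 3 = sad face (unfavorable); scored +1, 0, and -1 respectively.
SCORE = {1: +1, 2: 0, 3: -1}

def score_responses(responses):
    """Map recorded flash-card choices to scored values; None marks a
    question the person declined or was unable to answer."""
    return [SCORE.get(r) for r in responses]
```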
The ARC of Maryland employed 38 interviewers who had developmental disabilities and received MR/DD services from 19 different providers. Interviewers were generally middle-aged adults: 17% were 18 to 34 years of age, 39% were 35 to 44, 42% were 45 to 54, and 3% were 55 years of age and over. More than half were women (58%). One fourth (24%) were African American. One third (37%) interviewed on the Ask Me! Project for the first time during FY 2001, 13% had interviewed during all 3 earlier years of the project, and the rest had interviewed for 1 or 2 previous years (21% and 29%, respectively). Five of the more experienced interviewers primarily served as quality assurance consultants. Three interviewers utilized communication devices to conduct interviews with the survey questions and responses preprogrammed. Several interviewers had complicating mobility issues (e.g., used wheelchairs, canes, or walkers) and required extensive transportation arrangements to travel to interviewing sessions. One fifth of the interviewers (19%) could not read. Reading was not a requirement for the job because interviewers worked in pairs (discussed below as part of the survey procedures). Nonreaders helped the interviewee follow the answer choices represented by the face card and frequently marked the answer category given by interviewees. Although interviewers generally led fairly independent lives with little support needed, many had little prior employment history.
The FY 2001 project members had to recruit interviewers in only a few areas of the state because a large interviewer pool had developed when people interviewed during previous years expressed their desires to become interviewers. During the initial 2 years of the project, however, recruitment began with materials mailed to 300 individuals and organizations: self-advocacy groups, service coordination offices, community service providers, local and state commissions on disability, local and regional rehabilitation offices, local coordinators for the Americans With Disabilities Act (ADA), independent living centers, and other advocacy organizations for persons with disabilities and their families. Through these materials we explained the project, asked for assistance in disseminating “help wanted” information, and included applications for interested persons with developmental disabilities. The project members conducted regional face-to-face job interview sessions as well as some telephone interviews with individuals unable to arrange transportation. Interviewers were selected based on skills important for interviewing: listening, objectivity, sensitivity, self-motivation, dependability, self-advocacy, and interest in traveling.
A centralized one-day training occurred prior to the beginning of the survey period. The training served to orient new interviewers on their responsibilities and to refresh experienced interviewers. During training, project goals were reviewed, the previous year's results were summarized, qualities and roles of good interviewers were discussed, and the project protocols and processes were thoroughly reviewed. The trainers highlighted any changes in the interview process since the previous year and provided observation and feedback as interviewers practiced interviewing each other. Bimonthly regional trainings followed throughout the year. These trainings were focused on particular areas that project staff felt could use improvement, such as proper introductions, leadership, disability sensitivity, and “red flags” of interviewing (e.g., report of abuse, agitation). The regional project staff provided interviewers with specific feedback using quality assurance procedures, including videotapes of actual interviews for self and peer evaluation, feedback from the quality assurance consultants (experienced interviewers from prior years) based on standardized forms and qualitative observations as well as general monitoring. Regional trainings also included time for team building among the interviewers.
Two-person teams conducted 68% of the interviews and individual interviewers conducted the remainder. Most of the interviews (68%) occurred at the work site or day program, 8% at people's homes, 14% at other locations, and 9% over the telephone. Less than 1% were self-completed and returned by fax or mail. Usually, one team member read the survey questions and the responses while the other pointed to the happy, neutral, and sad faces on the flash card and recorded the answers. The interviews were conducted in private rooms. No staff members employed by the service provider agencies were in the rooms during the interviews unless specifically requested by the people being interviewed. An Ask Me! supervisor was available in a nearby area to answer specific questions, solve problems, or offer suggestions when an interviewer needed assistance or a respondent needed additional accommodations to complete the interview. People were advised that they did not have to answer a question if they did not want to—participation was strictly voluntary. People were also assured that their answers would remain private and that neither those providing services nor their families would see their answers. Only the research team would know what they said, and all team members had pledged to keep results confidential. Interviews generally took 20 to 30 minutes. These procedures and the survey instrument enabled 75% of the people to respond entirely for themselves and another 7% to respond for themselves with the help of someone they selected to act as an interpreter. The remaining 18% were “interviewed” via proxies. The flash card with the three faces allowed people to point to their answer if they could not respond verbally; 64% of those with speech difficulties and 68% of those classified with profound mental retardation were able to respond for themselves.
The Ask Me! Project team believes everyone should be given an opportunity to respond for themselves. Only the interviewer teams, in consultation with the Ask Me! supervisors, could make the determination that people did not sufficiently understand the questions to respond for themselves. When this occurred, interviews were conducted with proxies. Project staff asked provider agencies to identify two people who knew the consumer well to serve as proxy respondents. The project staff asked for one of these individuals to be an advocate (family member, friend, or possibly the service coordinator). Interviewers asked the proxies to answer the questions as they thought the person would answer. The desired combination of an advocate and a staff member provided proxy information for 39 of the 166 people needing proxies, and two staff members provided proxy information for 20 people. The average of the responses from the two proxies to a question was considered to be the best approximation of what the person would have said if he or she could have answered the questions. A single advocate provided proxy information for 21 people, and a single staff member provided information for 86 people. People with proxy reporting were more likely than self-respondents to have profound or severe mental retardation (77% vs. 27%, respectively), to have communication or speaking impairments (68% vs. 30%), to live in group homes (23% vs. 13%), and to be medically fragile (14% vs. 7%).
The project team conducted pretests of the new questions for the FY 2001 Ask Me! Project during fall 2000 and spring 2001. The first pretest involved 4 self-advocates who interviewed each other while the research staff observed. The second pretest involved 23 of the FY 2000 interviewers who marked a paper survey that had the 56 questions used during the first 3 years of the project plus 40 new questions. Seven questions were dropped, and the third pretest involved interviewing 99 randomly chosen people served by the last four agencies participating in the FY 2000 survey (Bonham, Basehart, & Marchand, 2000a). The first part of these interviews had the same 56 questions in the same order as the FY 1998, FY 1999, and other FY 2000 interviews, of which 20 questions would remain in the new survey instrument. The 33 new questions followed. This procedure, while lengthening the interview and increasing respondent fatigue, allowed comparison of the eight new quality of life domains with the five quality of life domains discussed in an earlier paper on the project (Schalock et al., 2000). Bonham et al. (2000a) illustrated this comparison.
We used SPSS (1999) for all statistical analyses. Pearson's chi-square tests were used for differences between two groups in the percentage giving a specific answer. For differences between the means of two groups, we used independent sample t tests calculated with or without the assumption of equal variance depending upon the results of Levene's test for equality of variance. Analysis of variance (ANOVA) was used for differences among the means of three or more groups. Reliability analysis was conducted by computing Cronbach's alphas for internal consistency. Scale scores were computed as the average score of the six questions in each quality of life domain or the five questions for perceived availability of transportation, with the scoring system described earlier (favorable = +1, neutral = 0, unfavorable = −1). An average scale score was calculated if two thirds or more of the questions had answers. Nominal characteristics of the people and their services were recoded into dichotomous variables for purposes of subsequent correlation and regression analysis. Ordinal variables were assumed to be interval variables unless inspection showed large departures from this assumption. Bivariate correlations were used for an exploratory phase. Personal and service characteristics that showed significant correlations, p < .05, with one or more quality of life domains were included in later regression analyses.
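The scale-scoring rule (average the domain's items, but only when at least two thirds were answered) and the internal-consistency check can be sketched as follows; the function names are illustrative, and the analyses themselves were run in SPSS:

```python
def scale_score(item_scores, min_frac=2/3):
    """Average the answered items in a domain (scored +1/0/-1); return None
    unless at least two thirds of the items have answers."""
    answered = [s for s in item_scores if s is not None]
    if len(answered) < min_frac * len(item_scores):
        return None
    return sum(answered) / len(answered)

def cronbach_alpha(item_columns):
    """Cronbach's alpha for k item-score columns over the same respondents."""
    k, n = len(item_columns), len(item_columns[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[i] for col in item_columns) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(c) for c in item_columns) / var(totals))
```

For a six-item domain, four answered items (4/6 ≥ 2/3) yield a scale score, whereas three or fewer leave the score missing.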
Multiple regressions were used for two analytic purposes. Our first purpose was to develop a path analysis showing hypothesized cause and effect among the eight quality of life domains that would be consistent with the mission statement of the Developmental Disabilities Administration (to promote social inclusion, personal development, and self-determination) and the data. To achieve this purpose we used forward stepwise multiple regression to regress each of the domains on those hypothesized to be prior to or simultaneous with it. Arrows in the path diagram show the statistically significant relations, and the path coefficients are the standardized multiple regression coefficients (betas) from the final regression step.
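The path coefficients referred to above are standardized regression coefficients (betas). A minimal sketch of how such a coefficient can be computed, assuming NumPy is available (the function name is illustrative; the stepwise variable selection itself is not shown):

```python
import numpy as np

def standardized_betas(y, X):
    """Standardized OLS coefficients: z-score the outcome and each predictor
    column, then fit least squares with no intercept (z-scores have mean 0)."""
    z = lambda a: (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)
    beta, *_ = np.linalg.lstsq(z(X), z(y), rcond=None)
    return beta
```

With a single predictor, the standardized beta equals the Pearson correlation between predictor and outcome, which is why betas are comparable across paths in the diagram.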
Our second analytic purpose for regression analysis was to show the potential contributions of personal and service characteristics to each of the eight quality of life domains, independent of other characteristics. All of the characteristics that had shown a significant correlation with any domain were entered simultaneously into the regression equations for this purpose.
Quality of Response
The majority of the respondents (68% self and 58% proxy) answered all 56 survey questions. Self-respondents answered slightly more questions than did single proxies (Ms = 53.2 and 51.3, respectively), t(151) = 2.14, p < .05 (unequal variances). Two proxies together answered more questions (M = 55.0) than did self-respondents, t(127) = 3.60, p < .01 (unequal variances). Self-respondents identified by agency records as having profound mental retardation answered fewer questions than did self-respondents identified with borderline or no mental retardation (Ms = 53.2 and 55.3, respectively), t(125) = 2.19, p < .05 (unequal variances).
The survey included three pairs of questions to determine inter-item agreement. A question about earning good money was repeated 13 questions later. The same response was given to both questions by 75% of self-respondents and 71% of single proxy respondents, a nonsignificant difference. The same averaged response was given to both questions by 56% of the two-proxy sets, which is about what would be expected if each of the proxies gave the same answer 71% of the time (i.e., .71 × .71 ≈ .50). A set of questions on general happiness with life had slightly different wording but the same response categories. The second question occurred 43 questions after the first. The same response was given to both questions by 71% of self-respondents and 81% of single proxy respondents, χ2(1) = 4.09, p < .05, and 74% of the two-proxy sets. The greater consistency of single proxies, however, may have reflected only their uncertainty about how the people for whom they were responding actually felt. Uncertainty often translates into giving a neutral response to a question, and 37% of single proxies, compared with 20% of self-respondents, gave the neutral response to the first asking of the question. When two proxies responded for people, they both gave the neutral answer 25% of the time. Two questions about choice of housemate addressed the same concept 23 questions apart, but different wording was used for both the question and the responses. Both self-respondents and proxies agreed with themselves less frequently on this pair of questions than on the previous two pairs. However, the percentage of self-respondents who gave the same answer to both questions (56%) did not differ significantly from the percentage of single proxies who did so (60%); 38% of the two-proxy sets gave the same averaged response.
People classified with profound retardation showed no statistical difference in their level of inter-item agreement than did those classified with borderline or no retardation. On the exact same wording, 79% and 83%, respectively, gave the same response. On similar wording, 72% of those with profound and 81% of those with borderline retardation gave the same response. On the same concept but different wording, 58% of both groups gave the same response both times.
Acquiescence and Nay-Saying
The Ask Me! Survey was not designed to test acquiescence directly through multiple reverse-worded questions or to test order bias by changing the order of positive and negative responses. The study findings, however, provide some insight into acquiescence and nay-saying. Only a few people (12% of self- and 1% of proxy respondents) gave the first (favorable) response to 51 to 56 of the questions (91% or more). On average, self-respondents gave the favorable response to 61% of the questions, single proxies gave the favorable response to 58% of the questions, and both of two proxies gave the favorable response to 44% of the questions (see Table 1). Self-respondents did not differ significantly from single proxy respondents, but single proxies did differ from double proxies, t(164) = 4.69, p < .01. An ANOVA found no statistical difference by level of retardation in the percentage of questions answered positively. Self-respondents classified in agency records as having profound retardation gave the favorable response to 60% of the questions, whereas self-respondents classified as having borderline or no retardation gave the favorable response to 64% of the questions.
At the opposite end, less than 1% of both self- and proxy respondents gave the most unfavorable answer to 51 to 56 questions. Self-respondents gave negative responses to 19% of the questions; single proxies, to 16%; and both double proxies, to 11%, a statistically significant difference, F(2, 920) = 6.54, p < .01. The level of retardation among self-respondents was significantly related to the percentage of questions with negative responses, F(5, 887) = 6.48, p < .01. Self-respondents with severe retardation answered 25% of the questions with negative responses; those classified with profound retardation, 20%; and those with no retardation, 7%.
We expected that the six questions within each core quality of life domain would cluster together to produce a reliable single score for each domain. Table 2 shows that all core domain scales had an acceptable level of internal consistency for self-respondents, αs = .70 to .76. Proxies (single and double combined) had slightly higher internal consistency on one scale, the same on one scale, and lower internal consistency on six scales. As the table shows, proxies exhibited low alphas for the domains of Interpersonal Relations and Physical Well-Being. The five questions comprising the scale of Perceived Availability of Transportation had slightly lower internal consistency than did the core quality of life domain scales, with proxy responses having lower internal consistency than self-responses. Calculated across all respondents, the alpha values were acceptable to marginal for all scales.
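Internal consistency of this kind is conventionally summarized with Cronbach's alpha. As a sketch only (the response matrix below is invented toy data, not Ask Me! responses), alpha for a six-item scale can be computed from the item variances and the variance of the summed scale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data only: six respondents x six items, responses coded -1/0/+1.
items = np.array([
    [ 1,  1,  1,  0,  1,  1],
    [ 0,  0,  1,  0,  0,  0],
    [-1,  0, -1, -1, -1,  0],
    [ 1,  1,  0,  1,  1,  1],
    [ 0, -1,  0,  0,  0, -1],
    [ 1,  1,  1,  1,  0,  1],
], dtype=float)
alpha = cronbach_alpha(items)   # single reliability estimate for the scale
```

Alpha rises toward 1.0 as the six items co-vary more strongly, which is why a scale of questions tapping one underlying domain should yield values in the .70s, as reported in Table 2.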
Effects of Proxies
The average scale scores for all quality of life domains were positive, or on the favorable side, although they could potentially range from −1.0 to +1.0 (see Table 3). Proxy respondents reported a statistically different quality of life (t test) than did self-respondents in five of the core domains. Proxy respondents reported higher levels of Physical and Emotional Well-Being than did self-respondents and lower levels of Rights, Social Inclusion, and Self-Determination. The reporting by one proxy differed from the averaged reporting of two proxies only in the domain of Material Well-Being, but neither one nor two proxies differed statistically from self-respondents in that domain. The type of proxy made a difference only in the domain of Physical Well-Being, where single family/advocate proxies did not differ statistically from self-respondents, but single staff proxies reported greater Physical Well-Being (.84) than did self-respondents (.55), t(175) = 9.26, p < .01, unequal variances. People responding for themselves with a helper/translator reported higher levels of Physical Well-Being (.69) than did people responding for themselves without any help (.55), t(713) = 2.40, p < .05, equal variances. People with a helper/translator also reported higher levels of Emotional Well-Being (.65) than those responding without any help (.55), t(95) = 2.30, p < .05, unequal variances, which suggests a possible influence by the helper/translator within these particular domains.
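The article does not spell out how individual answers were coded into domain scores. One simple scheme consistent with the reported −1.0 to +1.0 range, and purely an assumption here rather than the documented Ask Me! procedure, is to code each answer as favorable = +1, neutral = 0, unfavorable = −1 and average across a domain's six questions:

```python
from statistics import mean

# Assumed coding, not the documented Ask Me! scheme: it merely reproduces
# the -1.0 to +1.0 range of the reported domain scores.
CODES = {"favorable": 1, "neutral": 0, "unfavorable": -1}

def domain_score(responses):
    """Mean coded answer across the six questions of one domain."""
    return mean(CODES[r] for r in responses)

score = domain_score(["favorable", "favorable", "neutral",
                      "favorable", "unfavorable", "favorable"])  # 0.5
```

Under this coding, a domain mean of .55 (as reported for Physical Well-Being among self-respondents) would indicate answers falling, on average, slightly more than halfway toward the favorable end.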
It is possible that proxies reported different levels of quality of life than did self-respondents because the people who required proxies differed from those who responded for themselves. A multiple forward stepwise regression of the probability of proxy response on personal and service characteristics showed six characteristics had a significant relation to proxy response, R2 = .20, F(6, 870) = 36.9, p < .01. In the order the variables entered the regression, people were more likely to require proxies the lower their cognitive ability, β = −.25, ΔR2 = .128, p < .01, if they had communication or speaking impairments, β = .20, ΔR2 = .047, p < .01, and if they received behavioral support from the agencies through which they were sampled, β = .10, ΔR2 = .013, p < .01. People were more likely to require proxies the further they were from independent employment, β = −.08, ΔR2 = .006, p < .05, and if they received residential support, β = .07, ΔR2 = .005, p < .05, or purchase of care services from the agencies through which they were sampled, β = .07, ΔR2 = .005, p < .05. Cognitive ability, communication or speaking impairments, and employment independence had relations with the core quality of life domains and could potentially explain part of the reason proxies gave different responses to the survey than did self-respondents. The analysis in a later section shows that proxy responses differed from self-responses in seven of the eight domains after controlling for cognitive ability, independence of employment, and communication and speaking impairments. The inclusion of a proxy response variable in later multiple regressions provides some statistical correction, as suggested by Finlay and Lyons (2002).
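Forward stepwise regression of the kind used here adds, at each step, whichever remaining predictor most increases R², stopping when no further predictor adds a worthwhile increment. A minimal sketch with synthetic data (not the Ask Me! variables; the stopping threshold is arbitrary):

```python
import numpy as np

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedily add the predictor giving the largest R-squared increase,
    stopping when no remaining predictor adds at least `min_gain`."""
    def r2(cols):
        A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        return 1 - resid.var() / y.var()

    selected, current = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        gain, best = max((r2(selected + [c]) - current, c) for c in remaining)
        if gain < min_gain:
            break
        selected.append(best)
        remaining.remove(best)
        current += gain
    return [names[c] for c in selected], current

# Synthetic example: y depends on x0 and x2 only; x1 and x3 are noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = 2 * X[:, 0] + X[:, 2] + 0.1 * rng.normal(size=300)
chosen, r_squared = forward_stepwise(X, y, ["x0", "x1", "x2", "x3"])
```

The entry order and the ΔR² at each step parallel the reporting above, where cognitive ability entered first (ΔR² = .128) and weaker predictors entered later with smaller increments.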
Quality of Life Path Model
The quality of life literature does not clearly define a hierarchy among the quality of life domains nor specify cause and effect relations among them. Table 4 shows that all eight quality of life domains in Ask Me! had significant correlations with each other, p < .01, but varied in the magnitude of those correlations (.35 to .69). We sought to understand how the domains might have direct or indirect effects on each other in a way that could provide guidance to the Maryland Developmental Disabilities Administration (DDA) and community providers on ways to enhance people's quality of life. The mission statement of DDA specifies social inclusion, personal development, and self-determination as goals for people receiving services. We took these three domains as desired final outcomes to be placed on the right side of the path analysis shown in Figure 1. Because we had no a priori reason to hypothesize a direction of causality among these three domains, we regressed each on the other seven domains, using forward stepwise regression. Social Inclusion and Self-Determination did not enter into each other's final regression, suggesting they did not directly affect each other. One or two of the other domains had no significant independent relations with them either. Personal Development, on the other hand, entered into the forward stepwise regressions of both Social Inclusion and Self-Determination, and all seven other domains entered into its final stepwise regression. We, therefore, hypothesized that it was more central to people's overall quality of life than were the other two missions, placed it causally prior to the other two domains in the path diagram, and regressed it finally on only the five domains to its left.
Physical Well-Being is one of the most discussed domains in the literature and is a key component of licensing; Gardner (2003) defined health, safety, and welfare as basic assurances. We, therefore, hypothesized that Physical Well-Being is foundational and placed it on the left side of the path diagram. Emotional Well-Being had the least variability of the eight quality of life scores, showed no independent relations with Rights and Self-Determination, and we hypothesized that it also was foundational. We placed Material Well-Being on the left of the diagram because DDA financial support of community services is the basis of the Maryland MR/DD system and is assumed to affect people's quality of life. We did not hypothesize that there would be cause and effect relations among these foundational domains; double-headed curved arrows with correlations as the path coefficients show their interrelations. This left the remaining two domains for the center of the path diagram. We hypothesized that Interpersonal Relations were more likely to affect people's sense of Rights than the other way around, particularly because Rights have been discussed less in the quality of life literature than have Interpersonal Relations (Schalock & Verdugo, 2002).
With the causal relations hypothesized, the final stepwise regressions were run to calculate the standardized multiple regression coefficients (βs) to show as path coefficients on the diagram. All of the paths shown by straight arrows were statistically significant at the .05 or .01 level; the sizes of the path coefficients indicated the strength of the direct causal relations. The strength of indirect relations can be estimated by multiplying the coefficients along the indirect paths. The R2 represents the proportion of variance accounted for on the dependent variable by the preceding paths.
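The multiplication rule for indirect effects can be made concrete. The coefficients below are invented placeholders, not the values shown in Figure 1; they only illustrate how an indirect effect along a chain of paths is the product of the standardized coefficients on its links:

```python
# Hypothetical path coefficients -- placeholders, not the Figure 1 values.
paths = {
    ("Emotional Well-Being", "Interpersonal Relations"): 0.40,
    ("Interpersonal Relations", "Personal Development"): 0.30,
    ("Personal Development", "Social Inclusion"): 0.25,
}

def indirect_effect(chain, paths):
    """Multiply the standardized coefficients along consecutive links."""
    effect = 1.0
    for a, b in zip(chain, chain[1:]):
        effect *= paths[(a, b)]
    return effect

effect = indirect_effect(
    ["Emotional Well-Being", "Interpersonal Relations",
     "Personal Development", "Social Inclusion"],
    paths,
)  # 0.40 * 0.30 * 0.25, i.e. about 0.03
```

Because each link's coefficient is below 1, indirect effects shrink quickly with chain length, which is why the direct paths in such a diagram usually carry most of the explanatory weight.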
Characteristics Related to Quality of Life Domain Scores
We used multiple regressions to determine which person and service characteristics contributed significantly to which quality of life domains, independent of the hypothesized relations among the domain scores as specified in the path model. The person and service characteristics reported by agencies that were entered simultaneously in the multiple regressions (shown in Table 5) had been found to have significant bivariate correlations (data not shown) with at least one quality of life domain. Characteristics that did not show any significant bivariate correlations with quality of life were not entered into the regressions. These were types of residential placement for those not living with their families, types of services provided by the agencies through which these people were sampled, the length of time they had been with those agencies, the types of transportation used in a month that were not covered by those agencies, walking difficulties, medical fragility, difficulty with behavior or emotions, epilepsy or seizures, and hearing/vision difficulty.
Transportation is multi-faceted, and the FY 2001 survey included three types of transportation measures: (a) the scale of Perceived Transportation Availability, discussed earlier, that was based on answers to five questions in the interview; (b) the report by agencies on the number of times per week that those agencies transported the people; and (c) the report by agencies that people used other forms of transportation at least once a month (their own or their families' vehicles, other agencies' vehicles, public transit, paratransit [van or sedan services for people with disabilities], or taxis). None of the measures in this third set had significant correlations with any quality of life measure and, therefore, were not included in the regressions. Table 5 shows that the more people perceived that transportation was available to them, the higher they reported their quality of life in six of the eight domains, independent of the number of times per week that their agencies reported providing them transportation. Perceived availability of transportation did not affect Social Inclusion and Personal Development, but the more frequently agencies provided transportation during the week, the higher the quality of life scores in these two domains. The negative relations between frequency of agency transportation and quality of life scores in two domains suggest that more frequent provision of transportation in a week is not sufficient if it does not increase people's perceptions that transportation is available to them when they want it. The frequency with which agencies reported providing transportation in a week was uncorrelated with people's perceived availability of transportation.
Three other service characteristics helped predict one or two quality of life scores. The more hours people received residential services, the lower the Social Inclusion they reported. People who lived with their families reported higher levels of Emotional Well-Being than did those in other residential settings, but also reported lower levels of Social Inclusion. The more independently people were employed, the higher they reported their Material Well-Being.
Four characteristics of people significantly affected their reported quality of life. The younger people were, the higher they reported their Personal Development and Interpersonal Relations. Men reported higher levels of Rights than did women, but lower levels of Material Well-Being. The higher the agencies reported people's cognitive ability, the higher people reported their level of Rights and level of Self-Determination. People without communication or speaking difficulties reported higher levels of Rights and Material Well-Being than did those who had such difficulties. Proxies reported people to have higher Physical Well-Being than did self-respondents with the same characteristics, but they reported people to have lower quality of life in six of the other seven domains. Only in the domain of Emotional Well-Being did proxies not differ from self-respondents.
The Ask Me! Survey, with results from 923 people with developmental disabilities in Maryland, contributes to the quality of life discussion. The instrument used for personal interviews builds upon the Schalock and Keith (1993) Quality of Life Questionnaire, which measures four domains, and incorporates more recent literature in the field that identifies eight quality of life domains (Schalock & Verdugo, 2002). The mission statement of the Maryland Developmental Disabilities Administration specifically includes three of these eight domains, and the survey provides the agency with quantifiable outcome measures as required by the state budgeting process. It also supplies information for providers to use to enhance services and for individuals and families to use in obtaining the best possible services. These findings demonstrate that although there are distinct aspects for all domains of quality of life, they do relate to each other.
Participatory Action Research
The Ask Me! Project fits well in the broader movement towards participatory action research and evaluation, in which consumers participate actively in the design, implementation, analysis, and use of research data (Dudley, 2001; Elorriaga, Garcia, Martinez, & Unamunzaga, 2000). The Signs of Quality that the Maryland self-advocacy group wrote before the project began (People on the Go, 1996) gave rise to most of the survey questions. People with developmental disabilities helped develop the survey as they pretested and administered the questionnaire. The Ask Me! Project team trained people with disabilities to be interviewers, monitored quality control procedures, supervised interviewers, and keyed the data. Interviewers participated in panel presentations on the project and findings at state and national conferences. Some people with disabilities have been with the project for 4 years and have found career opportunities in it. The project has motivated interviewers to develop additional personal skills and provide advocacy leadership that encourages others to develop such skills themselves.
The Ask Me! Project has shown that people with developmental disabilities want to and will tell about their quality of life when asked questions meaningful to them, in words that they understand, by people like themselves in a nonthreatening environment, and with patience and assistance when necessary so that those with limited communication skills are not excluded. Ask Me! is not just a set of questions. It is a total approach to collecting valid, reliable, and useful information from the people who best know their quality of life.
Quality of life is multidimensional, and each dimension can be measured in many ways. The Ask Me! Survey provides a set of six measures for each dimension that has face validity and acceptable psychometric properties. Most people with developmental disabilities could answer all 48 questions in the eight core domains, plus some additional questions for quality assurance and measurement of transportation availability, without losing interest or showing fatigue. Inter-item consistency indicated that their responses were not random and that people responding for themselves answered questions more consistently than others could answer about them.
The Ask Me! Project does not solve all the issues related to collecting information from or about people with disabilities. It has not rigorously tested such aspects as acquiescence, nay-saying, first or last response bias, and response reliability and validity. Results did suggest that people seldom answered all the questions with the favorable response. The project incorporated many of the characteristics that Finlay and Lyons (2002) identified as reducing acquiescence. We believe a key factor in reducing acquiescence is the use of professionally trained peers to conduct the interviews. In one test in the early development of the survey, people were randomly assigned to paid peer interviewers, volunteer staff interviewers, and volunteer interviewers associated with an agency's quality assurance committee. The last group of interviewers received significantly more positive responses than did peer or staff interviewers in one of the five domains included in the survey at that time (Bonham, Basehart, & Marchand, 2002b).
An important aspect of the results of the Ask Me! Survey is the ability of most people to answer the questions for themselves, including people with low cognitive abilities. Over half of the people with agency classifications of profound and severe retardation responded for themselves as did almost all of the people classified with moderate or less retardation. Experience in Ask Me! has shown that some people can answer the survey questions for themselves, even though their agencies and families initially said they could not. We believe that trained peer interviewers in consultation with project staff are in the best position to judge whether people understand the questions well enough to answer for themselves. For those who do not, information must come from a different source.
A proxy respondent should know the person well and respond to questions as they believe the person would respond. The Ask Me! Project team relied upon agency coordinators to identify proxies and did not conduct additional screening. The project team planned to interview two proxies for each person who could not respond, but this proved difficult in many cases. Some people had only staff members involved in their lives, and staff turnover resulted in some staff proxies knowing little more than what was written in the files. Some family members identified as potential proxies told the project team that they did not know their family member well enough to respond. Some potential proxies refused. Others could not be located or contacted during a reasonable period of time. Therefore, more people than desired had only one proxy to provide information, but we believe one proxy was better than excluding the person from representation in the project.
The project team has found differences between self and proxy reporting, although no formal comparisons have been conducted. Half the people interviewed during the first year of the project (FY 1998) were interviewed again during the second year (FY 1999). Most of them responded for themselves both years or had proxies respond for them both years. The average quality of life scores for both groups were about the same in both years. However, the few people who responded for themselves in one year and had proxies responding in the other year reported significantly lower qualities of life than did their proxies (Bonham et al., 1999). The Ask Me! Project team drew from this experience and felt that everything possible should be done to increase self-response. Also, the project team has found that two proxies give different answers to many questions, and both, obviously, cannot be right. Proxies did not provide more consistent responses than did self-respondents to duplicate asking of the same question, had lower internal scale consistency, and gave more neutral responses than did self-respondents. In seven of the eight domains, proxies reported significantly higher or lower quality of life than did self-respondents, independent of individual and service characteristics. However, the interrelations of the quality of life domains were about the same for proxy reports and self reports. The Ask Me! Project team prefers proxy reporting to no representation, but the project has been designed to minimize the need for proxy reporting.
Quality of Life Measurement
The Ask Me! Survey team recognized eight quality of life domains: Social Inclusion, Personal Development, Self-Determination, Rights, Interpersonal Relations, Emotional Well-Being, Physical Well-Being, and Material Well-Being. People, on average, reported the most positive quality of life in the domain of Physical Well-Being, and the lowest quality of life in the domain of Rights. People had less variance in their reporting of Physical Well-Being than in their reporting of Rights. Physical Well-Being has received much more research attention and dissemination than has Rights (Schalock & Verdugo, 2002). This suggests to us that service providers have had more research assistance to promote physical well-being than they have had to promote rights and, therefore, have been more effective in tailoring services to meet individual needs and desires. The eight quality of life domains encompass the three specific mission goals of the Maryland Developmental Disabilities Administration, and a path model consistent with the data provides guidance on ways for the state system to achieve its mission. We have suggested that the state initially focus on enhancing personal development, because it is the mission goal that the path model suggests is most central to people's quality of life and affects or is affected by all other quality of life domains.
The findings demonstrate that although some individual characteristics affect people's quality of life, they do not determine people's quality of life. Cognitive ability independent of proxy reporting affects only Self-Determination and Rights. Proxy reporting may also reflect cognitive ability and is usually associated with lower quality of life. Communication and speaking difficulty affects only Rights and Material Well-Being. Gender and age have small effects on one or two domains. As measured in this study, the presence or absence of services by the focal provider had little effect on people's quality of life. A study of self-determination and the effect of service coordinators (Bonham et al., 2000b) suggests that the impact of services will only be seen after their suitability and quality have been measured, not just their presence or absence.
Our findings demonstrate the importance of transportation for people. The more frequently their agencies provide transportation, the higher their quality of life in some domains. More important than agency-reported frequency, however, was the availability of transportation as perceived by individuals. Availability means transportation that does not have to be planned far in advance, that goes when they decide, that they can count on, and that does not cause them to miss out on activities. The impact of transportation availability on life quality is much greater than that of any individual characteristic, including cognitive ability.
Much time and effort has gone into the Ask Me! Project, but it still has limitations. The 6 questions for each domain have face validity and acceptable psychometric properties. However, unconstrained factor analysis of the 48 questions did not clearly identify eight factors with all of the expected 6 questions strongly loading on each factor. This may be due to the interconnectedness of all quality of life domains but may also be due to unmeasured problems in the instrument. Finally, the project team could not oversee agency completion of background forms nor check on their accuracy. Agencies varied greatly in the consistency of background information recorded for the same individuals interviewed during both the first and the second years of the project. Agencies reported 7% of these people as either younger at the second than the first interview or more than 10 years older at the second than the first interview. Agency representatives reported different IQs in adjacent years for one third (37%) of the people surveyed both times, and the reported IQs dropped an average of 3 points between the first and second years. One agency respondent recorded the same type of day activity in the 2 years for 20% of the people it served, whereas two agencies' respondents reported the same day activity for 100% of the people they served. Relationships, or lack of relationships, between services and quality of life may reflect data quality rather than underlying causality. In future years, the project team will use data from the Developmental Disabilities Administration to identify the types of services for which the state will reimburse a provider for the person.
The Ask Me! Project team plans to reduce other weaknesses of the current study. In this study we did not collect information about one fifth of the original sample population and, therefore, cannot determine whether nonresponse bias may be present. The plan is to reduce nonresponse by identifying all the providers of services for a person and then attempting to interview that person through a secondary provider or service coordinator. Field staff will be asked to record additional information on nonresponses and to contact DDA regional offices when other information is not available. The protocol of the present study specified two proxy responses when people could not respond for themselves. However, two proxies were interviewed only one third of the time, and no explanation was given as to why a second proxy was not interviewed. Additional emphasis on identifying and contacting proxies should increase the number with two proxies in future years.
This study included random samples of about 30 people from 33 agencies in Maryland who volunteered to participate. Although representing a broad range of people, those interviewed did not constitute a scientific sample of all people in Maryland supported by DDA. People not served by the 33 organizations had no chance of being included in the sample, people served by large organizations had a smaller probability of selection than people served by smaller organizations, and people served by two of the organizations had twice the probability of selection as those served by only one. In future years, all people supported by DDA will have a known probability of selection, and weights will be applied to those included in the project to reflect their probabilities of selection. The two-stage stratified sampling procedure will provide accurate estimates for the system as a whole while clustering interviews for efficient fieldwork and for reporting information for individual providers.
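Inverse-probability weighting of this kind is straightforward to sketch. Assuming, as a simplification, that each participating agency sampled about 30 of the people it served (the actual Ask Me! design may differ), a person's selection probability rises with each sampled agency that serves them, and the analysis weight is the reciprocal of that probability:

```python
def selection_probability(agency_sizes, sampled_per_agency=30):
    """Probability of being selected through at least one of the sampled
    agencies serving this person: 1 minus the chance that every such
    agency misses them. Simplified sketch; the real design may differ."""
    p_missed = 1.0
    for size in agency_sizes:
        p_missed *= 1.0 - min(sampled_per_agency / size, 1.0)
    return 1.0 - p_missed

def sampling_weight(agency_sizes, sampled_per_agency=30):
    """Inverse-probability-of-selection weight."""
    return 1.0 / selection_probability(agency_sizes, sampled_per_agency)

# A person served only by a 300-person agency: p = 0.1, weight = 10.
# A person served by two such agencies: p = 1 - 0.9**2 = 0.19,
# so that person receives a smaller weight in weighted estimates.
```

This illustrates the unequal selection probabilities described above: people served by two sampled agencies were roughly twice as likely to be selected, so weighting them down restores representativeness for system-wide estimates.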
In the path model developed in this study, we hypothesized cause and effect relations. Although the data were consistent with these relations, data collected at a single point in time cannot test the direction of cause and effect. Longitudinal data are necessary to confirm the model. On the basis of results of a small longitudinal study conducted during the development of the project with an earlier version of the questionnaire (Bonham et al., 2000b), we are optimistic that such data will confirm all or most of the model.
The present study included some internal validation of the data, but we did not have access to any external data to validate the findings. It would be valuable to know if the higher scale scores for Emotional Well-Being than for Personal Development truly reflected higher emotional well-being than personal development or only reflected the way the specific indicator questions were worded. It would also be valuable to know that changes in people's quality of life scores over time reflect changes that occur in their lives. The Bonham et al. (2000b) self-determination study suggests that they do. The authors showed that people's quality of life increased over 19 months when they moved to more independent residential settings, their quality of life remained the same when the type of their residential setting and its reimbursement rate remained the same, and quality of life declined when the type of residential setting remained the same but its reimbursement rate either increased or decreased.
Use of Findings
The Ask Me! Project reflects the rapidly emerging importance given to consumer outcome measurement and the need for programs to measure outcomes and manage programs for results (Schalock & Bonham, 2003). Consumer-based quality of life assessment can be useful for quality management on three levels: agency-level for continuous program improvement; state-level for establishing goals and monitoring the MR/DD system; and advocacy-level for enhancing choice of services and self-determination. The most important use of consumer-reported quality of life information is for continuous program enhancement at the agency level (Schalock, 2001). Participating providers received the frequencies of response by the people they serve; had opportunities to attend workshops on how to read, understand, and use their data; and are being encouraged by the state to use the information in their quality assurance plans. Agencies have used the Ask Me! results during the development years in various ways, and use is expected to increase as they share their experiences.
One agency found that the people they served reported few meaningful relations during the first survey. Agency administrators brought an outside consultant to its next planning retreat to train staff on development of meaningful relations and set this area as an agency-wide goal. In the next survey, people reported a much higher quality of life in this area. A second agency started with a guest speaker at its annual staff day, continued the speaker's theme during subsequent staff meetings and individual planning sessions, and developed concrete information that staff members could use in their daily work to improve life quality in the designated area. Ask Me! data prompted a third agency to create a quality assurance committee that would help staff members become more aware of things important to the people they served. Quality of life concepts became key to the agency's service delivery system.
Once implemented statewide, the Ask Me! Project will offer DDA structured feedback from the people supported and comparative data on system performance in areas central to system goals, reported by Gettings (2001) as essential for a comprehensive quality management system. The DDA used data from this study to respond initially to the state legislature's requirement that each state agency set goals to submit with its budget request to demonstrate how it will manage for results. The appropriately weighted Ask Me! results in subsequent years will be used to (a) establish firm goals in the areas of self-determination, personal growth, and productivity and (b) measure their achievement of meeting these targets. These policy-driven goals will be defined as both minimum thresholds for quality assurance and higher averages for quality improvement, the two components Gettings (2001) suggested as necessary for a quality management system. The DDA, in turn, requires that service providers submit quality assurance plans that are outcome-based and will automatically accept Ask Me! data as appropriate measures of outcomes.
Gettings (2001) indicated that another important part of a quality management program is to provide information to consumers and families that will help them make decisions on services. Maryland will join other states (Ferdinand & Smith, 2000) in publishing quality of life scores after most of their agencies have been included in Ask Me!. Its Guide to Services is widely distributed to consumers and families and provides information in a standard format for every state-licensed agency. However, DDA received only data aggregated to the state level through this study year, leaving only providers knowing their agency-specific results. This procedure has established trust and encouraged additional providers to voluntarily join the project. It has also given agencies time to experiment with using the information and initiate program change before the data become public knowledge. In subsequent years, the Ask Me! Project will provide comparative quality of life scores to assist people with intellectual disabilities to make their own decisions on services.
The Maryland Ask Me! Project began as a response to a consent decree to settle a lawsuit brought by the Maryland Disability Law Center against the Maryland Developmental Disability Administration. It developed as participatory action research with strong support from people with disabilities, service providers, advocacy organizations, and the state disability agency. People receiving services are central to the project. They identified the questions to be asked, helped word questions to make them understandable, conducted the interviews, and, most important, responded for themselves about their quality of life. Pretests, experiments, and data analysis all point to reliable and valid data to inform the different parts of the system about people's quality of life, provide guidance for program enhancement, and serve as a component for making judgments. The real value of the project, however, is just now being seen as it moves from development to statewide implementation and begins to direct change at the individual, provider, and system level that will enhance the quality of life of people with developmental disabilities.
This project was made possible by a grant from the Maryland Developmental Disabilities Administration and the Maryland Developmental Disabilities Council. The Arc of Maryland administered the project in consultation with People on the Go, the state-wide self-advocacy group. The Ask Me! Survey is copyrighted, with a training manual available for organizations interested in replicating the project in other states. The manual provides all necessary materials and information to conduct the survey. It is available at cost and includes the survey, interview protocol, and interviewer training information. All documents are also on diskette. To protect the integrity of the project, The Arc of Maryland has developed a licensing agreement for entities wanting to become certified to use the survey. For further information, contact the first author.
Authors: Gordon Scott Bonham, PhD (gbonham@BonhamResearch.com), President, Bonham Research, 2316 Wineberry Terrace, Baltimore, MD 21209. Sarah Basehart, MPS, Director for Community Programs, and Cristine Boswell Marchand, MS, Executive Director, The Arc of Maryland, 49 Old Solomons Island Rd., Annapolis, MD 21401. Robert L. Schalock, PhD, Professor Emeritus, Hastings College, PO Box 285, Chewelah, WA 99109. Nancy Kirchner, LCSW, Director, DDA Southern Regional Office, 312 Marshall St., 7th Floor, Laurel, MD 20707. Joan M. Rumenap, MBA, Abilities Network, 300 E. Joppa Rd., #1103, Towson, MD 21286