Abstract

An important line of research involves asking people with intellectual and developmental disability (IDD) to self-report their experiences and opinions. We analyzed the responsiveness of 11,391 adult users of IDD services to interview questions from Section 1 of the 2008-2009 National Core Indicators-Adult Consumer Survey (NCI-ACS). Proxy responses were not allowed for the selected questions. Overall, 62.1% of participants answered the questions and were rated by interviewers as understanding the questions and as responding consistently. Most participants responded in an all-or-none fashion, answering either all or most questions or few to none. Individuals with milder levels of IDD and with speech as their primary means of expression were more likely to answer the questions and provide a scoreable response. Interviewer ratings of interviewees’ answering questions, understanding of questions, and consistent responding were each related to responsiveness.

Introduction

One important aspect of contemporary research regarding intellectual and developmental disability (IDD) involves seeking the views of people with IDD about their own life circumstances and support arrangements. While many people with IDD can self-report competently, it remains challenging to find ways for people with more severe levels of disability to communicate their views, so these individuals may be disenfranchised in interview studies requiring self-response. There are few recent large-scale studies documenting the extent of self-reporting or identifying factors related to whether or not an individual is able to self-report. This study provides contemporary data on the percentage of adult users of IDD services who consistently answered interview questions about their lives and experiences; describes the characteristics of adults with IDD who did or did not respond to interview questions; and examines the relationship between personal characteristics that may affect the person’s ability to understand questions, communicate answers, or participate in an interview and the proportion of items to which the person responded (responsiveness).

“Responsiveness” refers to the proportion of people who answer a self-report question or instrument with a scoreable response and, in an individual case, the proportion of questions the person answers for him or herself. A classic series of studies on the responsiveness of people with IDD to interview questions was conducted by Carol Sigelman and colleagues (Sigelman et al., 1980, 1981). They reported that appropriate responding was strongly related to IQ (r = .51), with the percentage of adults responding appropriately increasing consistently with milder levels of intellectual disability: severe (58.5%), moderate (78.0%), and mild (81.1%). Appropriate responding was very limited for participants with profound IDD, and the researchers concluded that verbal interviews were not feasible for these individuals. Responsiveness to two parallel interview forms administered a week apart was highly correlated (r = .96), suggesting that the ability to respond to verbal questions is a stable individual behavior. Overall, they reported that “responsiveness to interview questions is stable… [and] appears to have a cognitive basis… [but is] not … affected by factors such as age, sex” (Sigelman et al., 1981, p. 118). They also found that the form of the question affected responding, with the highest percentage of adults (84.8%) responding to verbal yes/no questions and much lower percentages responding to verbal multiple-choice (51.9%) and open-ended (50.6%) items.

The National Core Indicators (NCI) project, a project of national significance of the U.S. Department of Health and Human Services’ Administration on Intellectual and Developmental Disabilities, has measured IDD long-term support and service system quality outcomes in the United States since 1997 (Bradley & Moseley, 2007). The purpose of NCI is to identify and measure core indicators of performance of state developmental disabilities service systems. The NCI-Adult Consumer Survey (NCI-ACS) is part of the overall NCI program. The NCI-ACS includes a personal interview with adult IDD service users from participating states. For a subset of items (i.e., Section 1 of the NCI-ACS), every participant is interviewed, regardless of intellectual functioning or communication skills, with no individuals prescreened out as being unable to participate.

Annual NCI surveys play a crucial role in quality assurance for IDD services and are administered to large, randomly selected multistate samples of adults with IDD who receive Medicaid-funded long-term services and supports. NCI-ACS self-report data (e.g., on loneliness and friendships) have been used by states to initiate major changes in policy, funding, and services (Moseley, Kleinert, Sheppard-Jones, & Hall, 2013).

We conducted secondary analyses of data from the 2008-2009 NCI-ACS (National Association of State Directors of Developmental Disabilities Services and Human Services Research Institute, 2008) to examine the degree to which adults with IDD respond to self-report-only interview questions and to examine the characteristics of adults who do or do not respond.

Method

Ethics Review

Because this article reports secondary analysis of an existing de-identified data set, the study was deemed by the University of Minnesota Institutional Review Board to be exempt from full review. The NCI-ACS “pre-survey form” (available from the National Association of State Directors of Developmental Disabilities Services or the Human Services Research Institute) describes the informed consent procedures used by states that collect and submit de-identified data for analysis.

Participants

A total of 11,569 adults with IDD from 20 states (AL, AR, CT, DE, GA, IL, IN, KY, LA, MA, MO, NC, NJ, NY, OH, OK, PA, SC, TX, and WY) were included in the 2008-2009 NCI-ACS sample. Of those, 178 (1.5%) had missing data for the item on which the interviewer rated whether or not the participant completed Section 1 of the NCI-ACS. Therefore, this article reports on a sample of 11,391 participants. Participants had an average age of 42.71 years (SD = 14.43, range = 18 to 97); 71.4% had no (4.8%), mild (38.1%), or moderate (28.5%) levels of intellectual disability; and 76.7% used speech as their primary mode of communication (see Table 1).

Sampling

Each participating state randomly sampled its population of adult (18+ years) IDD service recipients in institutional, community, or family home settings, or a subset of these. Some states only surveyed recipients of Medicaid waiver-funded Home and Community-Based Services (i.e., excluding adults living in Medicaid Intermediate Care Facilities for Individuals With Intellectual Disabilities services). For the 2008-2009 NCI-ACS survey, state sample sizes ranged from 193 (DE) to 1,439 (NY) and averaged 570. Two states omitted certain background section items, resulting in missing data for all participants from those states for the omitted items.

Instrumentation

The 2008-2009 NCI-ACS includes several sections: a presurvey form; a background information section describing demographic characteristics and service participation; Section 1, containing questions about employment/day program, home, safety and health, friends and family, services/supports, and self-directed supports; and Section 2, containing questions about community inclusion, choice, rights, and access to services. The 47 items in the Section 1 interview may only be answered by the sampled adult with IDD because of concerns about the validity of proxy data for what the NCI-ACS authors judged to be more subjective items (see Cummins, 2002; Cusick et al., 2001; Emerson, Felce, & Stancliffe, 2013). Items used for calculating participant responsiveness were drawn from Section 1 of the 2008-2009 NCI-ACS. The 19 selected items are all phrased as yes/no questions (see Table 2), but interviewers used either a 2-point (yes/no) or 3-point (yes/often/most of the time; in-between/sometimes/not often/maybe; no/rarely) response scale when scoring the participant’s answer to each question (see Table 2 for item-by-item details on scoring). Participants only heard the spoken yes/no question and did not have access to the written scoring details, so we describe these questions as yes/no questions throughout this article. Tests of the NCI-ACS Section 1 adult consumer interview yielded inter-rater agreement of 92% to 93% and test-retest reliability of 80% (Smith & Ashbaugh, 2001).

Item selection

We analyzed responses to 19 yes/no questions in Section 1 that were applicable to all 11,391 participants (i.e., questions on which it was not possible to receive a score of “not applicable”). Twenty-eight items relevant to, and asked only of, a subset of participants were excluded. For example, while all participants were asked whether they had a job in the community (i.e., not facility-based), only those who had such a job were asked five follow-up questions about those jobs. The self-report rate for the community-job question, which was asked of all participants, was 78%, whereas response rates for the follow-up items, asked only of those who had a community job and were able to answer the initial question, ranged from 90% to 98%. Including the follow-up items would have artificially inflated the responsiveness rate because of the unrepresentatively high responsiveness of the subgroup asked these questions.

Items that were asked but to which the interviewee with IDD did not respond, or to which the interviewee gave an unscoreable response such as “Don’t know,” “no response,” or “unclear response,” were given a specific code. Items that were not asked, whether because of consistent nonresponsiveness (see following paragraph for details) or because of accidental omission, were left blank (system missing in SPSS terms) and were classified as “not asked” for this study. This coding scheme made it possible to distinguish clearly between items that were asked but received no scoreable response and items that were never asked.

Interview Procedure

NCI-ACS interview questions are asked in a standard order (see Table 2). During the NCI-ACS interview, questions may be repeated or rephrased by the interviewer to improve understanding. Rephrasing suggestions are provided in the NCI-ACS protocol for some items. Participants with difficulty with spoken language can answer using picture responses. The interviewer scores each item based on the person’s reply to the question and subsequent probes. A support person may be present during the interview, but scoring is based solely on the responses provided by the participant. If the person with IDD does not provide scoreable answers to any of the first four applicable questions, interviewers are instructed to discontinue the NCI-ACS Section 1 interview and move to the Section 2 items, which may be answered by proxy.

Prior to the interview, interviewers gather information such as the names of the person’s work or day placement, support staff, and so on, so they can use familiar names and terms during the interview to help the person understand certain questions. For example, for the item about having a job in the community, the interviewer may ask “Do you work at (name of job setting)?”

The interview is conducted in private. Parents or guardians may be present if they insist. Others (e.g., staff, family, friends) may be present if the individual requests, if needed for interpretation purposes, or if a private interview may pose risks to the interviewer. Following the interview, the interviewer completes a feedback sheet involving ratings of whether (a) the Section 1 interview could be completed, (b) the person understood most of the questions, and (c) the person answered the questions consistently.

Interviewer training

The NCI-ACS protocol is supported by an ongoing training program for interviewers. The program includes training manuals, presentation slides, training videos, scripts for scheduling interviews, lists of frequently asked questions, picture response formats, and a question-by-question review of the survey tool. The training also covers basic skills for interviewing persons with IDD.

Analyses

Missing data were imputed where appropriate to obtain as accurate a picture of responsiveness as possible (described further in the Evaluating System-Missing Data section). Descriptive statistics were used to quantify the number of Section 1 items responded to and to describe the proportion of participants responding to each item. Analysis of variance (ANOVA) was used to compare the responsiveness of groups of participants defined on the basis of interviewer ratings of (a) completing Section 1, (b) understanding the questions, and (c) consistent responding. To help identify possible reasons for nonresponding, chi-square statistics were used in univariate analyses of the relationship between selected personal characteristics and interviewer ratings of participant responding. Finally, blockwise linear regression was employed to assess the extent to which blocks of personal characteristics and environmental factors explained variance in responsiveness. All inferential statistics were deemed significant at p < .05. IBM SPSS Version 22 software was used for all statistical analyses.
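To clarify the blockwise approach, the following minimal sketch (in Python, rather than the SPSS procedures actually used) shows how predictor blocks can be entered sequentially and the incremental variance explained by each block recorded. The file name and variable names are hypothetical placeholders, not the actual NCI-ACS variables.

```python
# Hypothetical sketch of blockwise (hierarchical) linear regression:
# blocks of predictors are added one at a time and the change in
# R-squared attributable to each block is reported.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nci_acs_2008_2009.csv")  # hypothetical data file

blocks = [
    ["age", "gender"],                      # demographics
    ["level_of_id"],                        # level of intellectual disability
    ["asd", "dementia", "mental_illness"],  # type of disability
    ["primary_means_speech"],               # communication skills
]

included, prev_r2 = [], 0.0
for i, block in enumerate(blocks, start=1):
    included += block
    model = smf.ols("responsiveness ~ " + " + ".join(included), data=df).fit()
    print(f"Block {i}: R2 = {model.rsquared:.3f}, "
          f"change = {model.rsquared - prev_r2:.3f}")
    prev_r2 = model.rsquared
```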

Results

Dependent Variable

The dependent variable, responsiveness, was the total number of NCI-ACS Section 1 questions (of the 19 questions applicable to all respondents) that were answered with a response that could be scored (e.g., as either yes, sometimes, or no). Answers of “don’t know,” “an unclear response,” or “no answer” were given a code designating them as nonscoreable. Each item with a response that could be scored contributed 1 point to the responsiveness total. Each item coded as nonscorable contributed 0 points to the responsiveness total. The responsiveness total was the sum of the responsiveness values for all 19 items and could range from 0 to 19.
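As a minimal sketch, the scoring rule can be expressed as follows; the response codes shown are illustrative stand-ins for the actual NCI-ACS codes.

```python
# Hypothetical item codes: scoreable answers count 1 point each;
# nonscoreable answers ("don't know", unclear, no answer) count 0.
SCOREABLE = {"yes", "sometimes", "no"}

def responsiveness_total(item_codes):
    """Number of the 19 items answered with a scoreable response."""
    assert len(item_codes) == 19
    return sum(1 for code in item_codes if code in SCOREABLE)

# A participant answering 17 items scoreably and giving two
# nonscoreable replies receives a responsiveness total of 17.
codes = ["yes"] * 10 + ["no"] * 7 + ["don't know", "unclear response"]
print(responsiveness_total(codes))  # 17
```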

Evaluating System-Missing Data

Of the 11,391 participants, 1,257 (11.0%) had system-missing (i.e., blank) data for one or more of the 19 questions. As noted, responses scored by interviewers as “don’t know,” “unclear response,” or “no answer” were counted as nonscoreable and were, therefore, not system missing. System-missing data indicated that either the question was not asked or the answer was not recorded. Because the reason an item had been left blank was relevant to our research question, we conducted further analyses of those instances (see Figure 1 for a flowchart summarizing our approach).

The NCI-ACS practice of discontinuing the Section 1 interview for participants who failed to answer any of the first four applicable items meant interviewers did not ask the remaining 15 items. Those items were left blank (i.e., system missing). Participants who declined to be interviewed before or during the Section 1 interview also had large numbers of items that were not asked, leaving many system-missing items.

In other cases, interviewer error resulted in the occasional question being accidentally not asked or its answer not being recorded, producing a small number of system-missing items for an individual participant.

Decision Rules for Computing Responsiveness Scores

Participants with 15 or more system-missing items

Of the 11,391 participants, 664 (5.8%) had system-missing data for 15 or more of the 19 items, likely because the interview had been discontinued as previously described. In those cases, we referred to interviewer ratings of the person’s Section 1 responses in deciding how to estimate responsiveness for the system-missing items. Six hundred twenty-four (94.0%) participants with 15 or more system-missing items were rated by the interviewer as being unable to respond to the Section 1 interview either because they could not communicate sufficiently (n = 568) or because they were unwilling to participate in the interview (n = 56). For these participants, a system-missing value was considered to be equivalent to a response of “don’t know,” “no response,” or “unclear response,” and items with system-missing values were recoded as unscoreable (i.e., contributing 0 points to the responsiveness score). The recoded data for these participants are referred to as imputed system-missing data. Forty participants with 15 or more system-missing items who were not rated by interviewers as being unable to communicate sufficiently or unwilling to be interviewed were omitted from all analyses of responsiveness because the reason the items were system missing was unknown.

Participants with 5 to 14 system-missing items

Sixty-two participants had between 5 and 14 system-missing items. We considered this number of unasked questions to be more than could plausibly be attributed to interviewer error, yet not consistent with halting the interview after obtaining no response to the first four items, so it was unclear how best to impute responsiveness. Consequently, these 62 participants were omitted from all subsequent analyses of responsiveness.

Participants with one to four system-missing items

Finally, 531 participants had between one and four system-missing items, including 454 with system-missing data for a single item. For the purpose of rating responsiveness, it seemed reasonable to assume that these item(s) had been accidentally omitted from the interview, so we prorated responsiveness based on the 15 or more questions that had been asked. For example, a participant with four system-missing items who provided scoreable responses to 10 of the other 15 questions was given an imputed responsiveness score of (10/15) × 19 = 12.67. The responsiveness score for these 531 participants is based on what we refer to elsewhere as prorated system-missing data.

We did not consider interviewer ratings when computing a responsiveness score for participants with four or fewer system-missing items. Instead, we accounted for variations in ability to communicate or willingness to be interviewed by using responsiveness scores that were directly proportionate to the person’s responsiveness to the 15 or more items they were asked.
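Taken together, the decision rules just described can be summarized in the following sketch; the input coding and the interviewer-rating flag are hypothetical simplifications of the NCI-ACS variables.

```python
# Decision rules for responsiveness, with None marking a system-missing
# item, True a scoreable response, and False a nonscoreable response.
def responsiveness_score(items, unable_or_unwilling):
    """items: 19 entries (None/True/False as above).
    unable_or_unwilling: interviewer rated the person as unable to
    communicate sufficiently or unwilling to be interviewed."""
    n_missing = sum(item is None for item in items)
    answered = sum(item is True for item in items)
    if n_missing >= 15:
        # Impute missing items as nonscoreable, but only when the
        # interviewer rating explains the discontinued interview;
        # otherwise exclude the case (return None).
        return float(answered) if unable_or_unwilling else None
    if 5 <= n_missing <= 14:
        return None  # reason for missing data unknown; exclude
    if 1 <= n_missing <= 4:
        asked = 19 - n_missing
        return answered / asked * 19  # prorate, e.g., (10/15) x 19 = 12.67
    return float(answered)  # no system-missing items

# Worked example from the text: 4 missing items, 10 of 15 answered.
print(round(responsiveness_score([True]*10 + [False]*5 + [None]*4, False), 2))
# 12.67
```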

Altogether, data from 102 (0.9%) participants were excluded from further analyses of responsiveness because of system-missing data, while 11,289 (99.1%) participants were included. Throughout this article, whenever responsiveness data are analyzed or reported (Figure 2; Tables 3, 4, and 5) we use responsiveness data that include the imputed and prorated responsiveness scores just described. The sole exception is Table 2, as is explained in the following section.

Item-by-Item Responsiveness

Item-by-item responsiveness data are shown in Table 2. The “Response Scoreable” data in this table included imputed system-missing data because this imputation was done at the item level. Prorated system-missing data were not included because prorated scores provided an estimate of overall responsiveness, but were not calculated for individual items. The “Question Asked” columns in Table 2 show the number of people (including those with imputed system-missing data) who were asked each question. The number of people who were not asked particular questions (for reasons other than the person providing unscoreable responses to the first four items) averaged 33.6 people per item.

The second set of columns (“Response Scoreable”) shows the number and percentage of respondents who provided scoreable responses to each item, again including those with imputed system-missing data. On average, across all 19 items, 70.9% of participants provided scoreable responses to the Section 1 questions, ranging from 67.7% (“Do you ever feel lonely?”) to 78.4% (“Do you have a job in the community?”). This percentage was largely consistent across items, with 15 of 19 items falling within plus or minus 3 percentage points of the mean of 70.9%. This average does not take into account the interviewer ratings of understanding questions and of consistent responding.

Distribution of Responsiveness Scores

The remainder of this article focuses on the responsiveness score computed for each participant. Figure 2 provides a frequency distribution for the responsiveness score. People whose responsiveness score was imputed (n  =  624) or prorated (n  =  531) are included. Prorated scores are rounded to the nearest whole number for presentation. Most participants provided scoreable responses to either 16 or more items (68.1%) or answered three or fewer items (26.0%). That is, 94.1% of the sample effectively self-reported in an all-or-none fashion.
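For illustration, the all-or-none grouping just described corresponds to the following simple classification of responsiveness scores (with prorated scores rounded, as in Figure 2):

```python
# Classification used in the text: 16-19 items answered = "all or most",
# 0-3 items = "few to none", 4-15 items = "some".
def response_pattern(score):
    if score >= 16:
        return "all"   # 68.1% of participants
    if score <= 3:
        return "none"  # 26.0% of participants
    return "some"      # remaining 5.9%
```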

Interviewer Ratings

Figure 3 is a flow chart showing the number of participants rated by interviewers as having (a) answered the questions and completed the interview, (b) understood the questions, and (c) answered consistently.

Overall, there were 11,304 participants with complete data on all interviewer ratings. Of these, 7,022 or 62.1% of participants were rated “yes” on all three ratings (Figure 3): having answered the questions and completed the interview, understood most questions, and provided consistent responses. A further 705 (6.2%) were rated “not sure” on one or both ratings of understanding and of consistent responding. Thus, overall, depending on how one deals with the “not sure” ratings, somewhere between 62.1% (n  =  7,022) and 68.4% (n  =  7,727) of participants were rated by interviewers as completing Section 1 with satisfactory understanding and consistency (see Figure 3).

Among the 8,026 rated as having completed Section 1, 212 participants were rated “no” on one or both of the understanding and consistency ratings, suggesting that it is necessary to look beyond merely completing Section 1 and to also check for understanding and consistent responding.

Interviewer Ratings and Responsiveness

Section 1 interview completed

We compared responsiveness scores with each of the interviewer ratings in turn, beginning with the rating of whether the person completed Section 1 (see Table 3).

There was a very large difference in the number of Section 1 items answered between those rated by interviewers as completing Section 1 and those rated as not completing it, F(4, 11,284) = 21,294.43, p < .001, ηp² = .88. Individuals rated as completing Section 1 answered an average of 18.29 of the 19 questions (96.3%), whereas those rated as not completing Section 1 answered 1.74 questions on average (9.2%). The dramatic differences in responsiveness in the expected direction strongly validated the accuracy of the interviewers’ ratings about completing Section 1.

These ratings were further validated by the personal characteristics of the participants in the various rating categories. Participants rated as not completing Section 1 because they “could not communicate sufficiently” had a much higher proportion with severe/profound levels of intellectual disability (72.9%) than participants who “answered independently or with some assistance” (10.8%), χ²(16, N = 10,605) = 4,425.23, p < .001. There were similar substantial between-group differences in the expected direction for a diagnosed communication disorder (57.2% vs. 33.2%), χ²(4, N = 10,373) = 677.50, p < .001, and for speech as the person’s primary means of expression (10.1% vs. 84.1%), χ²(20, N = 11,122) = 5,053.82, p < .001.

Most people who did not complete the Section 1 questions were unable to do so because of communication difficulties, but a smaller number were unwilling to participate in the interview. We examined whether this unwillingness was associated with personal characteristics that may have made the interview difficult for one or both parties (e.g., autism spectrum disorder [ASD], challenging behavior, health issues). Half (50.0%) of the people with ASD completed the interview, a substantially lower percentage than for those without ASD (71.9%). A higher percentage of those with ASD (4.8%) were unwilling to participate than of individuals with no ASD diagnosis (1.8%), indicating that people with ASD were less willing to be interviewed. Overall, the relationship between ASD diagnosis and Section 1 participation was significant, χ²(4, N = 10,373) = 245.39, p < .001.

The picture was similar for individuals with challenging behavior (i.e., individuals rated as needing extensive support for behavior in one or more of the following areas: self-injury, disruptive behavior, destructive behavior). Fewer people with challenging behavior completed Section 1 (53.5% vs. 73.0%), and a larger percentage of people with challenging behavior were unwilling to participate (4.2% vs. 1.7%). Overall, the relationship between challenging behavior and Section 1 participation was significant, χ²(4, N = 10,802) = 205.26, p < .001.

Other individuals may have been unable to take part in the interview because of health issues (e.g., dementia, poor overall health). Fewer people with a dementia diagnosis completed Section 1 (57.0% vs. 69.8% for those without dementia), χ²(4, N = 10,373) = 19.26, p < .001. Likewise, individuals whose overall health was rated as poor were less likely to complete Section 1 (64.5% vs. 73.1%), χ²(4, N = 9,682) = 34.44, p < .001.
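The univariate tests reported in this section are standard chi-square tests of independence; the sketch below, using invented cell counts (not the actual NCI-ACS frequencies), illustrates the computation.

```python
# Chi-square test of independence on a hypothetical crosstab of
# dementia diagnosis by Section 1 completion rating; the cell counts
# below are invented for illustration only.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([
    [7000, 2800, 230],  # no dementia: completed / not completed / unwilling
    [120,  85,   15],   # dementia
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```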

Ratings of understanding and consistency

Overall, 8,003 of 11,289 (70.9%) participants were rated as having completed Section 1 (see Table 4). For participants who completed Section 1 interviews, interviewers rated participants’ understanding of the questions and the consistency of their answers. Participants who did not complete Section 1 were given a not applicable (NA) rating for understanding and consistency and these NA ratings were treated as missing data.

Table 4 shows that, overall, 90.8% of participants rated by interviewers as completing Section 1 were also rated by interviewers as having understood the questions, and 89.7% were rated as having responded to the questions consistently. Responsiveness scores varied by the interviewer ratings. Participants with complete Section 1 interviews who were rated as not having understood the questions, or as providing inconsistent responses, had significantly lower responsiveness scores than those who completed Section 1 and were rated as having understood most questions and as having responded consistently (Table 4). Furthermore, the agreement between these two ratings was overwhelming, χ²(4, N = 7,915) = 7,723.43, p < .001, with 97.4% of those rated as giving consistent responses also rated as having understood the questions. Taken together, these findings suggest that interviewers’ ratings were consistent and likely provided an effective way to identify individuals whose responses were questionable.

Regression Analyses of Factors Associated With Responsiveness

To examine the association between responsiveness and various factors, we completed a blockwise linear regression analysis in which independent variables were entered sequentially in blocks (see Table 5).

A total of 37% of the variability in responsiveness scores could be accounted for by level of intellectual disability. Participants with a milder level of IDD had higher responsiveness scores. Communication skills accounted for an additional 12.5% of the variability after accounting for level of IDD, demographics, and type of disability. People who communicated via speaking had higher responsiveness scores.

There were significant between-state differences in responsiveness, with the state block accounting for an additional 2.8% of variance despite being entered last of the eight blocks of variables. The state with the largest number of participants (NY) was used as the reference category in the regression analysis. Relative to the reference state, two states had significantly higher responsiveness and nine states had significantly lower responsiveness; the remaining six states did not differ significantly from the reference state. Two states (IN and TX) were omitted from the analyses because of missing data. The raw percentage of participants rated by interviewers as completing Section 1 ranged between states from 50.6% to 91.3% (χ²[76, N = 11,289] = 1,561.40, p < .001), as compared to 70.5% for the sample as a whole (see Figure 3).
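Entering a categorical state block against a reference category can be done as in the hypothetical sketch below, which reuses the illustrative data frame and variable names from the Analyses section; it is not the authors’ SPSS specification.

```python
# Hypothetical sketch: enter the state block with NY as the reference
# category, so each state coefficient estimates that state's difference
# in responsiveness from NY after adjusting for the other predictors.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nci_acs_2008_2009.csv")  # hypothetical data file
model = smf.ols(
    'responsiveness ~ level_of_id + primary_means_speech '
    '+ C(state, Treatment("NY"))',
    data=df,
).fit()
print(model.params.filter(like="state"))  # per-state differences from NY
```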

Other variables accounting for a statistically significant amount of variability in responsiveness in the final regression equation included mental illness (participants with a diagnosis of mental illness responded to more items), dementia (participants with dementia responded to fewer items), autism spectrum disorder (participants with ASD responded to fewer items), overall health (participants in poorer health responded to fewer questions), and amount of support needed for behavior (people with more extensive support needs responded to fewer questions). Compared to people living in agency-owned living arrangements housing one to three people with IDD, people living in agency-owned settings in which seven or more people with IDD lived had lower responsiveness scores.

Once the other characteristics were accounted for, there were no differences in responsiveness based on age; gender; diagnoses of brain injury, Down syndrome, cerebral palsy, or hearing loss; speaking a language other than English; taking psychotropic medications; or living arrangements other than larger agency settings.

Discussion

We examined responsiveness to self-report interview questions from Section 1 of the NCI-ACS (Bradley & Moseley, 2007) by 11,289 adult IDD service users from 20 states. By design, questions in NCI-ACS Section 1 may only be answered by self-report. Overall, 62.1% of participants were able to answer the Section 1 NCI-ACS questions and were rated by interviewers as understanding the questions and as responding consistently.

All-or-None Responding

Fujiura (2012) concluded that the issue of valid responding to interview questions is often framed as a dichotomy: Can the person answer or not? This study provided clear empirical evidence to underpin such a dichotomy. Overwhelmingly, participants either answered almost all questions (68.1% answered between 16 and 19 of 19 questions) or almost none (26.0% answered between 0 and 3 of 19 questions). Few participants (5.9%) answered some questions (between 4 and 15 of 19 questions).

Our conclusion about this pattern of all-or-none responding needs to be tempered by the fact that interviewers were instructed to discontinue the Section 1 interview if the service user did not give a scoreable response to any of the first four items asked, despite the interviewer’s efforts to repeat the question or to paraphrase it in the simplest terms. The interview was stopped to avoid embarrassing or distressing the person by repeatedly asking them to do something they were evidently unable to do (Stancliffe, Wilson, Bigby, Balandin, & Craig, 2014). Discontinuing interviews with nonresponders meant that they did not have the opportunity to answer unasked questions. The practice of halting the interview is clearly defensible as a respectful way to interview individuals. However, it complicated the analysis and interpretation of responsiveness data by requiring us to impute responsiveness to these system-missing items for affected participants. Had every participant been asked every question, these complications would not have arisen.

We identified 664 participants (5.8% of the sample) in this situation. Superficially, this practice appears to reduce responsiveness rates on the basis that some participants might have been able to answer more questions if asked. So we asked: could the “none” part of the all-or-none pattern arise because these people were asked only four of the 19 questions? The answer is clearly no. When we omitted the 5.8% of the participants with 15 or more items of system-missing data, the all-or-none pattern persisted as all (68.1% of participants), some (5.9%), or none (20.2%) for the remaining participants. Our treatment of participants with large amounts of system-missing data did not alter the overall pattern of all-or-none responding, but it did affect the relative size of the “none” group. Therefore, the analytic challenge was to treat the data from this 5.8% of the sample in a manner that enabled the most accurate estimate of responsiveness.

In fact, discontinuing interviews could increase the assessed responsiveness rate because scoring for these unasked questions was left blank (i.e., system missing). Consequently, without imputing values for the system-missing data, affected individuals would be excluded from analyses, thereby systematically reducing the number of nonresponders contributing to the overall responsiveness rate. To estimate responsiveness as accurately as possible, we imputed an item score of “no response” for system-missing data for the 624 participants with 15 or more of the 19 items system missing and who were rated by interviewers as being unable to respond either because they could not communicate sufficiently or because they were unwilling to participate in the Section 1 interview. We based our decision on previous research (Sigelman et al., 1981) and on our own findings of all-or-none responding. Readers should draw their own conclusions on how our treatment of missing data contributed to the all-or-none response pattern we found. We propose that our finding of predominant all-or-none responding stands because (a) the system-missing-data issue only affected a small percentage of the sample, and (b) our approach ensured that almost all individuals with system-missing NCI-ACS data were included in the analyses to more fairly reflect responsiveness.

Reasons for Nonresponse

Of the nonresponders, the vast majority were rated by interviewers as not completing Section 1 because they “could not communicate sufficiently.” This rating was supported by our analyses of responsiveness and of client characteristics, which showed that these nonresponders were indeed substantially more likely to have communication impairments and more severe IDD (Table 5). This is not to say that the nonresponders were unable to communicate at all, but rather that they had considerable difficulty with communication in the NCI-ACS interview environment, even though multiple steps were taken to facilitate communication. A few participants (2.7%) were able to answer using picture responses but, on the whole, participants with significant communication and comprehension challenges found the NCI-ACS interview too difficult and were unable to respond to all or almost all questions.

A small percentage of participants (2.0%; see Table 3) were unwilling to participate in the NCI-ACS interview and so they responded to few questions (mean  =  2.62 of 19). A higher percentage of participants who were not willing to participate had characteristics (e.g., ASD diagnosis or challenging behavior) that may have made the social situation of an interview unpleasant or unsafe for one or both parties.

At least for the type of questions asked in the NCI-ACS Section 1 (i.e., yes/no questions), participants answered in a largely all-or-none fashion. Almost all participants were willing to try to respond, and those who were able to respond to the initial questions generally answered (almost) all of the subsequent items, whereas individuals who were unable to answer the early questions generally responded to (almost) none. This finding confirms and extends Sigelman et al.’s (1981) finding that responsiveness is a stable individual characteristic. The authors are not aware that this pattern of all-or-none responding has been as clearly identified in previous research on self-reporting.

Individuals who were more responsive were likely to have milder levels of IDD and to use speech as their primary means of expression. Other personal characteristics that were significantly related to greater responsiveness included not having a diagnosis of ASD, dementia, or communication disability; being rated as having better overall health; and not needing extensive support for challenging behavior. Level of IDD, and “speech being the primary means of expression” had the greatest explanatory power of all the personal characteristics, pointing to the central role of cognitive and communication issues in the ability to self-report (Emerson et al., 2013; Fujiura, 2012; Stancliffe et al., 2014). These findings are consistent with previous research on self-reporting (e.g., Sigelman et al., 1980, 1981) and further confirm that it is individuals with certain characteristics whose voice is mostly being heard when studies ask people with IDD for their views.

What to Do About People Who Are Unable to Respond to Interview Questions

More than 25% of participants were unable to respond to the NCI-ACS Section 1 interview questions, so their data on these items were coded as no response. What can be done to enable the views and experiences of these nonresponding participants to be known?

There is clear evidence that question wording strongly affects responsiveness among people with IDD. Brief, direct, concrete questions with simple response scales yield higher levels of responsiveness (Emerson et al., 2013; Fujiura, 2012; Stancliffe et al., 2014). However, these steps have already been taken with the NCI-ACS Section 1 questions, so little improvement of question wording is possible to further enhance responsiveness. Likewise, nonverbal communication techniques can facilitate understanding and responding in some cases. However, as was seen in the current study, only a small percentage of participants (2.7%) answered using picture responses. Other noninterview methods of data collection, such as observations of people with more severe disabilities, need to be explored further to ensure that information is being collected that represents the experiences of as many people with IDD as accurately as possible.

One approach with the potential to enable interview information to be obtained about the largest numbers of nonresponders is to allow proxy responses to NCI-ACS Section 1 items that are all currently classified as self-report only. However, there are concerns about the validity of proxy data.

Proxy responding

Proxy responses by someone who knows the person well are frequently used for individuals with limited communication abilities who are unable to self-report. Proxies can provide valid data about objective, observable issues such as adaptive behavior, but there are serious concerns that proxies do not respond validly to more subjective, less observable matters (Cummins, 2002; Cusick et al., 2001; Emerson et al., 2013). For example, Chadsey-Rusch, DeStefano, O’Reilly, Gonzalez, and Collet-Klingenberg (1992) showed that proxies’ reports of loneliness experienced by adults with IDD did not correlate significantly with these adults’ self-reports. Remedying missing data by using proxies should not be pursued at the cost of a loss of validity, so for questions about loneliness, for example, there may be no alternative but to accept the inevitability of missing data for those who are unable to self-report.

Implications for National Core Indicators Adult Consumer Survey Data Collection

To maximize the likelihood that the NCI-ACS generates data that are reliable and valid for making policy and system improvement decisions, it is critical that the characteristics of the data are well understood. Questions have been placed in Section 1 (self-report only) of the NCI-ACS on the assumption that they are too subjective to be answered validly by proxies. There is evidence to support this assumption for some, but not all, of these NCI-ACS questions. It is possible that certain questions in Section 1 could be answered validly by proxies. Directly testing all NCI-ACS interview items, by separately interviewing a sample of individuals with IDD who can self-report and proxies reporting on those same individuals and then comparing their responses, would provide an evidence-based method for classifying each question as self-report only or as acceptable for both self-report and proxy report. Questions with self-report:proxy agreement above a specified threshold would be deemed to indicate valid proxy reporting and could be moved to a section of the NCI-ACS interview where proxy responding is permitted. This approach would generate more complete data on these questions so that individuals would not be excluded because they are unable to self-report. As Emerson et al. (2013) proposed:

it would seem to be good practice for researchers who wish to use proxy reports in the place of self-reports to provide empirical evidence of agreement between the two among those sample members capable of self-reporting. In other words, the substitutability of proxy reporting for self-reporting should be demonstrated rather than asserted. (p. 338)

Interviewer Ratings of Understanding and Consistent Responding

At the end of each interview the interviewer rated whether (a) the Section 1 interview could be completed, (b) the person understood most of the questions, and (c) the person answered the questions consistently. We compared these ratings to responsiveness scores. It is axiomatic that people rated as not completing the Section 1 interview would answer fewer questions, so this finding is unsurprising, although it does strongly confirm the accuracy of the interviewers’ ratings. However, the vast magnitude of the difference between those rated as not completing Section 1 versus those rated as completing the interview (means of 1.74 vs. 18.29 of 19 questions answered, Table 3) underscores, yet again, the all-or-none pattern of responding.

Interviewer ratings of understanding and of consistent responding are different indicators that the person comprehended the questions, as shown by the very high agreement between the two ratings. It seems logical to expect that people who understood the questions would respond to more questions than individuals who lacked understanding. Consequently, our finding of significantly greater responsiveness by individuals rated more positively on these two items makes logical sense, and provides evidence for the validity of these ratings by interviewers.

Our results also point to the utility of these interviewer ratings in identifying problems with understanding and consistent responding among individuals who complete the interview. The vast majority (around 90%; Table 4) of participants rated as having completed the Section 1 interview were also rated as having understood the questions and as having replied consistently. However, the remaining 10% of those who finished Section 1 were rated by interviewers as having some degree of problem with understanding or consistent responding and were also found, as would be expected, to have responded to significantly fewer questions. In short, we have demonstrated empirically that interviewers can make accurate judgments about responsiveness and consistent responding. This provides an evidence base to justify reliance on interviewer ratings as a straightforward way of identifying whose self-report data should be used and whose should not.

Between-State Differences

Differences in responsiveness between states accounted for a significant 2.8% of variance, even though the state block (block 8) was entered last in the regression analysis (Table 5). State samples were randomly selected from among adult users of IDD services in each state, but we cannot rule out the possibility that between-state sampling differences may account for some of the variability in between-state responsiveness. However, we controlled for important personal characteristics and living arrangements in the regression analysis (Table 5), so this explanation is far less plausible. Rather, it seems that the nature of each state’s IDD service system and policies may have influenced responsiveness in, as yet, unknown ways. Significant between-state differences have been reported in other analyses of NCI-ACS data that have examined choice (Lakin et al., 2008a; Tichá et al., 2012), state per-service-user expenditures on IDD services (Lakin et al., 2008b), and percentage of service users with an ASD diagnosis (Hewitt et al., 2011).

Taken together, these findings suggest that future between-state comparisons could explore such differences by examining specific, measurable differences in state service provision, expenditure, and policy that may help account for variability among states. For example, Hewitt et al. (2011) found a significant relationship between state policies on eligibility and public funding for people with ASD to use IDD services and the percentage of service users with ASD in each state.

Limitations

The main strengths of this study relate to the large, randomly selected, multistate sample of adults with IDD and the use of robust, reliable interview questions (Smith & Ashbaugh, 2001). The limitations associated with data imputation and with ceasing the interview if the person did not respond to the first four questions have already been discussed. Another limitation is the use of yes/no questions, given that Sigelman et al. (1980, 1981) showed conclusively that responsiveness varies markedly between question types, with yes/no questions eliciting by far the highest rate of responses. Because of this issue, our study findings are most relevant to yes/no questions. Future research is needed to extend our findings to multiple-choice and open-ended items; due to the format of the NCI-ACS Section 1 questions, this was not possible in the current study.

Conclusions

Most, but not all, adults with IDD who receive long-term services and supports can answer yes/no questions about their daily life and well-being. Responsiveness (i.e., number of questions answered through self-report) varied in a predominantly all-or-none pattern. Variability in responsiveness was substantially accounted for by differences in level of IDD and verbal communication skills, indicating that people with more profound levels of IDD and/or more pronounced communication impairments continue to be disenfranchised in relation to expressing their views on their life and circumstances. There was consistent evidence supporting the validity of interviewers’ ratings of interviewees’ responsiveness, understanding of questions, and consistent responding.

Acknowledgments

Preparation of this paper was supported by Grant #H133G080029 from the National Institute on Disability and Rehabilitation, U.S. Department of Education (federal funds for this 3-year project total $599,998—99.5% of the total program costs, with 0% funded by nongovernmental sources).

References

Bradley, V. J., & Moseley, C. (2007). Perspectives: National Core Indicators: Ten years of collaborative performance measurement. Intellectual and Developmental Disabilities, 45(5), 354–358.

Chadsey-Rusch, J., DeStefano, L., O’Reilly, M., Gonzalez, P., & Collet-Klingenberg, L. (1992). Assessing the loneliness of workers with mental retardation. Mental Retardation, 30, 85–92.

Cummins, R. A. (2002). Proxy responding for subjective well-being: A review. International Review of Research in Mental Retardation, 25, 183–207.

Cusick, C., Brooks, C., & Whiteneck, G. (2001). The use of proxies in community integration research. Archives of Physical Medicine & Rehabilitation, 82, 1018–1024.

Emerson, E., Felce, D., & Stancliffe, R. J. (2013). Issues concerning self-report data and population-based data sets involving people with intellectual disabilities. Intellectual and Developmental Disabilities, 51(5), 333–348.

Finlay, W. M., & Lyons, E. (2002). Acquiescence in interviews with people who have mental retardation. Mental Retardation, 40, 14–29.

Fujiura, G. T. (2012). Self-reported health of people with intellectual disability. Intellectual and Developmental Disabilities, 50(4), 352–369.

Hewitt, A. S., Stancliffe, R. J., Johnson Sirek, A., Hall-Lande, J., Taub, S., Engler, J., & Moseley, C. (2011). Characteristics of adults with autism spectrum disorder who use adult developmental disability services: Results from 25 US states. Research in Autism Spectrum Disorders, 6(2), 741–751.

IBM SPSS Statistics for Windows (Version 22) [Computer software]. Armonk, NY: IBM Corp.

Lakin, K. C., Doljanac, R., Byun, S., Stancliffe, R. J., Taub, S., & Chiri, G. (2008a). Choice making among Medicaid Home and Community-Based Services (HCBS) and ICF/MR recipients in six states. American Journal on Mental Retardation, 113(5), 325–342.

Lakin, K. C., Doljanac, R., Byun, S., Stancliffe, R. J., Taub, S., & Chiri, G. (2008b). Factors associated with expenditures for Medicaid home and community based services (HCBS) and intermediate care facilities for persons with mental retardation (ICF/MR) services for persons with intellectual and developmental disabilities. Intellectual and Developmental Disabilities, 46(3), 200–214.

Moseley, C., Kleinert, H., Sheppard-Jones, K., & Hall, S. (2013). Using research evidence to inform public policy decisions. Intellectual and Developmental Disabilities, 51(5), 412–422.

National Association of State Directors of Developmental Disabilities Services (NASDDDS) and Human Services Research Institute. (2008). National Core Indicators Adult Consumer Survey 2008-2009. Alexandria, VA: NASDDDS.

Sigelman, C. K., Schoenrock, C. J., Spanhel, C. L., Hromas, S. G., Winer, J. L., Budd, E. C., & Martin, P. W. (1980). Surveying mentally retarded persons: Responsiveness and response validity in three samples. American Journal of Mental Deficiency, 84, 479–489.

Sigelman, C. K., Schoenrock, C. J., Winer, J. L., Spanhel, C. L., Hromas, S. G., Martin, P. W., & Bensberg, G. J. (1981). Issues in interviewing mentally retarded persons: An empirical study. In R. H. Bruininks, C. E. Meyers, B. B. Sigford, & K. C. Lakin (Eds.), Deinstitutionalization and community adjustment of mentally retarded people (pp. 114–129). Washington, DC: American Association on Mental Deficiency.

Smith, G., & Ashbaugh, J. (2001). National Core Indicators project: Phase II consumer survey technical report. Cambridge, MA: Human Services Research Institute. Retrieved from http://www.hsri.org

Stancliffe, R. J., Wilson, N. J., Bigby, C., Balandin, S., & Craig, D. (2014). Responsiveness to self-report questions about loneliness: A comparison of mainstream and intellectual disability-specific instruments. Journal of Intellectual Disability Research, 58(5), 399–405.

Tichá, R., Lakin, K. C., Larson, S., Stancliffe, R. J., Taub, S., Engler, J., & Moseley, C. (2012). Correlates of everyday choice and support-related choice for 8,892 randomly sampled adults with intellectual and developmental disabilities in 19 states. Intellectual and Developmental Disabilities, 50(6), 486–504.

Author notes

Roger J. Stancliffe, Centre for Disability Research and Policy, The University of Sydney, Australia; Renáta Tichá, Sheryl A. Larson, Amy S. Hewitt, and Derek Nord, Research and Training Center on Community Living, University of Minnesota.