Using principles of community-based participatory research we developed a new theory-based measure of health-related quality of life (HRQOL) for individuals with intellectual disability (ID). We recruited adults with ID (n = 129) to take part in interviews and review successive versions of HRQOL items. Critical input about content and understandability shaped the items, as did input from four focus groups of parents/caregivers (n = 16) and representative stakeholders from community-based agencies (n = 7). The resulting HRQOL measure, called the HRQOL-IDD, contains 42 items. The response format depicts a gradient of fluid-filled cups (“none” to “full”) to represent frequency of experience of each item on a 5-point scale.
Improving individual health-related quality of life (HRQOL) is integral to advancing the nation's health, according to Healthy People 2020 (U.S. Department of Health and Human Services, 2012; WHO, 2001). Indicators of HRQOL include self-rated physical and mental health, overall well-being, and participation in society (U.S. Department of Health and Human Services, 2012; WHO, 2001). Based on two decades of observation, concept mapping, and empirical measurement nationally and internationally, Schalock and colleagues proposed an enduring conceptual understanding of the eight core domains of HRQOL for people with intellectual disabilities (ID): emotional well-being, interpersonal relations, material well-being, personal development, physical well-being, self-determination, social inclusion, and rights (e.g., Schalock, Bonham, & Verdugo, 2008; Verdugo, Navas, Gomez, & Schalock, 2012). Because health is not just the absence of disease, HRQOL and quality of life (QOL) measures often overlap in both definition and measurement (Andresen & Meyers, 2000). Consequently, a HRQOL measure will typically include an individual's perceptions of his or her own health (e.g., physical, psychological, and social functioning), level of impairment, symptoms, and disability, as well as role functions and general well-being (Andresen & Meyers, 2000).
HRQOL is increasingly used as an outcome in clinical trials, effectiveness research, and research on quality of care (Wilson & Cleary, 1995) and is a critical outcome indicator in evaluating health care needs of people with ID (Fujiura, 2012; Parmenter, 2001). It is an especially relevant indicator of the process of health care because it is multifaceted and one of the strongest predictor variables of mortality and morbidity (DeSalvo, Bloser, Reynolds, He, & Muntner, 2006). According to the National Institutes of Health, a valid and reliable HRQOL measure is “urgently needed” for people with intellectual disabilities. “There is an urgent need to identify outcome measures that are adequately sensitive, specific, reliable and valid to demonstrate treatment benefit. . . . [Particularly needed are] health-related quality-of-life assessment tools for children and adults with intellectual and developmental disabilities” (NIH, 2010).
Unfortunately, many measures of HRQOL used with people with ID are insufficient or inadequate. Some are borrowed from general community quality-of-life measures and lack validation for this population. Even among instruments created for people with ID, common measurement limitations include problematic measurement design, poorly structured and sequenced questions, inappropriate response formats, and distortions related to acquiescence and social desirability (Cardell, Clark, & Pett, 2015). Other measures prove faulty when used in community-based settings due to the heterogeneity of abilities of respondents (Finlay & Lyons, 2001; Fujiura, 2012). For example, individuals with ID at less severe levels may be able to use degrees of magnitude to distinguish points on a Likert scale; yet these types of judgments may be more daunting for individuals with more severe limitations. In principle, health and social service providers espouse the importance of self-reported subjective experiences like quality of life, yet proxy respondents such as family members and staff are often asked to answer subjective questions about HRQOL for practical reasons. Even when well-intentioned, proxy reports about subjective experiences yield different results than self-reports (Claes et al., 2012). Whenever people can respond for themselves, quality-of-life researchers recommend self-report as a statistically reliable best practice aligned with values of self-determination and empowerment (Claes et al., 2012; Fujiura, 2012).
The purpose of our mixed-method, community-based participatory research was to generate a self-reported subjective health-related quality-of-life measure for adults with intellectual disability. Like the Patient-Reported Outcome Measurement Information System (PROMIS) measures developed by the National Institutes of Health (Cella et al., 2007; DeWalt, Rothrock, Yount, & Stone, 2007), we relied on a “binning and winnowing” process to collect possible items and then modify the best non-redundant items using focus groups and cognitive interviews. In using these qualitative approaches to item development, we embraced a guiding principle of self-determination of adults with ID in the research process. The complexity of participatory research and the abilities of the intended respondents required enhancements to traditional instrument development approaches. The HRQOL measure we aimed to develop was to be completed by adults with ID at less severe levels (not by proxy respondents). Furthermore, the measure was to be locally suitable for our community's culture and prepared for subsequent large-scale reliability and validity testing. To this end, the measure needed to be appropriate for community-based administration with minimal supervision and limited respondent and agency burden.
Developing the HRQOL-IDD Measure: A Systematic Approach
For more than a decade, the research team developed community-based research collaborations to implement a series of mutually agreeable descriptive and intervention studies, from a randomized trial to reduce overweight and increase healthy lifestyle behaviors (Pett et al., 2013) to the systematic development of an HRQOL measure. Together we established processes for our work, a shared purpose, and mutually agreed-upon goals for each project.
Community-based and participatory research (CBPR) approaches resonate with an ethic of social justice, self-determination, and empowerment in disability research (Drum et al., 2009; Israel, Eng, & Schulz, 2012) and the sentiment, “nothing about us without us” (Clark & Ventres, 2016, p. 3). Even so, participatory approaches acknowledge distinct communities as holding intersecting identities, inherent strengths, and collective resources. Participatory action research (PAR) and CBPR approaches are variable, with a spectrum of participation and sometimes overlapping or competing communities involved. The balance of decision-making influence may wax and wane over the course of collaboration as the engagement of different partners at different points in the research endeavor draws on a diversity of strengths. In this study of HRQOL, academic researchers contributed scientific expertise for qualitative refinement of items and quantitative considerations of scale performance. We took the lead in organizing the team and pacing the work. Participants with ID were expert in the lived experience of health-related quality of life and how best to ask about it. Their participation was critical. Through sequential participatory rounds of review, their opinions and experiences drove decision-making about health-related quality-of-life items to be included, adapted, or discarded. Parents/caregivers added context about the family and community perspective of item acceptability and pertinence. Stakeholders from community agencies offered an overview of their clients' needs and abilities, as well as agency-level considerations (such as length and administrative burden) for on-site administration of a new HRQOL measure. Stakeholders represented the county-level parks and recreation adaptive program, a university-based specialty clinic serving people with intellectual disability, an independent living center, and a community-based advocacy agency.
Agencies would be administering the completed HRQOL to their clients, so their endorsement was critical for long-term uptake of the measure.
Capitalizing on the strengths of the research team, community partner agencies, adults with ID, and parents/caregivers, we followed an instrument development process described below.
1. Selecting Community Partners
We invited four community agencies to join us, based on positive past collaborations and the extent of their community outreach and relationships with adults with ID. All agreed to collaborate on the HRQOL study, from question formulation to application of results in community settings. Each provided a letter of support, submitted with the grant application, outlining how their agency would recruit and involve people with ID to the research study and host their participation at their site.
2. Establishing a Collaborative Research Process
As soon as the study was funded, we hosted our first collaborative workshop with community agency partners to review the research purpose (adopting or generating a HRQOL measure for community use) and assure their buy-in. We also reviewed principles of community-based participatory research. This initial meeting uncovered some new concerns about the HRQOL measurement enterprise. One stakeholder from a partner agency was on board with the project yet stated that a survey of 3 items, maximum, would be feasible to implement in a community setting. The research team, in contrast, suggested that a longer initial measure was essential to test items and arrive at a final, more parsimonious measure. Another stakeholder said the agency would be “happy to help the researchers,” indicating a passive role mismatched with the mutuality expected in participatory research. These comments prompted a stakeholder needs assessment, the first step in our instrument development process, to determine if the proposed topic was sufficiently aligned with community priorities, and if not, how we might adapt to find a mutually agreeable interest area.
3. Stakeholder Needs Assessment and Alliance Formation
The lead investigator made personal visits to each agency to conduct interviews with leaders to assess whether we shared enough resolve to pursue a participatory, community-based project and if the topic was pertinent. Each interview discussed what mattered most to their agency and the people they served. Fieldnotes from four interviews (n = 6 agency leaders) were content analyzed using a qualitative descriptive approach (Sandelowski, 2000) for quality-of-life indicators valued by the agencies. The resulting priorities for quality of life for people with ID were having friends, getting around town, making choices (and even mistakes), shaping public policy allowing for opportunity and equality, health, hygiene, self-care, enjoyment of life, physical activity, and material possessions. This list closely matched the theoretical domains of quality of life identified by Schalock and colleagues (Schalock et al., 2008) (Table 1).
We reconvened as a group, and the researchers presented agency values and mission statements along with content analysis of their priorities adjacent to the domains of quality of life. Through discussion, enthusiasm for a HRQOL instrument development project increased as their priorities mapped to the shared research purpose. They agreed, as a group, to continue meeting together to develop a measure of HRQOL they would find useful. Our meeting segued into a discussion about the characteristics of measures suited to the agencies and the people they serve.
4. Critical Review of HRQOL Self-Report Measures (179 Items Identified)
The research team started by evaluating seven existing HRQOL measures for people with ID to determine if an existing measure would be locally applicable and acceptable (Table 2). We reviewed each measure using criteria suggested by stakeholders and the research team, such as ease of self-administration, acceptable reliability and validity, and a logical, predictable structure and sequence of questions written at a 3rd–5th grade reading level. Of the available measures, none were being used by our community partners, and none were wholly acceptable for a variety of reasons. Three were not based on a conceptual framework, one used a proxy respondent exclusively, three had poor reliability, and others required extensive training to administer. Internationally developed instruments were carefully reviewed for items containing language specific to a particular culture or geographic region that required adaptation (or elimination) for the measure's intended U.S.-based audience.
Although no single existing measure of HRQOL met all the criteria for acceptability and utility within our context, we reviewed each measure for suitable items. The instrument development approach followed the standards for item identification suggested by PROMIS. The research team culled from these measures 179 items congruent with the eight theoretical domains of quality of life for people with ID (Schalock et al., 2008).
5. Stakeholder Digital Rating of 179 Items From the Published Measures for Fit by Domain and Utility (Resulting in 45 Reformulated Items)
In this step, seven stakeholders from community partner agencies received an individualized electronic list of the 179 items collected from existing HRQOL measures. Each stakeholder rated individual items using a stoplight system of “green,” “yellow,” or “red” to indicate acceptable, cautionary, or unacceptable utility based on their experience with people with ID. A research team member compiled ratings from the stakeholders for ease of interpretation by the research team. Most items fell in the “yellow” zone, indicating they needed revision. Next, each item was evaluated for alignment with a primary domain (e.g., physical well-being or material well-being) and secondary domain from the HRQOL framework. The research team selected usable or “green” items aligned with each domain, eliminated redundant items, and made minor adjustments in wording to increase understandability of items marked “yellow” or “red.” In a meeting with stakeholders about the disposition of items, a group discussion ensued with additional suggestions for improving (or completely eliminating) problematic items in the “yellow” and “red” categories. The result was 45 nonredundant items, organized by domain, and comprising the first version of a HRQOL measure for people with ID.
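The stoplight compilation above can be sketched as a simple aggregation. The article does not describe a formal tally rule, so the triage thresholds below are illustrative assumptions, and the first item text is hypothetical; this is a sketch of the kind of bookkeeping involved, not the team's actual procedure.

```python
# Illustrative sketch of compiling stakeholder stoplight ratings per item.
# The triage thresholds are our assumptions; the article reports only that
# ratings were compiled and that most items fell in the "yellow" zone.
from collections import Counter

def triage(item_ratings):
    """Map one item's list of "green"/"yellow"/"red" ratings to a disposition."""
    counts = Counter(item_ratings)
    if counts["yellow"] == 0 and counts["red"] == 0:
        return "keep"      # unanimously green: usable as written
    if counts["red"] >= len(item_ratings) / 2:
        return "discuss"   # mostly red: candidate for elimination
    return "revise"        # mixed ratings: reword for understandability

# Ratings from seven stakeholders for two items (values invented).
ratings = {
    "Do you have friends you can count on?":  # hypothetical item text
        ["green"] * 7,
    "Are you allowed to have a girlfriend or boyfriend?":
        ["yellow", "yellow", "red", "yellow", "green", "yellow", "red"],
}
dispositions = {item: triage(r) for item, r in ratings.items()}
```

A compiled table like `dispositions` supports the group discussion of item fates described above without prescribing the outcome.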
6. Individual Cognitive Interviews With People With ID (n = 17) and Four Parent/Caregiver Focus Groups to Review Items (45 Items)
Using the 45-item version of the HRQOL instrument, 17 adults with ID took part in individual cognitive interviews conducted by two researchers using the “think aloud” technique (Willis, 2004). After reading each item (or having it read for them), the participant responded to two questions: What is this question asking you? and How would you answer this question? Through the use of audio recordings and fieldnotes, the team compiled item-by-item feedback from the interviews, paying specific attention to items that were confusing, wording that was difficult to understand, and items that elicited unrelated answers. Simultaneous with participant interviews, four focus groups of parents/caregivers (n = 16) previewed the same 45 items and discussed if each would be meaningful and useful in assessing HRQOL for their family member or client.
Members of the research team compared the cognitive interview data with the parent/caregiver responses for each item, sparking valuable discussions about the usefulness and community responsiveness of each item. Items generating discordant judgments of importance and relevance among adults with ID, community agencies, and the parent/caregiver groups were forwarded for research team review and discussion. Three examples of challenges in item refinement illustrate how we negotiated best-fit solutions for item construction that respected community values, stakeholder capacity, and self-determination of people with ID.
Are you allowed to have a girlfriend or boyfriend?
The content of this item was viewed as perilous by parents. In efforts to protect their adult child with ID, some parents had conveyed that boyfriends/girlfriends were risky and not allowed, and they knew of other parents who also actively discouraged or forbade such relationships. Posing this question was feared, as it could raise the consciousness of adults with ID to question the authority of their parents and invite reconsideration of the possibility of such relationships. Parents asked us to strike it from the measure.
Taking the opposite position, stakeholders from community agencies pressed the research team in a prior conversation to eliminate the word “allowed,” claiming it dismissed the rights of people to pursue adult relationships and undermined their self-determination. Furthermore, stakeholders advocated for the addition of the words or spouse to acknowledge the right of people with ID to marry. They strongly supported retaining the revised item. In cognitive interviews, adults with ID responded to this item with animation, smiling at interviewers and telling stories of past relationships or expectations for future relationships. Some adults also reported the reaction of their parents or guardians to such relationships. Stories of both successful relationships and more problematic relationships surfaced, including stories of dismay and sadness when adults were not able to start or sustain intimate relationships due to lack of an interested partner or parental opposition. We arrived at a mutually agreeable decision to retain a revised version of the item: Can you have a boyfriend, girlfriend, or spouse if you want?
How often do you participate in group activities?
Cognitive interview responses to this item highlighted concerns about the purpose of asking each question and how responses should be interpreted. The item was likely intended to index social inclusion, with the assumption that more is better. However, it became apparent that people living in a group home experienced continuous inclusion in group activities. The expectation of participation in constant group activities was detrimental to quality of life if one did not choose the activity or enjoy the group. Similarly, if the activities were boring or silly it was detrimental to their experience of quality of life.
Neither the parent/caregiver focus groups nor the agency stakeholders identified this question about group participation as problematic. Even so, the compelling concerns of adults with ID persuaded the research team to add a follow-up question to this (and other) items. With agreement from all involved groups we retained the item and in later phases added a follow-up question, Would you like to change that? With this revision, respondents could assess how often they participated in group activities and decide if it was too little, just right, or too much.
Do you feel life is worth living?
Parents were concerned about this question, and community stakeholders agreed. Asking if life is worth living could stir memories of past suicidal inclinations or uncover current preoccupations with suicide. In cognitive interviews, people with ID discussed this question. A few admitted to feeling “life wasn't worth living” in the past, and told how they had worked through these feelings with mental health providers, family, and caregivers. Interviews and fieldnotes recorded their sadness in revisiting that time in their lives.
Although the research team considered this item to be clinically useful, it was eliminated due to concerns related to feasibility and liability. Some community agencies did not have enough staff with appropriate mental health training to triage individuals who might screen positive to this item. Furthermore, the agencies uniformly agreed they would not use the HRQOL measure if the item were retained due to concern about liability.
Following the cognitive interviews, parent/caregiver focus groups, and discussion with stakeholders, the research team met to integrate feedback and arrive at the best mutually agreeable version of each item for presentation back to the communities of interest.
7. Workshop to Review Items (38 Items Selected) and Determine a Response Format
We presented the item analysis and resulting revision at a workshop with community stakeholders to elicit their input. By the conclusion of the workshop we had reduced the item count to 38.
During the workshop we also deliberated on the best way to collect item responses. Traditional methods for gauging an individual's response to a survey question assume a baseline level of functional literacy and neurotypical reasoning abilities. Questions often ask respondents to recall events or feelings, quantify those (“how many times in the last 30 days”), and assign a gradient level of seriousness, distress, or importance to the experience. The team grappled with how to best portray a response format for items to acknowledge stakeholders' early misgivings about level of difficulty and respect their recommendations for appropriate literacy level, concrete questions with examples, reduced response options, and pictorial representation of responses.
A literature review of the available response formats used in HRQOL measures designed for people with ID suggested two-, three-, and five-option Likert-type formats made more accessible with pictorial representations: steps, gestural steps, and buckets (Cummins, 1997b). The concept of steps was a concrete way to represent amount, yet the drawings of steps we encountered were unrealistic and lacked proportionality. Gestural steps included a figure on a set of increasing steps portraying various types of body language to reflect different levels of magnitude.
Overall, the team found these illustrations to be poorly rendered and ambiguous. For example, when the figure was perched on top of the shortest step (meant to indicate “not important”) the figure was drawn with shoulders shrugged. Would this be interpreted as “don't know” or “not important”? The increasing degree of importance indicated by increasing steps also had an overlay of affect. The “not important” step was paired with a figure who appeared unhappy, and the “very important” step was paired with a figure who appeared happiest.
The most appropriate and most easily understood response format appeared to be the representation of buckets with incrementally increasing levels of fluid representing increasing levels of frequency. After reviewing all response format options, agency stakeholders endorsed the buckets idea to quantify responses, with the suggestion we modify buckets to cups as a more familiar container in everyday use. We retained a graphic designer to render visual images of fluid-filled cups congruent with five response options (“never” to “always”). These were rendered in two editions: cups-with-straws and cups-without-straws. Both editions contained numbers and word anchors corresponding with the fluid level (see Figure 1). The two proposed response formats were prepared for consideration by adults with ID for their recommendations.
8. Pilot Testing With Adults With ID: Acquiescence Prescreening, HRQOL Items, Cups-With-Straws and Cups-Without-Straws (n = 9 Adults) (38 Items)
We obtained consent from nine adults at a community partner agency to take part in a pilot test of the measure. To preface the measure we added a pre-participation acquiescence scale. Acquiescent or “yes” responses may be logged when questions are too grammatically complex or require complex judgments, or when a respondent has a desire to please the interviewer (Finlay & Lyons, 2002). Other factors may also influence acquiescence but are more difficult to manage, such as gender of respondent (females demonstrate more acquiescence) and gender of the respondent in interaction with gender of interviewer (female respondents with male interviewers are more often yea-sayers) (Matikka & Vesala, 1997). Prior to asking quality-of-life questions, we planned to assess if prospective participants could reliably employ both “yes” and “no” responses in a survey situation. The acquiescence scale was borrowed from the Comprehensive Quality of Life Scale (Cummins, 1997a) and asked four questions, two of which required a “no” answer. The negative questions were “Do you make all your own clothes and shoes?” and “Do you choose who lives next door?” In later administrations of the acquiescence screening preliminary to HRQOL administration, approximately 10% of respondents (10 of 103 adults with ID) were unable to answer the acquiescence questions correctly; their HRQOL responses were excluded from preliminary psychometric analyses.
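The prescreen can be sketched as a simple filter. The two catch questions are quoted from the scale described above; the scoring rule, excluding anyone who fails to give the expected “no,” is our assumption about how such a screen would be applied, not a documented procedure.

```python
# Sketch of the acquiescence prescreen. Question wording is quoted from the
# Comprehensive Quality of Life Scale items above; the exclusion rule is an
# assumption for illustration.
CATCH_QUESTIONS = {
    "Do you make all your own clothes and shoes?": "no",
    "Do you choose who lives next door?": "no",
}

def passes_prescreen(answers):
    """True only if every catch question receives the expected 'no'."""
    return all(answers.get(q) == expected
               for q, expected in CATCH_QUESTIONS.items())

respondents = [
    {"Do you make all your own clothes and shoes?": "no",
     "Do you choose who lives next door?": "no"},    # uses "no" reliably
    {"Do you make all your own clothes and shoes?": "yes",
     "Do you choose who lives next door?": "yes"},   # yea-sayer; excluded
]
eligible = [r for r in respondents if passes_prescreen(r)]
```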
We also queried respondents regarding their preference for the cups-with-straws or the cups-without-straws editions of the graphic response format. Three respondents chose the cups with straws, six chose cups without straws. Most said it “didn't matter” which was used in the final format. One participant said “the straws I like. Because I can imagine whatever I want in there. Water, Coke, milk.” Although a fun feature, we agreed as a team that the straws were unnecessary for the quantification of frequency. It could even prove confusing to have a straw in a cup if the respondent imagined her favorite drink and viewed the completely empty cup as indicating she had finished her drink and that was good. We proceeded with the more straightforward cups without straws.
9. Face Validity Assessment by Three Content Experts and Reading-Level Assessment
Prior to proceeding to larger-scale feasibility testing, we sought confirmation of face validity of the items. We approached three experts with complementary academic and clinical backgrounds and over seven decades of combined engagement in research and practice with people with ID: a neuroscientist/physician, an applied researcher with an education psychology background, and a clinical psychologist. Their task was to assess face validity of the HRQOL-IDD content and items and comment on anticipated usability in both community and research settings. All three experts endorsed the items and response format with minor suggestions. Using this input, the research team and community partners reviewed the scale and made final adjustments and minor edits to the items. Reading level was assessed at 3.0 grade level using the Flesch-Kincaid assessment, meeting expectations for readability.
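The Flesch-Kincaid grade level is computed from average sentence length and average syllables per word. A minimal sketch follows; the vowel-group syllable counter is a crude heuristic standing in for the dictionary-based counting that readability software actually uses, and the sample sentence is the revised item quoted earlier.

```python
import re

def count_syllables(word):
    """Crude vowel-group heuristic; real readability tools use word lists."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1  # treat a final 'e' as silent
    return max(n, 1)

def flesch_kincaid_grade(text):
    """Grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)

sample = "Can you have a boyfriend, girlfriend, or spouse if you want?"
grade = flesch_kincaid_grade(sample)  # short words, one sentence: low grade
```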
10. Community Feasibility Testing of 42-Item Version (n = 103, Representing Clients at All Agencies and in the Community)
Over the next year, we continued to meet with our communities of interest and refine the items. Adding four items strengthened the scale in domains with previously fewer items. To date, 50 males and 53 females (n = 103) completed the HRQOL paper-pencil measure in a feasibility testing phase (mean age 37.9 years [SD = 13.9, range: 18–75]). Of those who completed the measure, 93.4% were white and 9.0% were of Hispanic ethnicity. Sixty-nine (67.0%) of the respondents were employed, mostly part-time (n = 55). Regarding functional ability, 2.3% had ID at more severe levels, with a larger percentage in the “moderate” ID range than reported for the U.S. population (our sample 65.9%, 31.8%, and 2.3% vs. national data 85%, 10%, and 3%–4%, respectively, for less severe, moderate, and more severe designations) (King, Toth, Hodapp, & Dykens, 2009). Average completion time was 24.5 minutes (SD = 15.5; range: 8–90 min.). The wide range of completion times depended on the respondent's level of functional literacy and whether the questions needed to be read to them.
Completion time did not depend on functional ability (p = .40) or need for assistance (p = .84). Several people carefully filled in the radio buttons to log their answers, a problem that increased completion time and will be eliminated in future, digitized formats. Forty-nine of the adults were recruited directly from the clientele at partner agencies and completed two versions of the HRQOL-IDD: One organized with random-order items and one organized by domain.
Feasibility assessment in the agency setting included agency preference for the measure by domain versus random item order, completeness of the measure, attrition, time to completion, and characterization of respondents and their responses. Agency stakeholders expressed a preference for the domain-organized version, which allowed at-a-glance review of responses by their clients to focus attention on areas of lower quality of life.
Preliminary examination of the items and their fit within the subscales confirmed that they mirrored the conceptual framework developed by Schalock et al. (2008) (Table 3). Descriptive statistics and histograms indicated that, as IDD research has previously reported, the respondents' self-reported HRQOL was positive, especially in the domains of self-determination, rights, and material well-being. Five of eight subscales (Table 3) were significantly negatively skewed, indicating more positive HRQOL among the respondents. Scores for all 42 items were also negatively skewed, indicating a preference for the positive end of the 5-point scale (Never: 9.0%, Rarely: 7.0%, Sometimes: 20.7%, Most of the time: 18.1%, Always: 45.3%). Cronbach's alphas ranged from .40 (Rights) to .81 (Material well-being) (X̄ = .65). Three subscales, personal development (k = 4), social inclusion (k = 4), and rights (k = 4), had fewer items and lower alphas than the other five subscales.
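The subscale reliabilities reported above are Cronbach's alphas, computable from the item variances and the variance of the summed score. A minimal sketch follows; the 5-point responses below are invented for illustration and are not the study's data.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).
# The response data below are invented for illustration only.
import statistics

def cronbach_alpha(item_scores):
    """item_scores: k lists, one per item, each scored over the same respondents."""
    k = len(item_scores)
    item_var_sum = sum(statistics.pvariance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent sums
    return (k / (k - 1)) * (1 - item_var_sum / statistics.pvariance(totals))

# Hypothetical 4-item subscale, six respondents, 1 = "never" ... 5 = "always".
subscale = [
    [5, 4, 5, 3, 5, 4],
    [5, 5, 4, 3, 5, 4],
    [4, 4, 5, 2, 5, 3],
    [5, 4, 4, 3, 4, 4],
]
alpha = cronbach_alpha(subscale)  # items covary strongly, so alpha is high
```

Short subscales (k = 4) constrain alpha, which is consistent with the lower values observed for personal development, social inclusion, and rights.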
Our challenge was to employ scientifically robust research methods in a participatory and collaborative design to meet the needs of adults with ID and overcome measurement barriers. Ongoing collaboration from three communities of interest—people with ID, parents/caregivers, and stakeholders representing community agencies—resulted in a piloted self-report, paper-pencil, subjective measure of health-related quality of life, the HRQOL-IDD. The measure reflects the preferences of adults with ID in item content and response format. Preliminary analyses of the scale suggest that additional refinement and assessment of psychometric properties is warranted. We anticipate psychometric testing of the instrument with a large, diverse, national sample of persons with ID to provide an adequate test of item and scale reliability. Establishing concurrent and discriminant validity with other HRQOL measures and a depression measure would also be useful next steps.
Stakeholder feedback suggests a digitized format would be beneficial to expand usability. A digitized version of the HRQOL measure with touch-screen technology and an audio option would increase accessibility to individuals with less than 3rd grade reading ability and those with fine motor limitations precluding paper-pencil completion of the responses. Translating the instrument may broaden applicability to people who speak languages other than English. Cross-cultural measurement of quality of life is heavily context dependent, and literal translations would only partially address validity. Ethnographically revisiting the conceptual and contextual domains that shape the experience of quality of life is warranted at each cross-cultural adaptation of a HRQOL measure (Warren & Manderson, 2013).
Future uses for a robust HRQOL measure include population-specific outcome assessment for new therapeutics and person-centered planning with individuals. The National Institutes of Health promotes the development of measures like HRQOL for specific populations (like individuals with Down syndrome) so that emerging pharmacotherapies and future genomic interventions can be evaluated for their effects on quality of life as well as functional outcomes (NIH, 2010). In addition to using HRQOL as a therapeutic evaluation metric, a HRQOL measure can provide a baseline understanding of an individual's quality of life in each domain for the purpose of person-centered planning and individualized supports (Verdugo, Navas, Gómez, & Schalock, 2012). Locally, we anticipate community partners will employ the new measure to assess quality of life over time, both before and after participation in community-based programs that address HRQOL domains, because their services map directly onto HRQOL domains. Change-over-time identified through serial HRQOL assessments would provide feedback to medical providers, social work and recreation providers, parents, caregivers, and other staff about ongoing areas of satisfaction and need, and provide a platform for collaborative decision-making (Schalock, 2004). Demonstrating improvement in HRQOL over time, agency partners could even use the results to highlight successful programs in their annual reports or grant applications for future funding.
To achieve sustainability and social validity through a participatory research design, the process of developing the HRQOL-IDD relied on a mature academic-practice collaborative with a decade of successful interaction. Our collaborative used an organizational quality improvement process with end-user participation, a strong approach deemed highly protective against threats to social validity (Seekins & White, 2013). Like most participatory projects, this one existed on a continuum of participation with some caveats and limitations imposed by the time frame, funding, organizational investment, diversity of opinions among the different communities of participants, and expertise of each of the academic and community partners.
Issues of power and participation when working with people with ID are both a point of discussion and a matter of philosophical import. We were clear that each of the communities that took part in this study contributed expertise. People with ID have been disregarded, silenced, or marginalized in research yet may be eager to take part (Bigby & Frawley, 2015; Valade, 2004). The explicit participatory framework used in this study provided an ethical foundation as well as processual validity to instrument development. We recognize that tension between academic and community partners and within the community partner constellations (people with ID, parents/caregivers, and agency stakeholders) both challenged our linear timeline and deepened the emancipatory research principles we embraced.
A limitation of this study is the time lag commonly noted in translation of research to practice. Participatory construction of locally appropriate measures is a time-intensive and iterative process, and adoption by community agencies in a sustainable manner has not yet been accomplished. Making the measure both practical and sustainable is an ongoing challenge, and a concern common to participatory research approaches and disability research (Israel et al., 2006; Mayan & Daum, 2016; McDonald & Stack, 2016; Seekins & White, 2013). Many models of collaborative, engaged research exist, and our efforts represent one approach among them.
Using participatory methods was an incremental effort to address health disparities and quality of life for people with ID, not a definitive solution. People with ID have a “right to accessible, appropriate, evidence-based services that enable them to achieve personal goals and to enjoy a quality of life equal to that of people who do not have a disability” (Townsend-White, Pham, & Vassos, 2012, p. 271). Those who develop new programs, services, surgeries, medications, and policies are all expected to be accountable for reporting effectiveness and effect on individuals, including whether or not interventions improve quality of life.
The health and wellness needs of adults with disabilities represent a critical area of health service deficiency (Drum et al., 2009; Krahn & Drum, 2007). Providers, researchers, and policymakers need to develop more effective health services that reflect the principles of empowerment and self-determination. To facilitate this process, subjective health measures responsive to context are needed to accurately reflect the lived experiences of people with disabilities, including those with ID. To advance our science and improve health-related quality of life for people with ID, our research team developed partnerships with community partners in clinical care, recreation, and advocacy centers. Through these partnerships we engaged adults with ID to help shape a measure of health-related quality of life ready for additional psychometric testing and revision with a diverse national sample. The steps we employed in this research process may be instructive for other research and community-based teams seeking to measure similar constructs.
This article is based on a Presentation of Distinction awarded by the Council for the Advancement of Nursing Science, 2014, and presented at the State of the Science Congress on Nursing Research with the title Complexity in instrument development: Designing a health-related quality of life measure for people with intellectual disabilities.
Funded by a University of Utah Community-Based Research Grant, and the University of Utah College of Nursing RITe HERE Research Innovation Team.
Thanks are due to the University of Utah nursing and occupational therapy students who took part in data collection and preparation of research materials: Allie Yost Majors, Eric Smith, Heather Arias, Cindy Thomas, and Hillary Palmer.
We acknowledge the active engagement of the community partners who made this work possible: Salt Lake County Parks & Recreation Adaptive Programs, Utah Independent Living Center, Utah Association for Intellectual Disabilities, and Neurobehavior HOME University of Utah Hospitals and Clinics. Dr. Diana Brixner, College of Pharmacy, University of Utah, facilitated the face validity review process, and we thank her for her generous assistance.
We express appreciation to Christine Bigby and the Living with Disability Research Centre at LaTrobe University for the opportunity to prepare the article as part of a Visiting Professorship awarded to Lauren Clark.