Health professions education researchers, including those who study graduate medical education (GME), have a long-standing love affair with surveys. Evidence of this fondness can be found by reviewing recent articles published in the Journal of Graduate Medical Education (JGME): by our count, 56% of Original Research and Brief Report articles published in 2021 used a survey. This large proportion is not surprising, considering the constraints that many GME scholars face, including limitations of time, money, and methodological expertise. Consequently, surveys are often the most accessible research method for GME investigators. In addition, surveys are commonly used by GME educators for trainee assessment and program evaluation. Finally, surveys are quite adaptable and can be an efficient way to assess hard-to-measure psychological constructs like beliefs, values, attitudes, perceptions, and opinions.1
Notwithstanding their widespread use and methodological flexibility, surveys come with several inherent weaknesses. One weakness, supported by decades of empirical evidence in fields like public opinion polling, sociology, and psychology, is that low levels of respondent motivation can lead to poor-quality data.2 In GME, the problem may be even more acute: resident physicians face many competing demands on their time, including clinical and educational responsibilities, as well as life beyond work. These and other constraints make prioritizing surveys difficult, regardless of the merit of any particular GME study or evaluation effort.
With this landscape in mind, we focus on the issue of respondent motivation in this Editorial. To address motivation, we first discuss cognition and highlight what participants typically consider when completing a survey. Next, we describe several response behaviors that can occur when motivation is low and that result in low-quality survey data. We conclude with design and implementation strategies that can help researchers optimize respondent motivation and ultimately collect more precise, accurate, and interpretable survey data.
Survey Response Process
To understand respondent motivation, it is helpful to first examine the psychology of survey response. A classic framework used to describe the cognitive work of taking a survey is Tourangeau and colleagues' response process model (see Figure).3 It proposes that respondents move through 4 cognitive processes when taking a survey. First, they need to comprehend the survey item and interpret the meaning of the words on the page (in a self-administered survey). Next, they need to retrieve from their long-term memory the relevant information needed to respond to the item. That information could include specific dates for activities in the past, or an attitude or opinion about a topic. Generally, something must be retrieved from memory. Next, respondents need to integrate that information into a judgment and, in some cases, make an estimation. For example, a respondent asked to report how often they gave blood last year might not remember all instances and therefore would need to estimate the number based on how often blood drives are held. If blood drives are conducted quarterly, then the respondent might estimate “4” as the number of times they gave blood last year. Finally, once respondents have an answer in mind, they must report that answer on the survey and adapt their response based on the options provided. In the example above, if the response options for the frequency of blood donations are presented as “sometimes” or “often,” then a respondent would need to convert their answer “4” into what they believe is the most appropriate response category.
It is important to note that respondents may jump around, and even skip steps, while working through the 4 cognitive processes. For instance, a person asked to report how often they saw a physician last year might begin retrieving that information from memory but wonder if going to the physical therapist should be counted. They might then jump back to the comprehension step and reread the question to look for clues. The respondent might then jump forward to the response step to look for other clues about what a reasonable number of physician visits might be, based on the response options provided. In this way, the survey response process is nonlinear; respondents hop around and use contextual clues provided by the survey to help navigate and respond to individual survey questions.
Another important point about these 4 cognitive steps is that difficulties encountered at any point along the process can produce errors. For example, respondents might misunderstand a question because of confusing wording or atypical visual layout, not be able to retrieve the relevant information because they have forgotten it, or not be able to make an accurate judgment because they lack the information needed to give an informed answer. In each case, response errors may occur, and the answers provided are more likely to be imprecise, inaccurate, and difficult for survey researchers to interpret. Furthermore, respondents can and often do take cognitive shortcuts while working through a survey. That is, at any step in the response process—comprehension, retrieval, judgment and estimation, or reporting—respondents may not optimize the survey response process. Instead, they may choose to conserve their mental energy and satisfice.4
Motivation and Satisficing
Concerns about respondent motivation have long been described in the survey design literature. More than 2 decades ago, Krosnick4 noted that "a great deal of cognitive work is required to generate an optimal answer to even a single question." As such, high-quality answers tend to come from respondents who are motivated to expend that energy and optimize the survey response process. Respondents are motivated by numerous factors, including self-expression, intellectual challenge, and a desire to be helpful. In GME, residents report being motivated to participate in surveys out of a sense of duty and professionalism.5 On the other hand, personal experience and decades of empirical research tell us that respondents are often unmotivated to provide high-quality answers to survey questions. Krosnick4 calls this common behavior satisficing.
Satisficing is the degree to which respondents "compromise their standards and expend less energy."4 That is, rather than devote the effort needed to generate optimal answers, respondents often give "good enough" answers by, for example, thinking less carefully about a question's meaning, searching their memory less thoroughly, integrating retrieved information carelessly, and/or selecting a response imprecisely. Thus, instead of carefully working through the 4 cognitive steps of the response process to generate the best, most precise answers (ie, optimizing the process), respondents who satisfice conserve their mental energy and settle for merely satisfactory answers.4 Although empirical evidence is limited,5 we suspect that satisficing may be particularly prevalent among GME trainees given their unique time and context constraints.
In practice, satisficing manifests in a number of response behaviors that lead to low-quality survey data, including (1) rushing through a survey; (2) selecting the first reasonable answer; (3) agreeing with all statements presented on the survey; (4) selecting the same option repeatedly, in a straight line (so-called straightlining); (5) selecting "don't know" or "not applicable" without actually thinking about the question being asked; and (6) skipping items or entire sections of a survey.2 Satisficing is epitomized by this quote from a resident at a large Midwestern academic medical center who was asked about their survey behaviors: "A lot of the time… I'll just click the middle all the way through, because I have nothing really to contribute and I just want to get through it."5 As this statement implies, answers from respondents who do not optimize the response process result in poor-quality data that are unlikely to be trustworthy, credible, or valid for their intended use.
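Although this Editorial is conceptual, these behaviors leave recognizable fingerprints in response data, and researchers sometimes screen for them after data collection. The following Python sketch shows one minimal way to do so; it assumes a hypothetical tabular dataset with Likert-type item columns (q1, q2, q3) and a completion-time column (duration_sec), and the 60-second speeding threshold is an illustrative assumption rather than an established cutoff.

```python
import pandas as pd

def flag_satisficing(df, item_cols, duration_col="duration_sec", min_seconds=60):
    """Add boolean columns flagging common satisficing patterns.

    item_cols: names of the Likert-type item columns (e.g., coded 1-5).
    min_seconds: an assumed minimum plausible completion time (illustrative).
    """
    out = df.copy()
    items = out[item_cols]
    # Straightlining: the identical answer to every item, with none missing.
    out["straightlined"] = (items.nunique(axis=1) == 1) & items.notna().all(axis=1)
    # Speeding: total completion time below the assumed plausible minimum.
    out["speeded"] = out[duration_col] < min_seconds
    # Heavy skipping: more than half of the items left blank.
    out["skipped_half"] = items.isna().mean(axis=1) > 0.5
    return out

# Toy data: the first and third respondents straightline (and finish fast);
# the fourth skips most items.
data = pd.DataFrame({
    "q1": [3, 2, 5, None],
    "q2": [3, 4, 5, None],
    "q3": [3, 1, 5, 2],
    "duration_sec": [45, 300, 20, 180],
})
flags = flag_satisficing(data, ["q1", "q2", "q3"])
print(flags[["straightlined", "speeded", "skipped_half"]])
```

Flags like these are best treated as prompts for closer review rather than grounds for automatic exclusion, since a motivated respondent can occasionally trigger any single indicator.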
Mitigating Satisficing and Encouraging Thoughtful Responses
In light of the problems that result from satisficing, it is important for GME educators and researchers to understand the phenomenon and implement solutions that mitigate harms to data quality. Krosnick4 described 3 conditions that promote satisficing: (1) greater task difficulty; (2) lower respondent ability or education level; and (3) lower respondent motivation. In most cases, respondent ability and education level are fixed. Fortunately, GME researchers are often surveying high-ability participants who are well educated and thus less likely to satisfice than, for example, a high school student. As for task difficulty and respondent motivation, survey designers can influence these factors—and appreciably mitigate satisficing—through thoughtful design and implementation practices.6
Easing task difficulty is the most important way to mitigate satisficing. In the case of a survey, the task is completing the survey itself, and the best way for designers to ease that task is to follow evidence-informed best practices. The goal is to design a high-quality survey that supports respondents as they work their way through the 4 cognitive response processes. Although these design practices have been articulated in detail elsewhere,1,7-9 we highlight in Table 1 a number of high-yield practices that GME survey designers can use to simplify the task of survey completion.
Finally, designers can directly address respondent motivation—the motivation to accept a survey invitation, start the survey, and optimize the response process—by viewing a survey request as a social exchange. As described by Dillman et al,8 “people are more likely to comply with a request from someone else if they believe and trust that the rewards for complying with that request will eventually exceed the costs of complying.” In other words, potential respondents often consider 3 primary factors: rewards (What will I gain by taking this survey?), costs (How much time will it take?), and trust (Do I trust the invitation source and the proposed data use?). Table 2 includes several practices that designers can employ to magnify rewards, decrease costs, and fortify trust.
Summary
High-quality survey results come from participants who are motivated to optimize the response process. Unfortunately, many respondents are unmotivated and tend to conserve their mental energy and satisfice, thereby settling for "good enough" answers. A useful model for understanding how respondents think through a survey is the response process model, which describes 4 cognitive steps: comprehension, retrieval, judgment and estimation, and reporting. By using this model, survey developers can anticipate the cognitive work of respondents and mitigate respondents' tendencies to satisfice. Respondent motivation can be further bolstered by considering the rewards, costs, and trust involved in survey completion. By employing these strategies, researchers and educators can optimize respondent motivation and collect better-quality survey data. In addition, we encourage investigators to study GME-specific survey strategies, designed for the unique GME population of residents, staff, and faculty, to optimize survey data quality.
References
Author notes
Disclaimer: Small portions of this editorial were originally published on the Harvard Macy Institute Community Blog (https://harvardmacy.org/index.php/hmi/designing-better-surveys) under a Creative Commons license (CC BY). The CC BY license allows material to be distributed, remixed, adapted, and built upon in any medium or format, so long as attribution is given to the creator. The views expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the Uniformed Services University of the Health Sciences, the Department of Defense, or the US Government.