Background

In medical education, self-administered questionnaires are used to gather information for needs assessments, innovation projects, program evaluations, and research studies. Despite the importance of survey methodology, response rates have declined for years, especially for physicians.

Objective

This study explored residents' experiences with survey participation and perceptions of survey design and implementation.

Methods

In 2019, residents at a large Midwestern academic medical center were recruited via email to participate in mixed specialty focus groups (FGs). Narrative comments were recorded, transcribed, and then analyzed via conventional content analysis, utilizing cognitive sociology as a conceptual framework. Themes and subthemes were generated iteratively.

Results

Postgraduate year 1–4 residents (n = 33) from internal medicine, surgery, and neurology participated in 7 FGs (3–7 participants/group) from April–May 2019. Eight themes were generated during content analysis: Negative emotions, professionalism, accuracy, impact, survey design/implementation, biases, survey fatigue, and anonymity. Residents questioned the accuracy of survey data, given the tendency for self-selection to drive survey participation. Residents wanted survey participation to be meaningful and reported non-participation for a variety of reasons, including doubts over impact. Satisficing and breakoffs were commonly reported.

Conclusions

Though residency program cultures differ across institutions, the findings from this study, including potential barriers to survey participation, should be relevant to anyone in graduate medical education using survey methodology for programmatic data collection, accreditation, and research purposes.

Objectives

This study explored residents' experiences with survey participation and perceptions of survey design and implementation.

Findings

Graduate medical education (GME) trainees reported non-participation in numerous surveys, and satisficing and breakoffs were commonly reported across focus groups.

Limitations

As with other qualitative studies with small sample sizes, these results may not generalize to other contexts and residency cultures.

Bottom Line

While it is unclear how prevalent satisficing behavior is in surveys of GME trainees, this problematic response behavior should be considered when reporting survey results and relying on survey data for decision-making or research purposes.

In medical education, self-administered questionnaires are routinely used to gather information for educational needs assessments, innovation and quality improvement/patient safety projects, program evaluations, and research studies.1-4 Survey results are used to revise curricula, influence program development, develop or challenge policies, aid in funding,1,4-6 and meet accreditation requirements.7,8 Despite the importance of survey methodology across numerous fields, response rates have been declining for years, especially for physicians9-12 and other healthcare providers.13 Physicians, in particular, are perceived as “challenging populations to survey,”11 and physician trainees are considered to be members of hard-to-reach11,12 and vulnerable populations.14,15

Low response rates affect the ability to generalize to target populations16  and can lead to inaccurate assessments of stakeholders' needs when developing educational curricula, erroneous conclusions in survey studies, and the inability to update policies and guidelines. While Wiebe et al17  noted that numerous medical journals recommend response rates of at least 60%, response rates under 60% are common in physician populations.10,18  Diverse factors appear to be driving low physician response rates, including but not limited to increasing clinical pressures, survey burden, and survey fatigue.12  For trainees, other factors may be at play, including discomfort with providing bidirectional feedback within organizational units.19 

In addition to declining response rates, the overall quality of data derived from self-administered questionnaires in health professions education is often low, due to survey design issues and response processes,20 all of which contribute to measurement error.1,3,21-23 Importantly, the validity of inferences based on survey results is closely tied to factors such as survey design,5,24,25 assessment of survey item quality,1,5,21-23,26-30 respondents' cognitive and sociocultural response processes,5,22,28,30-32 response rates and sampling,16 and whether respondents actually resemble the population of interest (ie, sample representativeness).12 Breakoffs, where respondents accept survey invitations, start surveys, and then break off prior to finishing, also affect data quality and ultimately survey inferences.20
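
Breakoffs leave a direct footprint in item-level response records. As a minimal illustration only (hypothetical data and a deliberately simplified definition, not part of this study's methods), a breakoff rate can be computed by checking whether each respondent who started a survey reached the final item:

```python
# Illustrative sketch: computing a breakoff rate from hypothetical
# item-level records, where None means the item was never reached.

responses = {
    "r1": [4, 5, 3, 4, 2],             # completed all 5 items
    "r2": [3, 3, None, None, None],    # broke off after item 2
    "r3": [5, 4, 4, 4, 4],             # completed
    "r4": [2, None, None, None, None], # broke off after item 1
}

def is_breakoff(answers):
    """Simplified rule: answered the first item but not the last."""
    return answers[0] is not None and answers[-1] is None

breakoffs = sum(is_breakoff(a) for a in responses.values())
print(f"Breakoff rate: {breakoffs / len(responses):.0%}")  # 50%
```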

Social desirability bias, or the tendency for some respondents to provide answers perceived to be more acceptable (ie, desirable) to researchers,26 also affects data quality and the validity of inferences.33-35 While social desirability bias is typically associated with sensitive questions and topics,33 it may also be a concern in vulnerable populations (eg, trainees), who may feel they cannot be completely honest when providing feedback. Data quality is also affected by satisficing, a decision-making strategy that respondents use to minimize cognitive effort, choosing “good enough” answers24 rather than expending extra effort to make decisions. Satisficing can result in poor data quality when respondents meet minimal requirements for survey completion by skipping items, “straight-lining” (eg, choosing the middle option for every item), or selecting items which may not reflect their true beliefs or answers.8,24
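
Straight-lining also leaves a recognizable footprint in item-level data. As a hedged sketch with hypothetical data (one common operationalization, not a method used in this study), a screener might flag respondents who select the identical option for every Likert item:

```python
# Illustrative sketch: flagging straight-lining, one common satisficing
# signature, in hypothetical Likert-scale data. Real screening would
# also weigh completion speed and reverse-scored items.

likert_responses = {
    "r1": [3, 3, 3, 3, 3, 3],  # all middle options: possible satisficer
    "r2": [4, 2, 5, 3, 4, 1],  # varied answers
    "r3": [5, 5, 5, 5, 5, 5],  # all identical: possible satisficer
}

def straight_lined(answers):
    """True if every item received the same response option."""
    return len(set(answers)) == 1

flagged = [r for r, a in likert_responses.items() if straight_lined(a)]
print(f"Flagged for possible satisficing: {flagged}")  # ['r1', 'r3']
```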

Researchers have called for additional studies to better understand barriers to survey participation by physicians.11-13 Few studies have focused on response rates and survey use within resident populations.36-39 While Chaiyachati et al reported the positive influence of messaging during survey invitations in a study of internal medicine residents,38 they noted their findings were not generalizable to single survey studies, which are common in medical education. In one of the few survey methodology studies with residents, Akl et al compared email with postal delivery of surveys.36 We could not find any studies that examined residents' perspectives on survey participation via qualitative methodology. Given the prevalence of survey methodology in medical education and graduate medical education (GME) in particular, we sought to explore residents' experiences with survey participation and their perceptions of survey methodology via a focus group (FG) study.

Conceptual Frameworks

We used cognitive psychology and cognitive sociology theories to inform study design and qualitative analysis. While cognitive psychology has allowed survey researchers to better understand cognitive processes such as attention, comprehension, and recall of information, and their impact on survey responses,22,23,29-31 cognitive sociology serves as a useful framework to understand the influence of context and culture on cognitive processes.40 Responses can thus be understood not only from a cognitive perspective (eg, item difficulty leading to recall problems on surveys), but also via a sociocultural paradigm,32 where language and culture are seen as interwoven and inseparable. In other words, from a cognitive sociological perspective, terms and phrases used by survey developers are not assumed to have universal meanings across sociocultural groups.32,40 In this study, we used the cognitive sociological concept of verstehen, or respect for participants' lived experiences and perspectives,40 to guide the conventional content analysis and reduce investigator bias. The study was grounded within a naturalistic research paradigm.

Design, Setting, and Context

In this prospective qualitative FG study, we used an inductive approach with no a priori hypothesis.

The study was conducted at a large Midwest academic health science center from April through May 2019. GME training programs at our institution are culturally/ethnically diverse, and programs recruit from both US and foreign medical schools. Most residents at our institution go on to pursue fellowships.

Recruitment

Permission to participate was granted by program directors prior to recruitment. Emailed invitations with an attached cover letter were sent to residents in internal medicine, general surgery, and neurology by a chief resident or coordinator in each residency program. The principal investigator (PI), who has no role in resident assessment, also provided a brief in-person introduction to the study at the beginning or end of select class meetings.

Interview Guide

The PI developed a standardized interview guide (see online supplementary data), and feedback was obtained from research team members and chief residents who act as junior faculty in the internal medicine residency program. Responses to the following FG prompts were analyzed: “Tell me about your thoughts/beliefs about participating in surveys and providing feedback via a survey tool (in general). Is this something you enjoy/not enjoy? Why? What could make this data collection process better, from your perspective?” and “In general, what are your perceptions about taking surveys while you are a trainee?”

Data Collection and Management

Qualitative: 

FGs were used to gather narrative data from participants. These face-to-face, facilitated meetings are often used to explore how people think or feel about an issue through guided group discussion.41 FGs were scheduled outside of clinical duties and required educational sessions, in locations convenient for trainees. Small research incentives ($10 gift cards) and snacks were provided to FG participants. During each FG, residents responded to verbal questions, and narrative data were collected via a digital audio-only recorder. At the conclusion of each FG and interview, recordings were transferred and stored within a password-protected, secure network drive. Recordings were transcribed by an institution-approved vendor. Data collection continued until data saturation was reached, as indicated by a repetition of ideas across consecutive FG transcripts and no new categories.42-44 When deciding how many FGs to hold, we followed the parameters for saturation in FG research.44

Quantitative: 

A data collection sheet was used to collect anonymous demographic data on gender, age group, training program, postgraduate year (PGY), medical school (US/non-US), fellowship plans after residency, academic career plans, survey research experience, training in feedback, and training in survey methodology. Items also included a question about reasons for participating, with the following (check all that apply) response options: helping researchers, improving survey methodology, and interest in FG methodology.

Maintenance of Confidentiality: 

J.C.F. and A.B., faculty within the surgery and internal medicine residency programs, did not have access to data collection sheets or voice recordings and thus could not link participant comments to individuals within their programs. All data were analyzed in aggregate and anonymously.

Analyses: 

Descriptive statistics were used to characterize the sample. A conventional content analysis45 was initially conducted by team members (C.Y.C., J.C.F., A.B.). In a conventional content analysis, predetermined categories are not imposed on narrative data.45 The process is inductive46 and iterative, and theme and subtheme development are closely aligned with the data that have been collected, rather than predetermined codes. This research stance is compatible with the concept of verstehen in cognitive sociology. After reviewing the narrative data, team members developed codes individually, as recommended,47 then met several times in an iterative process to refine codes, develop themes and subthemes, and select exemplars. One author (A.P.L.) provided feedback on themes and exemplars, and the group reached consensus on themes, subthemes, and exemplars. Coding was completed by hand.

Researcher Triangulation: 

Two research team members (J.C.F., A.B.) are embedded within residency programs at our institution. Two team members are PhD educators (C.Y.C., J.C.F.) with experience in qualitative methodology. One author (A.B.) is a clinician, and one (A.P.L.) has an MEd in higher education administration. The diversity of researcher backgrounds and perspectives aided the credibility of findings, a concern in qualitative studies.47,48

Methodological Triangulation: 

A short demographic form also provided quantitative data which reinforced some of the qualitative findings (eg, items querying the reason for FG participation).

The institutional review board at Cleveland Clinic approved the study as exempt from further oversight.

Demographics

Thirty-three residents from internal medicine, surgery, and neurology participated in mixed specialty FGs (Table 1). Internal medicine, the largest specialty at our institution, accounted for 73% of participants. A total of 7 FGs were held during the 2-month period, with 3 to 7 participants in each group. Additionally, 2 one-on-one interviews were held with residents who were unable to attend scheduled FGs. Ninety-four percent (31 of 33) were categorical residents, and 70% (23 of 33) had graduated from US medical schools. The majority of FG participants were PGY-1 (n = 16) and PGY-3 (n = 11), and 45% were female. Most participants (85%, 28 of 33) were interested in helping researchers, and 70% (23 of 33) accepted the invitation because they wanted to improve survey methodology (Figure). Most residents had never participated in a FG before, and 30% were interested in FG methodology (Figure).

Table 1

Characteristics of Focus Group Participants
Figure

Reasons for Signing up to Participate in a Focus Group

Conventional Content Analysis

Eight themes were generated during the content analysis: Negative emotions, professionalism, accuracy, impact, survey design/implementation, biases, survey fatigue, and anonymity. See the Box for subthemes and exemplars.

Box Themes, Subthemes, and Exemplars
Theme 1: Negative Emotions
Subtheme: Annoyance/Frustration
  • “Depending on the type of survey and kind of the way it's implemented, (it) can be very quick, easy and painless, or it can be very annoying, depending on the kind of the format it's given to you in.” (Group 6, Participant 1)

  • “If you want me to fill it out, show some investment. Like, hey, I really appreciate you filling this out for me. Just sending it out to everybody—I know it's easy on them, but it just fills my inbox and on principle, I don't like responding to those.” (Group 8, Participant 2)

  • “I guess my frustration would be the length of them…” (Group 8, Participant 1)

Subtheme: Under Duress
  • “We're doing these just to do them because it's a requirement overall for people to enter these in, but they go straight into a fire somewhere or something.” (Group 8, Participant 2)

  • “Like sometimes there's some residents doing like some kind of research and they harass us with emails saying, ‘Please fill out my survey. I'll give you something if you do it,' and then I'll feel sort of a responsibility…” (Group 7, Participant 3)

  • “It's like I don't know if it's required or not, so I just do them.” (Group 8, Participant 1)

Subtheme: Emotions Influence Answers
  • “It affects the answers because if you're not happy to be filling them out—and then especially if it's about feedback or anything—It can definitely negatively skew it.” (Group 7, Participant 2)

Theme 2: Professionalism
Subtheme: Altruism
  • “Will it help somebody out that I care about? Is it for a cause that I care about? That changes things too, in which case you kind of forget about it (incentives).” (Group 6, Participant 1)

  • “…and then I realize like it could help someone with their individual research project.” (Group 1, Participant 1)

Subtheme: Code of Conduct
  • “I kind of view it as a requisite, like part of our work is to give feedback, just for continuous improvement. Obviously, nothing is perfect. I don't particularly enjoy it. I mean, I don't look forward to it. But, I mean, we have the avenue to feedback.” (Group 2, Participant 3)

  • “There's a sales representative who actually did a really awesome job and I want to appreciate them, and this is probably the more substantial way of doing that. It's like a moral obligation at that point.” (Group 5, Participant 1)

Subtheme: Support for Peers
  • “Like I participated in studies where my friends were doing a startup and they needed a focus group to kind of pitch their idea. Like those things, you know, I am much more receptive and much more interactive with the survey.” (Group 9, Participant 3)

  • “If it's a short one and someone asks me for the favor of filling it—and you know that it's for a cause or something that you're interested in—then I'll fill it. Otherwise, we're already burned out.” (Group 7, Participant 3)

Theme 3: Accuracy
Subtheme: Satisficing
  • “So I would be one of the persons that my colleague here mentioned that would just be clicking through 3 or 4 or 5 and not providing anything more worthwhile to the survey because I just want them away from my inbox.” (Group 5, Participant 3)

  • “A lot of the time, being honest with you, I'll just click the middle option. If there's a really bad or really good, I'll just click the middle all the way through, because I have nothing really to contribute and I just want to get through it.” (Group 9, Participant 2)

  • “I have so many surveys I don't fill, and sometimes you are obligated to fill, so I just pick whatever. Unless, like everybody says, you have a strong opinion.” (Group 9, Participant 1)

  • “I think if you look at the distribution, it's probably a nice bell curve where most people are just “whatever” about it and pick “3” down the line.” (Group 9, Participant 3)

  • “You know what, throughout most of the Likerts I don't care, but give me a box so that I can provide substantive feedback, because otherwise I'm just click, click, click.” (Group 5, Participant 2)

  • “…but then sometimes towards like page 3 or 4 you start just going down the 3 columns.” (Group 8, Participant 1)

Subtheme: Honesty
  • “If it's a colleague doing the survey then I'll try to be more truthful because it's someone that I'm more morally obligated to directly.” (Group 2, Participant 9)

  • “And I have a hard time being completely, I guess, honest on surveys.” (Group 1, Participant 4)

  • “I think sometimes I don't always give honest feedback. Like I find it difficult to be completely harsh. There are certain things that we want to change, but kind of sugarcoat it a little when I'm giving my feedback.” (Group 4, Participant 1)

  • “I'll start to do a survey and I literally have to go, or I get paged, like we all have somewhere to be like right at that moment, and I will just start filling it out and just purposely click answers where, because you know there are those questions where if you click ‘yes' it's going to generate 5 other questions. And you're like, you know, I don't have time for this. Like, I'm honestly just going to click “no” and I'll make it go some other way, but like this is not the time, I don't have time for this. Again, back to the issue of just no time to do it and do it with accuracy.” (Group 7, Participant 2)

Subtheme: Questioning Results
  • “You're generally getting the people that are really happy or really upset about something and everyone else is just like, ‘Yeah, I think the program's fine' and I want this 5 minutes to do X, Y, or Z. You're going to lose out on those people, but it's just the overall problem of—you can spread it to a lot of people, but what people are you actually getting the responses from? And is that truly representative of the majority of your residency or whatever sample that you're going after, if you're only getting a fraction of those people responding?” (Group 4, Participant 3)

  • “So I question like the accuracy—especially of like a longer, more cumbersome survey.” (Group 7, Participant 6)

  • “I don't know actually how useful the surveys are in giving accurate feedback in a lot of situations because I think a lot of people just click (the middle option).” (Group 9, Participant 2)

Theme 4: Impact
Subtheme: Expectations of Change
  • “If multiple people, just using an example here, multiple residents have complained about the same thing and nothing has happened, why should I fill out a survey about the same thing that we've all been complaining about and there's been no change? Especially when I only have like maybe one or 2 things on my mind and it's not like multiple issues I want to talk about. So I think part of it is like, and like even going to a car wash, like was mentioned earlier, it is like are they actually just going to take the positive reviews—or are they actually going to look at everything?” (Group 5, Participant 1)

  • “And then there's a little sense of ineffectualness, like, are they really using the results?” (Group 1, Participant 3)

  • “I think it's hard to sometimes get actionable information from survey results, but if it's designed properly perhaps it could.” (Group 2, Participant 3)

  • “And then not knowing the results, or not knowing that it does anything good for anything, that's also very frustrating.” (Group 8, Participant 2)

Subtheme: Meaningful Effort
  • “I think I would enjoy surveys about my program or about the academics and all that type of stuff if I knew it was making any form of difference in my life, but I think we're all very—What's the word? The things are so change-resistant that the surveys are just cursory, you know. …The general sense is if you surveyed just [program name] and people in general, like “What do you feel like when we send you these things?”, that they actually go to a purpose, I think the majority of people would say, “I think someone looks at them and then ultimately nothing happens.” (Group 8, Participant 2)

  • “And how much do you think that the survey will impact your day in and day out. If it's like a certain aspect of our residency program, I guess I'm more invested to fill that out.” (Group 4, Participant 2)

  • “I think it depends on what the topic or what the purpose of the survey is. So oftentimes people may feel more strongly about completing a survey because they just completed a rotation and they really want to share their thoughts or something that they feel strongly about. Whereas oftentimes we get so many surveys in—and like was said earlier, when you're so busy and you have so many things to do, you may just pick and choose which surveys you want to complete based off your level of interest.” (Group 4, Participant 4)

  • “If you think about the satisfying things or like the polls on ESPN, it's like, oh, then I can see how everybody voted, or I can see like, you know—It's just to make it seem like this has a purpose. There's something at the end that shows you, “Hey, this is why you did this.” (Group 8, Participant 2)

  • “You question whether or not this is going to apply any meaningful change.” (Group 7, Participant 6)

Subtheme: Drives Change
  • “I think surveys are crucial for providing feedback to those who are soliciting that feedback and there's usually, you know—The best of the best have targeted questions that are designed to elicit responses that are actually helpful to those who are seeking those responses.” (Group 3, Participant 1)

  • “But I think—Like this is the way you get the feedback to make changes that are helpful to your residency program or your fellowship program or your administration, but I guess it's hard to bring it in.” (Group 4, Participant 3)

  • “And so kind of extrapolating that to the caregiver survey we do here. It's kind of the same thing. You should do that survey so that you can make sure that any concerns you have are being heard and appreciated as opposed to kind of just going by the wayside. So there are times when it is very important to do the surveys.” (Group 6, Participant 1)

  • “I 100% believe that survey participation is crucial. It's not just important, it's mandatory in terms of being able to, you know—As we move about the new methodology of how we ‘plan, do, study, act' things in education or, you know, elsewhere outside of medical education. You know, the lack of survey data will prohibit you from adequately catering to your audience's needs—And so kind of coming back full circle to the thought of participating in surveys, I think it's sort of mandatory in terms of being able to grab data and act upon that objectively.” (Group 3, Participant 1)

  • “I think if you're like passionate about seeing a change. Like say for us, if it's a rotation or a service that's just not running the way it should, then yeah, the more people that fill out a survey the better.” (Group 5, Participant 4)

Theme 5: Survey Design/Implementation
Subtheme: Transparency
  • “We get surveys that we open up and then, you know, you think, ‘Okay, it's going to be one or 2 questions.' …and then it's like 10 questions and then there's a next page and a next page and a next page. So that pretty much alarms you that the next survey you fill out is probably going to be just as long or have too many entry fields or things like that, that it really just makes you not want to complete them at all in the first place because it's like once you start and you're like, oh man, this is going to be a 20-minute or 30-minute commitment. I don't have time to do that.” (Group 8, Participant 2)

  • “Some surveys even have an estimated amount of time, which if it's short, is definitely something that's very helpful to see at the beginning…” (Group 8, Participant 1)

Subtheme: Researcher Expectations
  • “I feel like sometimes when people probe us for quantitative and qualitative answers to surveys, they'll send us 150 questions and say, ‘Well, it will only take 10 minutes of your time.' My personal experience is when you open the survey and you realize that it's either 150 questions or the first page has 30 questions and you're not really sure how many questions follow those pages, my default is to close it and defer it ‘til later, which often is never.” (Group 1, Participant 1)

  • “…If they had asked maybe 20 questions I'd say, ‘Well, they don't really have a—you know, they're not being considerate or maybe they don't have a sense of what's realistic. But I think most of our peers have a sense of what's realistic.” (Group 1, Participant 1)

Subtheme: Timing
  • “And the surveys that I'm more likely to fill out are the ones that are provided in a timely fashion. So in our brain session, they give us a paper in the beginning and so before you leave, you fill it out. You're not committing additional time outside of that framework.” (Group 5, Participant 1)

  • “I think it totally depends on when the survey is sent out… Like if I'm at home, and I'm not really doing anything, I might just click through it. But if I get it in the middle of the day, like during work, and I open the email, I'm probably not gonna go back and look at it.” (Group 1, Participant 2)

Subtheme: Questions
  • “If I can't understand it within the beginning of the first 2 sentences of it, then I feel like it doesn't apply to me or that my information in filling this out won't be relevant. Then it's just X and move onto the next thing.” (Group 2, Participant 8)

  • “…if you have limited options and limited answers, based on what the person who wrote the survey wanted to report. So I think it depends. …Some surveys are more detailed and give you some kind of options so that you can probably end up with a solution or improvement.” (Group 7, Participant 1)

  • “I think the vast majority of surveys I've taken in my life have gone toward the side of generic comments, as well as, you know, standard Likert scales from 1 to 5, 1 to 10, things like that, asking for generic responses.” (Group 3, Participant 1)

Subtheme: Completion Time
  • “If it's more than one page, then I will just click 3 all the way through. Where if it's just a few sections, then I might think about it a bit more and give a better opinion or something.” (Group 9, Participant 2)

  • “I don't have time to be filling out these long surveys. So if it's too long, I'll just quit it.” (Group 7, Participant 3)

  • “Is it worth my time? Some of these are really—I mean they're not that long, but it takes time to sit down for 10 minutes and actually think about it.” (Group 5, Participant 1)

  • “If you tell me, ‘Oh this survey is going to take you 1 minute to do,' no problem. I'll get it done right away. If the survey could take who knows how long, I'm probably not going to do it because I'm not going to risk it taking forever and ever and ever.” (Group 6, Participant 1)

  • “We've been getting some short surveys recently for QI projects from the senior residents. And the one thing that they do that's nice is they keep them very short. There's like only 4 or 5 questions and I open it and I realize that it's very short…so they're limiting their surveys to 5ish questions, and you can really do it in 30 seconds. 30 seconds to 60 seconds. I think 5 minutes is way too long.” (Group 1, Participant 1)

  • “The ACGME ones are like pretty long. They're about 10 to 15 minutes sometimes…” (Group 8, Participant 1)

Subtheme: Ease of Use
  • “I think it's an easy way to gather data, especially in dealing with a very large group of people that you're trying to get through to. They are easy to make and put together and they are easy to distribute, usually via email. Everyone is on their phone. For the most part, they are pretty quick to complete so even for the person who has to complete the survey, it's not too time-consuming. There're very few recheck spots usually on the survey, so it's just kind of ‘yes' or ‘no,' get down to the point. So I think they're often used for those reasons.” (Group 4, Participant 1)

  • “I think the analysis of them is also easier now. So like if you use Survey Monkey they graph it for you almost immediately. It also lets you track how many responders actually responded to your survey, so that has advantages, as well.” (Group 4, Participant 2)

Subtheme: Incentives
  • “Or if there's a gift card involved or something, then I'm more likely to do it.” (Group 1, Participant 4)

  • “I don't know if like giving incentives like gift cards or things like that would actually improve this, but I'm guessing it might probably, but I guess it doesn't hurt. But I don't know if it actually does improve participation.” (Group 4, Participant 2)

  • “Compensation is always a strong motivator, but at the same time if it's something that compensation is absolutely not an option, just brevity and direction are probably the 2 virtues that you could have in a survey if you want a physician to fill it out because every health profession is now tasked with thousands of clicks. Even 1 or 2 more seems daunting.” (Group 8, Participant 1)

  • “When there is the possibility of a reward at the beginning, it's much more enticing. As long as it's like a guaranteed reward. If there's a reward of you might be able to win something then, okay, I'm not going to do it. If you've got a guaranteed one dollar, okay, maybe. It kind of depends. It's a risk-benefit thing. Is my time worth filling this out? That's kind of what it comes down to. I recently thought about how our free time is, in general anybody's free time, is worth about $80 an hour, and if you kind of take that into the light of doing a survey, well a 15-minute survey should pay you about $20, based off of that. It's kind of like as an aside, I guess, but going into it you think, okay, what's my risk/benefit here in doing this survey?” (Group 6, Participant 1)

Subtheme: Invitations
  • “Even though it may sound cliché—And they can say, not ‘the survey is being sent to all residents.' They can say that the survey is specifically geared toward this specific group that you fit—demographics that we're very interested in.” (Group 8, Participant 2)

  • “Any sort of psychological parlor tricks kind of help to at least, maybe anecdotal, but when people say ‘because of your title, because of your position, you have a unique opportunity to help us figure out how we can better this process.'” (Group 8, Participant 1)

Theme 6: Biases
  • “Unless I felt like there's something that I strongly feel didn't go well. Then I would take my time to write a survey.” (Group 5, Participant 3)

  • “So if a survey is emailed or sent on my phone, I'm more likely to actually fill it out if I had a very good experience or a very bad experience. If I'm like somewhere in the middle, like I don't care, then I probably won't do it.” (Group 5, Participant 1)

  • “You probably don't write everything—The things that probably just recently happened or things that made like a lasting impression on you. In that case, its utility I think does go down.” (Group 4, Participant 2)

  • “I think my problem with surveys is like that classic bias where you wanna answer what you think you're supposed to answer. …You have an inclination to answer what would imply that I'm doing well in my residency program. For some reason, like—I think that bias definitely does play a role and maybe, like, I don't give the answers that exactly what's reality vs what would look good.” (Group 1, Participant 4)

  • “I'll usually only respond if I have strong opinions about something. If I have just no opinions about something or, you know, very neutral opinions, like, I'm not motivated to fill out a survey.” (Group 7, Participant 2)

Theme 7: Survey Fatigue
Subtheme: Volume of Invitations
  • “I think the biggest issue for me with surveys is that you get so many. I think because you get so many, then you feel less inclined to do them. That's probably the number one thing for me.” (Group 1, Participant 3)

  • “I guess for our program at least we get bombarded with too many surveys, so they lose, at least for me, they lose value. …most of the time everything is status quo in terms of how a rotation goes or anything goes, so we just keep getting bombarded with survey after survey, so they lose meaning to me.” (Group 5, Participant 3)

  • “I think we are also inundated with surveys, so it's the law of diminishing returns, and by the time we get to the end of multiple surveys we've just had it.” (Group 7, Participant 2)

Subtheme: Volume of Email
  • “I think that overall we all have a lot of email right now. I've got so many emails that they (surveys) are really not my priority and I'm already working on my own research. … I sometimes just end up filling them because I don't like having my email full and I need them to get out of my email.” (Group 7, Participant 3)

  • “…If it's a spam email to everybody, I already shut down because I get spam emails to everybody from all people in the program, just all the time.” (Group 8, Participant 2)

Subtheme: High Burden
  • “Particularly if the format of the survey is like free text responses, then it's much harder to participate in the survey…” (Group 4, Participant 2)

  • “It's just the burden, and I can only imagine it gets worse as you get higher and higher up with people that have more on their plates and that are in higher positions in terms of the administration. I don't have that much, and I have a ton to do. I can only imagine what these people that are higher up have on their plate and how they do their time stuff and they see all these surveys and they're like, ‘I don't have time for it.' …It's just like an overburden. Like with the evaluations and everything else that we have to do, the hours that we put in—Like sometimes a survey may only take 5 minutes, but I'm just like, ‘I don't want to do this.’” (Group 4, Participant 3)

  • “Usually it's something that if it's convenient for you, you're able to do, but unfortunately it probably becomes a low priority on your ever-expanding list of priorities…” (Group 8, Participant 1)

Theme 8: Anonymity
  • “I feel like it's safe to voice my concerns with the ones (surveys) through our program, but that one—I think being through ACGME makes me a little bit—Because you have to type in your social security number to even start it. So I'm wondering how do you de-identify that? I think paper surveys, unless someone is going to actually try to, like, link my handwriting to my name, they feel anonymous rather than like an electronic survey when you have so many different forms of identifiers that go beyond name.” (Group 5, Participant 4)

  • “I think also one other thing with surveys is like sometimes they say it's anonymous, but there are ways to identify the person. Like for example we got a survey, but they asked us very specific questions, like your clinic site and your PGY year, which already narrows it down to like 3 people.” (Group 1, Participant 2)

Theme 1–Negative Emotions: 

Negative emotions appeared to play a significant role in residents' experiences with survey participation. Some participants noted anger and annoyance when confronted with survey requests, which ranged from accreditation and program improvement questionnaires to quality improvement/patient safety and research surveys. Others said they felt obligated to complete them but had limited time within their busy schedules. For a number of participants, annoyance was reportedly accompanied by breakoffs or increased satisficing.

Select exemplars:

“I think I'm doing a favor for someone that I don't know. I don't feel good about it. I don't like it.” (Group 7, Participant 3)

“…So from a physician's standpoint, it's something that's looked at more as a nuisance than something that's an opportunity.” (Group 8, Participant 2)

Theme 2–Professionalism: 

Tenets of physician professionalism appeared to drive some residents to participate in surveys. Physicians' sense of duty to patients, colleagues, their communities, and society all played roles in residents' survey participation. Empathy for peers and altruism appeared to mitigate the need for incentives.

Select exemplars:

“If it's a friend, I fill it out, you know, because I know I've been in a similar situation…” (Group 8, Participant 2)

“I kind of view it as a requisite—like part of our work is to give feedback, just for continuous improvement.” (Group 2, Participant 3)

Theme 3–Accuracy: 

Residents questioned the accuracy of survey results, as many acknowledged not being entirely honest when completing surveys due to time pressures and concerns over anonymity. They also noted the tendency to get through a survey as quickly as possible, which can decrease data quality. One resident said an invitation from a colleague would prompt a more truthful response “because it's someone that I'm more morally obligated to directly.” (Group 2, Participant 9)

Select exemplars:

“If it's mandatory (question), I'm going to give you an answer. It just doesn't make it the right one, or even remotely close to the truth.” (Group 7, Participant 3)

“I don't know actually how useful the surveys are in giving accurate feedback in a lot of situations because I think a lot of people just click—a lot of the time, being honest with you . . . I'll just click the middle all the way through because I have nothing really to contribute, and I just want to get through it.” (Group 9, Participant 2)

“That's one thing about surveys is that they are probably not always accurate.” (Group 4, Participant 1)

Theme 4–Impact: 

Participants noted the importance of survey impact during various decision points, including invitation acceptance and survey completion. The need for survey participation to be a meaningful endeavor (eg, related to research, social issues) echoed across FGs. When residents saw surveys as a vehicle to effect change within their residency programs but perceived no resulting change, disillusionment often followed, according to participants.

Select exemplars:

“But I think—like this is the way you get the feedback to make changes that are helpful to your residency program or your fellowship program or your administration, but I guess it's hard to bring it in.” (Group 4, Participant 3)

“I think I would enjoy surveys about my program or about the academics and all that type of stuff if I knew it was making any form of difference in my life, but I think we're all very…What's the word? The things are so change-resistant that the surveys are just cursory…” (Group 8, Participant 2)

Theme 5–Survey Design/Implementation: 

Residents noted problems in the design and implementation of the questionnaires they received. Perceived problems included lack of researcher transparency regarding completion time, overall survey length (particularly for online surveys), question quality, and unclear survey purpose. Lengthy surveys not only drove breakoffs but also increased the amount of satisficing, according to FG participants. Cost/benefit calculations (ie, time cost vs benefit derived) were commonly reported.

Select exemplars:

“We honestly avoid surveys when we have no idea how long it's going to take.” (Group 6, Participant 1)

“If there's no incentive to do something, then we're most likely not going to do it.” (Group 2, Participant 2)

Theme 6–Biases: 

Residents described response behaviors which were indicative of a range of biases—from self-selection to recall to social/organizational desirability bias. Residents were keenly aware of self-selection bias,48  noting the propensity for only those with strong positive or negative feelings to accept invitations and complete surveys.

Select exemplars:

“If the survey is for an institution that my job is to represent very well and nicely, I will tend to answer in the way that is expected of me.” (Group 6, Participant 1)

“I think usually people who complete surveys in a truthful manner in line with the spirit of the survey are usually going to be the outliers who feel very strongly one way or the other…” (Group 9, Participant 3)

Theme 7–Survey Fatigue: 

Survey fatigue was driven not only by the sheer volume of survey invitations but also by the high burden of completing surveys. According to residents, high volume often led to breakoffs, satisficing, and refusals to participate. As self-administered questionnaire invitations are routinely delivered via email, the high volume of email received by residents also played a role in non-response to survey invitations.

Select exemplars:

“I think the biggest issue for me with surveys is that you get so many. I think because you get so many, then you feel less inclined to do them. That's probably the number one thing for me.” (Group 1, Participant 3)

“I think we are also inundated with surveys, so it's the law of diminishing returns, and by the time we get to the end of multiple surveys, we've just had it.” (Group 7, Participant 2)

Theme 8–Anonymity: 

Participants expressed concerns regarding anonymity when surveys were used as a data collection method. They noted that, given the nature of technology, online surveys can ultimately be traced back to the source, which explained their low confidence that anonymity would be maintained during data collection. Residents who expressed concerns regarding anonymity considered paper surveys superior to online surveys for data collection.

Select exemplars:

“Like, we're told over and over that surveys are anonymous, etc. I don't know if I 100% believe that.” (Group 5, Participant 1)

“But if it's (the survey) something that I am not going to be affiliated with at all and there is no way of tracing me, I will answer much more accurately and not follow the lead (leading question).” (Group 6, Participant 1)

Participant Recommendations

Residents were asked, “What could make the data collection process better, from your perspective?” They offered a number of recommendations for improving the likelihood that physician trainees will complete surveys and provide meaningful data (Table 2). For residents concerned about anonymity, paper surveys were the preferred mode of delivery. Additionally, residents recommended that survey researchers enhance transparency during the invitation stage (eg, completion time, purpose).

Table 2

Residents' Recommendations for Survey Improvement

Discussion

Findings from this qualitative study provide new insights into potential factors affecting survey response rates and response behaviors among physician trainees. Negative emotions and concerns regarding survey design and implementation, data accuracy, and anonymity were mentioned across FGs. Residents' cognitive and affective responses appeared to influence participation in a variety of ways. They reported engaging in behaviors such as breakoffs and satisficing when confronted with longer surveys, unclear or leading questions, and impersonal survey invitations, which aligns with findings from survey science. A number of residents questioned the accuracy of survey data, given the tendency for self-selection to drive survey participation. To our knowledge, residents' perceptions and concerns over these issues have not been explored in the medical education literature. These findings have potential implications for curricular renewal efforts and buy-in from residents.

Our findings differed from previous health professions research in the area of satisficing. While other researchers have noted that increased satisficing is associated with lower participant “ability” or education, the difficulty of tasks, and lower motivation,24 our qualitative study found that highly educated GME trainees reported satisficing behaviors during survey completion. Due to high clinical demands, survey fatigue, perceived lack of impact, and negative emotions surrounding participation, residents reported selecting “the middle option,” a common satisficing behavior,24 when completing surveys. In addition, not wanting their department or program to “look bad,” akin to social desirability bias, appeared to factor into satisficing behaviors for some residents, who chose “safe” answers.

Attributes related to professionalism (eg, altruism, code of conduct) and positive emotions enhanced survey participation for some FG participants. Results also revealed that residents were aware of their own potentially problematic response behaviors (eg, breakoffs, satisficing) during survey participation. To our knowledge, these results have not been previously described in the literature on physician response processes or response rates. Messaging incorporating aspects of physician professionalism is an area to explore in future research.

Our results also supported previous findings in the literature related to physician surveys. Low response rates have been associated with lack of time due to clinical pressures, volume of surveys, and survey fatigue.12,13  FG participants also noted that surveys with personalized invitations were more likely to be accepted and lead to completions.10,13 

Findings also point to the need for residency program leadership, accreditation organizations, and educational researchers to examine their assumptions regarding data quality and objectivity when choosing survey methodology for research and program evaluation purposes. Surveys may not produce high-quality data and valid interpretations, especially when social desirability bias and satisficing play roles in response processes. This study, in highlighting residents' lived experiences with surveys, provided narrative evidence related to these issues. For practical steps in creating surveys (a topic beyond the scope of this article), we direct readers to the survey literature21,23,26,29,31,32 and articles designed for health professions education audiences.13,22,49,50

Limitations

Given the confidential nature of data collection and the anonymous nature of data analysis, we were unable to engage in member checking. As with other qualitative studies with small sample sizes, the results may not generalize to other contexts and residency cultures. In addition, self-selection bias potentially played a role during study recruitment: residents who volunteered may have been more knowledgeable about survey methodology than other residents. While FGs were mixed specialty, FGs with a more diverse range of specialties may have yielded different comments. Last, a greater percentage of participants were PGY-1s, perhaps due to the timing of FGs or interest in surveys for quality improvement projects. It was beyond the scope of this study to examine the influence of program or training level on perspectives shared in FGs.

Klabunde and colleagues, in a “call to action,”11 noted the need for qualitative research into barriers to survey participation for physicians. Our study highlighted barriers to residents' full participation at multiple stages of the survey process, including the invitation and survey completion stages. We found that GME trainees reported non-participation in numerous surveys. In addition, both satisficing and breakoffs were commonly reported by participants across FGs. While it is unclear how prevalent satisficing behavior is in surveys of GME trainees, this problematic response behavior8,24 should be considered when reporting survey results and relying on survey data for decision-making or research purposes. Though organizational and residency program cultures differ across institutions, we believe the study findings, including potential barriers to resident participation, should be relevant to anyone in GME using survey methodology for programmatic data collection, accreditation, and research purposes.

The authors would like to thank the focus group participants from residency programs at Cleveland Clinic who generously contributed their time and thoughtful comments to make this study possible.

References

1. Colbert CY, French JC, Arroliga AC, Bierer SB. Best practice versus actual practice: an audit of survey pretesting practices reported in a sample of medical education journals. Med Educ Online. 2019;24(1).
2. Artino AR, Phillips AW, Utrankar A, Ta AQ, Durning SJ. “The questions shape the answers”: assessing the quality of published survey instruments in health professions education research. Acad Med. 2018;93(3):456-463.
3. Phillips AW, Artino AR. Lies, damned lies, and surveys. J Grad Med Educ. 2017;9(6):677-679.
4. Colbert CY, Diaz-Guzman E, Myers JD, Arroliga AC. How to interpret surveys in medical research: a practical approach. Cleve Clin J Med. 2013;80(7):423-435.
5. Dillman DA, Smyth JD, Christian LM. Aural versus visual design of questions and questionnaires. In: Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. 4th ed. Hoboken, NJ: John Wiley & Sons, Inc; 2014:169-227.
6. Desimone LM, Le Floch KC. Are we asking the right questions? Using cognitive interviews to improve surveys in education research. Educ Eval Policy Anal. 2004;26(1):1-22.
7. Accreditation Council for Graduate Medical Education. Resident/fellow and faculty surveys. 2021.
8. Barge S, Gehlbach H. Using the theory of satisficing to evaluate the quality of survey data. Res High Educ. 2012;53:182-200.
9. Brtnikova M, Crane LA, Allison MA, Hurley LP, Beaty BL, Kempe A. A method for achieving high response rates in national surveys of US primary care physicians. PLoS One. 2018;13(8):e0201755.
10. Cunningham CT, Quan H, Hemmelgarn B, et al. Exploring physician specialist response rates to web-based surveys. BMC Med Res Methodol. 2015;15(1).
11. Klabunde CN, Willis GB, Casalino LP. Facilitators and barriers to survey participation by physicians: a call to action for researchers. Eval Health Prof. 2013;36(3):279-295.
12. Taylor T, Scott A. Do physicians prefer to complete online or mail surveys? Findings from a national longitudinal survey. Eval Health Prof. 2018;42(1):41-70.
13. Cho YI, Johnson TP, VanGeest JB. Enhancing surveys of health care professionals: a meta-analysis of techniques to improve response. Eval Health Prof. 2013;36(3):382-407.
14. US Food & Drug Administration. Sec. 56.111 Criteria for IRB approval of research. Code of Federal Regulations Title 21. 2021.
15. Sullivan GM. Education research and human subject protection: crossing the IRB quagmire. J Grad Med Educ. 2011;3(1):1-4.
16. Draugalis JLR, Plaza CM. Best practices for survey research reports revisited: implications of target population, probability sampling, and response rate. Am J Pharm Educ. 2009;73(8).
17. Wiebe ER, Kaczorowski J, MacKay J. Why are response rates in clinician surveys declining? Can Fam Physician. 2012;58(4):e225-e228.
18. Sebo P, Maisonneuve H, Cerutti B, Fournier JP, Senn N, Haller DM. Rates, delays, and completeness of general practitioners' responses to a postal versus web-based survey: a randomized trial. J Med Internet Res. 2017;19(3):e83.
19. Ramani S, Post SE, Könings K, Mann K, Katz JT, van der Vleuten C. “It's just not the culture”: a qualitative study exploring residents' perceptions of the impact of institutional culture on feedback. Teach Learn Med. 2017;29(2):153-161.
20. Peytchev A. Survey breakoff. Public Opin Q. 2009;73(1):74-97.
21. Groves RM, Fowler FJ, Couper MP, Lepkowski JM, Singer E, Tourangeau R. Survey Methodology. 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc; 2009.
22. Willis GB, Artino AR. What do our respondents think we're asking? Using cognitive interviewing to improve medical education surveys. J Grad Med Educ. 2013;5(3):353-356.
23. Willis GB. Current developments in cognitive testing of survey questions. American Association for Public Opinion Research. 2021.
24. Hamby T, Taylor W. Survey satisficing inflates reliability and validity measures: an experimental comparison of college and Amazon Mechanical Turk samples. Educ Psychol Meas. 2016;76(6):912-932.
25. Sanchez ME. Effects of questionnaire design on the quality of survey data. Public Opin Q. 1992;56(2):206-217.
26. Bradburn NM, Sudman S, Wansink B. Asking Questions: The Definitive Guide to Questionnaire Design—For Market Research, Political Polls, and Social and Health Questionnaires. 2nd ed. San Francisco, CA: Jossey-Bass; 2004:317-318.
27. Fowler FJ. How unclear terms affect survey data. Public Opin Q. 1992;56(2):218-231.
28. Miller K. Introduction. In: Miller K, Willson S, Chepp V, Padilla JL, eds. Cognitive Interviewing Methodology. Wiley Series in Survey Methodology. Hoboken, NJ: John Wiley & Sons, Inc; 2014:1-5.
29. Willis GB, Royston P, Bercini D. The use of verbal report methods in the development and testing of survey questionnaires. Appl Cogn Psychol. 1991;5(3):251-267.
30. Collins D. Cognitive interviewing: origin, purpose and limitations. In: Collins D, ed. Cognitive Interviewing Practice. Los Angeles, CA: SAGE Publications Inc; 2015:3-27.
31. Tourangeau R, Rips LJ, Rasinski K. The Psychology of Survey Response. Cambridge, UK: Cambridge University Press; 2000.
32. Willis GB, Miller K. Cross-cultural cognitive interviewing: seeking comparability and enhancing understanding. Field Methods. 2011;23(4):331-341.
33. Organization for Economic Co-operation and Development. Methodological considerations. In: OECD Guidelines on Measuring Trust. 2021.
34. Krumpal I. Determinants of social desirability bias in sensitive surveys: a literature review. Qual Quant. 2013;47(4):2025-2047.
35. Van De Mortel TF. Faking it: social desirability response bias in self-report research. Aust J Adv Nurs. 2008;25(4):40-48.
36. Akl EA, Maroun N, Klocke RA, Montori V, Schünemann HJ. Electronic mail was not better than postal mail for surveying residents and faculty. J Clin Epidemiol. 2005;58:425-429.
37. Yarger JB, James TA, Ashikaga T, et al. Characteristics in response rates for surveys administered to surgery residents. Surgery. 2013;154(1):38-45.
38. Chaiyachati KH, Roy J, Asch DA, et al. Improving longitudinal survey participation among internal medicine residents: incorporating behavioral economic techniques and avoiding Friday or Saturday invitations. J Gen Intern Med. 2019;34(6):823-827.
39. Grava-Gubins I, Scott S. Effects of various methodologic strategies: survey response rates among Canadian physicians and physicians-in-training. Can Fam Physician. 2008;54(10):1424-1430.
40. Chepp V, Gray C. Foundations and new directions. In: Miller K, Willson S, Chepp V, Padilla JL, eds. Cognitive Interviewing Methodology. Wiley Series in Survey Methodology. Hoboken, NJ: John Wiley & Sons, Inc; 2014:7-14.
41. Krueger RA, Casey MA. Focus Groups: A Practical Guide for Applied Research. Thousand Oaks, CA: SAGE Publications Inc; 2000.
42. Guest G, Namey E, McKenna K. How many focus groups are enough? Building an evidence base for nonprobability sample sizes. Field Methods. 2017;29(1):3-22.
43. Saunders B, Sim J, Kingstone T, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2018;52(4):1893-1907.
44. Hennink MM, Kaiser BN, Weber MB. What influences saturation? Estimating sample sizes in focus group research. Qual Health Res. 2019;29(10):1483-1496.
45. Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277-1288.
46. Thorne S. Data analysis in qualitative research. Evid Based Nurs. 2000;3:68-70.
47. Bengtsson M. How to plan and perform a qualitative study using content analysis. NursingPlus Open. 2016;2:8-14.
48. Korstjens I, Moser A. Series: practical guidance to qualitative research. Part 4: trustworthiness and publishing. Eur J Gen Pract. 2018;24(1):120-124.
49. Gehlbach H, Artino AR. The survey checklist (manifesto). Acad Med. 2018;93(3):360-366.
50. Artino AR, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE Guide No. 87. Med Teach. 2014;36:463-474.

Author notes

Editor's Note: The online version of this article contains the interview guide used in the study.

Funding: The authors report no external funding source for this study.

Competing Interests

Conflict of interest: The authors declare they have no competing interests.

This work was previously presented at the Midwest Association of Public Opinion Research, Chicago, IL, November 22–23, 2019; 75th Annual Meeting for the American Association for Public Opinion Research, Atlanta, GA, May 14–17, 2020; and the International Conference on Residency Education, Vancouver, Canada, September 25, 2020.
