Background Despite calls for increased direct observation for workplace feedback and assessment, this form of observation is not always feasible. A considerable amount of trainee observation occurs via indirect means.
Objective To explore attending and resident perspectives on direct and indirect observation to determine their impact on workplace feedback and assessment.
Methods Ten attending and 8 resident physicians were interviewed about their experiences and perspectives regarding direct and indirect observation. Data were collected from January to November 2021. Interview transcripts were analyzed using inductive thematic analysis.
Results Major themes identified included: varying descriptions of direct and indirect observation, factors influencing the selection of observation type, the perceived utility of each observation type, and the perceived quality and credibility of feedback generated by each observation type. Direct observation was preferred for assessment of technical, communication, and leadership skills. Attending physicians felt they could provide an accurate assessment of the learner’s clinical reasoning and management using indirect observation. However, residents did not consistently find the resultant feedback to be credible. This tension seemed to stem from residents not being aware of how indirect observation informed attending physicians’ judgements and feedback. Residents with more insight on the use and methods for indirect observation perceived this feedback as credible.
Conclusions Participants identified that indirect methods can be useful and appropriate for the assessment of clinical reasoning and for fostering independence. Residents demonstrating an understanding of how indirect observation informed attending physicians’ assessments appeared to find this feedback more credible.
Introduction
It is widely accepted in the era of competency-based medical education (CBME) that direct observation in the clinical setting is a desired approach to workplace-based assessment.1,2 Direct observation has been incorporated as a major component of competency-based training programs, as it supports the assessment of the knowledge, skills, and attitudes required to become a competent physician.3-7 Further, direct observation facilitates the identification of trainee deficiencies and has been shown to improve patient care through enhanced supervision.8 However, multiple barriers to direct observation implementation have been identified in the literature.9,10
Previous studies have highlighted that direct observation is not always feasible or appropriate.9 Competing administrative and clinical duties have been identified as challenges that often preclude supervisors from directly observing their trainees in the workplace.9 LaDonna et al also describe the “observer effect,” which suggests that direct observation does not always lead to accurate assessment of the learner’s authentic skills and patient interactions.11 Additionally, a considerable amount of assessment and feedback throughout medical training occurs through means other than directly watching a trainee perform a task. Indirect forms of observation may provide an avenue to address some of these concerns. As such, there have been further calls to consider the role and value of indirect observation.12,13
To date, little research has explored how other forms of supervision are operationalized and their perceived impact on learning. A recent study of family medicine residents found that indirect observation via oral case presentations can provide a strong surrogate for medical information gleaned from a resident-attending physician interaction.14 Indirect observation also likely continues to play an important role in providing feedback and assessment of non-procedural physician tasks such as clinical reasoning and generation of management plans.10,12
Although indirect observation has received increased attention within the CBME literature, it remains poorly defined, and there is ongoing debate about the distinction between direct and indirect methods.9,10 Gauthier et al propose that some forms of observation that have traditionally been considered indirect, such as the case presentation, may actually enable supervisors to “directly observe” a trainee’s cognitive skills, such as their clinical reasoning abilities.10 They emphasize that some skills require a different approach, as they are not observable in the literal way that procedural skills can be observed.10 Other studies have delineated direct versus indirect observation more strictly, defining direct observation as the physical presence of the supervisor watching the clinical encounter.8,9
Despite our limited understanding, both direct and indirect observation continue to be utilized as methods of enacting trainee supervision for assessment and feedback. Therefore, the nuances regarding the educational value and impact on learning afforded by different observation types require further characterization. For example, what performance data do supervisors gather from direct versus indirect observation? Do learners receive feedback generated by direct and indirect observations differently? And if so, why? These important questions have yet to be examined in the literature. This study aimed to explore attending and resident perspectives on direct and indirect observation and the impact of these different observation types on workplace learning.
KEY POINTS
What Is Known
Indirect observation can be a way to supervise trainees when direct observation is not possible, but we lack an understanding of what it contributes to workplace feedback and assessment.
What Is New
This qualitative study of trainees and faculty using inductive thematic analysis reveals that while direct observation is favored for assessing technical skills and leadership, indirect observation is deemed adequate by attendings for evaluating clinical reasoning and management. However, residents question the credibility of feedback from indirect observation unless they understand the underlying methods and rationale for its use.
Bottom Line
Enhancing resident understanding of how indirect observation informs assessments could improve the perceived credibility of feedback, suggesting a need for clearer communication about observation practices to support effective assessment and feedback mechanisms in medical training.
Methods
Given the paucity of literature exploring the perspectives and value of various observation types, we conducted a qualitative study using thematic analysis to examine resident and attending physicians’ experiences and perspectives.15 Thematic analysis was chosen as we anticipated diverse perspectives from resident and attending physicians. We sought to use a structured approach to break down complex data, allowing us to summarize key features while providing a nuanced understanding of the participants’ experiences and perspectives.
Participants were recruited via email to program directors of all specialties with residency programs at The Ottawa Hospital, a large tertiary care medical center in Ontario, Canada. Program directors distributed the recruitment email to attending and resident physicians within their specialty, who were invited to participate in semistructured interviews exploring their perspectives on observation and feedback in the workplace. Eighteen physicians participated in the study (10 attending and 8 resident physicians). Purposive sampling was used to ensure representation from a wide variety of specialties, both procedural and non-procedural, as well as across gender, resident training year, and attending physician seniority.
Interviews were conducted by the primary investigator (K.O.) between January and November 2021. Interviews lasted 30 to 60 minutes, were conducted via conference call, and were audio recorded for transcription. Initial transcription was completed via Microsoft Office voice detection software, followed by manual verification by author M.R. Two interview guides were used, one for attending physicians and one for residents. The interview guides were developed by the research team (K.O., W.J.C., N.D., J.L.) and informed by the existing literature on direct and indirect observation as well as the teaching and learning experiences of the research team. The interview guide was piloted with one attending physician to ensure clarity prior to data collection. Attending physicians were asked how they observe and interact with their learners, as well as their perspectives on how the type of observation used impacted the feedback provided to residents. Residents were asked how they are observed in various clinical settings, and what their perceptions were of the feedback and assessment they receive from various observation strategies. The interview guides can be viewed in full in the online supplementary data.
Data analysis followed an inductive thematic analysis approach. Interview transcripts were read and reread to ensure familiarity with the data. Initial codes were generated by identifying key words and phrases that captured participants’ perspectives. The first 6 interviews were coded independently by 2 members of the research team (K.O., J.L.) to ensure consistency and comprehensiveness in coding. Subsequent interviews were coded by the primary investigator (K.O.). The research team met in full after every 3 to 4 interviews to review and refine identified themes. The interview guide was modified as needed to explore identified themes in greater depth. For example, as it became apparent that there were concerns around feedback credibility in relation to indirect observation, questions were added to the resident interview guide to explore this further. Through an iterative process of reviewing and refining, the research team developed a final thematic framework by consensus, and all interviews were recoded according to this framework. Data collection continued until the research team agreed that data sufficiency had been reached, defined as the point at which new interviews did not generate substantially new themes or insights. Quotations from participants presented below illustrate the themes and are labeled with the participant identification number in parentheses (eg, A1 indicates attending 1, and R1 indicates resident 1). Representative quotes were chosen if they accurately and concisely represented one of the main themes of our results.
This research study was approved by the Ottawa Health Science Network Research Ethics Board.
Personnel and Reflexivity
This research was conducted by a resident physician (K.O.), 3 clinician educators (W.J.C., J.L., N.D.), 1 PhD-trained qualitative methodologist (A.M.), and 1 medical student (M.R.). K.O. and M.R. had experience being observed via direct and indirect methods, and receiving feedback and assessment based on both. This allowed for nuanced discussion during interviews, particularly from the trainee perspective. The clinician educators were experienced in utilizing various forms of observation regularly in different clinical settings and thus were keen to explore how direct and indirect observation were perceived by their colleagues and trainees. A.M. offered nonclinical viewpoints that helped the research team constantly question assumptions.
Results
Participant demographics are displayed in Table 1. Residents and attending physicians had a broad range of seniority and represented procedural and non-procedural specialties. The number of participants from each specialty is outlined in Table 2.
Variable Descriptions of Indirect and Direct Observation
Residents often spoke of direct and indirect observation as “black and white”: the attending physician was either directly watching (observing the patient encounter in the room) or not (eg, a case discussed purely via oral case review). By contrast, attending physicians seemed to approach observation as more of a spectrum.
Direct observation was often described by residents as the attending physician being physically present in the room with the resident and patient: for example, observing them in the operating room, watching them perform a physical examination maneuver during bedside rounds or in clinic, or watching them lead the resuscitation of a patient in the emergency department or intensive care unit. Attending physicians, however, described other “direct” methods of observation that did not involve their physical presence in the room. For example, some attendings described listening to parts of an encounter from behind the curtain, which allowed them to assess and provide feedback on communication and history-taking skills. Others discussed going to see the patient on their own after reviewing the case with the resident to confirm the accuracy of the resident’s assessment. Discrepancies between the attending physician’s assessment and what was reported to them by the resident often led to meaningful feedback and learning interactions.
Various methods of indirect observation were reported by both attending physicians and residents. These included reviewing medical records and triage notes and obtaining feedback from nurses, respiratory therapists, and patients. In addition, attendings would often ask probing and clarifying questions during case review to confirm the resident’s understanding and synthesis of the data gathered. This was further assessed by asking the learner to generate a differential diagnosis and management plan. These methods are exemplified in the following quote from participant A4:
“Maybe there’s something in the way it’s presented leading me to have some questions or concerns. I’ll ask questions and the answer is still unknown multiple times… There’s different ways to do that where I’ll go back and see the patient with [the resident] and ask a few questions. Or I will sort of stand behind the curtain, listening in at times.”
Type of Observation Selected: Influenced by Various Factors
Attending physicians and residents described multiple factors that contributed to the selection of observation method. These included the clinical environment, medical acuity, stage of training, who initiated the observation (attending vs learner), the goal of the observation, and the desire for independence. As exemplified in the following quote, participants frequently discussed the working relationship between the learner and the attending, for example, whether they had previously worked together and whether they had already established trust in each other.
“[If I have] worked with them at least once before, you get a little bit more confidence or comfort with each other. So [they] get a bit more latitude.” (A4)
Of note, there was a mismatch between residents and attendings regarding the goals of direct observation. Attending physicians often used direct observation to ensure patient safety and during high-acuity cases, whereas residents expressed a desire for direct observation for the purpose of receiving coaching and feedback on specific clinical skills tailored to their learning needs.
Perceived Utility of Direct vs Indirect Observation
In general, attending physicians felt they could accurately assess the learner’s thought process, clinical reasoning, management plans, and case presentation through indirect observation.
“I can identify if they have asked the right questions, gotten all the right information… I can identify whether they’ve interpreted investigations appropriately. I can interpret how they synthesize everything, how they come up with their impression and I can see their decision-making.” (A6)
Residents reported that indirect observation was a valuable tool to help them develop confidence through independence while still being supervised. One of the senior resident participants stated:
“I think there’s a huge value of, you know, like running the first arrest you run with no staff in the room.” (R4)
The importance of fostering independence through indirect observation was echoed by attending physicians, with one participant reflecting that it “is really required to allow residents and medical students to gain independence and confidence.” (A1)
Residents and attendings thought direct observation was valuable for junior residents to support the development of foundational clinical skills. However, senior residents also had a desire for direct observation to fine-tune their skills and receive additional coaching prior to independent practice. One senior resident noted in particular that they needed to prompt attendings to watch them perform procedures for the purpose of coaching. “I know that they trust me, but I feel like there’s a lot to be learned from staff that have been doing it for a long time.” (R7)
Perceived Quality of Feedback Generated by Observation Type
Resident participants reported that both direct and indirect observation could yield high-quality, specific, timely, and targeted feedback. Participant R4 stated, “Some of the best feedback I’ve had around medical management has been indirect.” However, the content of the feedback differed between direct and indirect observation. The same resident also noted, “It’s not that it’s better feedback [compared to direct observation], it’s just different.”
Attendings felt they could gather a strong assessment of thought process, clinical reasoning, managerial skills, and interpersonal skills, and provide high-quality feedback through indirect observation. One attending physician stated, “Evaluation of their clinical synthesis doesn’t need to be done with me standing in the room right behind them while they talk.” (A4)
When discussing indirect observation through case review, one attending reflected, “I can see their decision-making.” (A6)
Whether the information was obtained via direct or indirect observation, feedback was felt to be higher quality when it was intentional and actionable. Similarly, feedback was felt to be lower quality if specific steps for improvement were not offered.
Perceived Credibility of Feedback by Observation Type
Residents, in general, reported that they perceived feedback from direct observation to be more consistently credible than feedback from indirect observation. This seemed to stem from the knowledge that judgements were made not from inferences, but rather from the attending’s own experience observing the encounter. This is demonstrated in the following 2 statements from resident participants:
“If I know that you’ve actually watched me do stuff then I would say their feedback is sound.” (R5)
“…during inpatient rotations there’s always been an opportunity for staff to [directly] observe me… so any feedback I would receive would make sense.” (R8)
Despite their acknowledgement that feedback from indirect observation could be useful, resident physicians struggled at times to perceive it as credible. This tension seemed to stem from not understanding how indirect observation informed attending physicians’ judgements and the subsequent feedback and assessment provided.
“There are definitely times where you’re getting feedback and you’re like—I don’t know where you got this from.” (R5)
“When I’m not observed, all they have to rely on is what I tell them I found.” (R1)
While some residents questioned the credibility of feedback and assessments generated from indirect observation, residents with insight on the use and methods for indirect observation perceived this feedback as more credible. For example, when asked about feedback from indirect observation and how staff were confirming their clinical assessments, one resident stated, “It’s more of a conversation than it is me just presenting a case and them saying yes or no. It’s overall discussing my thought process and how I came to the decisions that I made.”
This seemed to align with attending physicians’ perspectives about feeling confident with the use of indirect observation to make judgements about the learner’s thought process and clinical decision-making.
Furthermore, when residents understood how indirect methods of observation informed their attending’s judgements, this perceived credibility enabled them to see value in the feedback and assessments generated. One resident noted, “I found it really useful for discussing nuances… it was easier to discuss practice variation and like some, I guess, some of the grayer areas of medicine.” (R4)
Discussion
Direct observation, although important in medical training, is not always feasible or practical, and feedback and assessment are still generated in large part through indirect methods of observation. Through thematic analysis, this study aimed to explore attending and resident perspectives on direct and indirect observation and the impact of these different observation types on workplace learning.
Direct observation is often used by attendings to ensure patient safety,9 and more recently there has been an emphasis on direct observation for coaching and feedback. Additionally, feedback from direct observation is typically perceived by residents as credible.16,17 On the other hand, the literature suggests that indirect observation appears well suited for assessment of a learner’s clinical reasoning, and also provides opportunities to foster a learner’s independence.9,10 Our study results align with these prior findings and suggest that the credibility of feedback generated from indirect observation may be influenced by the recipient’s understanding of how indirect methods are used to inform judgments of performance.
Feedback credibility is determined by a multitude of previously described factors. Credibility is known to be influenced not only by the content of the feedback, but also by the methods of delivery, the context of the teacher-learner relationship, the learning culture and environment, and the perceived knowledge and skill of the individual providing feedback.18,19 Eva and colleagues17 identified that the feedback recipient’s perception of source credibility was an important factor in feedback acceptance. They suggest that it is just as important to consider feedback from the perspective of how it is received, rather than simply focusing on the facets of effective delivery. Research has shown that credibility judgements influence which information will or will not be used by learners to improve later performance.19 It is therefore important that we fully understand how learners will process, integrate, or reject feedback provided to them.
Our study adds a new facet to the learner’s credibility assessment. Our findings suggest that for indirect observation to be effective, learners should be made aware of how they are being indirectly observed and assessed. Attending physicians in this study often described very clear methods of how they ensured their learners were providing safe and effective care to their patients, but many residents were completely unaware of how they were being indirectly observed. As a result, when it came time to receive feedback, many of them had little sense of how their attendings were drawing conclusions about their performance. Conversely, residents who did understand how attendings were collecting and using information from indirect observation did not share the same concerns. This element of discernment is not typically required in situations of direct observation, as it is clear where the data are coming from; therefore, residents felt that they could more reliably “trust” the feedback that was being provided. Our study suggests that the recipient needs to understand not only that they have been observed, but also how they have been observed and how observations have informed judgements of performance, especially when it is not as obvious as being watched directly by an attending in the same room. From this, we suggest that attending physicians need to be more transparent about how they are gathering data to observe their learners through indirect means, in order for residents to perceive this feedback as credible and to strengthen the educational alliance.
While greater transparency may enhance the credibility of feedback generated from indirect observation, it is possible that trainees’ greater recognition of when and how they are being observed could lead to changes in their behavior. The observer effect described by LaDonna et al11 suggests that a trainee’s performance in the presence of an observer may not always reflect authentic performance or their “usual” practice, which has important implications for feedback credibility and assessment validity. Whether this observer effect manifests similarly when trainees are indirectly observed, particularly if they are more cognizant of how performance information is gathered, remains an area for future inquiry.
Limitations
This study examined resident and attending physicians’ perspectives of direct and indirect observation at a single academic medical center, and our findings are likely influenced by local cultural norms. Additionally, a large number of participants were from an emergency medicine training program, though their perspectives appeared to align with those of study participants from other disciplines.
Conclusions
Indirect observation methods appear to be well suited for assessments of learners’ clinical reasoning and can be used to help foster learner independence. While direct observation has been touted as fundamental to credible feedback, our study suggests that residents with a deeper understanding of how indirect methods of observation are used to inform attending physicians’ assessments perceive feedback as more credible.
References
Editor’s Note
The online supplementary data contains the interview guides used in the study.
Author Notes
Funding: The authors report no external funding source for this study.
Conflict of interest: The authors declare they have no competing interests.
This work was previously presented at the International Conference on Residency Education, October 27-29, 2022, Montreal, Quebec, Canada, and the Canadian Association of Emergency Physicians Conference, May 26-June 1, 2022, Quebec City, Quebec, Canada.