Background Core to competency-based medical education (CBME) is the use of frequent low-stakes workplace-based assessments. In the Canadian context, these observations of performance are framed around entrustable professional activities (EPAs).

Objective We aimed to explore residents’ real-world perspectives of EPAs and their perceived impact on learning, because assessments perceived to be “inauthentic,” or not truly reflective of their lived experiences, may interfere with learning.

Methods Using constructivist grounded theory, we conducted 18 semistructured interviews in 2021 with residents from all programs that had implemented CBME at one tertiary care academic center in Canada. Participants were recruited via email through respective program administrators. Data collection and analysis occurred iteratively, and categories were identified using constant comparative analysis.

Results Residents were strikingly polarized, perceiving EPAs as either a valuable opportunity for professional growth or as an onerous requirement that interfered with learning. Regardless of what view participants held, all perspectives were informed by: (1) the program administration and the perceived messaging from program to residents; (2) faculty assessors and their perceived degree of engagement, or “buy-in” with the EPA system; and ultimately (3) learner behavior. We theorized from these findings that all 3 aspects must be working in tandem for the assessment system to function as intended.

Conclusions From the learners’ perspective, there exists a dynamic, interdependent relationship among the 3 CBME stakeholders. As such, the perceived value of the EPA assessment system can only be as strong as the weakest link in the chain.

Core to competency-based medical education (CBME) is the use of frequent low-stakes workplace-based assessments. In the Canadian context, entrustable professional activities (EPAs) stimulate the direct observations and formative feedback used to identify and address training gaps.1 In theory, EPAs provide a practical means of observing and documenting a learner’s acquisition of competence by collecting assessments of clearly defined skills and abilities that learners must acquire at various stages in their training.2-5 In practice, however, EPA data may not be truly “authentic” or reflective of learners’ abilities and clinical experience. Thus, while EPAs may facilitate assessment, critics argue that deconstructing the complexities of medical practice into discretely measurable components risks becoming reductionist, failing to generate an accurate, holistic picture of learners’ competence.6,7

This lack of perceived authenticity may be why some residents view workplace-based assessments like EPAs not as valuable learning opportunities, but rather as “checklists.”8,9 This is relevant because learners not only tend to disregard feedback generated from assessments they perceive to be inauthentic, but, by being selective about which assessments are documented to fulfill program requirements, may also miss opportunities to address their weaknesses.10

CBME as a whole aims to be learner-centered, yet the current literature focuses largely on faculty perspectives and implementation challenges.11-17 The few studies that have examined trainee perspectives were conducted within a specific specialty or program during early implementation.16,18,19 Ensuring the long-term success of CBME relies on a broader understanding of whether residents, as primary stakeholders, perceive EPA assessments to be truly reflective of their clinical experiences. We aimed to address this gap in the literature by exploring residents’ real-world perspectives of EPAs and their perceived impact on learning.

What Is Known

Learner buy-in for competency-based medical education (CBME) is critical, yet learners’ voices remain relatively uncaptured in the literature.

What Is New

This qualitative study of residents adds their perspective to the literature informing CBME approaches.

Bottom Line

Program directors wishing to improve their entrustable professional activity or CBME system can incorporate these findings to improve the likelihood of success.

Study Design

Since EPA-based assessment processes are relatively new, little is known about their implementation and perceived value for learning. Therefore, we used a constructivist grounded theory (CGT) approach to guide our inquiry. CGT analysis is grounded in participant experience and co-constructed with researchers, meaning that researchers’ insider perspectives are positioned as a strength for generating meaning.20-22 

Setting and Population

In Competency By Design (CBD), the Canadian CBME model, EPAs are conceptualized as discipline-specific tasks that learners can eventually be entrusted to perform without supervision within a given health care context.5,23 To progress to the next training stage, residents must demonstrate sufficient competence and independence in performing EPAs.24 EPAs are assessed by individual faculty members, and data are compiled and reviewed by competence committees. Judgments of EPA “achievement” are guided by a national set of recommendations meant to inform competence committee decisions, not provide strict criteria.25,26 Throughout this article, we use “EPA” when referring to a discrete task, and “EPA assessment” when discussing the process of assessing EPAs.

All residents enrolled in training programs that had implemented CBD at one tertiary care academic teaching hospital in Canada received an invitation to participate. The invitation was sent by program administrators via email on behalf of the research team. Eighteen residents (11 female, 7 male) from 13 different medical and surgical training programs agreed to be interviewed (Box). Because many programs had implemented CBD only within the 3 years prior to data collection, most participants (n=13) were in postgraduate year 2 (PGY-2). However, there were participants in PGY-3 (n=3), PGY-4 (n=1), and PGY-5 (n=1).

Box Participants’ Programs

Internal Medicine

Emergency Medicine

Neurology

Anesthesia

Pathology

Physical Medicine and Rehabilitation

Psychiatry

General Surgery

Neurosurgery

Obstetrics and Gynecology

Otolaryngology

Urology

Orthopedic Surgery

Data Collection and Analysis

Individual, semistructured interviews lasting from 45 to 60 minutes were conducted between January and November 2021 virtually via Zencastr.com by one of 2 authors (E.A. or R.M.). During interviews, residents were asked to describe their day-to-day experiences with EPA assessments, and to reflect on their EPA data in relation to their perceived clinical performance and lived experiences (interview guide provided as online supplementary data). Questions also explored residents’ perceptions of their respective programs and assessment culture, and how they felt EPA assessments affected their learning. Interviews were recorded, transcribed verbatim, and de-identified prior to analysis.

Analysis was performed in 3 progressively interpretive stages: initial, focused, and theoretical. Initial codes and preliminary categories were identified and refined over multiple rounds of focused coding. Throughout this process, the interview guide evolved dynamically to theoretically sample new ideas and evolving insights. After finalizing codes and categories, the team engaged in theoretical coding—an interpretive, abstract process in which researchers determined how individual categories fit together to form a higher-level analytical story around a core category. A constant comparative approach was used throughout, meaning that data within and among transcripts were continuously scrutinized. All authors documented their reflections, interpretations, and questions as they independently read either full transcripts or selections of coded data. One author (E.A.) led the analysis, compiling these reflections and team discussions into formal analytical memos that all team members reviewed and contributed to. Through these written reflections and verbal discussions, we determined that participants’ strikingly polarized views about EPAs as an assessment and feedback mechanism were the primary “story” of our data. After identifying “striking polarization” as the core category, we drew on our interpretations of participants’ accounts to theorize about why EPAs may or may not be perceived as valuable for learning. After 18 interviews, our team agreed that, while continuing data collection might generate additional nuance, the collected data were sufficient for answering the exploratory research question.18,27

Personnel and Reflexivity

This research was conducted by a resident physician (E.A.), 2 clinician educators (W.J.C., J.M.L.), a PhD-trained qualitative methodologist and medical education researcher (K.L.), and a research assistant (R.M.) with qualitative research expertise. E.A. had experience collecting EPA assessments from a resident perspective, and both W.J.C. and J.M.L. were experienced in completing them as faculty. E.A., W.J.C., and J.M.L. all were from an early-adopting, higher-functioning CBME program and thus were keen to understand the challenges and factors that may play a role in the successful implementation of EPAs across different programs. K.L. and R.M. offered nonclinical viewpoints that helped the other researchers constantly question assumptions.

This study received ethics approval from the Ottawa Health Science Network Research Ethics Board (protocol ID: 20200665-01H).

Residents believed that EPAs have the potential to serve as a roadmap to competence, providing a real-world list of practical skills, abilities, and tasks they would need to be able to perform independently within their respective fields of practice. Although a few participants spoke in shades of gray, most described EPAs and their assessment in black-and-white terms, perceiving them as either a valuable opportunity for professional growth or as an onerous requirement that interfered with learning. Regardless of what view participants held, all perspectives were informed by a dynamic interplay between: (1) the program administration and the perceived messaging from program to residents; (2) faculty assessors and their perceived degree of engagement, or buy-in with the EPA assessment system; and ultimately (3) learner behavior (Figure).

Figure

Entrustable Professional Activities (EPA) Stakeholders and Their Interrelationships

Messaging From Program Administration

Many residents felt that the EPA assessment system was communicated to them by their respective programs as “just a numbers game,” (participant [P]2) and that the quantity of achieved EPAs was “basically the only thing that matters in our training” (P3). These learners felt the need to “get their numbers” (P16) with the goal of meeting strict perceived quantitative program requirements to progress to the next stage in their training.

“It has a major detrimental impact on learning because it diverts a lot of our time towards chasing them, towards pre-filling them out, towards worrying about our numbers versus time that we could have been spending actually learning, actually studying, actually doing something productive.” (P16)

As a measure of their skills and abilities, residents generally felt that EPA assessments “probably do not currently serve as a good measure of competence” (P2) because of the metric-driven selective triggering of these assessments. Residents would tend to “only do [EPA assessments] in situations that they do well because it doesn’t count if you don’t pass it.” (P14) They believed this behavior would generate an incomplete overall picture by only measuring skills in which they already felt competent, and not the areas requiring further development.

However, residents also seemed to believe that enough EPA data could still form a relatively accurate assessment of one’s competencies as a whole:

“My final competency, probably, yeah. My process of becoming competent, no. At the start of residency you know nothing, and then at the end you know something. All my EPAs I think demonstrate is that I now know something, but they don’t demonstrate that at one time I didn’t know things. And they didn’t demonstrate that at one time I knew half the things and didn’t know half the things.” (P14)

Thus, although many residents perceived that EPA assessments collected over time might be reflective of their skills and experiences, they were not perceived as useful for capturing either their day-to-day progress or their trajectory toward overall development.

Some residents perceived a program culture of “service over teaching,” where learning was “just not a priority,” (P1) with “no good system to integrate EPAs into our daily work.” (P2) In contrast, in programs where EPAs were felt to be valuable feedback and assessment tools, there was an expectation from both learner and faculty at the end of the workday, or even immediately after an observed task, that an EPA assessment should be completed. Day-to-day workload, even when intense, was not perceived by their faculty as an excuse for failing to complete EPA assessments or to provide useful feedback, as “even if it was a really busy service, they made time for every resident.” (P11)

In these cases, it seemed that EPAs were clearly communicated to residents by their respective programs as a useful learning tool, and not merely a quantitative metric representing a dichotomous pass-or-fail judgment. Moreover, the overall quality and richness of assessments were valued over their quantity.

“The way we use EPAs is not like ‘if we don’t have enough EPAs we’re going to fail.’ The way we use EPAs is really meant for teaching… It’s almost like I can use this tool to get more feedback from my seniors, …as opposed to the pressure of ‘if I don’t get this EPA done, I’m never going to move on.’” (P12)

Faculty Buy-In

Residents were clearly influenced by the attitudes and behaviors of their faculty and seemed more prone to “just play the numbers game” if they perceived their faculty as having low levels of buy-in.

“If the majority of staff don’t even care about it at all, like it’s a hindrance to them too and they feel like it wastes their time, then we feel like it wastes our time.” (P16)

One of the most commonly described effects of an unengaged faculty was the perception that asking for an EPA assessment was akin to asking for a favor. One resident described the process as a “very polite, formal, nervous request.” (P14) Being off-service and unfamiliar with new faculty also proved challenging for obtaining EPA assessments, as there was significant uncertainty surrounding “which staff are receptive to filling out EPAs… It really varies.” (P6)

One resident expressed that the “gold standard is to have them done in real time and to have them submitted with both people present, right there.” (P8) Adherence to this “gold standard” directly affected the learning value residents ascribed to assessments, suggesting that a face-to-face conversation was far more informative than narrative comments documented after the fact:

“I find it more useful when you’re actually discussing with that person… The ones you send through email…are also useful, but I find it never sticks in your head as well as if you actually just talk to the person and remember what they say for the rest of your life.” (P13)

Residents who did not find EPAs useful for learning described a pattern where assessments were rarely completed in person, and often not until several days after the actual encounter. One participant estimated that “probably 70% of my EPAs were at least 2 weeks after.” (P2) This delay was perceived to interfere with learning because residents struggled to align the narrative feedback documented in assessments with specific patient encounters and learning points. Sometimes, residents struggled to perceive value in an assessment process where “the whole onus is on you” (P6), and faculty only signed off or made cursory edits:

“I think I can count on my hand the amount of times staff has actually written any words on EPAs. They usually just type ‘I agree’ or ‘disagree’ with the assessment.” (P11)

Many expressed wanting to receive more feedback from their faculty, highlighting the perceived limited value of self-assessment:

“I find people just agree. I’m more interested to hear what they have to say from their own brain, rather than from mine. The point of getting feedback isn’t to hear my own self-assessment. I can give myself feedback all day at home.” (P1)

On the other hand, when faculty were perceived to have “bought in,” residents noted that these faculty would often preemptively offer to participate in EPA assessments with learners even before being asked. These residents described a departmental culture that suggested strong faculty training around in-the-moment workplace assessments such as EPAs.

“It’s evolved a little bit more from us having to nag the staff for evaluations early on, so now there’s more of them who are initiating the conversation. [The program has become] better at encouraging the staff to be involved in this process and reviewed with them why it came to be and, what the benefits are and trying to encourage participation.” (P8)

Learners’ Perceptions and Behaviors

When residents felt supported by both their program and their faculty, they appeared much more in charge of their own education and training. Specifically, these learners perceived EPAs as a productive tool for course-correction, seeking out opportunities to improve on suboptimal past performance:

“When I don’t do well on a case, I know I need to see it again and improve my approach to it. So I try to use that as a motivating tool to be like ‘I didn’t do well with this case and I need to get more feedback.’ There’s a bit of frustration with a poor evaluation. I think that’s natural. But trying to do better next time is the key.” (P9)

EPA assessment selectivity thus appeared to work in favor of individualized growth and development in these cases. This was despite the acknowledgment that EPAs can be time-consuming: “they are a time suck but they are not a waste of time.” (P9)

Lastly, an unexpected finding was how cathartic residents found the interview process to be, particularly those who perceived little value in the EPA system. It would appear that they often did not have a satisfactory outlet within their respective programs to express their frustrations and suggestions for improvement. They unanimously conveyed their hope that their experiences, negative or otherwise, would serve to improve those of future generations of residents within the CBME system.

Our study suggests that there exists a dynamic relationship among 3 key facets of CBME (resident, faculty, and program), and that all 3 must be high-functioning and interact meaningfully for the EPA system to be perceived as working as intended (Figure).

Learner Engagement

To be perceived as valuable and authentic, assessment requires the engagement not only of the assessor but, more importantly, of the learner.28 The most engaged learners tended to be participants who felt as challenged as they felt supported, both by their faculty and by their program. These findings align with Daloz’s model for mentoring.29 Daloz believed that a low level of support and a high degree of challenge would theoretically result in anxiety and, conversely, that low challenge and high support would merely result in confirmation. Ultimately, both challenge and support need to be high for growth to occur, and our findings agree with this.

It is possible that those who completed EPA assessments for the express purpose of progressing to the next stage had little incentive to use EPAs for their own learning and growth. Vygotsky theorized that individuals can achieve a higher level of ability by leaving comfort zones and stepping into zones of proximal development.30 The current culture of certain programs that focus on “EPAs by numbers” may be inhibiting proximal development. Leaving the comfort zone can be difficult because of the fear of failure and the attendant perception of high-stakes consequences. Earlier, more frequent failure has been associated with greater future learning gains.31,32 However, it appears that even residents who are fully engaged tend to experience significant difficulty with the EPA system if they feel unsupported by their program or faculty.

EPA Selection Process

The selective completion of EPA assessments described by our participants has often been pejoratively referred to as “cherry-picking” or “tick-boxing of checklists” in the literature.8,33-35  However, checklists in and of themselves can be useful and have been associated with improved outcomes in many areas of health care.36-41  The residents in our study believed EPAs were a beneficial roadmap for outlining the various competencies required of them to graduate as safe, independent practitioners. This was in keeping with resident expectations in other studies at different institutions prior to CBD implementation.14,16,18 

When selective triggering, or “cherry-picking,” of EPA assessments is employed effectively, learners can quickly demonstrate achievement of the EPAs at which they are proficient, affording them greater opportunity to focus on tasks that require improvement. This system ostensibly works best when the learner is motivated and mature and does not have competing interests, such as the perception that without a certain number of a certain type of assessment, they will not “pass” residency. Selective cherry-picking can thus be productive when in keeping with Locke’s mastery goals, where learners’ behavior reflects a desire to gain knowledge and skills.42 This is in contrast to Locke’s performance goals, which are oriented toward appearing competent in others’ eyes and have been shown to lead to inauthentic learning experiences.10 This orientation was evident in residents who cherry-picked EPA assessments solely to pass to the next stage in training. This duality in selection behavior was observed across multiple program contexts, suggesting resonance and transferability across postgraduate medical education.

Reliance on Self-Assessment

Most of the participants reported that they were expected to complete EPA assessment forms themselves, including narrative feedback. While self-assessment does have certain benefits,43,44  regularly relying on self-assessment as a feedback mechanism is not benign.45  A systematic review on physician self-assessment found the worst accuracy among physicians who were the least skilled and the most confident.46  Furthermore, overreliance on self-assessment can lead to residents dismissing feedback they find incompatible with their own assessment, regardless of feedback accuracy.47  Triangulating self-reported competence with other assessments, such as from multiple faculty assessors over time and across contexts, is necessary. Self-assessment is also most effective when combined with credible external feedback.48  Without it, residents may struggle to calibrate their perceived competence against a reliable standard.

Reclaiming Authenticity

At the inception of EPAs as building blocks of CBME, there was concern that their implementation would result in a loss of authenticity due to reductionism, or the oversimplification of complex tasks of a profession.1,6,7,49  Calls persist for a robust system of assessment that incorporates EPAs and holistic assessments.50-52  However, these types of assessments were already in place at our institution.

The loss of authenticity perceived by the learner may not be due to the EPA itself. Rather, it appears that, for EPAs to function to their full potential, all 3 key facets identified in this study—the program, the faculty, and the learner—must work together synchronously and meaningfully (Figure). The identification of these specific areas for improvement builds on previous work that has made similar, more general recommendations.53 Programs should be careful to avoid conveying the perception that EPAs are a metric-driven, learner-only responsibility that must be “tick-boxed” to progress to the next stage of training. Faculty should weave EPA assessments and subsequent real-time feedback into the culture of daily workflow and be proactive about providing assessment opportunities. Importantly, an engaged growth mindset is required of the learner to push the boundaries of what they are comfortable with and to continually attain new skills. Lastly, a mechanism should be in place to collect feedback from all 3 stakeholders on a regular, ongoing basis to ensure quality improvement and to continually course-correct for success.

Limitations

Qualitative research such as ours is not intended to be representative or generalizable, and findings reflect the experiences and perspectives of those who volunteered to participate. However, participants shared a range of experiences and perspectives that align with the empirical literature and anecdotal evidence, thus supporting the transferability of our findings.54 Our study reports learners’ perspectives; however, meaningfully improving workplace-based assessments depends, in part, on understanding the sociocultural and structural factors that affect messaging from program administrators and buy-in from individual faculty assessors.

Program messaging, faculty buy-in, and learner behavior appear to form links in a perceptual chain, in which the perceived value of the EPA system is only as strong as the weakest link. For EPAs to be perceived as truly reflective of residents’ competence and as a feasible means of formative feedback and accurate assessment, all stakeholders need to not only value the EPA assessment process, but also convey perceived value through deliberate actions.

1. Frank JR, Snell LS, ten Cate O, et al. Competency-based medical education: theory to practice. Med Teach. 2010;32(8):638-645.
2. Chang A, Bowen JL, Buranosky RA, et al. Transforming primary care training—patient-centered medical home entrustable professional activities for internal medicine residents. J Gen Intern Med. 2013;28(6):801-809.
3. ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005;39(12):1176-1177.
4. ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, Van Der Schaaf M. Curriculum development for the workplace using entrustable professional activities (EPAs): AMEE guide no. 99. Med Teach. 2015;37(11):983-1002.
5. Royal College of Physicians and Surgeons of Canada. Entrustable professional activity (EPA) fast facts.
6. Caverzagie KJ, Nousiainen MT, Ferguson PC, et al. Overarching challenges to the implementation of competency-based medical education. Med Teach. 2017;39(6):588-593.
7. Hoang NS, Lau JN. A call for mixed methods in competency-based medical education: how we can prevent the overfitting of curriculum and assessment. Acad Med. 2018;93(7):996-1001.
8. Bindal T, Wall D, Goodyear HM. Trainee doctors’ views on workplace-based assessments: are they just a tick box exercise? Med Teach. 2011;33(11):919-927.
9. Martin L, Sibbald M, Brandt Vegas D, Russell D, Govaerts M. The impact of entrustment assessments on feedback and learning: trainee perspectives. Med Educ. 2020;54(4):328-336.
10. LaDonna KA, Hatala R, Lingard L, Voyer S, Watling C. Staging a performance: learners’ perceptions about direct observation during residency. Med Educ. 2017;51(5):498-510.
11. Vanlangen KM, Meny L, Bright D, Seiferlein M. Faculty perceptions of entrustable professional activities to determine pharmacy student readiness for advanced practice experiences. Am J Pharm Educ. 2019;83(10):2072-2079.
12. Pearlman RE, Pawelczak M, Yacht AC, Akbar S, Farina GA. Program director perceptions of proficiency in the core entrustable professional activities. J Grad Med Educ. 2017;9(5):588-592.
13. Weizberg M, Bond MC, Cassara M, Doty C, Seamon J. Have first-year emergency medicine residents achieved level 1 on care-based milestones? J Grad Med Educ. 2015;7(4):589-594.
14. Boet S, Pigford AAE, Naik VN. Program director and resident perspectives of a competency-based medical education anesthesia residency program in Canada: a needs assessment. Korean J Med Educ. 2016;28(2):157-168.
15. Fédération des médecins résidents du Québec (FMRQ). Implementation of competence by design in Quebec—year 2: ongoing issues. Published August 2019.
16. Mann S, Hastings AT, Beesley T, Howden S, Egan R. Resident perceptions of competency-based medical education. Can Med Educ J. 2020;11(5):e31-e43.
17. Rich JV, Young SF, Donnelly C, et al. Competency-based education calls for programmatic assessment: but what does this look like in practice? J Eval Clin Pract. 2020;26(4):1087-1095.
18. Branfield LD, Miles A, Ginsburg S, Melvin L. Resident perceptions of assessment and feedback in competency-based medical education: a focus group study of one internal medicine residency program. Acad Med. 2020;95(11):1712-1717.
19. Blades ML, Glaze S, McQuillan SK. Resident perspectives on competency-by-design curriculum. J Obstet Gynaecol Can. 2020;42(3):242-247.
20. Lingard L, Albert M, Levinson W. Grounded theory, mixed methods, and action research. BMJ. 2008;337:a567.
21. Kennedy TJT, Lingard LA. Making sense of grounded theory in medical education. Med Educ. 2006;40(2):101-108.
22. Bowers BJ. Grounded Theory. 2nd ed. Sage; 1988.
23. ten Cate O. A primer on entrustable professional activities. Korean J Med Educ. 2018;30(1):1-10.
24. Sherbino J, Bandiera G, Doyle K, et al. The competency-based medical education evolution of Canadian emergency medicine specialist training. CJEM. 2020;22(1):95-102.
25. Bhanji F, Miller G, Cheung WJ, et al. The future is here! Pediatric surgery and the move to the Royal College of Physicians and Surgeons of Canada’s competence by design. J Pediatr Surg. 2020;55(5):796-799.
26. Royal College of Physicians and Surgeons of Canada. Competence by design technical guide 3: competence committees.
27. Morse JM. The significance of saturation. Qual Health Res. 1995;5(2):147-149.
28. Watling CJ, Kenyon CF, Schulz V, Goldszmidt MA, Zibrowski E, Lingard L. An exploration of faculty perspectives on the in-training evaluation of residents. Acad Med. 2010;85(7):1157-1162.
29. Daloz LA. Effective Teaching and Mentoring: Realizing the Transformational Power of Adult Learning Experiences. Jossey-Bass; 1986.
30. Guha M. APA College Dictionary of Psychology. Vol 31. 2nd ed. American Psychological Association; 2017.
31. Steenhof N, Woods NN, Van Gerven PWM, Mylopoulos M. Productive failure as an instructional approach to promote future learning. Adv Health Sci Educ Theory Pract. 2019;24(4):739-749.
32. Anderson CG, Dalsen J, Kumar V, Berland M, Steinkuehler C. Failing up: how failure in a game environment promotes learning through discourse. Think Skills Creat. 2018;30:135-144.
33. Pinsk M, Karpinski J, Carlisle E. Introduction of competence by design to Canadian nephrology postgraduate training. Can J Kidney Health Dis. 2018;5:2054358118786972.
34. Shalhoub J, Marshall DC, Ippolito K. Perspectives on procedure-based assessments: a thematic analysis of semistructured interviews with 10 UK surgical trainees. BMJ Open. 2017;7(3):1-8.
35. Oswald A, Cheung W, Bhanji F, Ladhani M, Hamilton J. Mock competence committee cases for practice deliberation. Royal College of Physicians and Surgeons of Canada.
36. Carlos WG, Patel DG, Vannostrand KM, Gupta S, Cucci AR, Bosslet GT. Intensive care unit rounding checklist implementation: effect of accountability measures on physician compliance. Ann Am Thorac Soc. 2015;12(4):533-538.
37. Chen C, Kan T, Li S, Qiu C, Gui L. Use and implementation of standard operating procedures and checklists in prehospital emergency medicine: a literature review. Am J Emerg Med. 2016;34(12):2432-2439.
38. Haugen AS, Søfteland E, Almeland SK, et al. Effect of the World Health Organization checklist on patient outcomes: a stepped wedge cluster randomized controlled trial. Ann Surg. 2015;261(5):821-828.
39. Hazelton JP, Orfe EC, Colacino AM, et al. The impact of a multidisciplinary safety checklist on adverse procedural events during bedside bronchoscopy-guided percutaneous tracheostomy. J Trauma Acute Care Surg. 2015;79(1):111-116.
40. Sharma S, Peters MJ, Brierley J, et al. “Safety by DEFAULT”: introduction and impact of a paediatric ward round checklist. Crit Care. 2013;17(5):R232.
41. Howie WO, Dutton RP. Implementation of an evidence-based extubation checklist to reduce extubation failure in patients with trauma: a pilot study. AANA J. 2012;80(3):179-184.
42. Locke EA, Latham GP. Building a practically useful theory of goal setting and task motivation: a 35-year odyssey. Am Psychol. 2002;57(9):705-717.
43. Nayar SK, Musto L, Baruah G, Fernandes R, Bharathan R. Self-assessment of surgical skills: a systematic review. J Surg Educ. 2020;77(2):348-361.
44. Rizan C, Ansell J, Tilston TW, Warren N, Torkington J. Are general surgeons able to accurately self-assess their level of technical skills? Ann R Coll Surg Engl. 2015;97(8):549-555.
45. Fleming M, Vautour D, McMullen M, et al. Examining the accuracy of residents’ self-assessments and faculty assessment behaviours in anesthesiology. Can Med Educ J. 2021;12(4):17-26.
46. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. JAMA. 2006;296(9):1094-1102.
47. Watling CJ, Kenyon CF, Zibrowski EM, et al. Rules of engagement: residents’ perceptions of the in-training evaluation process. Acad Med. 2008;83(suppl 10):97-100.
48. Colthart I, Bagnall G, Evans A, et al. The effectiveness of self-assessment on the identification of learner needs, learner activity, and impact on clinical practice: BEME guide no. 10. Med Teach. 2008;30(2):124-145.
49. Ginsburg S, McIlroy J, Oulanova O, Eva K, Regehr G. Toward authentic clinical evaluation: pitfalls in the pursuit of competency. Acad Med. 2010;85(5):780-786.
50. Hodges B. Assessment in the post-psychometric era: learning to love the subjective and collective. Med Teach. 2013;35(7):564-568.
51. Holmboe ES, Sherbino J, Englander R, Snell L, Frank JR. A call to action: the controversy of and rationale for competency-based medical education. Med Teach. 2017;39(6):574-581.
52. Lockyer J, Carraccio C, Chan MK, et al. Core principles of assessment in competency-based medical education. Med Teach. 2017;39(6):609-616.
53. Dagnone JD, Chan MK, Meschino D, et al. Living in a world of change: bridging the gap from competency-based medical education theory to practice in Canada. Acad Med. 2020;95(11):1643-1646.
54. Ott MC, Pack R, Cristancho S, Chin M, Van Koughnett JA, Ott M. “The most crushing thing”: understanding resident assessment burden in a competency-based curriculum. J Grad Med Educ. 2022;14(5):583-592.

The online version of this article contains the interview guide used in the study.

Funding: This work was supported by the Department of Emergency Medicine at the University of Ottawa, through the Department of Emergency Medicine Academic Grant (2020-SPF-21), and by the Department of Innovation in Medical Education at the University of Ottawa, through the Healthcare Education Grant.

Conflict of interest: The authors declare they have no competing interests.

This work was previously presented at the International Conference on Residency Education, October 29, 2022, Montreal, QC, Canada, and Canadian Association of Emergency Physicians Annual Conference, May 29, 2022, Quebec City, Quebec, Canada.
