Self-assessment and self-directed learning are essential to becoming an effective physician.
To identify factors associated with resident self-assessment on the competencies, and to determine whether residents chose areas of self-assessed relative weakness as areas for improvement in their Individualized Learning Plan (ILP).
We performed a cross-sectional analysis of the American Academy of Pediatrics' PediaLink ILP database. Pediatrics residents self-assessed their competency in the 6 Accreditation Council for Graduate Medical Education competencies using a color-coded slider scale with end anchors “novice” and “proficient” (0–100), and then chose at least 1 competency to improve. Multivariate regression explored the relationship between overall confidence in the core competencies and sex, level of training, and degree (MD or DO) status. Correlation analysis examined whether residents chose to improve the competencies in which they had rated themselves lower.
A total of 4167 residents completed an ILP in academic year 2009–2010, with residents' ratings improving from advanced beginner (48 on a 0–100 scale) in postgraduate year-1 residents (PGY-1s) to competent (75) in PGY-3s. Residents rated themselves as most competent in professionalism (mean, 75.3) and least competent in medical knowledge (mean, 55.8) and systems-based practice (mean, 55.2). In the adjusted regression model, residents' competency ratings increased with level of training, and men rated themselves slightly higher than women in PGY-1s; by PGY-3, there was no difference between men and women. Residents selected areas for improvement that correlated with competencies in which they had rated themselves lower (P < .01).
Residents' self-assessment of their competencies increased with level of training, although residents rated themselves as least competent in medical knowledge and systems-based practice, even as PGY-3s. Residents tended to choose to focus on improving the subcompetencies they had rated lower.
Self-assessment and self-directed learning are essential to becoming an effective physician, but it is not known whether residents' learning plans focus on self-assessed areas of weakness.
Residents' confidence in their competencies increased with level of training, yet throughout training residents remained most confident in their competence in professionalism and least confident in medical knowledge and systems-based practice (SBP).
The study is based on self-assessment, not external measures of competency; results may be influenced by participants completing the assessment at different times during the academic year.
In their individual learning plans, residents' selection of areas for improvement tended to focus on subcompetencies in which they had rated themselves lower.
Self-assessment and self-directed learning are essential to lifelong learning, medical professionalism, and becoming an effective physician.1,2 Documentation of lifelong learning is required by the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties for residency training, board certification, and maintenance of certification.3,4 In pediatrics, all training programs must use Individualized Learning Plans (ILPs) to document resident self-assessment and self-directed learning.5 However, for many physicians, self-assessment and external measurements of competency are not well correlated,3,4 with the least competent often least able to accurately self-assess.4,6 Although much work has been done, more is needed to understand how to make self-assessment useful in determining learning needs and improving competency.
Although residents feel comfortable assessing their individual strengths and weaknesses, they are less comfortable developing learning goals to improve their areas of weakness.7,8 It also is unknown whether residents choose to develop learning goals in their areas of self-assessed weakness.
The American Academy of Pediatrics (AAP) has developed an online ILP, embedded in a web-based portal (PediaLink), which has been widely adopted by pediatric residency programs. Using the AAP's PediaLink ILP database, we explored whether self-assessment influences learning efforts in pediatrics residents. We examined which areas residents identified as strengths and weaknesses, whether those areas differed based on level of training, and whether residents chose to work on learning goals in areas that they identified as weaknesses.
We performed a cross-sectional analysis of the existing deidentified AAP PediaLink ILP database for the 2009–2010 academic year. As part of their ILP, residents completed a self-assessment (rating of competencies and relative ranking of personal attributes), determined which competencies and personal attributes they wanted to improve, and developed learning goals and strategies to achieve their goals.
Residents self-assessed their competence on the 6 ACGME competencies: patient care, medical knowledge, interpersonal and communication skills, professionalism, practice-based learning and improvement (PBLI), and systems-based practice (SBP). Each competency consisted of subcompetencies based on ACGME pediatrics residency requirements.5 For example, the SBP competency consisted of 5 subcompetencies: (1) knowing types of medical practice and delivery systems, (2) practicing cost-effective health care, (3) advocating for quality patient care and assisting patients in dealing with system complexities, (4) advocating for health promotion and disease prevention, and (5) acknowledging medical errors and examining systems to prevent them. The number of subcompetencies ranged from 2 for medical knowledge to 7 for patient care.
Residents self-assessed each subcompetency using a visual analog slider scale modified from the Dreyfus Scale of Skill Acquisition,9,10 with left and right end anchors labeled novice and proficient, respectively. Residents could view the definitions of each level of the Dreyfus scale while completing their self-assessment. Color coding was used to divide the scale into 4 regions from left to right: light blue, dark blue, magenta, and gray. For the purposes of this analysis, competency assessments were on a 0 to 100 scale that correlated with the visual analog scale: 0 to 29, novice (light blue); 30 to 60, advanced beginner (dark blue); 61 to 91, competent (magenta); and 92 to 100, proficient (gray). The mean competency score was the sum of subcompetency scores within each competency divided by the number of subcompetencies. The overall competency score was the mean of all competency scores. We assessed the internal consistency and unidimensionality of scale items by using Cronbach α and by determining the number of principal factors satisfying the Kaiser-Guttman rule (eigenvalue > 1), respectively.11 By default, competency assessments are scored as 0 in the PediaLink dataset. Therefore, to exclude noncompleters, we computed mean competency scores only when at least one nonzero item score was recorded among the final set of items within a domain.
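The scoring logic described above can be sketched as follows. This is an illustrative reconstruction, not the actual PediaLink implementation; the function names and the treatment of all-zero domains as noncompleters follow the description in the text.

```python
def mean_competency_score(item_scores):
    """Mean of subcompetency item scores (0-100) within one competency domain.

    PediaLink stores unanswered items as 0 by default, so a domain in which
    every item is 0 is treated as not completed and excluded (returns None).
    """
    if all(score == 0 for score in item_scores):
        return None  # noncompleter: no nonzero item score recorded
    return sum(item_scores) / len(item_scores)


def dreyfus_band(score):
    """Map a 0-100 rating to the color-coded Dreyfus regions in the text."""
    if score <= 29:
        return "novice"             # light blue
    elif score <= 60:
        return "advanced beginner"  # dark blue
    elif score <= 91:
        return "competent"          # magenta
    return "proficient"             # gray
```

Under this scheme, the PGY-1 average of 48 reported in the Results falls in the advanced beginner band and the PGY-3 average of 75 in the competent band.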
Residents also self-assessed the relative strength of 10 personal attributes: communication, initiative, time management, ability to recognize limitations, attention to detail, desire to strive for excellence, confidence, ability to work with others, response to feedback, and perseverance. For the purposes of this analysis, the residents' strongest personal attribute was scored 1, and the residents' weakest personal attribute was scored 10. It was possible for residents to equally weight several personal attributes. For example, if a resident indicated that perseverance, communication, and initiative were his or her strongest attributes, all 3 items would be rated a 2 [(1 + 2 + 3) / 3].
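The tie-handling described above is the standard "averaged ranks" scheme, in which tied items share the mean of the rank positions they occupy. A minimal sketch (function name illustrative, not the PediaLink code):

```python
def averaged_ranks(rank_groups):
    """Assign averaged ranks to attributes grouped strongest-first.

    rank_groups: list of groups of attribute names, ordered from strongest
    to weakest; attributes tied within a group share the mean of the rank
    positions the group spans, e.g. three tied strongest attributes each
    receive (1 + 2 + 3) / 3 = 2.
    """
    ranks, position = {}, 1
    for group in rank_groups:
        positions = range(position, position + len(group))
        shared = sum(positions) / len(group)  # mean of occupied positions
        for attribute in group:
            ranks[attribute] = shared
        position += len(group)
    return ranks
```

With the example from the text, `averaged_ranks([["perseverance", "communication", "initiative"], ["confidence"]])` rates the three tied strongest attributes 2.0 each, and the next attribute 4.0, since the tied group occupies positions 1 through 3.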
Residents selected subcompetencies and personal attributes they wanted to focus on for improvement. These subcompetencies and personal attributes did not have to be in their weakest areas, although residents were asked to reflect on their assessment prior to selecting the area(s) they would like to improve.
We used univariate and bivariate statistics to describe the relationship between residents' self-assessment of competencies and personal attributes. We used multivariate regression to explore the relationship between overall confidence in competencies and sex, level of training, and degree (MD/DO) status. Level of training was determined by year of graduation. Residents graduating at the end of the year of assessment (2010) or at the end of the prior year (2009) were designated postgraduate year-3 residents (PGY-3s) and above. Those graduating in 2011 were designated PGY-2s, and those graduating in 2012 and beyond were designated PGY-1s. Residents who indicated they had a Doctor of Medicine (MD) degree or a Bachelor of Medicine/Bachelor of Surgery (MBBS, conferred outside the United States) were collapsed into the MD category.
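The assignment of training level from graduation year can be sketched as below. The function name is illustrative; residents with a missing graduation year, or listed as completing residency before 2009, were tabulated separately in the actual analysis.

```python
def pgy_level(grad_year, assessment_year=2010):
    """Map expected graduation year to PGY level for the 2009-2010 ILP year."""
    if grad_year is None:
        return None          # year of graduation missing
    if grad_year <= assessment_year:
        return "PGY-3+"      # graduating 2009 or 2010 (earlier years were
                             # reported separately as pre-2009 completers)
    if grad_year == assessment_year + 1:
        return "PGY-2"       # graduating 2011
    return "PGY-1"           # graduating 2012 and beyond
```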
We received Institutional Review Board expedited review approval from the AAP and exemption from the University of California, Davis.
A total of 4167 unique residents completed an ILP on PediaLink. Of the residents who completed an ILP, 3053 (70.4%) were women, 3876 (89.0%) had an MD, 453 (10.4%) had a DO, 1526 (34.7%) were PGY-1s, 1353 (30.7%) were PGY-2s, and 1455 (33.0%) were PGY-3s and above (table 1). In addition, 208 residents had year of graduation missing and 68 were listed as having completed residency prior to 2009. Residents who completed an ILP on PediaLink were similar to all residents nationally in terms of sex and PGY distribution.12
Applying principal factor analysis separately to each ACGME domain, we found that, for each domain, items loaded onto a single underlying factor. Cronbach α also showed very high internal consistency for all 6 ACGME competency domains (.95–.97), with an overall Cronbach α of .99 for the entire instrument. This high internal consistency supports the validity of using a mean score for each competency and an overall competency score to summarize resident self-assessment of performance.
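For reference, Cronbach α is computed from the item variances and the variance of the total score; the sketch below implements the standard formula and is not the authors' analysis code.

```python
def cronbach_alpha(rows):
    """Cronbach alpha for a respondents-by-items score matrix.

    rows: list of respondents, each a list of k item scores.
    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    """
    k = len(rows[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(sample_var(list(col)) for col in zip(*rows))
    total_var = sample_var([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)
```

Perfectly correlated items yield α = 1; the near-1 values reported above indicate that items within each domain (and across the instrument) move together closely.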
We found that residents' confidence in their competencies increased by 25 points, from an average overall rating of 48 (advanced beginner) in PGY-1s to 75 (competent) in PGY-3s (table 2). Residents rated themselves as most competent in professionalism (mean, 75.3) and least competent in medical knowledge (mean, 55.8) and SBP (mean, 55.2). These differences were most pronounced in residents at the beginning of their training (>25-point difference between professionalism and medical knowledge or SBP in PGY-1s and ∼13-point difference in PGY-3s).
Figure 1 shows that residents felt that they achieved competency (>60) in different areas at different rates. For example, most residents (64.2%) rated themselves as competent in professionalism as PGY-1s. As PGY-3s, almost all (95.7%) felt competent in professionalism, and 37.2% felt proficient. In contrast, most residents did not feel competent in patient care, interpersonal and communication skills, or PBLI until they were PGY-2s, and most did not feel competent in medical knowledge or SBP until they were PGY-3s.
In the adjusted regression model, residents' confidence in their competencies increased by level of training (table 3). In PGY-1s, there was no difference based on type of degree, although female residents rated themselves slightly lower than male residents. In PGY-3s, there were no differences based on sex, but there was a small difference based on type of degree, with MDs rating themselves slightly higher than DOs (figure 2). The findings were similar across competencies and subcompetencies.
In some competency domains (medical knowledge, PBLI, and professionalism), residents rated themselves similarly across subcompetencies. In other competency domains (patient care, interpersonal and communication skills, SBP), there was up to a 23-point difference between subcompetencies, which persisted across level of training. In patient care, residents were more confident about their performance in “gathering essential/accurate information about patients” (69.8) than in “performing medical procedures” (53.6). In interpersonal and communication skills, residents were more confident in “working together as a team” (73.4) than they were in “acting as a consultant” (56.7). In SBP, residents were more confident “advocating for health promotion and disease prevention” (59.5) than in “practicing cost-effective care” (50.4).
Residents chose a mean of 8.7 subcompetencies they wanted to improve, which encompassed an average of 4.5 different ACGME competencies (table 4). As level of training increased, residents selected fewer individual subcompetencies to improve.
Residents rated their attributes of communication skills, ability to work with others, and perseverance as relative strengths, and their confidence and time management skills as relative weaknesses (data not shown). The relative ranking of personal attributes did not change by year of training. Residents chose a mean of 2.3 (SD, 0.8; range, 0–3) attributes they wanted to improve. There was no difference in the number of attributes residents chose to improve by level of training.
Residents who rated themselves lower in a subcompetency area were more likely to want to improve that area (P < .001 by hierarchical logistic regression analysis).14 In the adjusted logistic regression model, residents who rated themselves as not yet competent in a domain were more likely to want to improve that area (odds ratios, 1.35–1.95; 95% confidence intervals, 1.13–2.30).
Residents' confidence in their ACGME competencies increased with level of training. Although the discrepancy between residents' highest and lowest competency self-ratings narrowed as training progressed (from a greater than 25-point difference to a 13-point difference), residents continued to rate themselves as most competent in professionalism and least competent in medical knowledge and SBP. Most residents felt competent in professionalism as PGY-1s, developed competency in patient care, interpersonal and communication skills, and PBLI as PGY-2s, and developed competency in medical knowledge and SBP as PGY-3s. For their learning goals, residents selected subcompetencies that correlated with the areas they had rated lower.
In our previous work, we found that residents were more likely to identify medical knowledge and patient care learning goals as most important.15 Residents were less likely to report professionalism and SBP goals as most important, and they reported relatively less progress on SBP goals.15 It is possible that residents were less likely to value professionalism goals as most important, because most residents rated themselves as competent in professionalism as PGY-1s, with 37% rating themselves as proficient as PGY-3s. Although residents continued to rate themselves as least competent in SBP goals, they were also less likely to identify these goals as most important. This finding may be related to their greater perceived difficulty in understanding SBP, which is then reflected in difficulty developing and achieving SBP learning goals, because residents also reported relatively less progress in achieving SBP goals.15
In several competency domains, residents were more confident in their ability to perform certain subcompetencies over others. This discrepancy may be understandable because PGY-1s are more confident in subcompetencies more often emphasized during medical school (history and physical, teamwork, advocating for patients) and less confident in subcompetencies less emphasized (medical procedures, functioning as a consultant, practicing cost-effective medicine). In addition, it may be possible that certain subcompetencies (functioning as a consultant) may be more difficult to achieve until a person gains additional clinical experience. However, there remained a difference, albeit a smaller one, across subcompetencies, even in PGY-3s. If achievement of all subcompetencies is equally valued by program directors, this may be addressed by residency programs providing trainees with additional resources/curricula to help them achieve competency in areas where PGY-3s still feel less confident.
We found that female PGY-1s initially rated themselves somewhat lower on many competencies than their male counterparts, whereas no difference was found in PGY-3s. This finding suggests that residency training mitigates the lower confidence in their competencies reported by female physicians at the start of residency. It would be interesting to explore whether a difference in confidence between men and women is already present at medical school graduation among students entering all specialties and, most importantly, whether that confidence is associated with external measurements of competency.
We also found that PGY-1 MD and DO residents rated themselves similarly, but PGY-3 MDs rated themselves slightly higher than PGY-3 DOs. This difference in confidence in competencies among PGY-3 MDs and DOs would be interesting to explore in subsequent studies, in addition to exploring whether there were any differences in external measures of competency.
A limitation of our study is that our analysis is based on residents' self-assessment of competency, not on external measurement of competency. Although self-assessment and external measurements of competency may not be well correlated,3,4 it is resident self-assessment of competencies that informs program directors of the specific areas where residents feel more and less competent in their training. Second, residents took the self-assessments at different times during the training year. In particular, PGY-3s did not necessarily take the self-assessment at the end of their training. It is possible that confidence in areas such as procedures, functioning as a consultant, and practicing cost-effective medicine comes with the deliberate practice afforded by the final year of residency, and responses may have been different if all residents had been surveyed at the completion of their training. Third, although we were able to distinguish graduates of osteopathic medical schools (DOs) from graduates of allopathic medical schools (MDs), we were unable to determine whether graduates of allopathic medical schools were international medical graduates. Some residents (25) indicated they had an MBBS, indicating they graduated from a British-system medical school, but we were not able to identify other international medical graduates. Therefore, we collapsed MD and MBBS into one category, and our results may have been different if we could have compared graduates of international medical schools with US medical school graduates. Finally, although residents indicated that they wanted to work on subcompetencies that corresponded with those for which they had rated themselves as less proficient, this analysis does not examine the actual learning goals residents wrote for themselves.
Future research should focus on determining whether residents' written learning goals correlate with their areas of relative weakness and the areas on which they would prefer to focus. In addition, future research could explore whether residents who wrote learning goals focused on improving a subcompetency (1) subsequently attempted to improve that subcompetency, (2) made progress in achieving their learning goal, and (3) reported improved confidence in that subcompetency the following year compared with residents who did not write a learning goal on that subcompetency. If written learning goals improved the ability of residents to achieve competency, this would support the value of this educational method for trainees, as well as the expansion of the ILP concept to other training programs and maintenance of certification programs for graduated physicians.
Residents' confidence in their competencies increased as the level of training increased. Although the gap between competencies decreased as level of training increased, residents remained most confident in their professionalism and least confident in their medical knowledge and SBP competencies throughout training.
Su-Ting T. Li, MD, MPH, is Associate Professor, Vice-Chair of Education, and Pediatrics Program Director in the Department of Pediatrics, University of California Davis School of Medicine; Daniel J. Tancredi, PhD, is Assistant Professor in the Department of Pediatrics and the Center for Healthcare Policy and Research, University of California Davis School of Medicine; Ann E. Burke, MD, is Associate Professor and Pediatrics Program Director in the Department of Pediatrics, Wright State University Boonshoft School of Medicine; Ann Guillot, MD, is Professor and Pediatrics Program Director in the Department of Pediatrics, University of Vermont College of Medicine; Susan Guralnick, MD, is Associate Professor, Director of Graduate Medical Education, and Designated Institutional Officer at Winthrop University Hospital; R. Franklin Trimm, MD, is Professor, Vice-Chair, and Pediatrics Program Director in the Department of Pediatrics, University of South Alabama; and John D. Mahan, MD, is Professor, Vice-Chair, and Pediatrics and Pediatric Nephrology Fellowship Program Director in the Department of Pediatrics, Nationwide Children's Hospital/Ohio State University.
Funding: The authors report no external funding source for this study.
The authors wish to thank the residents who participated in the American Academy of Pediatrics' PediaLink Individualized Learning Plans; Charlette Nunnery, MS; and Scott Bradbury, MS, for assistance with this project, and the American Academy of Pediatrics for providing deidentified data for this analysis. The American Academy of Pediatrics had no role in the concept and design, analysis, interpretation of data, or drafting or revising of the manuscript.
Drs Li, Burke, Guillot, Guralnick, Trimm, and Mahan are members of the American Academy of Pediatrics' PediaLink Resident Center Workgroup.