Precision medical education (PME) leverages data about individual learners to guide personalized education.1-3 A PME cycle starts with data inputs that are analyzed to generate insights about a learner. Insights then inform personalized interventions that lead to measurable outcomes. Outcomes inform iterative adjustments to the cycle.3 PME cycle inputs in graduate medical education (GME) have expanded from traditional sources (eg, knowledge-based examinations, direct observation) to include data from the electronic health record (EHR), real-time location systems, and other technologies. Multimodal data capture may provide a more complete picture of learners, their behaviors, and their environment, which could improve educational and patient outcomes. Methods for analyzing these data to generate insights are nascent.1 This perspective discusses current challenges and opportunities in using data analytics to optimize clinical skills training (Table).
US residency programs rely on Clinical Competency Committees (CCCs) to make determinations regarding trainee advancement.4,5 The cognitive load of the CCC can lead to decision fatigue, groupthink, and bias.6 CCCs may have a narrow view into a trainee’s performance, with at best a limited data set and at worst a misrepresentative one. For example, internal medicine residents may spend little time in direct contact with patients,7-10 leaving few opportunities for direct observation. Simulation is often used to address this issue,11,12 but there is renewed interest in observations of real patient interactions.13-17 One method is to capture frequent assessments of entrustable professional activities (EPAs) during patient care.18-23 However, these data can be difficult to synthesize into actionable insights and may be costly because faculty must participate in data collection.24 There is also no clear consensus on the common elements of clinical competence within and across specialties.
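To make the synthesis problem concrete, consider the minimal sketch below. It collapses hypothetical EPA entrustment ratings into a per-trainee, per-EPA summary of the kind a CCC might review, flagging EPAs with too few observations to act on. The field names, the 1-to-5 entrustment scale, and the sparsity threshold are illustrative assumptions, not a standard schema; real programs would need far richer context.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical workplace-based assessments: (trainee, EPA id, entrustment level 1-5).
# Field names and the 1-5 scale are illustrative assumptions, not a standard schema.
ratings = [
    ("res_01", "EPA-1", 3), ("res_01", "EPA-1", 4), ("res_01", "EPA-4", 2),
    ("res_02", "EPA-1", 5), ("res_02", "EPA-4", 4), ("res_02", "EPA-4", 5),
]

def summarize(ratings, min_n=2):
    """Collapse raw ratings into per-trainee, per-EPA means, flagging sparse data."""
    by_key = defaultdict(list)
    for trainee, epa, level in ratings:
        by_key[(trainee, epa)].append(level)
    summary = {}
    for (trainee, epa), levels in sorted(by_key.items()):
        summary[(trainee, epa)] = {
            "n": len(levels),
            "mean_level": round(mean(levels), 2),
            "sparse": len(levels) < min_n,  # too few observations to act on
        }
    return summary

for key, row in summarize(ratings).items():
    print(key, row)
```

Even this toy summary illustrates why frequent EPA assessment is costly: the value of the aggregate depends entirely on faculty supplying enough observations per EPA.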
Passively acquired (“no-touch”) data can increase the quantity of information about trainees.13 Real-time location systems provide information about movement through the clinical environment, including data about inter-resident and service-based differences in clinical activities.1,7,10 These data could suggest improvements in trainee workflow or modifications to specific rotations to generate desired clinical experiences. EHR data include user interactions with the EHR (metadata), clinical reasoning in notes, clinical exposure, prescribing habits, and patient outcomes.25-27 Because attributing tasks or clinical outcomes to an individual trainee can be challenging,28 EHR-derived measures like Trainee Attributable & Automatable Care Evaluations in Real-time (TRACERs) might help link outcomes more directly to individual behaviors and provide feedback to improve performance. For example, TRACERs can give feedback on whether a trainee is ordering anti-hyperglycemic therapies according to current guidelines.29 The promise of such approaches cannot be fully realized until medical informaticists solve the lack of standardized data collection across different EHRs.30-32 Common data languages standardize clinical data for research;33,34 similar data dictionaries could address this issue in medical education. A downside to grounding assessment in EHR metrics is that it may give learners additional incentive to focus on the EHR at the expense of time with patients.
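A toy rule makes the TRACER concept more tangible. The sketch below checks whether a trainee’s anti-hyperglycemic order falls in a guideline-preferred class and generates feedback text. The drug lists, order schema, and attribution-by-ordering-clinician rule are all simplifying assumptions for illustration; they are not the published TRACER logic.

```python
# Toy TRACER-style check: is the trainee's anti-hyperglycemic order guideline-concordant?
# Drug lists, order schema, and attribution by ordering clinician are simplifying
# assumptions for illustration, not the published TRACER implementation.

GUIDELINE_PREFERRED = {"metformin", "empagliflozin", "semaglutide"}  # illustrative only
DISCOURAGED_FIRST_LINE = {"glyburide"}  # illustrative only

def evaluate_order(order: dict) -> str:
    """Return real-time feedback text for a single medication order."""
    drug = order["drug"].lower()
    if drug in GUIDELINE_PREFERRED:
        return f"{order['trainee']}: {drug} is guideline-concordant for this indication."
    if drug in DISCOURAGED_FIRST_LINE:
        return f"{order['trainee']}: consider a guideline-preferred agent before {drug}."
    return f"{order['trainee']}: {drug} is not covered by this rule; no feedback generated."

# Orders attributed to the trainee who signed them (a simplifying assumption).
orders = [
    {"trainee": "res_01", "drug": "Metformin", "indication": "T2DM"},
    {"trainee": "res_02", "drug": "Glyburide", "indication": "T2DM"},
]
for o in orders:
    print(evaluate_order(o))
```

Note how the sketch depends on standardized drug and order representations: without a shared data dictionary across EHRs, even a rule this simple cannot be deployed portably.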
Data analytics, including the use of artificial intelligence (AI), may help generate insights from vast and disparate data sources. AI is beginning to be used in assessment, clinical reasoning, teaching, and other activities.35 AI could potentially help with data collection, personalized analytics, participatory interventions, and prediction of outcomes.36 Natural language processing (NLP) can analyze narrative feedback about trainees, potentially improving evaluation processes while reducing administrative burden.37 NLP can also evaluate trainees’ notes for clinical reasoning.38 Generative AI can incorporate multimodal inputs that may be too difficult for human evaluators to synthesize. This synthesis could guide educational interventions, such as suggesting learning content to review or assigning specific clinical encounters to address a gap in prior experience.3 Additionally, the ability of generative AI to develop learning cases at scale could revolutionize simulation and create new avenues for trainee assessment.39
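As a deliberately crude stand-in for the NLP approaches cited above, the sketch below tags narrative evaluation comments with competency themes via keyword matching. The theme lexicon and comments are invented for illustration; production systems would use trained language models rather than keyword lists.

```python
import re
from collections import Counter

# Invented theme lexicon; real NLP systems use trained language models,
# not keyword lists. The comments below are fabricated examples.
THEMES = {
    "communication": {"explained", "listened", "rapport", "counseled"},
    "clinical_reasoning": {"differential", "prioritized", "synthesized", "plan"},
    "efficiency": {"timely", "organized", "prioritized"},
}

def tag_comment(text: str) -> Counter:
    """Count theme hits in one free-text evaluation comment."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return Counter({theme: len(tokens & words)
                    for theme, words in THEMES.items() if tokens & words})

comments = [
    "Explained the plan clearly and built rapport with a worried family.",
    "Differential was broad and well prioritized; notes were organized.",
]
totals = Counter()
for c in comments:
    totals.update(tag_comment(c))
print(totals.most_common())  # theme frequency across all narrative feedback
```

Aggregating such theme counts across dozens of comments hints at how NLP could reduce the administrative burden of evaluation, though validity evidence would be needed before any such output informed a CCC decision.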
Appropriate use of AI in medical education requires ethical frameworks, interdisciplinary collaboration, investment in education, promotion of transparency and accountability, and monitoring to evaluate impact. Key ethical principles include privacy, security, transparency, accountability, and fairness.40 Data about trainees must be managed to protect learner as well as patient privacy. Challenges include determining who will have access to the insights generated from learner data and for what purpose. For example, should insights be shared with prospective employers or patients? The potential consequences of using GME data to inform subsequent training and practice need to be explored. If learners and assessors suspect that data will be shared outside the educational environment, trust in the entire assessment system could be undermined. The use of AI must be disclosed to trainees, and appropriate monitoring methods must be instituted. Additionally, AI-based analytics should complement bedside assessments, not replace them. The costs of data collection, storage, and third-party systems must also be considered; these may be offset by improved clinical performance with reduced need for remediation, or by freeing faculty time for other tasks.
Mistrust in learner assessments may be another barrier to PME implementation.41 Interpretation of assessments is hampered by variation among programs in the assessment methods used, reduced in-person feedback, and concerns about validity evidence.42 Many clinicians are unaware that EHR metadata are used to generate insights about their clinical performance. Some mistrust the data or fear their misuse by employers.43 This highlights the need to accrue validity evidence for new analytic techniques, including by comparison with direct observation of clinical behavior. The issue of mistrust also emphasizes the importance of engaging stakeholders in the analytic process. Given the potential effects of PME on future training and employment, trainees should have a seat at the table when deciding how data insights are used. One could imagine a hybrid system in which GME program directors agree on a minimum specialty-specific standard for clinical competency and then, using personalized data, partner with trainees to identify individual professional goals.
Data analytics are only valuable if the generated insights provide actionable guidance to trainees. Currently, that guidance is fragmented, in part reflecting the disorganized manner in which data are collected and synthesized. Even high-quality, well-organized data can overwhelm trainees’ efforts to make sense of feedback. One hopes that formal coaching programs that translate insights into developmental plans in partnership with trainees will yield a more satisfying professional experience and improvements in clinical skills.44-47
Just as the mapping of the human genome enabled the launch of the National Institutes of Health Precision Medicine Initiative a decade ago, we must articulate PME’s key building blocks to realize its potential. Discussion and clarification about data analysis in the educational setting may enable innovators to define these building blocks. Shared data definitions, learner models, educational outcomes of interest, and guidelines for privacy and security are among the first areas educators should tackle. EHR data and AI will likely play key roles in PME, but we must also prioritize obtaining validity evidence for assessments that incorporate direct observation of clinical skills. These efforts will be enhanced by engaging learners in the process; as stakeholders, they can contribute unique perspectives on professional growth and on the effects on future career opportunities. Finally, because clinical competence is specialty-specific, development of PME tools will require collaboration across multiple specialties to ensure that analytic processes work across the continuum of medical education.