There is a clear need to educate health professionals in genomic medicine. Pathologists, given their critical role in cancer diagnostics, must understand core concepts in genomic oncology. Although high-quality evaluation is a cornerstone of medical education, to our knowledge a rigorously validated genomic oncology assessment tool has not been published.
The objective of this study was to develop and validate a genomic oncology exam.
A previously developed exam was updated and validated using 3 approaches: pretesting and posttesting in conjunction with a live genomic pathology workshop; comparison of scores of individuals at a priori defined knowledge levels; and Rasch analysis, an approach used in high-stakes testing such as licensing exams. The exam included both knowledge-based questions and skills-based questions related to the use of online genomics tools.
There was a significant difference between preworkshop and postworkshop exam scores (37.5% versus 75%; P < .001). Individuals at a priori defined beginner, intermediate, and expert levels scored 35%, 58%, and 89%, respectively (P < .001). Rasch analysis demonstrated excellent fit and reliability and led to further refinement of the exam, with the removal of 2 questions deemed unnecessary for assessment.
A rigorously validated exam has now been created to assess pathologist genomic oncology knowledge and skills. The exam can be used to assess both individual learners and educational interventions. The exam may also be applicable to other specialties involved in genomic-based cancer care.
The importance of educating health professionals in genomic medicine has been widely recognized.1 Physicians must have not only the knowledge but also the skills to apply genomic technology to patient care. These include understanding core concepts as well as the ability to use online tools to help interpret results.2
Oncology is one area in which genomic approaches have rapidly entered clinical practice, and there are several educational resources available to assist with teaching.3 However, the ability to effectively evaluate curricula and learners is critical in medical education. As with any testing, such evaluation tools need to be validated to ensure accuracy and precision. To our knowledge, however, there are no published rigorously validated genomic oncology assessment tools.
In this study, we revised and extensively validated a genomic oncology exam that had previously been used to test the efficacy of an online module educational intervention.2 Given the vital role of pathology in genomic oncology, the validation included both pathology trainees and practicing pathologists.4 The exam tested both knowledge and performance-based skills, and the validation process first consisted of comparing scores before and after an 8-hour genomic oncology workshop for pathology residents. The exam was also given to others at a priori defined knowledge levels. The results were assessed using Rasch analysis, which is routinely used in high-stakes testing, such as licensing exams.5 The end product is a validated genomic oncology assessment tool that can be used to critically evaluate curricula and learners.
MATERIALS AND METHODS
Exam Development
The Training Residents in Genomics (TRIG) Working Group is a committee of the Pathology Program Directors Section of the Association of Pathology Chairs and is made up of experts in molecular pathology, genetics, and medical education. The TRIG Working Group developed a workshop curriculum that has been implemented at multiple national pathology meetings, with published positive evaluation results.6 A 19-question exam was previously developed by workshop faculty and experts in evaluation and included both knowledge-based multiple-choice questions and skills-based questions related to the core objectives of the TRIG workshop curriculum.2 For the skills-based questions, the exam asked participants to identify the appropriate online tool for a task and then report results through real-time use of that tool. Two molecular pathologists familiar with the TRIG curriculum also reviewed the exam for clarity and content validity. Previous use of the exam consisted of pretest and posttest results from 9 participants at a live workshop (3 pretest and 6 posttest) and from 15 individuals who took online modules based on the live workshop.2
In 2017, the TRIG Working Group significantly revised the original workshop curriculum (pathologylearning.org/trig; accessed May 28, 2020). The authors updated the original exam questions to reflect these changes, with the answers based on the expert-developed TRIG curriculum. One question related to the Genetic Information Nondiscrimination Act was removed because it was considered too US-centric, yielding a final exam of 18 questions. The exam included the following topics:
Knowledge-based multiple-choice questions (8): somatic versus germ-line testing; variants of uncertain significance; Sanger sequencing method; depth of coverage; cancer gene panels; molecular method sensitivity; prognostic genes in breast cancer; annotation definition.
Skills-based questions related to online tools (10): ClinVar (4), COSMIC (2), CIViC (4).
Validation Participants
In July 2019, the UK Royal College of Pathologists held a TRIG-based genomic workshop. This 2-day event had 2 components. The first was an 8-hour workshop for pathology residents that included the full 4-exercise curriculum based on the revised TRIG materials. Participants were given the exam and a demographics survey via Google Forms (Google, Mountain View, California) immediately prior to the start of the workshop, and the exam again immediately after the workshop ended. To link the results from the pretests and posttests while preserving anonymity, participants were asked to provide the last 4 digits of their phone number at the beginning of each test administration.
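As an illustration of how such anonymously linked records could be paired for analysis, the sketch below assumes the preworkshop and postworkshop responses are exported as CSV files with hypothetical column names (phone_last4, score); neither the export format nor the column labels are specified by the study.

```python
import pandas as pd
from scipy import stats

# Hypothetical export file names and column labels (not specified in the study).
pre = pd.read_csv("preworkshop_responses.csv")    # columns: phone_last4, score
post = pd.read_csv("postworkshop_responses.csv")  # columns: phone_last4, score

# Link pretest and posttest records on the anonymized identifier
# (last 4 digits of the participant's phone number).
paired = pre.merge(post, on="phone_last4", suffixes=("_pre", "_post"))

# Paired t test on the linked percentage scores (see Validation Process).
t_stat, p_value = stats.ttest_rel(paired["score_pre"], paired["score_post"])
print(f"n = {len(paired)}, t = {t_stat:.2f}, P = {p_value:.4f}")
```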
The genomics workshop also included a train-the-trainer session for practicing pathologists who were not molecular pathologists but who had an interest in genomics. Participants were presented with information on leading a team-based learning workshop and then had the opportunity to work as teams on a portion of the workshop exercises. These trainers were given the exam and a demographics survey immediately prior to the start of the session. All results were anonymous.
In August through September 2019, the TRIG Working Group members distributed the exam and demographic survey link to practicing molecular pathologists. Aside from the online tools required for the question tasks, participants were asked not to use any external resources while taking the exam. All results were anonymous.
This study was approved for exempt status by the Beth Israel Deaconess Medical Center Institutional Review Board (Boston, Massachusetts).
Validation Process
Preworkshop and postworkshop resident results were compared using a paired t test. Results of those with a priori expected beginner (preworkshop trainees), intermediate (trainers), and expert (molecular pathologists) knowledge were compared using 1-way analysis of variance. These results were also evaluated using Rasch psychometric analysis, which is routinely used to validate high-stakes examinations (eg, licensing exams) (Ministep; http://www.winsteps.com/ministep.htm; accessed May 28, 2020). Briefly, Rasch analysis compares observed results with those expected under a model based on item difficulty and test-taker expertise.5,7 Fit with the model is calculated using χ2 analysis, with scores above 1.5 generally considered to merit review of a question's inclusion in the exam.8 Reliability, similar to Cronbach α, can also be calculated.
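For orientation, the dichotomous Rasch model underlying this type of analysis expresses the probability that test-taker $n$ answers item $i$ correctly solely as a function of the person's ability $\theta_n$ and the item's difficulty $\delta_i$:

$$P(X_{ni} = 1) = \frac{e^{\theta_n - \delta_i}}{1 + e^{\theta_n - \delta_i}}$$

The fit scores referred to above are typically reported by Rasch software such as Ministep as mean-square statistics summarizing the residuals between observed and model-expected responses; these have an expected value of 1 under the model, consistent with the convention that values above approximately 1.5 flag a question for review.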
RESULTS
Preworkshop and Postworkshop Comparison
There were 31 pathology trainees who took the preworkshop assessment. Their characteristics are shown in Tables 1 and 2 (“beginner”). There was a range of participants with regard to postgraduate year (PGY), and most had not had a molecular pathology rotation. More than 95% (30 of 31) either did not have training or rated their molecular pathology training as “poor” or “fair,” with approximately 90% rating their molecular (28 of 31) and genomic (29 of 31) pathology knowledge as “poor” or “fair.” There were 20 trainees who had postworkshop results that could be compared to those before the workshop. There was significant improvement in exam performance after the workshop, with a mean preworkshop score of 37.5% (range, 17%–61%) and a mean postworkshop score of 75% (range, 50%–100%; P < .001).
Comparison Based on A Priori Knowledge Classification
There were 9 trainers and 10 experts who completed the exam. Their characteristics with regard to perceived molecular and genomic pathology knowledge are shown in Table 2. A higher percentage considered their knowledge at least "good" or "excellent" as compared with the trainees. Demonstrating the differences in background, the only online tools that more than 50% of beginners had at least heard of were clinicaltrials.gov (21 of 31) and PubMed (29 of 31; Table 3). In contrast, more than 50% of those in the intermediate group had also at least heard of ClinVar (5 of 9) and COSMIC (6 of 9), with one-third (3 of 9) having used each Web site. For experts, more than 50% had used all of the Web sites except CIViC (1 of 10), ClinGen (4 of 10), and PharmGKB (2 of 10). The scores (and ranges) of the beginner, intermediate, and expert individuals who completed the exam were 35% (11%–72%), 58% (22%–83%), and 89% (78%–94%), respectively (P < .001).
Validation Using Rasch Analysis
The results from the 3 different a priori defined groups were further evaluated using Rasch analysis. The mean question fit was 1.00 and the reliability was 83%. There were 2 questions with fit scores greater than 1.5 (prognostic genes in breast cancer; annotation definition). Members of the TRIG Working Group reviewed the results and determined that these questions could be removed from the exam because they were not related to critical aspects of genomic oncology knowledge. As such, the final validated exam contains 16 questions.
DISCUSSION
We report, to our knowledge, the first published extensively validated genomic oncology assessment tool. The exam focused on oncology, which is highly relevant because much of pathology practice is related to cancer diagnostics. Although the exam was based on a previously published version, significant revision was needed given changes in the field. In addition, the original validation approach was limited to measuring prescores and postscores of 24 individuals following exposure to a genomic oncology curriculum.2 We have undertaken a much more rigorous evaluation, including assessment of performance by individuals at different expected knowledge levels and the use of psychometric analysis.9
The 3 separate analytic approaches demonstrated that the exam can be used in the evaluation of both curricula and individuals. First, similar to the original version of the exam, scores significantly increased following an interactive team-based learning workshop developed by experts. Second, there was a significant difference in exam scores based on the a priori defined categories of beginner, intermediate, and expert. Division into these categories appears to be justified based on the baseline demographic data regarding perceived molecular and genomic pathology knowledge and use of online tools. Lastly, Rasch analysis demonstrated excellent fit scores and reliability. This rigorous psychometric approach is used in high-stakes examinations and allowed us to further refine the exam and shorten it by 2 questions.5
This study also demonstrates the relatively poor knowledge of trainees and nonexpert practicing pathologists with regard to genomic pathology. Only 16% of trainees and 33% of practicing pathologist trainers had used COSMIC, an important tool for the interpretation of somatic variants, suggesting a need for educational interventions. Of note, there was both a statistically significant and an educationally significant increase in scores (>35 percentage points) on the postworkshop exam. All of the materials needed to run a similar workshop, including a 72-page instructor handbook, are available on the TRIG Web site (pathologylearning.org/trig) free of charge after a brief registration process. We hope training programs consider use of this resource and others to improve genomic pathology knowledge.3
There are a number of limitations to our study. Given the length and time constraints, the exam could not include assessment of all the important genomic oncology knowledge and skills. However, the exam was based on the objectives of a previously vetted genomic oncology curriculum created by experts.6 In addition, the exam could clearly differentiate individuals of different expected abilities who had not participated in the workshop. Another potential limitation is that the validation population was only pathologists. As such, we do not know if the same results would be obtained in other groups. Of note, TRIG-based workshops have been conducted at several major oncology meetings. Such overlap suggests that the exam could also be used to assess nonpathologist health care providers, and the utility of the exam in other groups could be a focus of a future study. Finally, most of the participants were from the United Kingdom. It may be reasonable to suggest that the involvement of pathology with large-scale genomic analysis is more recent in the United Kingdom than it is in the United States. In addition, most molecular testing in the United Kingdom is overseen by clinical scientists and not pathology-trained physicians. As such, there may be significant differences in pathology education that would affect the external generalizability of our results, and it would likely be informative to perform similar trainee assessments in other countries.
In summary, we have developed a rigorously validated genomic oncology assessment tool. This exam tests both knowledge- and performance-based objectives and shows clear differentiation among practitioners with different levels of expected knowledge. The final 16-question exam is available at no charge by contacting the corresponding author and can be a valuable resource to use locally to assess individual knowledge of genomic oncology and the effect of educational interventions.
References
Author notes
This work was supported by National Institutes of Health grant R25CA168544.
The authors have no relevant financial interest in the products or companies described in this article.