Context.—

With increasing availability of immediate patient access to pathology reports, it is imperative that all physicians be equipped to discuss pathology reports with their patients. No validated measures exist to assess how pathology report findings are communicated during patient encounters.

Objective.—

To pilot a scoring rubric evaluating medical students’ communication of pathology reports to standardized patients.

Design.—

The rubric was iteratively developed using the Pathology Competencies for Medical Education and Accreditation Council for Graduate Medical Education pathology residency milestones. After a brief training program, third- and fourth-year medical students completed 2 standardized patient encounters, presenting simulated benign and malignant pathology reports. Encounters were video recorded and scored by 2 pathologists to calculate overall and item-specific interrater reliability.

Results.—

All students recognized the need for pathology report teaching, which was lacking in their medical curriculum. Interrater agreement was moderate for malignant report scores (intraclass correlation coefficient, 0.65) but negligible for benign reports (intraclass correlation coefficient, 0). On malignant reports, most items demonstrated good interrater agreement, except for discussing the block (cassette) summary, explaining the purpose of the pathology report, and acknowledging uncertainty. Participating students (N = 9) felt the training was valuable given their limited prior exposure to pathology reports.

Conclusions.—

This pilot study demonstrates the feasibility of using a structured rubric to assess the communication of pathology reports to patients. Our findings also provide a scalable example of training on pathology report communication, which can be incorporated in the undergraduate medical curriculum to equip more physicians to facilitate patients’ understanding of their pathology reports.

Pathology reports provide valuable diagnostic information, but are at risk of being misunderstood by patients and clinicians.1  Patients express a strong interest in patient-pathologist consultation programs, in which pathologists communicate the results of pathology reports directly to patients.2,3  However, pathologists make up less than 1.3% of practicing physicians in the United States,4  so the explanation of pathology reports typically falls upon treating physicians in other specialties. With increasing availability of immediate patient access to pathology reports (eg, through online patient portals), it is imperative that graduating medical students, regardless of specialty choice, be trained to discuss pathology reports with their patients.

The Pathology Competencies for Medical Education, first proposed in 2017, state that medical students should be able to understand the various components and terminology of the pathology report and accurately explain pathology report results in a language patients can understand.5,6  However, no standard approach exists for evaluating the quality of student or physician communication of pathology report results to patients. This lack of a replicable assessment tool limits opportunities to compare different approaches for teaching the presentation of pathology reports in undergraduate medical education.

In this pilot study, we developed and trialed a scoring rubric to evaluate medical students’ communication of a pathology report to standardized patients (SPs) after students completed short training on interpreting and presenting pathology reports. We aimed to demonstrate the feasibility of training medical students to explain pathology reports to patients and assess their performance in an objective and replicable fashion. Our secondary aims were to generate estimates of interrater reliability to power larger, confirmatory studies and to identify scenarios and rubric items requiring modification for future implementations of this training.

This study was conducted at a public medical school located in eastern North Carolina and was certified exempt by the local institutional review board (UMCIRB 23-000400). Written informed consent was obtained from participating students. The scoring rubric was iteratively developed by a team of pathologists (including a pathology resident), medical students, and a sociologist. A hematologist-oncologist with a unique blend of familiarity with pathology reports, patient preferences, and communication techniques—gained through practical experience and training in hospice and palliative care—confirmed that the scoring criteria addressed cancer patients’ essential concerns regarding their pathology reports. The rubric was modeled after the Pathology Competencies for Medical Education guidelines for pathology report communication,5,6  as well as the Accreditation Council for Graduate Medical Education pathology milestones.7  These included confirming patient identifiers and the referenced procedure on the pathology report, informing patients about the nature of a pathology report, describing the pathologist’s role, addressing all pathology report components (final diagnosis, microscopic description, comments, gross description, synoptic report, and immunohistochemical or special stains), handling uncertainty within the report, and establishing plans for patient follow-up.7  Each criterion was scored as met or not met, and a summary score was calculated based on the total number of met criteria. Criteria were tailored to 2 simulated scenarios: a finding of tubular adenoma after a routine colonoscopy (benign case), and a finding of adenocarcinoma of the colon after right colon resection (malignant case). Reports used for the scenarios are included as Supplemental Reports 1 and 2 (see supplemental digital content at https://meridian.allenpress.com/aplm in the February 2025 table of contents).
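
As a minimal sketch of this met/not-met scoring scheme (with paraphrased, abbreviated item wording; the actual rubric contains far more criteria, tailored separately to the benign and malignant scenarios), a rater's checklist and summary score could be represented as follows:

```python
# Illustrative sketch only: item wording is paraphrased from the criteria
# described above, not taken verbatim from the study rubric.
from dataclasses import dataclass

@dataclass
class RubricItem:
    description: str
    met: bool = False  # each criterion is scored as met or not met

malignant_rubric = [
    RubricItem("Confirmed patient identifiers and the referenced procedure"),
    RubricItem("Explained the purpose of the pathology report"),
    RubricItem("Described the pathologist's role"),
    RubricItem("Addressed the final diagnosis"),
    RubricItem("Addressed the synoptic report"),
    RubricItem("Acknowledged uncertainty within the report"),
    RubricItem("Established a plan for follow-up"),
]

def summary_score(rubric):
    """Summary score = total number of criteria marked as met."""
    return sum(item.met for item in rubric)

# A rater marks criteria while reviewing the recorded encounter
malignant_rubric[0].met = True
malignant_rubric[3].met = True
print(f"{summary_score(malignant_rubric)}/{len(malignant_rubric)} criteria met")
```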

The study team developed a PowerPoint (Microsoft, Redmond, Washington) presentation that corresponded with elements of the scoring rubric: the purpose of pathology reports, the components of the reports, and pathologic terms providers might expect to see in the reports. The presentation slides included sections of pathology reports formatted similarly to the reports used in the SP encounters. The presentation explained the expectations for the encounters and gave a general overview of how students’ performances would be evaluated. Lastly, investigators video-recorded example encounters demonstrating ideal pathology report explanations, based on scripts developed by the lead investigator. These recordings were shared with student participants during their training sessions.

To pilot the scoring rubric, we recruited third- and fourth-year medical students at our institution via email during spring 2023. We targeted a total of 10 participants based on training facility and SP availability. Nine students completed the study after one prospective participant dropped out. Each student participated in an hour-long training module consisting of the presentation and videos described above. Once this training module was completed, participants were asked to review the benign pathology report (Supplemental Report 1) for a few minutes and were provided an opportunity to ask any questions before communicating the pathology report to the first SP. Participants were then asked to review and communicate a malignant pathology report (Supplemental Report 2) to a different SP. Investigators provided SPs with copies of the pathology reports and a general script for their reactions ahead of the sessions. Each encounter was video recorded and observed via live video feed by a pathologist in a separate room, and participants had a live debriefing session with the observing pathologist and SP immediately after each individual SP session.

After completing both SP encounters, participants were asked to complete a 10-question survey consisting of 8 quantitative questions using a 5-point Likert scale, 1 rank-order question, and 1 open-ended response question. Questions were developed following best-practice recommendations for educational research, using Association for Medical Education in Europe (AMEE) guidelines such as maintaining a consistent visual layout, using only verbal labels, avoiding double-barreled items, and ensuring a balanced midpoint in the response range.8–10  These questions were designed to obtain quantitative feedback on participant experiences and to collect participants’ qualitative recommendations for improving the training or scenarios. Participants also ranked their preferred format for the pathology training among the following options: in person, live video (via remote meeting), interactive PowerPoint, online module, or recorded video. After answering the survey, participants received a gift card incentive for completing the study.

Student performance in the SP encounters was evaluated in an objective, quantitative manner. The video recording of each encounter was scored by the pathologist who had viewed the encounter in real time and provided immediate in-person feedback, and by another pathologist who did not observe the encounter during the live portion of the study. For each scenario (benign and malignant), we evaluated interrater reliability across 4 unique raters (2 raters for each scenario) using 1-way random-effects intraclass correlation coefficients (ICCs) for continuous measures (total rubric score)11  and Gwet’s first-order agreement coefficient (AC1) for categorical data (rating on each rubric item).12  ICC values of less than 0.4 indicated poor agreement, 0.4 to 0.54 indicated weak agreement, 0.55 to 0.69 indicated moderate agreement, 0.7 to 0.84 indicated good agreement, and 0.85 to 1 indicated excellent agreement.13  Data analysis was completed in Stata/SE 16.1 (StataCorp LP, College Station, Texas) using the kappaetc package.14  We present 95% CIs for interrater reliability estimates, but do not report null hypothesis test results (P values) because of the exploratory nature of this study.
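
For readers unfamiliar with these measures, the sketch below illustrates, using made-up example data, how a one-way random-effects ICC for total rubric scores and Gwet's AC1 for a single binary rubric item are calculated; the study itself performed these analyses in Stata/SE with the kappaetc package, not with this code.

```python
# Illustration with made-up data; the study's analysis was run in Stata/SE
# (kappaetc package), not with this code.
import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1,1); scores is an (n subjects x k raters) array."""
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    subj_means = scores.mean(axis=1)
    msb = k * np.sum((subj_means - scores.mean()) ** 2) / (n - 1)      # between-subjects mean square
    msw = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))  # within-subjects mean square
    return (msb - msw) / (msb + (k - 1) * msw)

def gwet_ac1(r1, r2):
    """Gwet's first-order agreement coefficient for 2 raters scoring binary items."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    pa = np.mean(r1 == r2)            # observed agreement
    pi = (r1.mean() + r2.mean()) / 2  # average prevalence of "met"
    pe = 2 * pi * (1 - pi)            # chance agreement for the binary case
    return (pa - pe) / (1 - pe)

# Hypothetical total scores for 9 students, each scored by 2 raters
totals = [[22, 24], [26, 27], [25, 23], [28, 27], [26, 26],
          [24, 25], [27, 28], [23, 22], [26, 25]]
# Hypothetical met (1) / not met (0) ratings for one rubric item
item_r1 = [1, 1, 0, 1, 1, 0, 1, 1, 1]
item_r2 = [1, 1, 0, 1, 0, 0, 1, 1, 1]
print(f"ICC(1,1) = {icc_oneway(totals):.2f}, AC1 = {gwet_ac1(item_r1, item_r2):.2f}")
```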

The data set included 846 ratings, representing 9 participants, 4 raters (2 for each participant), and 47 rubric items. Benign report scores assigned by each rater ranged from 15 to 18 (average, 17 ± 1), whereas malignant report scores ranged from 22 to 28 (average, 26 ± 2). The ICC for benign report scores was 0 (95% CI, 0–0.62), indicating minimal agreement among raters, whereas the ICC for malignant report scores was 0.65 (95% CI, 0.01–0.90), indicating moderate agreement.

Item-specific ratings for the malignant and benign reports are summarized in Table 1. The items most often missed in the benign reports were explaining the role of the pathologist, explaining the purpose of the report, and confirming the specimen type. Explaining the purpose of the report and confirming the specimen type in the benign report also showed the lowest interrater reliability. The items most often missed in the malignant reports were explaining the role of the pathologist, explaining the purpose of the report, confirming the date of the procedure, describing the size of the specimen, explaining the block (cassette) summary, and acknowledging uncertainty when the report was inconclusive or the student was unclear about a portion of the report. For example, to meet the last 2 criteria, a student might acknowledge uncertainty referring to the indeterminate lymphovascular invasion in the colon cancer report or acknowledge unfamiliarity with mismatch repair immunohistochemistry testing. Describing the block (cassette) summary was also the item with the lowest interrater reliability in the malignant report. Item-specific measures of interrater reliability are summarized in Supplemental Tables 1 and 2.

Table 1.

Item-Specific Scores on the Benign and Malignant Report Standardized Patient Explanationsa

Student postassessment surveys demonstrated an overall positive learning experience for all participants (Table 2). Participants’ ratings of their confidence in explaining the limitations of the pathology report to patients showed the lowest median score (3 on a 5-point Likert scale), indicating a need for further training. Participants’ ratings of their confidence in explaining pending studies (molecular, microbiology, etc) exhibited the largest interquartile range (2 on a 5-point Likert scale), signifying greater variation in students’ understanding of pending studies. The most preferred training format was in-person training.

Table 2.

Student Responses to 5-Point Likert Scale Post-Encounter Survey Questionsa

In free-text responses, students noted that their current medical school curriculum did not include requirements for communicating pathology report results to patients, and 5 participants emphasized the need for such training in medical school, citing improvements in their ability to interpret pathology reports because of the training. One student specifically mentioned that, as they were about to enter residency, the experience not only helped with explaining pathology reports but also improved their ability to interpret them systematically. Participants also suggested prerecording the training and making the recording available ahead of the SP encounters, as well as providing information on following up with patients based on the pathology reports.

Medical curricula in the United States vary across institutions, but all medical schools are required to adhere to the Liaison Committee on Medical Education (LCME) standards.15,16  Although pathology education (LCME elements 7.1 and 7.2) and communication skills (LCME element 7.8) are mandatory components of medical school curricula, current LCME standards do not specifically address communication of pathology report results to patients.16  Our hematologist-oncologist coauthor notes that cancer patients are primarily interested in the location, size, and grade of the tumor; extent of invasion; completeness of surgical resection (ie, margins); and evidence of metastatic disease—precisely the areas evaluated by our scoring rubric. In response to this gap in medical education, our study aimed to pilot an objective scoring rubric for assessing medical students’ proficiency in explaining pathology reports to SPs. Additionally, we developed a concise training protocol designed for third- and fourth-year medical students. Our results demonstrate the feasibility of objectively evaluating students’ performance in communicating simulated benign and malignant reports and highlight the positive reception of the brief training program.

Students’ variable performance in presenting pathology reports to SPs mirrors previously reported discrepancies in clinicians’ interpretation of pathology reports. Studies have shown that up to 30% of surgeons1  and approximately one-third of medical interns17  struggle with understanding pathology reports. Terms expressing uncertainty in pathology reports are the greatest source of miscommunication between the pathologists’ intended meaning and interpretation by clinicians.18,19  Although efforts have been made to standardize pathology reports20,21  and provide resources for patients to increase their understanding of these reports,22  to our knowledge, the existing literature has not yet documented educational interventions aimed at improving medical students’ comprehension of pathology reports. Our survey results, along with informal feedback from our pilot study, provide valuable insights for enhancing education in this domain. Specifically, areas for improvement include the students’ confidence in discussing the limitations of pathology reports and their proficiency in explaining pending studies.

Our study’s strength lies in the rigorous evaluation of simulated encounters using a standardized assessment tool. In a meta-analysis of pathology education initiatives, most studies focused on medical knowledge acquisition but lacked robust assessment of how educational interventions influenced subsequent practice.23  In our pilot study, overall scores on the malignant report rubric demonstrated moderate interrater reliability, with a few items showing disagreement among raters, including students’ discussion of the block (cassette) summary, students’ explanation of the purpose of the pathology report, and students’ acknowledgment of uncertainty.

A thorough analysis of the data indicated that the ICC of 0 in the benign report scoring was primarily attributable to minimal between-student variability in scores, creating a ceiling effect. For their presentation of the benign report, nearly all students received the maximum possible score, or just 1 or 2 points below it. This clustering of high scores made it challenging to distinguish differences among students, thereby diminishing the overall reliability of the assessment in the benign scenarios. The much higher ICC for malignant cases suggests the issue was with the ease of the benign scenario, rather than with the rubric piloted in this study. Based on our results, we aim to further improve scorer training by providing a better definition of what counts as met versus not met to increase interrater reliability.
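
To make the ceiling effect concrete, the sketch below (again using made-up numbers, not the study data) applies the same one-way ICC calculation to two hypothetical score sets in which raters never disagree by more than 1 point: when scores cluster at the maximum, the ICC collapses toward 0, whereas the same size of disagreement against a wider spread of scores yields a much higher ICC.

```python
# Made-up numbers illustrating why near-ceiling clustering drives a one-way ICC
# toward 0 even though the two raters never differ by more than 1 point.
import numpy as np

def icc_oneway(scores):
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    subj_means = scores.mean(axis=1)
    msb = k * np.sum((subj_means - scores.mean()) ** 2) / (n - 1)
    msw = np.sum((scores - subj_means[:, None]) ** 2) / (n * (k - 1))
    return max(0.0, (msb - msw) / (msb + (k - 1) * msw))  # negative estimates reported as 0

# Scores clustered at an 18-point maximum: little between-student variance to detect
benign_like = [[18, 17], [17, 18], [18, 18], [17, 17], [18, 17],
               [17, 18], [18, 18], [17, 17], [18, 17]]
# Scores spread over a wider range: the same 1-point disagreements matter far less
malignant_like = [[22, 23], [26, 27], [24, 24], [28, 27], [25, 26],
                  [23, 23], [27, 28], [24, 25], [26, 26]]
print(f"near-ceiling ICC: {icc_oneway(benign_like):.2f}")    # ~0
print(f"wider-spread ICC: {icc_oneway(malignant_like):.2f}")  # much higher
```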

Findings from our pilot study are inherently preliminary and subject to several limitations. The sample size was determined based on feasibility of enrollment during the study period and may not be representative of students at other institutions. Additionally, we lacked a sufficient sample size to stratify students by intended specialty or training year. Collecting this information in future studies could help identify the ideal timing for pathology report education. Training could be integrated either longitudinally, such as correlating with organ system blocks, or at specific points in the medical curriculum, such as before clinical rotations, during rotations, or prior to graduation. In the next phase of this project, we are refining the training materials to ensure comprehensive coverage of frequently missed items. Additionally, we are implementing enhanced rater training to improve interrater reliability on items that showed lower interrater agreement during this pilot phase.

Even in hematology-oncology fellowship programs, trainee instruction on pathology reports is often informal and may only occur incidentally during tumor board discussions. These observations support the potential role of a scoring rubric, along with accompanying training materials, even beyond undergraduate medical education. Our structured assessment may be particularly relevant in graduate medical education programs such as surgery, oncology, dermatology, and gastroenterology, where tissue diagnosis and pathology reports play a crucial role in daily clinical practice. The relevance of this assessment also extends to general practitioners who may be the first to interpret pathology reports for their patients. Equipping all medical providers, including but not limited to pathology and hematology/oncology trainees, with the skills needed to understand and explain pathology reports can significantly contribute to reducing patient anxiety and associated morbidity, regardless of the clinical setting.

Developing a comprehensive rubric for assessing the communication of pathology reports to patients is a critical element of designing educational interventions to teach pathology report explanation. Our pilot study successfully implemented a standardized evaluation tool and a brief pathology report training program with a sample of third- and fourth-year medical students. Based on these pilot data, we are working to validate the rubric in a larger sample of learners presenting the simulated malignant report. We recognize a potential need to design and validate rubrics for different types of pathology reports and procedures; however, we have accomplished the first step of demonstrating the feasibility of both teaching about pathology reports and evaluating their explanation to patients.

As participant feedback in our pilot study demonstrated, pathology report interpretation and communication would be valuable additions to the medical school curriculum and would help prepare graduating students to explain pathology reports to patients during their intern year of residency. Specifically, such training could equip them to confidently address questions and concerns patients have regarding their pathology results and facilitate better coordination of patient care. These pilot data lay the groundwork for future studies correlating trainees’ pathology report presentation performance with perceived quality of care, as assessed through feedback from a more diverse group of SPs. Incorporating patient satisfaction and comprehension into the evaluation strategy will further facilitate truly patient-centered explanations of pathology reports by physicians in all specialties.

We are grateful for the support and assistance from the Office of Clinical Skills Assessment in the Brody School of Medicine (BSOM) at East Carolina University: Rebecca Gilbird, MPH, director of clinical simulation; Patrick Merricks, MBA, associate director and standardized patient trainer; David Schiller, AAS, technical operations manager; Jessica Cringan, AAS, simulation specialist; and Kim Haga, MBA, program coordinator. We are also grateful for the time and valuable feedback provided by our standardized patients, Linda Kelly and Larry Cox, MBA, and for assistance from Benjamin Wise, MA, BSOM medical student.

1. Powsner SM, Costa J, Homer RJ. Clinicians are from Mars and pathologists are from Venus. Arch Pathol Lab Med. 2000;124(7):1040–1046.
2. Gibson B, Bracamonte E, Krupinski EA, et al. A “pathology explanation clinic (PEC)” for patient-centered laboratory medicine test results. Acad Pathol. 2018;5:2374289518756306.
3. Lapedis CJ, Horowitz JK, Brown L, Tolle BE, Smith LB, Owens SR. The patient-pathologist consultation program: a mixed-methods study of interest and motivations in cancer patients. Arch Pathol Lab Med. 2020;144(4):490–496.
4. Association of American Medical Colleges (AAMC). Number of people per active physician by specialty, 2021 (Table 1.1). Published December 31, 2021. Accessed September 23, 2023. https://www.aamc.org/data-reports/workforce/data/number-people-active-physician-specialty-2021
5. Knollmann-Ritschel BEC, Huppmann AR, Borowitz MJ, Conran R. Pathology Competencies in Medical Education and educational cases: update 2023. Acad Pathol. 2023;10(3):100086.
6. Knollmann-Ritschel BEC, Regula DP, Borowitz MJ, Conran R, Prystowsky MB. Pathology Competencies for Medical Education and educational cases. Acad Pathol. 2017;4:2374289517715040.
7. Accreditation Council for Graduate Medical Education (ACGME). Pathology Milestones. Published February 2019. Accessed September 23, 2023. https://www.acgme.org/globalassets/pdfs/milestones/pathologymilestones.pdf
8. Gehlbach H, Artino AR. The survey checklist (manifesto). Acad Med. 2018;93(3):360–366.
9. Sullivan GM, Artino AR. How to create a bad survey instrument. J Grad Med Educ. 2017;9(4):411–415.
10. Artino AR, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE Guide No. 87. Med Teach. 2014;36(6):463–474.
11. Koo TK, Li MY. A guideline of selecting and reporting intraclass correlation coefficients for reliability research. J Chiropr Med. 2016;15(2):155–163.
12. Wongpakaran N, Wongpakaran T, Wedding D, Gwet KL. A comparison of Cohen’s kappa and Gwet’s AC1 when calculating inter-rater reliability coefficients: a study conducted with personality disorder samples. BMC Med Res Methodol. 2013;13:61.
13. Schober P, Mascha EJ, Vetter TR. Statistics from A (agreement) to Z (z score): a guide to interpreting common measures of association, agreement, diagnostic accuracy, effect size, heterogeneity, and reliability in medical research. Anesth Analg. 2021;133(6):1633–1641.
14. Klein D. KAPPAETC: Stata module to evaluate interrater agreement. Statistical Software Components. Published August 11, 2022. Accessed September 23, 2023. https://ideas.repec.org//c/boc/bocode/s458283.html
15. Programmatic accreditation vs. institutional accreditation. Liaison Committee on Medical Education (LCME). Published 2023. Accessed September 23, 2023. https://lcme.org/about/programmatic/
16. Standards, publications, & notification forms. Liaison Committee on Medical Education (LCME). Published 2023. Accessed September 23, 2023. https://lcme.org/publications/
17. Zare-Mirzaie A, Hassanabadi HS, Kazeminezhad B. Knowledge of medical students about pathological reports. J Med Educ. 2017;16(1):26–34.
18. Bracamonte E, Gibson BA, Klein R, Krupinski EA, Weinstein RS. Communicating uncertainty in surgical pathology reports: a survey of staff physicians and residents at an academic medical center. Acad Pathol. 2016;3:2374289516659079.
19. Gibson BA, McKinnon E, Bentley RC, et al. Communicating certainty in pathology reports. Arch Pathol Lab Med. 2022;146(7):886–893.
20. Renshaw AA, Mena-Allauca M, Gould EW, Sirintrapun SJ. Synoptic reporting: evidence-based review and future directions. JCO Clin Cancer Inform. 2018;2:1–9.
21. Snoek JAA, Nagtegaal ID, Siesling S, van den Broek E, van Slooten HJ, Hugen N. The impact of standardized structured reporting of pathology reports for breast cancer care. Breast. 2022;66:178–182.
22. Lafreniere A, Purgina B, Wasserman JK. Putting the patient at the centre of pathology: an innovative approach to patient education—MyPathologyReport.ca. J Clin Pathol. 2020;73(8):454–455.
23. McBrien S, Bailey Z, Ryder J, Scholer P, Talmon G. Improving outcomes. Am J Clin Pathol. 2019;152(6):775–781.

Author notes

Supplemental digital content is available for this article at https://meridian.allenpress.com/aplm in the February 2025 table of contents.

Vora is currently employed at Kaiser Permanente, Denver, Colorado.

This work was supported by ECU Health, Department of Pathology and Laboratory Medicine, Seed Research Funding.

The authors have no relevant financial interest in the products or companies described in this article.
