Context: Clinician feedback is an important source of information for laboratory quality improvement programs.
Objective: To pilot a program for nearly real-time solicitation and analysis of physician feedback regarding clinical laboratory services.
Design: Laboratories distributed either electronic or paper survey forms to physicians. Results were tabulated by College of American Pathologists staff. Free-text comments were shared promptly with the participating laboratories to facilitate follow-up.
Results: Forty-seven clinical laboratories participated in the study and submitted results for 987 physician surveys, including both paper and electronic forms. Of 694 responses submitted electronically within the study period, 460 (66.3%) included at least 1 free-text entry, for a total of 951 free-text comments.
Conclusions: Point-of-service solicitation of physician feedback regarding clinical laboratory services is feasible and can provide a substantial quantity of potentially useful information regarding laboratory performance from the customer perspective.
Many organizations consider customer centricity, defined as putting the customer at the center of everything an organization does, to be the key to success.1,2 Organizations that misunderstand or fail to meet customer needs and expectations may lose customers as those customers seek alternative providers.2 Organizations that are tightly attuned to customer needs, on the other hand, gain an invaluable source of insight into how to better meet those needs.3 There is nothing to suggest that customer centricity is any less important in laboratory services than in any other organizational setting.
Businesses commonly use surveys as a primary tool to measure their success in meeting the needs of their customers. However, customer surveys vary widely in both reliability and usefulness.4,5 During the past decade, many companies have changed the way they monitor customer satisfaction.2 First, they have shifted from administering long customer satisfaction surveys that measure only overall contentment with the provider relationship (termed relationship surveys) to shorter, more frequently administered point-of-service surveys that measure customers' satisfaction with their most recent interactions (termed transactional surveys). Unlike relationship surveys, transactional surveys allow service providers to recognize and repair service defects soon after they occur.6
Second, many companies have embedded validated metrics within their surveys, such as the Net Promoter Score (NPS), which asks customers to rate how likely they are to recommend the product or service to other potential customers.2 Such scores support longitudinal assessment of organizational performance.
Health care accrediting bodies, including the College of American Pathologists (CAP) and The Joint Commission (TJC), require institutions to measure customer (ie, physician, client, and/or patient) satisfaction with laboratory services at least every 2 years.7,8 Neither of these accrediting agencies specifies the manner in which institutions are to record customer satisfaction or the type of surveys they must use.
Since 1989, the CAP Q-Probes/QP studies have measured performance across many functions in anatomic pathology and laboratory medicine.9–11 Participants in these studies, representing a wide spectrum of practice settings worldwide, have been able to compare their performance with that of their peers and to share laboratory practices associated with superior performance.
Previous Q-Probes physician satisfaction studies were performed in 2007, 2014, and 2018.12,13 These CAP studies used long, infrequently administered relationship-type surveys to assess physician satisfaction with clinical laboratory services and anatomic pathology services. The number of pathology practices participating in these satisfaction survey programs decreased over the years, which motivated the search for ways to make the surveys more useful. In the current QP study we examine the feasibility of administering a shorter, more frequent survey as an initial step toward point-of-service feedback.
MATERIALS AND METHODS
The CAP Quality Practices Committee developed a transactional-type satisfaction questionnaire for use within the CAP Q-Probes program14 (Figure). The survey was implemented in 2 formats: (1) a paper-based (hard copy) survey, which could also be scanned and emailed or faxed to participating laboratories' physician customers, and (2) an electronic questionnaire using a survey tool from Qualtrics (Provo, Utah).
Laboratories ordered the Physician Satisfaction with Clinical Laboratory Services study from the CAP through CAP's marketing division. The study was offered in the CAP 2021 Surveys and Anatomic Pathology Education Programs catalog. CAP's paper catalog is sent to laboratories worldwide and can also be accessed freely online at the CAP Web site, www.cap.org. Laboratories received the paper catalog in October 2020, and the study could be ordered through June 7, 2021, a 9-month period.
Paper study instructions and result forms were mailed to the laboratories on June 1, 2021; online result forms and kit instructions were posted on CAP's e-LAB Solutions Suite (CAP, Northfield, Illinois); and electronic links to the online questionnaire were sent to the study contact emails for each facility on June 7, 2021. The study was in the field from June 7, 2021, to August 20, 2021, and data submission remained open through August 30, 2021, to allow for late entries, for a total of 12 weeks.
Participating laboratories determined their own means of survey distribution and collection. Unfortunately, there was no mechanism to monitor the number or frequency of surveys distributed by participating laboratories, and so it was not possible to calculate response rates. Physicians could choose whether to complete a survey anonymously or whether to provide their name and contact details in order to facilitate follow-up communication. Survey responses provided online through the Qualtrics tool were available directly to CAP staff for analysis and did not require additional processing on the part of participating laboratories. For laboratory participants using hard copy physician response forms, laboratory study coordinators manually entered the data onto specially designed QP input forms for up to 50 physician questionnaires per laboratory. Participants were instructed to fax or mail these input forms to the CAP data center or to enter their questionnaire data electronically onto data collection forms accessible on the CAP e-LAB Solutions Suite database.
Physicians completing this survey graded 9 laboratory service categories and provided their overall satisfaction with clinical laboratory services. A 5-point rating scale was used to rate the service categories as follows: 5, excellent; 4, good; 3, average; 2, below average; and 1, poor. Respondents could also select “not applicable.”
For responses of “below average” or “poor,” physician customers were prompted to provide free-text explanations. We requested that physicians identify the service category that was most important to them and also indicate whether they would recommend their institution's laboratory to another physician. (Note that this last question required a binary yes/no response, rather than a numeric NPS-style score. This choice was made in order to allow comparisons with previous surveys that had used the same question.) We provided survey results and percentile distributions, including overall satisfaction scores as well as satisfaction scores for each of the 9 service categories, to each participating laboratory for their internal use.
After the study period ended, CAP staff distributed a follow-up survey to participating laboratories. It included questions regarding impressions of, and subsequent actions in response to, the physician feedback.
Exclusion Criteria
Physicians-in-training (interns, residents, and fellows) were excluded from receiving satisfaction surveys.
Statistical Analysis
Statistical analyses were performed to determine which factors were significantly associated with the overall institution physician satisfaction score. The overall physician satisfaction score was observed to be skewed in distribution; therefore, nonparametric Kruskal-Wallis tests were used to test for univariate differences in discrete-valued independent variables. Discrete-valued independent variables were eligible for analysis only if each reported category represented about 20% of the sample.
Spearman correlation tests were performed to test for rank-based (nonparametric) correlations between the overall satisfaction score and various quantitatively measured general practice variables, including 2020 billable test volume, percentage of billable tests performed on particular patient types, total testing and nontesting laboratory full-time equivalent employees (FTEs), and laboratory FTEs devoted to particular laboratory areas.
Other service category satisfaction scores were not analyzed because they are significantly correlated with overall satisfaction. A statistical significance level of .05 was used for this analysis. All analyses were performed with SAS 9.4 (SAS Institute, Cary, North Carolina). Data from laboratory participants who submitted fewer than 5 physician survey responses were excluded from the data analysis. We calculated institutional satisfaction scores as the mean of the ratings.
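The analyses described above were performed in SAS. As a hedged illustration only, the same nonparametric tests can be reproduced with SciPy; all data values below are synthetic stand-ins, not study data, and the grouping variable (teaching status) and volume figures are hypothetical.

```python
# Illustrative sketch (not the authors' SAS code): the nonparametric tests
# described above, applied to invented institution-level data.
from scipy import stats

# Hypothetical mean satisfaction scores (1-5 scale) per institution,
# split by a discrete practice variable (eg, teaching status).
teaching = [4.6, 4.2, 4.8, 4.4, 4.5]
nonteaching = [4.3, 4.7, 4.1, 4.9, 4.4]

# Kruskal-Wallis test for a univariate difference between the groups.
kw_stat, kw_p = stats.kruskal(teaching, nonteaching)

# Spearman rank correlation between satisfaction and a quantitative
# practice variable (eg, 2020 annual billable test volume).
scores = [4.6, 4.2, 4.8, 4.4, 4.5, 4.3, 4.7, 4.1, 4.9, 4.4]
volumes = [1.2e6, 0.4e6, 2.1e6, 0.9e6, 1.5e6, 0.6e6, 1.8e6, 0.3e6, 3.0e6, 1.0e6]
rho, sp_p = stats.spearmanr(scores, volumes)

print(f"Kruskal-Wallis p={kw_p:.3f}, Spearman rho={rho:.2f} (p={sp_p:.3f})")
```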
RESULTS
A total of 66 laboratories signed up to participate in the study. Of these, 47 facilities returned at least 1 physician survey response. Seven participating facilities submitted fewer than 5 physician surveys and were excluded from the laboratory-specific analyses, resulting in an analysis sample size of 40 institutions. Of these 40 institutions, 28 used the Qualtrics survey, 10 distributed the survey using some other method, such as paper, and 2 used both the Qualtrics survey and some other method.
Of 694 responses submitted electronically within the study period, 460 (66.3%) included at least 1 free-text entry, for a total of 951 free-text comments. Approximately half of the electronic responses (362; 52.2%) included the respondent's specialty.
Summary of Free-Text Responses
Six of the free-text questions (5, 6, 7, 8, 9, and 10; note that these numbers are from the Qualtrics version of the survey rather than the numbering represented in the Figure) were only posed to respondents who had entered a score of “poor” or “below average” to the corresponding question. Because only a small fraction of respondents indicated dissatisfaction on the corresponding questions, there were many fewer responses to these questions than to questions 12 and 13, which were posed to all respondents.
Question 5 requested “specifics and/or suggestions for improvement” from those respondents who had entered a score of “poor” or “below average” regarding the adequacy of the test menu. There were 22 free-text responses to this question. Almost all of them involved frustration with searching for tests in the local electronic health record. Sample response: “The menu has duplicate test names and not all are valid. It is confusing. Pathology specimens have to be hand written requests and should be in the EHR.”
Question 6 requested “specifics and/or suggestions for improvement” from those respondents who had entered a score of “poor” or “below average” regarding the ease of placing test orders. There were 32 responses to this question. Most of the responses involved frustration with either the functionality of the electronic health record system or the available order sets in that system. Sample response: “Having to place orders that are not routine is not intuitive and requires significant assistance, as the specific codes are needed when placing the order. Each time I need to do this, I have to contact the lab to walk me through it.”
Question 7 requested “specifics and/or suggestions for improvement” from those respondents who had entered a score of “poor” or “below average” regarding the quality or reliability of results. There were 15 responses to this question. The responses described a range of suspected laboratory errors, ranging from specimen collection errors to result entry errors. Sample response: “Potassium is always hemolyzed no matter the day or the nurse drawing so high suspicion this is a lab issue.”
Question 8 requested “Please indicate the specific tests” from those respondents who had entered a score of “poor” or “below average” regarding turnaround times. There were 51 responses to this question. Although some respondents indicated specific tests, most described more general frustration and/or indicated clinical scenarios, such as emergency department orders or STAT orders. Sample response: “HIV testing takes 2-3 weeks to get back. Sti testing in general is several days.”
Question 9 requested “specifics and/or suggestions for improvement” from those respondents who had entered a score of “poor” or “below average” regarding critical value notifications. There were 23 responses to this question. Some of them questioned the criteria for critical value notifications on particular tests, and others expressed dissatisfaction with the process. Sample response: “Critical lab results notification has to be consistent. We may get notified about electrolytes, hemoglobin extreme values but I have never been notified about a positive culture for example.”
Question 10 requested “specifics and/or suggestions for improvement” from those respondents who had entered a score of “poor” or “below average” regarding the performance of clinical laboratory staff. There were 40 responses to this question. Many of them involved concerns about difficulty in reaching lab personnel, for example, by telephone; others were concerns about professionalism. Sample response: “It is so hard to know who to contact if we have a question about a lab at times. Not just a question about a result but what types of labs are available, how best to order, etc.”
Question 12 asked, “What do you like most about our services?” Of the completed surveys, 61% included a response to this question (421 of the 694 responses). The responses covered all aspects of laboratory performance, including fast turnaround time, courteous and responsive staff, accurate results, and fast critical value notification. Sample response: “Efficient, customer oriented, ready to address unusual clinical challenges, and overall a pleasure to deal with their staff.”
Question 13 asked for “[G]eneral comments or suggestions for improving services.” Of the completed surveys, 51% included a free-text response to this question (355 of the 694 responses). These responses covered an extremely wide range of both compliments and constructive suggestions, some quite general and others quite specific. Sample response: “I'd like to be able to order synthetic cannabinoid testing electronically only, without having to make an extra call to the lab.”
Institution demographic information was obtained from 34 participating institutions that also submitted the 2021 CAP Q-Probes/Q-Tracks demographics form (QDEM). The following variables are summarized in Table 1: occupied bed size, teaching hospital status, pathology resident training status, institution location, governmental affiliation, institution type, and CAP and Joint Commission inspection status. A total of 25 of 34 institutions (73.5%) were from the United States. The remaining participants were from Qatar (5), Saudi Arabia (3), and Brazil (1). Participants' annual clinical laboratory billable test volumes in 2020 ranged from 200 000 to 7 760 769, with a mean of 1 712 126 billable tests.
Nineteen institutions responded to questions regarding customer and patient satisfaction programs, although not all answered every question. Of the 18 that responded to the question on whether their institution had an ongoing patient satisfaction program, 17 (94.4%) indicated that they did. Of the 18 that responded to the question on whether they had a formal customer satisfaction training program for all laboratory employees, 7 (38.9%) reported that they did (Table 2).
Physicians were asked to select the category that was most important to them; 395 physicians of 956 respondents (41.3%) selected quality/reliability of results, followed by 162 (16.9%) who selected STAT test turnaround times (Table 3).
No significant associations were identified between the overall institution physician satisfaction score and various individually tested, quantitatively measured general practice variables, including 2020 annual billable test volume, percentage of billable tests performed on particular patient types, total testing and nontesting FTEs, and laboratory FTEs devoted to particular laboratory areas. Similarly, there were no significant associations (at a significance level of .05) between overall institution satisfaction scores and various individually tested qualitative practice characteristics or demographic variables, such as occupied bed size and institution location.
A follow-up survey (see the supplemental digital content at https://meridian.allenpress.com/aplm in the June 2024 table of contents) aimed at determining the usage and impact of the weekly expedited report enhancement was distributed to all laboratories that signed up for the study (n = 64). Of the 12 laboratories that responded to a question about the helpfulness of the physician comments, all 12 indicated that the free-text physician comments regarding specific areas rated “poor” or “fair” were either “moderately helpful” (1 of 12; 8.3%) or “very helpful” (11 of 12; 91.7%). Of the 13 respondents to a question about follow-up actions, 3 (23.1%) reported plans to change a process based on comments received in the physician surveys, 3 (23.1%) reported that they had contacted 1 or more survey respondents in response to comments received, and 1 (7.7%) reported being able to correct a problem “promptly” because of 1 or more comments received.
DISCUSSION
This study demonstrates the feasibility of obtaining physician feedback regarding laboratory services using a brief survey that can be administered electronically, frequently, and close in time to when the services were provided. Transactional surveys that evaluate customer satisfaction soon after services are delivered give providers better opportunities to identify and correct service defects expeditiously than do relationship surveys that evaluate overall customer satisfaction at times distant from service delivery.1,5 Once the trail is cold, it may be difficult if not impossible to reconstruct, analyze, and repair the service defects that undermined those services. In this study, only 2 of 18 participating laboratories (11.1%) reported surveying customer satisfaction annually, and no participants reported surveying customer satisfaction at the point of service. Four of 18 (22.2%) reported a survey frequency of “other,” which provided little insight into their actual survey practices. Future studies might compare the abilities of transactional and relationship surveys to eliminate laboratory service defects and improve satisfaction.
Of 953 physician respondents, 879 (92.2%) reported that they would recommend their laboratory to other physicians. This figure does not represent a true NPS because responses were binary rather than scaled (eg, 1–10). It does, however, suggest a high degree of customer satisfaction among the laboratory customer populations participating in this study. Because NPS-type scores are not recommended for benchmarking across institutions, this number should be considered only in the context of monitoring institutional scores over time. Different institutions, especially those dispersed geographically, may differ in the populations they serve, the demands of their medical communities, and the testing capabilities of their laboratories, all of which make NPS-type scores less comparable across settings. Each laboratory must instead determine its own level of customer satisfaction and correct service defects in pursuit of a perfect score.
The relatively small sample sizes of laboratories sharing similar practices and comprising peer groups may have compromised our ability to establish statistically significant associations. We also expect that the heterogeneity of laboratory environments mentioned above would confound associations between satisfaction and the few laboratory characteristics about which we chose to inquire.
The follow-up survey, despite its small numbers, does suggest that participating laboratories found constructive feedback from physicians to be helpful, and that some of them made process changes as a result of the feedback. Subsequent studies of this program should identify what attributes of feedback might lead to faster and more effective improvements in laboratory performance.
References
Author notes
Supplemental digital content is available for this article at https://meridian.allenpress.com/aplm in the June 2024 table of contents.
Competing Interests
The authors have no relevant financial interest in the products or companies described in this article.