Context.—

Health care providers were surveyed to determine their ability to correctly decipher laboratory test names and their preferences for laboratory test names and result displays.

Objective.—

To confirm principles for laboratory test nomenclature and display and to compare and contrast the abilities and preferences of different provider groups for laboratory test names.

Design.—

Health care providers across different specialties and perspectives completed a survey of 38 questions, which included participant demographics, real-life examples of poorly named laboratory orders that they were asked to decipher, an assessment of vitamin D test name knowledge, their preferences for ideal names for tests, and their preferred display for test results. Participants were grouped and compared by profession, level of training, and the presence or absence of specialization in informatics and/or laboratory medicine.

Results.—

Participants struggled with poorly named tests, especially with less commonly ordered tests. Participants’ knowledge of vitamin D analyte names was poor and consistent with prior published studies. The most commonly selected ideal names correlated positively with the percentage of the authors’ previously developed naming rules that each name followed (R = 0.54, P < .001). There was strong consensus across groups for the best result display.

Conclusions.—

Poorly named laboratory tests are a significant source of provider confusion, and tests that are named according to the authors’ naming rules as outlined in this article have the potential to improve test ordering and correct interpretation of results. Consensus among provider groups indicates that a single yet clear naming strategy for laboratory tests is achievable.

Laboratory test names have been reported to be ambiguous and a source of ordering and interpretation error in multiple countries and by multiple provider specialties.1–13  Passiment et al stated, “…efforts to obtain even the most commonly ordered tests are often derailed by excessively complex nomenclature.”2  Prior publications and national efforts have noted problems with laboratory test names.1,2,5,7–14  The only clarity surrounding laboratory test names is that clinicians often find them ambiguous and obscure.

There are many reasons for the obscurity. Laboratory test names may be assigned using locally known nonubiquitous terms or abbreviations.3  This can cause the same test to have different names among organizations as well as within them when tests have to be sent to different laboratories based on location or insurance requirements.15  Laboratory test catalogs may use insufficient or uncommon synonyms, cryptic abbreviations or symbols, or similar names for completely different tests (eg, lactate and lactate dehydrogenase),2  or may have the same analyte name for different tests that use different methods (eg, HIV-1). The same name can mean different things at different locations. For example, “CBC” may mean just a complete blood count or may mean “CBC with automated and/or manual differential.” “FG” may designate fasting glucose, finger-stick glucose, or even a fecal gastric occult blood test.16  When analyte names are similar, it can be difficult for clinicians to know which test to use.17  Many laboratory information systems (LISs) and electronic health records (EHRs) have character length limitations, usually 40 or fewer characters, necessitating abbreviations that may cause further confusion.

Personnel of different expertise and backgrounds contribute to ambiguous test names. For example, the person naming tests may not have medical or even laboratory expertise (eg, information technology analysts). Differences between 2 laboratory tests with overlapping names may not be understood, resulting in erroneously choosing names that lack sufficient specificity or accuracy. For example, a thyroglobulin level that includes a pretest assessment for anti-thyroglobulin antibodies was misnamed “anti-thyroglobulin antibodies.” When tests are named by different people without a style guide, or with a poor one, variability in the structure of test names and in the quality of test ordering is certain to occur. Variable language used by people with different medical backgrounds causes failures in communication,10,18  and laboratory test names are no different. For example, “dialysis” means a procedure performed on patients in kidney failure but may also mean a particular test method to nonphysician laboratorians.14  Similarly, “free” may mean an unbound state to physicians and laboratorians but may mean without cost to other providers.

In a survey of primary care physicians,1  more than 14% expressed uncertainty about proper test ordering, including challenges regarding different names for the same test, tests available only within a panel without specifying the content of the panel, and different tests in panels with the same names (eg, MyoMarker Panel I and MyoMarker Panel II). There have been efforts to standardize laboratory test names, but these have not been broadly successful.4,6,19–21  Many naming convention authorities are federal agencies, laboratory specialists, and instrument manufacturers and do not always include front-line clinicians.2  Other efforts have focused on clinical decision support at the point of order to help clinicians order the correct test at the appropriate time (eg, vitamin D).9,12  All these endeavors have been undertaken because improper test ordering leads to diagnostic errors, increased costs, and delays in diagnosis.11,22–29

Errors that occur when a physician is deciding which test to order have been referred to as pre-preanalytical errors, and misinterpretations of a valid laboratory result have been referred to as post-postanalytical errors.22  The overall amount and impact of these errors are largely unknown because most studies on laboratory test errors examine reported incidents or risk events.23,24,26–29  Medical staff do not report all errors, and errors that question the ordering or interpretation judgment of a physician are even less likely to be reported unless patient harm has occurred. As a result, these errors have mostly flown under the radar except when the volume of inappropriate orders is high (eg, vitamin D) or confusion about the meaning of test results is rampant. Sadly, no amount of Lean or Six Sigma processes in the laboratory will help resolve these errors.

A laboratory test name serves 3 purposes: (1) to accurately represent what the test does, (2) to clearly differentiate the test from other tests that may have similar names, and (3) to be understood by an ordering provider so that the correct test is ordered for the patient. Therefore, the name of a laboratory test is a critical communication tool in which the message intended by the sender (the pathologist and/or PhD laboratorian) must be accurately received and understood by the receiver (the ordering provider). The inputs of patient-facing providers, laboratorians, pathologists, and informatics experts are all critical to the development of ubiquitously understood laboratory test names.1,25,28  No one can be an expert on every single laboratory test. Informaticists were included in this survey population because, in addition to being ordering providers, they understand the importance of a properly designed EHR to good communication. Proposals to limit certain laboratory orders to specific physician specialties do not address the root cause of a poor test name9  and are not optimal for patient care because some specialties are not always available.

The 2009 Health Information Technology for Economic and Clinical Health Act initiated Meaningful Use (now Promoting Interoperability),30  which required that all LISs and EHRs use Logical Observation Identifiers Names and Codes (LOINC) as the standard for encoding laboratory terms.31  The intent was to enable improved interoperability, but the legislation and regulations imposed no standardization of laboratory names. LOINC has a number of shortcomings,32–36  and most institutions do not display LOINC codes to end users because they are generally unhelpful to busy clinicians and sidestep the need for a simple, clear, and unambiguous name for the test.

The first and last authors of this study are informaticists (1 practicing pathologist and 1 practicing hospitalist) who have regularly encountered ordering errors as a result of obscure or misleading test names, resulting in perplexed clinicians requesting help to find tests they want to order. Publications and efforts that describe this problem have mostly been opinion papers or case studies. Therefore, a study assessing the abilities of health care providers to decipher laboratory test names and to indicate preferred names and result displays is sorely needed, hence this manuscript. This project also assessed whether providers preferred test names that adhered to naming rules previously developed by the authors.

This study assessed participants’ abilities to decipher laboratory test names, their preferences for laboratory test names, and their preferred EHR display of laboratory results. The results were further analyzed for differences between providers’ professions and specialties.

Survey

The survey was designed for health care providers who order, perform, or review laboratory tests. The institutional review board determined that this project did not meet the definition of research as defined in 45 CFR §46.102(d) and therefore was not subject to regulations or board oversight.

The survey collected information about the participant or the participant’s organization only as provided by the participant. Internet protocol addresses and other behind-the-scenes information were not collected. Names of information systems and organizations were not requested. The entrance page to the survey disclosed all of the above, stated the intent to publish the accumulated results in a peer-reviewed journal, and informed participants that they could exit the survey at any time. Although a few questions in the demographic section were conditional based on whether the participant had informatics certification or experience, all remaining questions were required.

The survey was designed in SurveyMonkey (San Mateo, California). It consisted of 38 total questions split into 5 question groups (QGs): Demographics, Decipher, Vitamin D, Ideal Name, and Display (Table 1). An example question from the Decipher QG, a comparison of vitamin D test names, and an example question from the Ideal Name QG are in Supplemental Tables 1 through 3, respectively, and the answer choices presented to the participants for the Display QG are in the Supplemental Figure (see supplemental digital content containing 5 tables, 1 figure, and survey questions, available at https://meridian.allenpress.com/aplm in the February 2024 table of contents).

Table 1

Question Groups (QGs)


The Demographics QG collected individual and organizational demographics, including the individual’s profession, degrees, specialty, and subspecialty and the individual’s organization’s size and type. The Decipher QG presented laboratory test names from actual EHR production environments and asked participants to decipher the intended test. One of the answers was the full name of the intended laboratory test, another was “I don’t know,” and the remaining 2 or 3 were different tests, some of which were plausible because of the ambiguity of the test name and others of which were incorrect for the test name presented. Five questions had a single ridiculous and intentionally humorous answer to keep survey participants engaged. The “I don’t know” answer was always last; other answers were in random sequence. “I don’t know” was included as an answer choice to allow participants to communicate uncertainty about the test name. Although ordering providers who lack certainty may order a test based on their best guess, they may instead seek more information by accessing linked material or by calling the laboratory. Because neither of the latter 2 options was available in the survey, the “I don’t know” answer communicated a lack of certainty that would otherwise be hidden if the only answer choices were actual test names.

Table 2 lists questions in this group; Supplemental Table 1 gives a specific example, and the entire list of questions and possible responses are in the full survey data provided in the supplemental digital content. We included blood and urine tests from chemistry, hematology, infectious disease/microbiology, transfusion medicine, and molecular diagnostics.

Table 2

Laboratory Test Names Presented in Decipher Question Group


The Vitamin D QG assessed respondents’ knowledge of vitamin D test names. Vitamin D tests are notorious for being incorrectly ordered because each form of vitamin D can have multiple synonyms, none of which have any reasonable association with the biological functions assessed (see Supplemental Table 2). These questions were included to determine whether participants in this survey performed comparably with other published data, that is, poorly.9  Q19 asked participants to mark all the tests that were the same, maybe the same, or different from a test named “vitamin D (25-[OH]) serum.” The correct answer for one of the answer choices (“vitamin D (25-[OH]), serum, total”) is “same,” whereas the answers for the remaining choices (“vitamin D2 (25-[OH]), serum,” “vitamin D3 (25-[OH]), serum,” and “hydroxycholecalciferol”) are “different,” although the authors admit this was tricky.

The Ideal Name QG presented participants with full and unambiguous descriptions of laboratory tests and asked respondents to select the best shorter name for the test. Supplemental Table 3 shows an example. The majority (49 of 52) of answer choices were within 40 characters to best emulate character restrictions for test names in LISs and EHRs. Answer choices ranged from ambiguous and unstructured (no standard naming guide, with test elements in variable order) to specific and highly structured (using the set of consistent naming rules outlined in Table 3). These naming rules were developed before this study was conducted, in response to problems experienced with various poorly named tests, and are currently used at both authors’ institutions. This survey-based experiment was designed to evaluate the utility of these naming conventions as a communication tool. Because some rules apply only to certain types of laboratory tests, the total number of rules applicable to each test name was recorded, as was the number of rules the specific answer choice followed.

Table 3

Rules for Good Laboratory Naming Practicea


The Display QG asked participants to select the best of 4 displays of laboratory result names in a spreadsheet result view commonly found in EHRs. These included an ungrouped alphabetical list of laboratory test names in mixed case, a grouped list in mixed case with many abbreviations, a grouped list in mixed case with very few abbreviations, and a grouped list in all upper case. Lists were grouped by medical test category (eg, blood gases, chemistry, endocrinology, immunology). Abbreviations that, in the authors’ experience, are commonly used across all health care organizations (eg, CBC, ALT, AST, Hct) were considered ubiquitous; any abbreviation not commonly used among various health care organizations was considered nonubiquitous.

The authors distributed the survey to various professional societies and workgroups, including the Association of Medical Directors of Information Systems, the Association for Pathology Informatics, the Association of Molecular Pathology, the Society for Pediatric Pathology, and the Epic Smartserv User Group, as well as to several working groups within the American Medical Informatics Association that included physicians, nurses, and other health care providers. The authors also sent the survey link out via social media, including Twitter and LinkedIn (Microsoft Corporation). Because of the mechanism by which the survey invitation was distributed, it was not possible to determine the number of people who received the call for participation.

The survey was open for 4 weeks, with 2 subsequent weekly reminders to complete it. A participant’s response qualified for analysis if he or she completed the survey and demonstrated clinical health care experience in the Demographics QG questions. Participants’ responses were excluded if the survey was incomplete or if the demographics indicated a nonmedical profession.

Statistical Analysis

The purpose of the analysis was to summarize survey responses with frequency counts and percentages. Responses to the Decipher QG were either dichotomized (“I don’t know” versus all other responses) or grouped into 3 categories: “I don’t know,” the most popular test description response, and all others. Categorization of question (Q) 12 was only “I don’t know” versus all others because only 15 respondents answered with something other than “I don’t know.” Responses to Q19 are summarized by the 3 possible responses for each of 4 tests, and responses to Q20 are dichotomous, as “same” or “different.” Responses to the Ideal Name QG and Display QG are also dichotomized as the most popular response versus all others. The results include the number and percentage of missing responses for each survey question. The proportion of actual responses represents nonmissing response percentages. Statistical analysis software (SAS version 9.4; SAS, Cary, North Carolina) was used for analysis, and P values < .05 were considered significant.
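
The categorization described above can be illustrated with a short sketch (a hypothetical example, not the study code, which was written in SAS; the response strings below are invented):

```python
from collections import Counter

# Hypothetical Decipher responses for one question (not study data)
responses = [
    "I don't know", "Direct antiglobulin test", "Direct antiglobulin test",
    "Diet as tolerated", "I don't know", "Direct antiglobulin test",
]

counts = Counter(responses)
# Most popular response other than "I don't know"
most_popular = max((r for r in counts if r != "I don't know"), key=counts.get)

def categorize(response):
    """Collapse a raw response into the 3 analysis categories."""
    if response == "I don't know":
        return "I don't know"
    return "most popular" if response == most_popular else "all others"

summary = Counter(categorize(r) for r in responses)
```

The dichotomized version is then just a matter of merging the “most popular” and “all others” categories.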

In the Ideal Name QG, the number of participants who selected each answer choice was correlated against the percentage of applicable rules for laboratory test naming practices developed by the authors that the selected answer choice followed. The percentage of rules for each answer choice was tested for normal distribution using the Shapiro-Wilk and Jarque-Bera tests for normality. The strength of association between the number of participants who selected responses and the percentage of naming rules the selected answer choice followed was calculated using the Pearson correlation coefficient. Statistical calculations for this portion of the study were performed in RStudio Cloud (RStudio Server Pro version 1.3.1056-1) using the tidyverse package (version 1.3.0) and the tseries package (version 0.10-47).
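
The correlation step can be sketched as follows (a minimal illustration, not the study’s R code; the rule percentages and selection counts are invented, and the normality tests are omitted):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def t_statistic(r, n):
    """t statistic for testing r != 0, with n - 2 degrees of freedom."""
    return r * sqrt((n - 2) / (1 - r ** 2))

# Invented example: percentage of applicable naming rules each answer
# choice followed, and how many participants selected that choice
rules_pct = [25, 40, 50, 60, 75, 90]
n_selected = [3, 10, 8, 20, 35, 52]

r = pearson_r(rules_pct, n_selected)
t = t_statistic(r, len(rules_pct))
```

As a consistency check on the reported results, R = .54 with df = 45 (47 answer choices) gives t ≈ 4.30 by the same formula, in line with the reported t = 4.3333 once rounding of R is taken into account.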

Responses about laboratory test names were also summarized by comparing participants grouped into different categories according to their responses to the Demographics QG. Five rounds of grouping and analysis were performed: (1) by profession alone, (2) by profession and with or without laboratory or pathology expertise, (3) by only the presence or absence of laboratory or pathology expertise, (4) by the presence or absence of informatics expertise with or without laboratory or pathology expertise, and (5) by only the presence or absence of informatics expertise. As with the overall analysis, the survey responses were intrinsically designed to enable analysis by frequency counts and percentages.

Responses to each of the 5 QGs were split according to participant responses to the Demographics QG for each round. Missing responses were not included as a category in the tests of comparison. For each round, a statistician compared the responses of each category of participants using Pearson χ2 or Fisher exact tests, as appropriate. Statistical analysis software (SAS version 9.4) was used for analysis, and P values < .05 were considered significant.
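
For a 2 × 2 comparison with small counts (eg, one expertise group versus another on an “I don’t know” response), the Fisher exact P value can be computed directly from the hypergeometric distribution. A minimal pure-Python sketch follows (illustrative only; the study used SAS, and the example table is invented):

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher exact P value for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins that is no more probable than the observed table.
    """
    (a, b), (c, d) = table
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):
        # Probability of x in the top-left cell, given fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for p in (p_table(x) for x in range(lo, hi + 1))
               if p <= p_obs * (1 + 1e-9))

# Invented table: rows = with/without laboratory expertise,
# columns = answered "I don't know" / any other answer
p = fisher_exact_two_sided([[8, 2], [1, 5]])
```

For larger expected cell counts, the Pearson χ2 test mentioned above is the usual alternative.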

All Participant Responses

Demographics QG

A total of 269 participants responded to the survey, of whom 107 did not finish it (40% abandonment rate) and were excluded from analysis. All 162 respondents who completed the survey (60% completion rate) were medical professionals. Specifics about demographics are shown in Tables 4 and 5. The supplemental digital content file provides all responses to all survey questions. Among those who completed the survey, the average time to complete the survey was 26.79 minutes (SD, 90.8 minutes), which included 8 participants with outlier times of greater than 60 minutes up to a maximum of 17 hours. With these 8 outliers excluded, the average time to complete the survey was 12.84 minutes (SD, 9.09 minutes). Statistically significant differences between groups are summarized in Table 6.

Table 4

Groups of 162 Participants Examined in Each of 5 Different Rounds of Survey Data Analysis

Table 5

Participant Characteristics

Table 6

Significant Differences Between Various Analyses Performed by Categorizing the Participants in Different Waysa


Decipher QG

Analysis of the Decipher QG responses from all participants is shown in Figure 1. More participants correctly deciphered commonly used tests with ubiquitous abbreviations compared with answers that had more cryptic, location-specific abbreviations or incorrect spellings. A considerable number of participants deciphered a different test that, although not the name intended by the laboratory that used it, was medically plausible. For some tests, many participants deciphered the test names into incorrect (unavailable, not medically plausible) tests. For all tests except “Free T4,” there were substantial numbers of “I don’t know” responses among the 162 participants, ranging from 22 (13.7%) to 147 (90.7%).

Figure 1

Comparison of Decipher question group responses by response type (all participants). Abbreviations: ABS, absolute; Adeno, adenovirus; ANAL, analysis; BVK, misspelled abbreviation for BK virus; DAT, direct antiglobulin test; HCGB, human chorionic gonadotropin beta; HIV, human immunodeficiency virus; NIDA, National Institute on Drug Abuse; qPCR, quantitative polymerase chain reaction; T4, thyroxine; UTPQ, urine total protein quantitative; WBC, white blood cell. Note: These names represent actual tests that displayed in electronic health records, and some of the abbreviations used could not be discerned by the authors (eg, SC).


Vitamin D QG

Analysis of the Vitamin D QG responses corroborates prior published findings (see Table 7).37–39  For the name comparisons in the more difficult question (Q19), 45 of 162 participants (28.5%) to 104 of 162 participants (67.1%) selected uncertain (“maybe the same”) or incorrect answers for each answer choice (see Figure 2). Even for the most similar answer choice, 45 participants (28.5%) were uncertain or incorrect. The proportion of incorrect and uncertain answers increased with increasing dissimilarity of names. For the easier question (Q20), 132 participants (83.5%) chose the correct response of “different.” Given the low accuracy of responses to Q19, the percentage of correct answers may reflect good guesses, based on both the D2 and D3 forms being included, rather than knowledge.

Table 7

Vitamin D Answer Choices Among 162 Total Participantsa

Figure 2

Question 19 asked participants to “mark all the tests that are the same, maybe the same or different from vitamin D (25-[OH]) serum,” based on whether the response was correct, unsure, or incorrect (see Table 7). Abbreviations: OH, hydroxy group; vitamin D2, vitamin D derived from dietary sources; vitamin D3, vitamin D derived from sunlight.


Ideal Name QG

The highest-ranked answers for each question in the Ideal Name QG are shown in Supplemental Table 4. The Shapiro-Wilk test (W = 0.96516, P = .17) and the Jarque-Bera test (χ2 = 0.77941, P = .68) confirmed normal distribution of the percentage of naming rules met by each answer choice. Because the data were normally distributed, the Pearson correlation coefficient was used to describe the association between the percentage of naming rules met by each answer choice and the number of participants who selected the answer (t = 4.3333, df = 45, R = .54, P < .001). This indicates a significant positive correlation between participants’ choices and answers that met the criteria of the authors’ naming rules in Table 3. See Figure 3.

Figure 3

This graph displays participants’ choices for ideal test names against the percentage of total possible naming rules (“category_total_rules”) developed by the authors that each answer choice satisfied. Pearson correlation is displayed (R = .54, P < .001). Legend: 6 applicable rules: answer choice had a total of 6 possible naming rules that could be applied to it; 7 applicable rules: answer choice had a total of 7 possible naming rules that could be applied to it; 8 applicable rules: answer choice had a total of 8 possible naming rules that could be applied to it.


Display QG

The Display QG asked participants to select the best of 4 result views. Respondents selected a grouped list in mixed case with few to no abbreviations far more often compared with other displays (115 [71.0%] versus 23 [14.2%] of 162 respondents for the next highest option). See Figure 4.

Figure 4

Participants rated this result view as the best among the 4 options presented in question 38. All result options presented to the participant are available in the Supplemental Figure. Abbreviations: IgA, immunoglobulin A; IgG, immunoglobulin G; IgM, immunoglobulin M.


Responses by Provider Groups

Full response data are available as tables in the supplemental digital content. Only those QGs that had significant differences between respondent groups are described below.

By Profession

Each participant was categorized into one of the following nonoverlapping professions: attending physicians (n = 115), trainee physicians (n = 8), PhD nonphysicians (n = 14), nurses (n = 10), and “other” (n = 15). Survey responses for each QG were compared among these participant categories by profession.

Decipher QG

Analysis by profession showed significantly different responses between profession groups for “HIV .5 AT SC” (P = .02), “HIV-1 Genotype” (P = .004), “DAT” (P = .01), “Free T4” (P = .02), “BVK qPCR” (P = .003), and “Hemogram” (P = .03). The differences were primarily attributed to trainee physicians answering “I don’t know” the least of any profession category for “DAT,” “Free T4,” and “BVK qPCR,” and the most for “HIV .5 AT SC” and for older terms like “Hemogram.” However, the differences were also attributed to nursing and “other” professions answering “I don’t know” more frequently for “HIV-1 Genotype,” “DAT,” “Free T4,” and “BVK qPCR.” The most popular response that was not “I don’t know” was the intended test, and when these were included in the analysis, responses continued to be significantly different for “HIV-1 Genotype” (P = .01), “DAT” (P = .046), “Free T4” (P < .001), and “BVK qPCR” (P = .007). Physicians selected this most popular response more frequently than nonphysicians. Although no physicians or nurses answered “I don’t know” for “Free T4,” physicians selected the correct answer for the meaning of “Free T4” significantly more than other groups. Physicians in practice and in training selected the intended test name more often than the other groups with the exception of “BVK qPCR.” “BVK qPCR” was deciphered correctly more by trainee physicians than by the other groups, including physicians in practice.

Vitamin D QG

Responses were significantly different by profession for Q19. This question was tricky because it required the participant to know that a test for vitamin D 25-OH includes both the D2 (25-hydroxyergocalciferol; diet-derived) and D3 (25-hydroxycholecalciferol; sunlight-derived) forms. Attending physicians, followed by nurses and then PhD nonphysicians, selected the correct answer of “different” much more often than trainee physicians for “vitamin D3 (25-[OH]), serum” (P = .03). However, nurses, followed by PhD nonphysicians and the other groups, selected the correct answer of “different” much more often than physicians as a whole for “hydroxycholecalciferol” (P = .004).

By Profession and Laboratory

Participants were categorized by profession and laboratory experience as follows: physicians without laboratory or pathology expertise (n = 83), pathologists and physicians with laboratory expertise (n = 40), nonphysicians without laboratory expertise (n = 19), and nonphysicians with laboratory expertise (n = 20).

Decipher QG

There were more significant differences among these groups than for other groups of participants. Analysis showed significantly different responses among these groups for “HIV .5 AT SC” (P = .03), “HIV-1 Genotype” (P = .007), “UTPQ” (P = .005), “DAT” (P < .001), “Free T4” (P = .01), “CHROMOSOME MICROARRAY ANAL” (P < .001), “BVK qPCR” (P < .001), and “Adeno 40/41” (P = .01). For these hard-to-decipher names, both physicians and nonphysicians with laboratory and/or pathology expertise were far less likely to answer “I don’t know.” When the responses were compared between “I don’t know,” the most popular test description, and all other responses, statistically significant differences remained for “HIV-1 Genotype” (P = .001), “UTPQ” (P = .01), “DAT” (P < .001), “Free T4” (P < .001), “CHROMOSOME MICROARRAY ANAL” (P = .004), “BVK qPCR” (P < .001), and “Adeno 40/41” (P = .01). Pathologists and physicians with laboratory expertise selected the most common and intended laboratory test description far more often than other groups, whereas nonphysicians without laboratory expertise selected the intended test the least.

Vitamin D QG

Responses to Q19 were significantly different by profession and/or laboratory expertise when determining whether “hydroxycholecalciferol” (P < .001) or “vitamin D (25-[OH]), serum, total” (P = .02) was the same as “vitamin D (25-[OH]), serum.” Nonphysicians with laboratory expertise were most likely to answer the “hydroxycholecalciferol” component correctly (14 of 19; 73.7%), followed by nonphysicians without laboratory expertise (8 of 16; 50.0%). Physicians, both those with and those without laboratory expertise (7 of 40 [17.5%] and 22 of 80 [27.5%], respectively), were least likely to answer this component correctly. For “vitamin D (25-[OH]), serum, total,” physicians and laboratorians were more likely to choose the correct answer of “same” (27 of 40 physicians with laboratory expertise [67.5%], 64 of 81 physicians without laboratory expertise [79.0%], and 14 of 19 nonphysicians with laboratory expertise [73.7%]) than nonphysicians without laboratory expertise (4 of 18 [22.2%]).

Ideal Name QG

There were significant differences among responses in this group for the “Parathyroid hormone related peptide” question (P = .03): laboratory physicians selected the most commonly chosen response most frequently, and nonlaboratory nonphysicians selected it least frequently.

By Laboratory Only

Participants were separately categorized by the presence (n = 60) or absence (n = 102) of laboratory expertise.

Decipher QG

The influence of laboratory and/or pathology expertise on the ability to decipher a cryptic laboratory test name was evident in the results of this analysis. Respondents with laboratory expertise were less likely to answer “I don’t know” versus all other options for “UTPQ” (P = .002), “DAT” (P = .006), “CHROMOSOME MICROARRAY ANAL” (P < .001), “BVK qPCR” (P < .001), and “Adeno 40/41” (P = .004). Similarly, participants with laboratory and/or pathology expertise more frequently selected the most common and intended test description that was not “I don’t know” for the same tests: “UTPQ” (P = .004), “DAT” (P < .001), “CHROMOSOME MICROARRAY ANAL” (P < .001), “BVK qPCR” (P < .001), and “Adeno 40/41” (P = .02).

Ideal Name QG

Significant differences between laboratory and nonlaboratory participants were present when selecting an ideal name for “Tissue transglutaminase IgA antibody” (P = .04) and “N-terminal pro b-type natriuretic peptide” (P = .04). The laboratory group more frequently selected the most popular ideal names, “Tissue transglutaminase IgA” and “Pro-B-type Natriuretic Peptide N-terminal,” compared with the nonlaboratory group.

By Informatics and Laboratory

Participants were separately categorized by the presence or absence of informatics and/or laboratory specialty expertise as follows: American Board of Medical Specialties (ABMS) Clinical Informatics board certified, with (n = 10) or without (n = 51) laboratory expertise; informatics expertise without board certification, with (n = 15) or without (n = 33) laboratory specialization; and no informatics expertise, with (n = 35) or without (n = 18) laboratory specialization.

Decipher QG

Significant differences among these groups were seen for percentages of “I don’t know” responses when participants were asked to decipher “NIDA 5DRUG S” (P = .009), “UTPQ” (P = .01), “Free T4” (P = .03), “CHROMOSOME MICROARRAY ANAL” (P < .001), “BVK qPCR” (P < .001), and “Adeno 40/41” (P = .04). These differences were primarily due to ABMS Clinical Informatics board–certified laboratory specialists and noninformatics laboratory specialists answering “I don’t know” far less frequently than other participants. This is corroborated by the binary laboratory group analysis described above.

Vitamin D QG

There was a significant difference in responses for Q20, which asked participants to compare 4 differently named vitamin D 1,25(OH)2 tests to determine whether they represented the same test or different tests (P = .04). Participants who were board certified in clinical informatics responded with the correct answer of “different” most frequently, followed by those practicing informatics but not board certified and then by those who did not practice informatics. Expertise in laboratory medicine did not appear to influence the results. These results are supported by the binary comparisons of informatics expertise and laboratory expertise.

By Informatics

Participants were separately categorized by the presence (n = 109) or absence (n = 53) of informatics specialty expertise.

Decipher QG

A significant difference was detected in the binary informatics grouping for the “Adeno 40/41” question, and only when “I don’t know” responses, the most popular test description, and all other responses were compared (P < .001); there was no statistical difference when comparing only “I don’t know” against all other responses. Those with informatics expertise selected the intended and most common response, “Adenovirus PCR for strains 40 and 41,” more often than those without informatics expertise.

Vitamin D QG

This analysis confirmed the source for the significant difference between the responses for the question (Q20) asking whether 4 different test names for vitamin D 1,25(OH)2 were all the same or different. Participants with declared informatics expertise, either board certification or practice, gave the correct answer of “different” more frequently than those without declared informatics expertise (P = .002).

Ideal Name QG

In the Ideal Name QG, participants with informatics expertise were more likely to select the most commonly chosen ideal name than those without informatics expertise for 2 of the questions: “Thyroid stimulating hormone with reflex free T4 if indicated” (P = .03) and “Parathyroid hormone related peptide” (P = .02).

Laboratory test names have been cited as sources of medical error as early as 1988.7  Applying codes to laboratory tests (eg, LOINC, SNOMED) will not solve this problem because a poorly designed test name with or without a code will remain unintuitive for providers.40,41  It is not reasonable to request providers to check a laboratory test name against a code, especially with physician burden and burnout at unprecedented levels. Improvements in test names for specialized tests have been shown to improve ordering practices,9,12  and the name of a laboratory test has been shown to be more important than its relative location in an EHR display.12  Improved names also have the advantage of being noninterruptive (ie, do not contribute to alert fatigue) and remove the additional maintenance and workflow requirements of provider restrictions on ordering. A better naming system is imperative.

The results offer numerous lessons. Respondents were quite consistent in rejecting abbreviations and obscure test names. This comports with the professional experience of both authors, one of whom observed tests for triglycerides abbreviated as “TG” being interpreted as tissue transglutaminase. The intent of computerized provider order entry was to do away with such misinterpretations, but this is not possible if laboratory test names lack clear guidelines. Participants preferred full names for tests, even for common laboratory tests like arterial blood gases. This reduces the likelihood that an abbreviation will be interpreted erroneously. Participants demonstrated difficulty deciphering names across all categories of laboratory testing unless the names were ubiquitous (eg, “Free T4”).

Prior studies have shown that certain categories of laboratory tests are especially problematic, particularly protein names, blood banking, coagulation, and genetics.21,42–46 The National Center for Biotechnology Information has International Protein Nomenclature Guidelines, a helpful style guide for the naming of proteins.47 It is designed for accurate naming but not for ordering-provider readability, nor was it designed for use in an EHR with limited characters. Gene names and gene symbols for each human gene are approved by an international organization known as the HUGO Gene Nomenclature Committee (HGNC).48 These names and symbols are the de facto standard for reporting genetic test results in the pathology and genetics communities. The proteins produced by genes have historically often been named differently from their gene of origin and vice versa. For example, the gene symbol of the protein analyte commonly known to most health care providers as “alanine aminotransferase (ALT)” is GPT, and its approved HGNC gene name is “glutamic-pyruvic transaminase.” This only references cytosolic ALT, however; the protein ALT2 is mitochondrial and produced by gene GPT2. Other proteins can be a combination of subunits produced by different genes (eg, hemoglobin). In another example provided by Fujiyoshi et al46 with regard to immunohistochemistry, “TTF-1” is the more commonly known, albeit nonubiquitous, abbreviation for the thyroid transcription factor 1 protein, but it is actually the gene product of NKX2-1 and could be confused with the gene TTF1, which encodes transcription termination factor 1, a completely different protein. Sadly, gene symbols (eg, PDCD1) are currently unknown to most health care providers and are therefore cryptic abbreviations by definition. Gene names and symbols approved by the HGNC change periodically as gene functions and gene families are better understood, further complicating the issue.
Resolving these ambiguities and standardizing protein names to be shorter and simpler is necessary, but resolution will be difficult given the pervasiveness of some protein names in common use, the overlap of these common names with symbols and HGNC names for unrelated genes, and the general inability of humans to memorize approximately 20 000 symbols for human protein-coding genes.

Formatting, or a style guide, for laboratory test names is clearly useful. Participants avoided test names in all upper case; mixed upper and lower case is better. The preferred sort order from top to bottom is logical categorization, not alphabetical. Respondents favored a structured string architecture for names: analyte first, followed by attributes such as specimen type, antibody, fasting, etc (eg, “Prolactin level fasting” or “Troponin T high sensitivity”). Participants also preferred drug levels to contain the word “level” after the drug name, presumably to distinguish a drug level from a medication order. It is crucial to choose distinctive names when analytes look alike (eg, “lactic acid” instead of “lactate” to differentiate it from “lactate dehydrogenase”). Participants preferred antibody tests to use a simple “IgG” or “IgM” after the antigen name without prefixing with “anti.” The exception was “anticardiolipin IgG,” probably because use of the term “anticardiolipin” is so common. Because some tests detect antibodies to antibodies, use of “anti” in front of the antigen name is not recommended by the authors of this study. For reflex testing, participants preferred including the name of the reflex test with the initial test (eg, “TSH with reflex T4”). When the test name lacks any indication that reflex testing will be performed, the authors’ experience is that physicians also order the reflexed test, unaware that it is a duplicate. Vitamin D test names remain challenging.37–39 Including short clarifying words in the test name has been shown to be more helpful than the position of the test on the screen, but results of other interventions have been mixed.9,12
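The structured naming pattern favored by respondents (analyte first, then attributes, with “level” for drugs and an explicit reflex clause) can be sketched as a small name builder. This is purely illustrative; the class, field names, and example tests are our own and are not part of the study or any laboratory information system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LabTestName:
    """Illustrative builder for the preferred pattern: analyte first,
    then attributes such as method or timing, then any reflex clause."""
    analyte: str                                         # full name, no cryptic abbreviations
    attributes: List[str] = field(default_factory=list)  # eg, "high sensitivity", "fasting"
    is_drug_level: bool = False                          # append "level" to distinguish from a med order
    reflex_to: Optional[str] = None                      # name the reflex test in the order itself

    def render(self) -> str:
        parts = [self.analyte]
        if self.is_drug_level:
            parts.append("level")
        parts.extend(self.attributes)
        if self.reflex_to:
            parts.append(f"with reflex {self.reflex_to}")
        return " ".join(parts)  # mixed case, never all upper case

print(LabTestName("Troponin T", ["high sensitivity"]).render())
# Troponin T high sensitivity
print(LabTestName("Vancomycin", ["trough"], is_drug_level=True).render())
# Vancomycin level trough
print(LabTestName("Thyroid stimulating hormone", reflex_to="free T4").render())
# Thyroid stimulating hormone with reflex free T4
```

Keeping the name as structured fields rather than a free-text string also lets an organization regenerate every order name consistently if a naming rule changes.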

Laboratory ordering errors often involve ordering the wrong test but may also result in a needed test not being ordered at all.5 Inadequate contrast and poor spacing, such as row offsets, encourage adjacency errors.49 This is the primary reason the pharmacy community began to use TALLman lettering for drug names (eg, diazePAM, diltiaZEM).50 Incorrect language encourages incorrect test ordering. For example, using the term “ratio” suggests the laboratory will report a calculation such as “albumin/creatinine ratio,” when in fact both analytes will be reported and the clinician may need to make the calculation. In some EHRs, certain types of text may be automatically converted to a symbol (eg, “HIV 1/2” converted to “HIV ½” or “HIV 0.5”), leading to further errors.
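The autoconversion hazard in the last example can be screened for programmatically. The sketch below is our own illustration, not a tool from the study: it flags characters in a test name whose Unicode names mark them as fraction glyphs (eg, “½”), which may indicate that an EHR silently substituted “1/2.”

```python
import unicodedata

def find_fraction_artifacts(name: str) -> list:
    """Return characters in a test name that are Unicode fraction glyphs
    (eg, '½', whose Unicode name is VULGAR FRACTION ONE HALF), which may
    indicate silent autoconversion of text like '1/2'."""
    return [ch for ch in name if "FRACTION" in unicodedata.name(ch, "")]

print(find_fraction_artifacts("HIV ½ Ab screen"))    # ['½']
print(find_fraction_artifacts("HIV 1/2 Ab screen"))  # []
```

A check like this could run as part of a test-catalog audit so that autoconverted names are caught before they reach an order screen.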

Significant differences in deciphering ability were related to several factors. Years of experience among physicians had an interesting dichotomous effect: trainees were less likely to answer “I don’t know” except where older test names (eg, hemogram) were used. Test names should avoid outdated terminology and should not leave room for younger physicians to leap to conclusions about what is being tested. Informaticists did surprisingly well with interpreting “Adeno 40/41” compared with other groups. Combined with the results of the vitamin D QG, this may be due to informaticists’ ability to notice details and numbers. Overall, it is no surprise that laboratory specialists were the least likely to answer “I don’t know” and the most likely to correctly decipher cryptic names. Beyond laboratory expertise itself, many laboratory information systems (LISs) have very severe character limitations for test names, some as low as 15 characters, so laboratory specialists are often acclimated to a large number of nonubiquitous abbreviations. Even though their ability to decipher such cryptic names was greater, like clinical providers, they still preferred unambiguous names devoid of nonubiquitous abbreviations.
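The effect of a severe LIS character limit can be shown with a toy truncation example. The 15-character limit is the one cited above; the second test name is hypothetical and chosen only to demonstrate the collision.

```python
# Naive truncation to a 15-character LIS field: two distinct test names
# collapse to the same cryptic string and can no longer be told apart.
LIMIT = 15
names = [
    "Chromosome microarray analysis",
    "Chromosome microdeletion panel",  # hypothetical second test
]
truncated = [n[:LIMIT] for n in names]
for full, short in zip(names, truncated):
    print(f"{short!r} <- {full!r}")

assert len(set(truncated)) == 1  # both become 'Chromosome micr'
```

This is why constrained LIS fields push laboratories toward invented abbreviations such as “CHROMOSOME MICROARRAY ANAL” rather than simple truncation.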

The literature on wrong vitamin D orders has come primarily from the laboratory community, but these results indicate that laboratory experts fared no better, and sometimes worse, with the names of vitamin D tests than nonlaboratory physicians and nurses, indicating that these tests are confusing for everyone. For Q19, physicians of all specialties and nurses were more likely to differentiate “vitamin D3 (25-[OH]) serum” from “vitamin D (25-[OH]) serum,” whereas nurses and PhD nonphysicians, especially PhD laboratorians, were more likely to differentiate “hydroxycholecalciferol” from this term. Surprisingly, when asked to compare 2 names that were the same except for the word “total” at the end, participants who were neither physicians nor laboratory experts were least likely to select the correct answer of “same.” Board-certified physician informaticists were most likely to answer Q20 correctly. Q19 was more difficult in its wording because it could have been misinterpreted as a vitamin D (25-[OH]) test including, rather than being the same as, the test names offered for comparison. Q20 was clearer, and the key was noticing vitamins D2 and D3. Noticing detailed differences is a helpful skill in informatics work.

Despite all the reported communication struggles between providers inside and outside the laboratory, participants across all specialties were remarkably consistent in choosing ideal names for laboratory tests. There were only minor differences on which specialty was more likely to choose the most commonly selected ideal name. The highest degree of consensus was the preferred display of results.

There are a number of limitations to this study. We have no way to know how many potential participants received the invitation to complete the survey, as it was sent to numerous listservs and was posted on social media. The completion rate of 60% (162 finished of 269 who started the survey) was gratifying. The numbers of tests and test names were limited to improve the likelihood that participants would complete the survey. A disproportionate number of participants had laboratory expertise, which introduces some inherent bias when looking at responses across all participants. However, the survey was also sent to participants with informatics expertise and without laboratory or pathology expertise, which introduces separate and potentially offsetting biases. Informaticists are more likely to have experience in dealing with the ramifications of poorly named laboratory tests in the EHR. Some test names were egregiously poor but were in actual clinical use and thus fair game for inclusion. Participants may have encountered tests they had never or rarely ordered, thereby limiting confidence in their answers, but the rate of “I don’t know” across all participants was high, suggesting that the name, rather than the confidence of the participant, was the key factor. In many organizations, personnel are accustomed to local test name conventions; we could not factor local practice into this study. This study cannot extrapolate the accuracy of test ordering at a specific organization.

The survey was not a validated instrument, nor was it sent to large numbers of different populations or over time, which may limit its generalizability. Although this survey was not large enough to determine a statistically valid comparison among all areas of laboratory nomenclature, the study offers statistically sound and valuable insights into cognitive processes regarding laboratory test nomenclature.

Many tests potentially fall into multiple test groups (eg, a thyroid peroxidase antibody test could be immunology, endocrinology, or a subset such as thyroid tests). This has a greater impact on the creation of a logical and intuitive results review display. We did not attempt to analyze our results from this perspective. Lastly, participants were not asked why they chose the answers that they did, so it was not possible to determine the specific aspects of a test name that were most or least preferable. Participants were not challenged with names that had extreme overlap, nor were they given keyword searches, for which the degree of confusion may be higher or lower, respectively. Participants self-reported their area of specialty and expertise, and these were not verified by the authors.

Our survey population was representative of those who order, view, and name laboratory tests. Health care providers prefer clearer and more meaningful terminology to help prevent ordering errors, including avoiding abbreviations except where ubiquitous and/or strictly controlled. Vitamin D names may require additional text to indicate appropriate ordering context (eg, first-line versus secondary/special test). Judicious use of appropriate synonyms will also help. Respondents were less certain about uncommon tests than common ones. Laboratory test names need to be described as fully as possible and in a structured manner. Mixed case should be used for test names. Use of all upper case should be avoided. Other recommendations from the authors are in Supplemental Table 5.

Even though laboratorians may be better able to decipher cryptic test names, preferences for ideal names and result displays were highly consistent across groups and support the authors’ recommendations for standardized test naming according to rules the authors previously developed in their own organizations. Future work beyond both parts of this study includes generating a standard style guide and tool for standardizing laboratory test names that could be used freely by others.

1. Hickner J, Thompson PJ, Wilkinson T, et al. Primary care physicians’ challenges in ordering clinical laboratory tests and interpreting results. J Am Board Fam Med. 2014;27(2):268–274.
2. Passiment E, Meisel JL, Fontanesi J, Fritsma G, Aleryani S, Marques M. Decoding laboratory test names: a major challenge to appropriate patient care. J Gen Intern Med. 2013;28(3):453–458.
3. Abhyankar S, Demner-Fushman D, McDonald CJ. Standardizing clinical laboratory data for secondary use. J Biomed Inform. 2012;45(4):642–650.
4. Inoue Y, Nakamura J. Codes and names of clinical laboratory tests and shared interlaboratory databases [in Japanese]. Rinsho Byori. 1997;45(6):577–580.
5. Peute LW, Jaspers MW. The significance of a usability evaluation of an emerging laboratory order entry system. Int J Med Inform. 2007;76(2–3):157–168.
6. Pontet F, Magdal Petersen U, Fuentes-Arderiu X, et al. Clinical laboratory sciences data transmission: the NPU coding system. Stud Health Technol Inform. 2009;150:265–269.
7. Finn AF Jr, Valenstein PN, Burke MD. Alteration of physicians’ orders by nonphysicians. JAMA. 1988;259(17):2549–2552.
8. Gabrieli ER. Standardized laboratory test name nomenclature: a requirement for data base exchange. Clin Lab Manage Rev. 1992;6(1):108–110, 113–116.
9. Krasowski MD, Chudzik D, Dolezal A, et al. Promoting improved utilization of laboratory testing through changes in an electronic medical record: experience at an academic medical center. BMC Med Inform Decis Mak. 2015;15:11.
10. Rucker DW, Steele AW, Douglas IS, Coudere CA, Hardel GG. Design and use of a joint order vocabulary knowledge representation tier in a multi-tier CPOE architecture. AMIA Annu Symp Proc. 2006:669–673.
11. Valenstein PN, Walsh MK, Stankovic AK. Accuracy of send-out test ordering: a College of American Pathologists Q-Probes study of ordering accuracy in 97 clinical laboratories. Arch Pathol Lab Med. 2008;132(2):206–210.
12. White AA, McKinney CM, Hoffman NG, Sutton PR. Optimizing vitamin D naming conventions in computerized order entry to support high-value care. J Am Med Inform Assoc. 2017;24(1):172–175.
13. Ziemba YC, Lomsadze L, Jacobs Y, Chang TY, Haghi N. Using heatmaps to identify opportunities for optimization of test utilization and care delivery. J Pathol Inform. 2018;9:31.
14. Singh I. Standardizing lab test names: the TRUU lab initiative. Centers for Disease Control and Prevention Web site. https://www.cdc.gov/cliac/docs/addenda/cliac0919/13_TRUU-LAB_Singh.pdf. Accessed June 13, 2021.
15. Lin MC, Vreeman DJ, McDonald CJ, Huff SM. A characterization of local LOINC mapping for laboratory tests in three large institutions. Methods Inf Med. 2011;50(2):105–114.
16. HEMAPROMPT FG: fecal & gastric occult blood testing. Aerscher Diagnostics Web site. http://www.pointofcare.net/vendors/Hemoprompt/HemaPromptFG.pdf. Accessed June 13, 2021.
17. Duca DJ. Challenges in ensuring effective communication among laboratory information systems. Lab Med. 2013;44(1):e77–e78.
18. Powsner SM, Costa J, Homer RJ. Clinicians are from Mars and pathologists are from Venus. Arch Pathol Lab Med. 2000;124(7):1040–1046.
19. Beauchemin N, Draber P, Dveksler G, et al. Redefined nomenclature for members of the carcinoembryonic antigen family. Exp Cell Res. 1999;252(2):243–249.
20. Larsen PR, Alexander NM, Chopra IJ, et al. Revised nomenclature for tests of thyroid hormones and thyroid-related proteins in serum. Arch Pathol Lab Med. 1987;111(12):1141–1145.
21. Wright IS. The nomenclature of blood clotting factors. Can Med Assoc J. 1962;86:373–374.
22. Laposata M, Dighe A. “Pre-pre” and “post-post” analytical error: high-incidence patient safety hazards involving the clinical laboratory. Clin Chem Lab Med. 2007;45(6):712–719.
23. Bonini P, Plebani M, Ceriotti F, Rubboli F. Errors in laboratory medicine. Clin Chem. 2002;48(5):691–698.
24. Green SF. The cost of poor blood specimen quality and errors in preanalytical processes. Clin Biochem. 2013;46(13–14):1175–1179.
25. Henricks WH, Wilkerson ML, Castellani WJ, Whitsitt MS, Sinard JH. Pathologists as stewards of laboratory information. Arch Pathol Lab Med. 2015;139(3):332–337.
26. Lichenstein R, O’Connell K, Funai T, et al. Laboratory errors in a pediatric emergency department network: an analysis of incident reports. Pediatr Emerg Care. 2016;32(10):653–657.
27. Miligy DA. Laboratory errors and patient safety. Int J Health Care Qual Assur. 2015;28(1):2–10.
28. Plebani M. Quality in laboratory medicine: 50 years on. Clin Biochem. 2017;50(3):101–104.
29. Randell E, Schneider W. Medical errors in laboratory medicine: pathways to improvement. Clin Biochem. 2013;46(13–14):1159–1160.
30. Promoting interoperability. Centers for Medicare & Medicaid Services Web site. https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/index.html?redirect=/EhrIncentivePrograms/. Updated January 9, 2023. Accessed June 13, 2021.
31. Forrey AW, McDonald CJ, DeMoor G, et al. Logical observation identifier names and codes (LOINC) database: a public use set of codes and names for electronic reporting of clinical laboratory test results. Clin Chem. 1996;42(1):81–90.
32. Carter AB, de Baca ME, Luu HS, Campbell WS, Stram MN. Use of LOINC for interoperability between organisations poses a risk to safety. Lancet Digit Health. 2020;2(11):e569.
33. Farahani N, Gigliotti T, Henricks WH, Riben M, Hartman D, Pantanowitz L. Comparison of LOINC codes for commonly ordered lab tests provided by different medical centers [abstract]. J Pathol Inform. 2016;7(suppl):S11–S12.
34. Stram M, Gigliotti T, Hartman D, et al. Logical Observation Identifiers Names and Codes for laboratorians. Arch Pathol Lab Med. 2020;144(2):229–239.
35. Stram M, Seheult J, Sinard JH, et al. A survey of LOINC code selection practices among participants of the College of American Pathologists coagulation (CGL) and cardiac markers (CRT) proficiency testing programs. Arch Pathol Lab Med. 2020;144(5):586–596.
36. Wiitala WL, Vincent BM, Burns JA, et al. Variation in laboratory test naming conventions in EHRs within and between hospitals: a nationwide longitudinal study. Med Care. 2019;57(4):e22–e27.
37. Genzen JR, Gosselin JT, Wilson TC, Racila E, Krasowski MD. Analysis of vitamin D status at two academic medical centers and a national reference laboratory: result patterns vary by age, gender, season, and patient location. BMC Endocr Disord. 2013;13:52.
38. Holick MF. Vitamin D deficiency. N Engl J Med. 2007;357(3):266–281.
39. Krasowski MD. Pathology consultation on vitamin D testing. Am J Clin Pathol. 2011;136(4):507–514.
40. SNOMED International Web site. https://www.snomed.org/. Accessed June 13, 2021.
41. LOINC from Regenstrief Web site. https://loinc.org/. Accessed June 13, 2021.
42. Haga SB, Burke W, Ginsburg GS, Mills R, Agans R. Primary care physicians’ knowledge of and experience with pharmacogenetic testing. Clin Genet. 2012;82(4):388–394.
43. Harding B, Webber C, Ruhland L, et al. Bridging the gap in genetics: a progressive model for primary to specialist care. BMC Med Educ. 2019;19(1):195.
44. Hull LE, Vassy JL, Stone A, et al. Identifying end users’ preferences about structuring pharmacogenetic test orders in an electronic health record system. J Mol Diagn. 2020;22(10):1264–1271.
45. Simons A, Shaffer LG, Hastings RJ. Cytogenetic nomenclature: changes in the ISCN 2013 compared to the 2009 edition. Cytogenet Genome Res. 2013;141(1):1–6.
46. Fujiyoshi K, Bruford EA, Mroz P, et al. Opinion: standardizing gene product nomenclature—a call to action. Proc Natl Acad Sci U S A. 2021;118(3):e2025207118.
47. European Bioinformatics Institute (EMBL-EBI), National Center for Biotechnology Information (NCBI), Protein Information Resource (PIR), Swiss Institute for Bioinformatics (SIB). International protein nomenclature guidelines. National Center for Biotechnology Information Web site. https://www.ncbi.nlm.nih.gov/genome/doc/internatprot_nomenguide/. Accessed February 6, 2023.
48. HUGO Gene Nomenclature Committee Web site. https://www.genenames.org/about/. Updated March 9, 2022. Accessed March 21, 2022.
49. Rizk S, Oguntebi G, Graber ML, Johnston D. Report on the safe use of pick lists in ambulatory care settings: issues and recommended solutions for improved usability in patient selection and medication ordering. Office of the National Coordinator for Health Information Technology Web site. https://www.healthit.gov/sites/default/files/report-on-the-safe-use-of-pick-lists-in-ambulatory-care-settings.pdf. Accessed June 13, 2021.
50. DeHenau C, Becker MW, Bello NM, Liu S, Bix L. Tallman lettering as a strategy for differentiation in look-alike, sound-alike drug names: the role of familiarity in differentiating drug doppelgangers. Appl Ergon. 2016;52:77–84.

Author notes

Supplemental digital content is available for this article at https://meridian.allenpress.com/aplm in the February 2024 table of contents.

Competing Interests

The authors have no relevant financial interest in the products or companies described in this article.

A portion of this paper was presented at the virtual American Medical Informatics Association Clinical Informatics Conference; May 18–20, 2021.

Supplementary data