It’s been over a decade since the San Francisco Declaration on Research Assessment (DORA) argued that the Journal Impact Factor (JIF) should no longer be used as a measure of research quality or effectiveness1 (Box 1). Yet the JIF still dominates discussions of journal value and may factor into promotion and funding decisions when publications are reviewed. The outsize importance of the JIF is also apparent in the pride displayed by journals whose JIF has risen a few points from the prior year. A simplified definition of the JIF is the total number of citations to a journal in 1 year, divided by the total number of citable articles the journal published in the prior 2 years (Box 2).

Box 1 San Francisco Declaration on Research Assessment (DORA), Recommendation #1

“Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.”1 

Box 2 Clarivate Analytics Journal Impact Factor (JIF)9 

JIF = (total citations in the past year) / (total citable articles published in the prior 2 years)

Articles must be published in journals selected by Clarivate for inclusion in Web of Science databases.

There are numerous Web of Science criteria. Some that are disclosed include:

  • Content considered useful

  • International content and interest

  • High publication standards, such as clarity of peer review, timeliness, ethics

  • A citation analysis (eg, whether there is self-citation in editorials)

Note: This description is simplified; the actual calculation includes adjustments for review articles, which are usually cited more often than other articles, and other factors.
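
To make the arithmetic concrete, here is a minimal sketch in Python; the citation and article counts are hypothetical numbers invented for illustration.

```python
# Simplified JIF arithmetic with hypothetical counts.
# A 2024 JIF counts citations made in 2024 to items published in 2022 and 2023.
citations_in_2024 = 600       # hypothetical: this year's citations to 2022-2023 items
citable_articles_2022 = 120   # hypothetical
citable_articles_2023 = 130   # hypothetical

jif = citations_in_2024 / (citable_articles_2022 + citable_articles_2023)
print(f"JIF = {jif:.1f}")  # 600 / 250 = 2.4
```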

The JIF persists as a dominant quality indicator for journals and individuals, despite growing evidence that it is seriously flawed as a measure of scholarly impact.2 What other dissemination and quality measures should authors, institutions, and journals consider? In this editorial, our goal is to shine a bright light on how dissemination measures are created, supported, and distorted, and to encourage authors to examine a range of metrics regarding their work.

Eugene Garfield established the Institute for Scientific Information (ISI) and created the JIF in the early 1960s for the new Science Citation Index (SCI). Thomson Scientific & Healthcare, which became Thomson Reuters, acquired the ISI in 1992. In 2016, ISI, including the JIF, was sold and became Clarivate Analytics. The JIF was originally suggested as an aid for librarians in selecting journals. That is, the JIF could indicate which journals might be most often requested by library users. The JIF was not designed to measure the value of individual articles or overall journal quality. As the popularity of the JIF for journal comparisons increased, studies revealed considerable problems with this secondary use.2-9  As summarized by DORA, these limitations include: “A) citation distributions within journals are highly skewed; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews; C) Journal Impact Factors can be manipulated (or ‘gamed’) by editorial policy; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public.”1 

Clarivate Analytics also calculates a 5-year JIF, which may be somewhat more relevant for journals whose scope requires a longer trajectory for new information uptake. For example, while changes in biological sciences, such as immunology, can occur over a year, new knowledge in fields like medical education may require more time for discussion and adoption. In addition, smaller or “niche” fields may have lower JIFs due to their smaller readership.

It is important to note that Clarivate is a for-profit entity: access to Clarivate’s Journal Citation Reports for the various metrics is by paid subscription, and Clarivate determines which journals it will review and select to receive a JIF.10 For a journal to receive a JIF, it must be indexed in one of the Web of Science (WoS) databases, also controlled by Clarivate. Inclusion in WoS has been an opaque process, although there are anecdotal reports that this may be improving.

Clarivate’s murky process contrasts with MEDLINE, an indexed database that uses a transparent selection process, consistently applied, primarily for scholarly journals. Created by the US National Library of Medicine (NLM), MEDLINE includes more than 31 million journal articles focused on biomedicine, from 1966 to the present, with some earlier articles as well.11 PubMed provides free online access to MEDLINE (and other databases), and records are indexed with NLM Medical Subject Headings. For most journals, all article categories are indexed (ie, editorials and letters as well as original research). Although MEDLINE focuses on biomedicine and health, it does include “related educational activities.”11

Other commercial entities collect citation information: for example, Google produces Google Scholar and Elsevier produces Scopus. Google Scholar (free to access) uses web crawlers to obtain scholarly citation information from many sources beyond articles published in journals.12 Scopus includes citations from more than 39 000 past and current journals and requires a subscription for access. Because measures of dissemination are derived from different databases and sources, they vary according to the included journals, article categories, and calculation methods (Table 1).13

Table 1
Databases That Provide Citation Information for Journals

Measures for Authors

Medical educators have large “impacts” on others through their teaching, program administration, and other activities. However, when under consideration for promotion, grants, awards, or merit pay, a faculty member’s scholarship may be narrowly defined as publications in journals indexed in WoS, Scopus, or MEDLINE. Some institutions may count only articles published in journals with high JIFs. Given the research showing the JIF’s biases and its original purpose,1-8 authors should advocate for other measures of their scholarly influence (Table 2).

Table 2
Article Dissemination Measures and Resources

Despite the vagaries that result from Google’s serendipitous web crawler approach, Google Scholar My Profile offers automatically updated, free, personal h-indexes and i10-indexes. The h-index is the largest number h such that the author has published h papers that have each been cited at least h times.14 For instance, an h-index of 10 means that an author has published at least 10 articles that have each been cited at least 10 times. The i10-index is the number of an author’s publications with at least 10 citations. Publish or Perish, a free software program developed by Dr Anne-Wil Harzing at Middlesex University in London and updated through 2023, combs a variety of data sources to retrieve and analyze academic citations.15 The software displays publications, citations, and metrics such as h-indexes. Undoubtedly other retrieval and organization systems exist. Much like the JIF, these individual indexes have numerous deficiencies, but they can offer citation information that extends beyond the commercial databases.
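
Because these definitions are easy to misread, a minimal sketch of both author indexes follows; the per-paper citation counts are hypothetical.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)
    # With counts in descending order, the paper at rank r qualifies if it has >= r citations.
    return sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)

def i10_index(citations: list[int]) -> int:
    """Number of papers with at least 10 citations each."""
    return sum(1 for count in citations if count >= 10)

papers = [25, 18, 12, 10, 7, 4, 2, 0]  # hypothetical citation counts
print(h_index(papers))    # 5: five papers have at least 5 citations each
print(i10_index(papers))  # 4: four papers have at least 10 citations
```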

iCite, a US National Institutes of Health (NIH) dashboard for article metrics, organizes outcomes into Influence, Translation, and Citations modules.16 This work was developed in direct response to concerns about the opaque, commercial, and restrictive nature of the JIF and similar measures, and to enhance research quality through improved access to scholarship.16,17 The NIH Open Citation Collection is a free, open-access database derived from unrestricted data sources such as MEDLINE, PubMed Central, and Crossref, as well as full-text articles found through internet searches.16,17

The Altmetric Attention Score summarizes the immediate reach of a publication through a wide variety of sources, including news articles, blog posts, policy statements, and social media mentions.18 Mentions are weighted by source type, and the weights are publicly available. For example, the current weights for news stories and Facebook are 8 and 0.25, respectively.18 The process attempts to combat gaming by capping measures (eg, more than 200 Facebook posts are not counted).19 Journals and publishers can subscribe to Altmetric Explorer for article scores, and institutions can subscribe to track institutional or department scores. It is not clear what a “good” Altmetric Attention Score should be. The design of this measure ensures that more popular or controversial topics will receive high scores; as a result, scores may not measure scientific value or correlate closely with citation measures.13 Thus, Altmetric scores offer new information about the immediate reach of articles, which may be relevant for some education work.
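
As a rough illustration of how weighting and capping combine, consider the sketch below; the news and Facebook weights and the 200-post cap come from the sources cited above, while the function itself is a simplification invented here, not Altmetric’s actual algorithm.

```python
# Illustrative weighted attention score with a per-source cap.
# The news and Facebook weights mirror Altmetric's published examples;
# treating the 200-post Facebook cap as a generic per-source cap is an assumption.
WEIGHTS = {"news": 8.0, "facebook": 0.25}
CAPS = {"facebook": 200}  # mentions beyond the cap are not counted

def attention_score(mentions: dict[str, int]) -> float:
    score = 0.0
    for source, count in mentions.items():
        capped = min(count, CAPS.get(source, count))
        score += WEIGHTS.get(source, 0.0) * capped
    return score

print(attention_score({"news": 3, "facebook": 350}))  # 8*3 + 0.25*200 = 74.0
```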

Measures for Journals

In 1976 the Institute for Scientific Information first published the Journal Citation Reports, which included impact factors and other descriptive data. Since then, the JIF has dominated journal websites and editorial board discussions. Beyond the opaque selection of journals for WoS and the undisclosed details of JIF calculations, other publishing factors can bias the JIF. Some of these are:

  1. Journals vary in the number of permitted references. If a journal limits the number of references, authors may cite literature reviews rather than the original work to stay within the limit.

  2. Disciplines vary in research output and in timelines for completing research: experiments with cells or mice take less time than longitudinal clinical or medical education studies.

  3. Disciplines vary in citing the most recently published articles. Discussion, uptake, and implementation will take more than 2 years in some fields; the JIF measures the past 2 years. Comparing across all disciplines, rather than comparing within a discipline, makes little sense.

  4. Journals excluded from WoS will affect the citation scores of articles that are in WoS. If a paper published in a journal indexed in WoS is cited extensively in journals that, however well-read, are not indexed in WoS, the JIF will not reflect these citations.

  5. Assignment of journals to subject categories is arbitrary (eg, assigning a medical education journal to a clinical discipline).

  6. Clarivate Analytics does not divulge which categories are counted in the JIF calculation (eg, editorials and letters are not usually counted, but specifics are not necessarily provided to individual journals).

  7. Journals can game the system by reducing the total number of published articles, accepting only articles likely to garner many citations, and encouraging self-citations or the addition of references to the journal’s articles published in the past 2 years.

Fortunately, there are other measures to consider. The Eigenfactor Score measures the number of times a journal’s articles have been cited over the past 5 years and excludes self-citations.20 Less helpfully, the score weights citations from highly cited journals more heavily, and WoS is the source database.20 Another version, the Normalized Eigenfactor Score, scales the score so that 1 is the average for all journals.20 Eigenfactor Scores are free, and the work is sponsored by the University of Washington.

CiteScore is derived from the Scopus database: citations to all of a journal’s documents (articles, reviews, conference papers, book chapters, and more) over the prior 4 years, divided by the number of the journal’s documents indexed by Scopus over those same 4 years. Thus, CiteScore takes into account dissemination targets beyond traditional journal articles.13
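
The arithmetic parallels the JIF sketch above, but with a 4-year window on both sides of the division and a broader document set; the counts are again hypothetical.

```python
# Hypothetical CiteScore: citations in 2021-2024 to documents published in
# 2021-2024, divided by the documents Scopus indexed over those same years.
citations_2021_2024 = 1800   # hypothetical
documents_2021_2024 = 600    # hypothetical: articles, reviews, conference papers, etc
print(f"CiteScore = {citations_2021_2024 / documents_2021_2024:.1f}")  # 3.0
```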

Several other measures use the Scopus database, such as the Scimago Journal & Country Rank, which factors in the prestige of the citing journal, and the Scimago H-index, which counts the number of journal articles cited at least h times, similar to the h-index for authors.20 The process of placing a journal in a category is not transparent, yet it greatly affects citation metrics: anecdotally, for the Journal of Graduate Medical Education (JGME), changing to the correct category was not a smooth process.

Google Scholar calculates a no-cost h5-index for journals: the largest number h such that h articles published in the past 5 years have each been cited at least h times. JGME’s Google h5-index is 31, whereas the New England Journal of Medicine’s h5-index is 439.21 For comparison, current h5-indexes for well-established medical education journals, with a more expansive scope and international audiences, are: Academic Medicine: 80; Medical Teacher: 62; Medical Education: 59; Advances in Health Sciences Education: 39; and Teaching and Learning in Medicine: 26.22 As Google Scholar’s internet search method for finding citations is variable and unknown, the completeness of citations is unclear.13

With the surge in online access to journals and the increasing number of online-only journals, editors track additional metrics as measures of reader interest, overall and for specific articles: page views, downloads, and other digital markers. These can be tracked over time and compared with activities such as social media posts, table of contents emails, or virtual journal clubs, to explore possible cause and effect. However, like the JIF and other scores, these metrics do not measure journal quality; rather, they may reflect the usefulness or “staying power” of journal articles over time.

If you have read to this point, you know more about article dissemination measures than you perhaps ever wanted. We hope we have enhanced your understanding of the JIF as well as piqued your interest in exploring other dissemination measures. These measures can affect what studies are initiated and published, and as a result, possibly the academic careers of medical educators. While many journals and institutions seem wedded to the JIF, some are supporting more promising developments such as the NIH’s iCite score. We hope that future metrics will be accurate, less biased, and more transparently calculated. We encourage continued conversations regarding the responsible use of such metrics in considerations for faculty promotion, research assessment, and journal reach, relevance, and rigor.

What is your take on dissemination and quality measures for scholarship? Which ones are you using to follow the scholarly impact of your work? Let us know.

References

1. DORA. San Francisco Declaration on Research Assessment. Accessed February 5, 2024. https://sfdora.org/read/
2. Adler R, Ewing J, Taylor P. Citation statistics: a report from the International Mathematical Union (IMU) in cooperation with the International Council of Industrial and Applied Mathematics (ICIAM) and the Institute of Mathematical Statistics (IMS).
3. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314(7079):498-502.
4. Not so deep impact. Nature. 2005;435:1003-1004.
5. Vanclay JK. Impact Factor: outdated artefact or stepping-stone to journal certification. Scientometrics. 2011;92:211-238.
6. PLoS Medicine Editors. The impact factor game. It is time to find a better way to assess the scientific literature. PLoS Med. 2006;3(6):e291.
7. Rossner M, Van Epps H, Hill E. Show me the data. J Cell Biol. 2007;179(6):1091-1092.
8. Rossner M, Van Epps H, Hill E. Irreproducible results: a response to Thomson Scientific. J Cell Biol. 2008;180(2):254-255.
9. Eigenfactor. Comparing Impact Factor and Scopus CiteScore.
11. National Library of Medicine. MEDLINE Overview.
12. Scholastica. How does Google Scholar indexing work?
13. Suelzer EM, Jackson JL. Measures of impact for journals, articles, and authors. J Gen Intern Med. 2022;37(7):1593-1597.
14. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci USA. 2005;102(46):16569-16572.
15. Harzing AW. Publish or Perish. Published February 6, 2016. Accessed February 12, 2024. https://harzing.com/resources/publish-or-perish
16. National Institutes of Health. iCite User Guide.
17. Hutchins BI, Baker KL, Davis MT, et al. The NIH Open Citation Collection: a public access, broad coverage resource. PLoS Biol. 2019;17(10):e3000385.
18. Altmetric. Accessed February 12, 2024. https://www.altmetric.com/
19. Altmetric. How is the Altmetric Attention Score calculated?
20. Eigenfactor. Accessed February 12, 2024. http://eigenfactor.org/index.php
21. Scimago Journal. Journal of Graduate Medical Education.
22.
23. ORCID. Accessed February 12, 2024. www.orcid.org
24. Clarivate. Web of Science researcher profiles. Accessed February 12, 2024. www.researcherid.com
25. Egghe L. Theory and practice of the g-index. Scientometrics. 2006;69(1):131-152.
26. Plum Analytics. About PlumX Metrics.
27. Clarivate. Web of Science Platform. Streamline your research.