Journal Impact Factors

Uncovering the pros, cons, and alternatives.
Posted: October 22, 2018

[Image: Understanding the role of the Journal Impact Factor in assessing journals. Image courtesy of Tyler Saumur.]

By: Tyler Saumur, UHN Trainee and ORT Times Writer

As a trainee, the publish-or-perish cliché is an all too real representation of the constant pressure we put on ourselves to advance our careers. To further complicate things: you’ve published – great, but in what journal? What was the impact factor? It’s a question commonly asked, whether we believe in the metrics or not.

The origin of the journal impact factor (JIF) might surprise some people. It was initially created to help libraries decide which journals to purchase, based on how often a journal’s articles are cited relative to how many it publishes. This allowed libraries to prioritize the highest-quality journals. Over time, however, it has slowly been adopted as a means of evaluating the quality of a researcher and their work. This comes with a number of drawbacks. The metric does not exclude self-citations, which can inflate a journal’s numbers, and it can dissuade authors from publishing in journals with low JIFs even when those journals are the most appropriate choice based on scope and fit within the field. Conversely, when particularly relevant studies are published in low-impact journals, they may be undervalued and receive less attention from fellow researchers and the general public.
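For readers curious how the number is actually derived, the standard JIF is a two-year ratio: citations received in a given year to a journal’s articles from the previous two years, divided by the number of citable items it published in those two years. The sketch below illustrates this calculation; the journal figures used are hypothetical, chosen only to show the arithmetic.

```python
# Minimal sketch of the two-year Journal Impact Factor calculation.
# The citation and article counts below are hypothetical, for illustration only.

def impact_factor(citations_to_prev_two_years: int, citable_items_prev_two_years: int) -> float:
    """JIF for year Y = citations received in Y to items published in Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    if citable_items_prev_two_years == 0:
        raise ValueError("Journal published no citable items in the two-year window")
    return citations_to_prev_two_years / citable_items_prev_two_years

# A hypothetical journal receives 450 citations in 2018 to its 2016-2017 articles,
# of which there were 150 citable items, giving a JIF of 3.0.
print(impact_factor(450, 150))  # 3.0
```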

The difference in JIF values between fields further complicates the usefulness of this metric. For example, in rehabilitation science the top journal is Archives of Physical Medicine and Rehabilitation, with a JIF of 3.077; compare this to oncology, where the top-ranked journal is CA: A Cancer Journal for Clinicians, with a JIF of 244.585. Using the JIF to measure the standing and impact of researchers’ publications therefore becomes difficult when those researchers work in different fields. While this may be a dramatic example of the disparities that can arise when scientists are measured by the JIFs of their publications, it raises the question: what other choice do we have? The answer is not an easy one.

From a journal-ranking perspective, other citation-based metrics and altmetrics are beginning to gain attention. These include the SCImago Journal Rank (SJR), Source Normalized Impact per Paper (SNIP), the Eigenfactor, journal performance indicators, and non-traditional altmetrics. While these metrics provide a different window into impact rankings, they also have inherent limitations. For example, some are incomplete in their coverage, failing to capture manuscripts from certain fields; the value and applicability of others can be difficult to ascertain; and, again, they are really only meant to evaluate journals, not a scientist’s work.

In contrast, the Hirsch index (h-index) is a researcher-level metric that summarizes a scientist’s papers based on the number of citations they have received. For example, an h-index of 8 means the author has at least eight papers that have each been cited at least eight times. This, however, still limits comparisons between two scientists in different fields with different publishing histories. From a more personalized standpoint, Nobel Laureate Dr. Randy Schekman proposes that candidates submit an impact statement describing the key discoveries of their career. While this may be useful for ranking researchers, the approach is still limited by its inherent subjectivity and by the expertise of the review committee.
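As a concrete illustration of the h-index definition above, here is a minimal sketch of how it can be computed from a list of per-paper citation counts; the counts below are made up for illustration and match the h-index of 8 from the example.

```python
# Minimal sketch of an h-index calculation from per-paper citation counts.
# The citation counts below are made up, for illustration only.

def h_index(citations: list[int]) -> int:
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Ten papers with these citation counts yield an h-index of 8:
# at least eight papers have each been cited at least eight times.
print(h_index([50, 32, 20, 15, 12, 10, 9, 8, 3, 1]))  # 8
```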

Overall, while the JIF and the emerging supporting cast of alternative metrics may assist in selecting journals, a more detailed and personalized approach is needed to assess the performance of individual researchers.