
A Defense of Journal Impact Factors

Vilified as it is, the journal impact factor may still be useful for scientists. But use it with caution.

Funnily enough, I had just written a post execrating journal impact factors (JIFs), those obnoxious indexes polluting the sanctity of science. But then I made the mistake of collecting some data to illustrate my point and, to my surprise, I realized that I could no longer stand behind the criticism. For some weird reason, the facts willfully refused to fit my opinions.

It turns out that JIFs aren't always an instrument of the devil; there are occasions when they may be a useful tool for science. Let's see how through three examples.

Please meet Mr. Books, Dr. Labs, and Prof. Files.

Mr. Books, the librarian

The impact factor of a journal is a measure of how often the papers published in that journal are cited. The JIF is simply the number of citations received by the papers published in a certain journal, divided by the number of papers published in that journal. In its most popular version, published annually by Thomson Reuters, these numbers are computed over a two-year window. Thus, the 2017 impact factor counts the citations received in 2017 by the papers published in 2015 and 2016.
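In symbols (the notation here is mine, not Thomson Reuters'), the definition reads:

\[
\mathrm{JIF}_{2017} = \frac{C_{2017}(P_{2015}) + C_{2017}(P_{2016})}{N_{2015} + N_{2016}},
\]

where \(C_{2017}(P_{y})\) is the number of citations received in 2017 by the papers the journal published in year \(y\), and \(N_{y}\) is the number of papers the journal published in year \(y\).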

The JIF was introduced as a measure of how important a journal is for its field, to help people like Mr. Books.

Mr. Books is a university librarian, perpetually juggling a tight budget. He has to decide which journals to subscribe to, and he collects data on the price, popularity, and importance of the various publications. Among these data, the JIF is a helpful parameter for making his decision. For a certain research field, he sees that journal A has a larger JIF than journal B. This tells him that journal A is more relevant than journal B; therefore, he subscribes to A.

If things had stayed at this level, any polemics would be pretty much confined to the exciting world of library budgets. But the JIF overflowed its original purpose and started to be used to evaluate not the journals, but the scientists publishing in them. Then things got messy.

JIFs have been criticized for many reasons and, to me, the most serious one is that the JIF reports the mean of an extremely skewed distribution, prone to distortion by a few super-cited papers. There is the bizarre extreme case of Acta Crystallographica Section A, whose JIF skyrocketed from 2.051 to 49.926 in 2009, thanks to a single super-cited paper.
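A toy calculation makes the problem concrete. The citation counts below are invented for illustration, but the shape is typical of real journals: adding a single super-cited paper drags the mean far from anything representative, while the median barely moves.

```python
# Toy example with invented citation counts: one outlier distorts the mean.
from statistics import mean, median

citations = [0, 0, 0, 1, 1, 2, 3, 5, 8]    # a typical skewed distribution
print(mean(citations), median(citations))   # mean ~2.2, median 1

citations.append(6000)                      # one Acta Cryst. A-style mega-paper
print(mean(citations), median(citations))   # mean jumps to ~600; median only 1.5
```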

Dr. Labs, the scientist

Dr. Labs just finished her project, wrote her paper, and is now deciding where to send it for publication. Dr. Labs is ambitious and proud of her work. She wants to maximize the impact of her findings; how should she choose a journal to fulfill this goal?

Dr. Labs tackles the problem with the same objective mind she brings to her research topics. First, she decides which observable she can count on. Well, she wants repercussion in the short term; therefore, she sets as her goal maximizing citations within two years.

What Dr. Labs needs is to know how many citations her paper will potentially have if published in a certain journal.

She is a physical chemist with two candidate journals: JACS or JPCA. She goes to the Web of Science and collects the number of citations that papers published in each of these journals received in the last two years, 2015 and 2016.

She plots the distribution and runs the statistics, which are shown in the figure and table below. Now Dr. Labs is well-informed to make her decision.

[Figure: distributions of citations per paper (2015-2016) for JACS and JPCA]

Journal   Citations   Publications   Mode   Median   Mean
JACS      31539       4273           0      4        7.8
JPCA      2950        1940           0      1        1.5
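For the record, here is a minimal sketch of how such statistics can be computed. It assumes the per-paper citation counts have been exported from Web of Science into one-column CSV files; the file names are hypothetical.

```python
# Minimal sketch of Dr. Labs' analysis (hypothetical file names).
# Each CSV is assumed to hold one citation count per line,
# one line per paper published in 2015-2016.
import csv
from statistics import mean, median, mode

def citation_stats(path):
    with open(path, newline="") as f:
        counts = [int(row[0]) for row in csv.reader(f)]
    return {
        "publications": len(counts),
        "citations": sum(counts),
        "mode": mode(counts),
        "median": median(counts),
        "mean": round(mean(counts), 1),
    }

for name in ("jacs.csv", "jpca.csv"):
    print(name, citation_stats(name))
```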

She can't predict how well her paper will do, so she starts by reasonably guessing that it will be neither exceptional nor mediocre. Looking at the table, she learns from the median values that, in this case, the paper will be cited about four times in two years if it appears in JACS, but only once if it appears in JPCA.

If, however, her paper resonates exceptionally well with the community, she may expect about 8 citations in two years if it appears in JACS, but only about 2 citations if it appears in JPCA. This she learned from the mean values.

If, finally, the paper doesn't do well, it doesn't really matter in which journal it appears: the mode (the most frequent value of the distribution) is zero citations in two years wherever she publishes.

It's clear now: Dr. Labs must go for JACS if she wants to stick to her goal.

But she was overzealous in going through all this work of collecting data and running statistics. The impact factor of JACS (13.038 for 2015) is 4.5 times larger than that of JPCA (2.894). This is about the same ratio that she found between the means (5.2) and the medians (4). Therefore, she could have concluded that JACS was her best option simply by checking the JIFs.
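The arithmetic behind that shortcut, spelled out (values taken from the table and the 2015 JIFs quoted above):

```python
# Ratios Dr. Labs compared (numbers from the text above).
jif_ratio    = 13.038 / 2.894   # ~4.5: JACS JIF vs. JPCA JIF
mean_ratio   = 7.8 / 1.5        # 5.2: ratio of the two means
median_ratio = 4 / 1            # 4.0: ratio of the two medians
print(round(jif_ratio, 1), mean_ratio, median_ratio)
```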

By the way, Dr. Labs is intrigued as to why her mean values differ so much from the official JIFs. They should have been very similar. (Can you help her?) But she doesn't give it a second thought. There is a new project to start.

Prof. Files, the director

Well, the JIF was a fine tool for Mr. Books and Dr. Labs. But there are many ways of using the JIF inappropriately.

Prof. Files is an institute director. He's presenting the institute's output to a board of evaluators. He proudly points out that 35% of the institute's papers in the last five years were published in journals with a JIF larger than 4.

So what? This figure doesn't contain any relevant information, especially because the performance of the papers published in the first three years of that period is already known. Anyone can easily check how many citations they actually received over two years.

As we saw in Dr. Labs' distribution graph, the most common outcome is that a paper published in JACS does no better in citations than one published in JPCA, regardless of the more-than-fourfold difference in the JIFs of the two journals. Therefore, there's a reasonable chance the institute's papers are mediocre despite the good JIFs.

Prof. Files thinks that maximizing JIF matters. He often compares the JIFs of the journals where he and his colleagues publish. (He's always hopeful that his will be bigger.) When he opens a new applicant's CV, Prof. Files immediately checks the JIFs of the journals in the publication list. He has gone as far as proposing an award for institute staff who publish in high-JIF journals.

Again and again, Prof. Files misuses the index: the JIF measures neither a paper's nor a scientist's performance; it measures a journal's performance. Moreover, JIFs of journals in different fields aren't even comparable.

Conclusions

These few examples show that JIFs may be helpful when searching for the most influential journals in a field or when estimating the potential impact of a research paper. But the JIF is a terrible tool for checking how an already-published paper performed or how competent a scientist is.

Unfortunately, university bureaucracy often overlooks such nuances.

In response to these misuses of the indexes, I often see people longing for qualitative evaluations to replace quantitative indexes. Nonsense.

To follow this path would be a big (and awfully expensive) mistake. To embrace qualitative evaluations would be to open our doors to favoritism, nepotism, subjectivism, racism, sexism, and all those many -isms we should be ashamed of.

The problem isn't in adopting quantitative indexes. The problem is not having clarity about what these indicators are supposed to measure.

MB

  • If you want to play with (or check) the numbers used in Dr. Labs' analysis, they are in this Excel worksheet.

