Slow interview #1 – An interview with Paolo Cherubini

Author: Sofia Pagliarin

Impact Factor: how a useful tool turned into a fever

Paolo Cherubini is a senior scientist at the Swiss Federal Research Institute WSL studying forest ecological processes using tree rings, i.e., dendroecology. Dendroecology provides information that can help us understand and predict how our forests and environment have changed over centuries, from carbon dioxide concentrations to warm or cold, dry or wet weather conditions.

I met Paolo Cherubini when I was working as a post-doc at the Swiss Federal Research Institute WSL. During an informal event, I discovered that in 2008 Paolo had published a letter titled “Impact Factor Fever” in Science (Vol. 322), in which he strongly criticised the abuse and misuse of bibliometrics in evaluating academic life and performance. He also told me various stories about the Impact Factor and his opinion of it.

As a contributor to the Slow Science network, I got the idea of putting those early conversations into written form. We therefore scheduled a Skype interview in which we talked about his thoughts on the Impact Factor, its increasing importance in academia, and his experience with it as an author, reviewer and editor.

The following text is a condensed re-elaboration of these conversations, structured in two posts and reviewed together with Paolo before its online publication on the Slow Science blog.

Paolo Cherubini

Sofia: Paolo, first of all, what is the Impact Factor?

Paolo: The Impact Factor is a measure of how frequently the articles published in a journal are cited. The Impact Factor of a journal for a certain year (e.g., 2015) is calculated by dividing the number of citations the journal received in that year for citeable items, e.g., articles, reviews, personal commentaries, published in the two previous years (e.g., 2013–2014), by the number of citeable items the journal published in those two years. So, if a journal had 100 citations to 50 published items, its Impact Factor would be 2. The Impact Factor is a great tool that was developed by Eugene Garfield, the founder of the Institute for Scientific Information (ISI), at a time when staff were still typewriting and counting by hand the number of times the publications in a certain journal were cited by other journals; it is now run by Thomson Reuters.
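[Editorial note: Paolo's arithmetic can be sketched in a few lines of Python. This is only an illustration of the ratio he describes, not an official implementation; the function name and figures are ours.]

```python
def impact_factor(citations_this_year: int, citable_items_prev_two_years: int) -> float:
    """Journal Impact Factor for a given year: citations received that year
    to items published in the two preceding years, divided by the number
    of citeable items the journal published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Paolo's example: 100 citations to 50 published items
print(impact_factor(100, 50))  # 2.0
```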

Sofia: In your letter to Science published in 2008, you wrote that “The exacerbated pressure to publish we all suffer from is induced by an exaggerated reverence for the impact factor”. How did this all happen?

Paolo: Well, the story is long, but basically the Impact Factor arose from a genuine need to navigate through the many different journals and rank them according to their reputation and utility. Although it is a measure that changes over time, it can give a good idea of how important a certain journal is for scientific publication in a certain domain, based on how often its articles are cited.

However, this practical need to compare international scholarly journals has “backfired” and evolved into a way of evaluating scientific performance, institutes and personal careers, and into a “citation rush”. So today not only are journals fighting to increase their Impact Factor, but authors also crave the quantification of their research publication records. Personally, I am not against the Impact Factor; as editor-in-chief of a small journal, I care about it for my journal, but I also know that it is a measure that does not tell everything about a journal.

Sofia: But why is there such a “reverence”?

Paolo: Because the quantification of scientific results through the Impact Factor is extremely effective for decision-making: it is easy to use. From a measure useful for ranking journals, the Impact Factor is now used to decide which departments have to die and which ones can survive, or which are “of excellence” and should be funded. It is a convenient measure if one has to demonstrate that “objective” decisions have been taken about who should be hired, funded or rewarded for high scientific productivity.

The Impact Factor helps maintain a certain structure and functioning of academia that is beneficial to many parties; that is why I call it reverence. Journals gain from it as true “brands” in a scientific sector, and scientists and universities gain too, because they all compete for funding and the Impact Factor serves as an objective quantitative measure of who has been a good or a bad scientist. In a way, “it's a simple game for simple minds”.

Sofia: Can you provide an example?

Paolo: Yes, sure. The Impact Factor serves specific academic structures and arrangements. For instance, the supervisor is always included in the publications of her/his post-docs and doctoral students. This is not only to increase her/his productivity record, but also because without such a productivity measure the supervisor will not be able to get funding and sustain the hierarchical structure of post-docs and PhD students who would otherwise have no work. Don't get me wrong: this is not a problem per se, but of course it affects how academic research is done, the number of students in a certain department or research group, and their mentoring and supervision.

Another example is the impact this has had on the social sciences, which have been forced to adapt to a parameter created in and for the natural sciences. The social sciences have started to compete with the natural sciences for funding using the same measures and weapons, in spite of the fact that in the social sciences publishing a book probably makes more sense than publishing articles in ISI journals, which makes the use of the Impact Factor there perhaps less appropriate.

Sofia: So do you think we should get rid of it?

Paolo: Let’s say that the Impact Factor was invented for a certain purpose, that is, ranking international scientific journals. Over time it has turned into something that is almost a dictatorship in scientific research and production. It has been misused, and we should get rid of it being used in such a wrong way.

However, it is also true that the Impact Factor was a good thing in those university systems where competitiveness and productivity were not rewarded. For instance, in Italy, where I originally come from, the traditional university system was based on professors making a career out of publishing one book every 10 or 20 years, or only a handful of articles written in the local language in non-ISI journals. That corrupts the system and makes it extremely inefficient and not innovative at all, which is especially frustrating for students and for the careers of new researchers. It was a system that worked self-referentially for decades, and recently, thanks to the Impact Factor culture, the university system has been changing. [See here for more background information on the effect of adopting metrics on self-citation practices in an Italian context.]

Once I was giving a talk in a formerly Eastern European country, and when I said that I was critical of the Impact Factor, because it is not necessarily a good measure of good research, some colleagues got angry: they were trying to change the academic system there in order to put more pressure on the “dinosaurs” who did not publish, and to reward young researchers and professors who really did make an effort to compete on the international research scene.

So, it’s good to have a measure such as the Impact Factor that can tell us which journals are best and who is publishing the most. But the Janus face of this is that, for instance in China, professors get paid thousands of euros more if they publish in one of the top-end journals in the Impact Factor list. This isn’t good: it can become a problem if academic research is oriented solely towards getting a publication done and cited, because the quality of academic research will suffer.

Sofia: Thank you for your time, Paolo. Anything to add?

Paolo: Yes. As a dendroecologist and forest scientist, I don’t think these fields are so far from the social sciences. Actually, ecological systems are very much connected to the social sciences … So I believe we are all in the same boat, and the Impact Factor is currently misused in both the natural and the social sciences. Thank you for this opportunity to share these thoughts, which I have been discussing with so many other colleagues over the past two decades.

Sofia: Thanks to you Paolo!


“Slow Interviews” is a column published on the blog of the Slow Science network. The “Slow Interviews” posts are conceived, written and reviewed by Sofia Pagliarin, one of the collaborators of the Slow Science network. The publication and content of each interview, in one or multiple posts, are discussed, reviewed and mutually agreed upon in a cooperative dialogue between Sofia Pagliarin, the interviewee(s) and other members of the network.

These interviews aim to enrich the topics and debates that are central to the network, and are conceived to be in direct dialogue with the course on the critical analysis of academia and academic production organised annually by the Slow Science network: https://slowscience.be/our-doctoral-school/. They add different points of view to our understanding, and are not intended to replace the topics and debates dealt with during the course. Interviewees are academics or informants who have experience and/or knowledge of a particular topic, and who are not necessarily related to either the Slow Science network or the course.

Disclaimer: The views and opinions expressed in the “Slow Interviews” articles and blog posts are those of the authors and respondents and do not necessarily reflect an official policy or position of the Slow Science network.