Author: Sofia Pagliarin
Why journal editorials are disappearing, and why we should care
Paolo Cherubini is a senior scientist at the Swiss Federal Research Institute WSL. He was previously interviewed by Sofia Pagliarin for Slow Science, where he shared some of his critical thoughts about the Impact Factor.
He is Editor-in-Chief of Dendrochronologia, an international scholarly journal publishing tree-ring science. Recently, he published an editorial about the pace of change in scientific publishing and the “extinction” of editorials.
Sofia: Paolo, first of all, thank you for being with us again.
Paolo: Thanks to you. Already 10 years ago I was thinking about the need to create a network about “slow science”, so I’m glad that it emerged and that I can contribute to it.
Sofia: It’s our pleasure! Anyway, let’s talk business. So, are editorials, as introductions to journal issues, disappearing?
Paolo: In the old days, when journals were published on paper, editorials were an important part of a journal because they gave the editors’ opinion on a topic of possible interest to the journal’s readership. So you went to the library and skimmed through the editorials in paper journals, and through the published articles, too. Now, literature search is different. It’s digital, and scientists search for and find exactly what they’re looking for. A journal’s readership hardly exists anymore; nobody takes a hardcopy of a journal in their hands, and nobody reads editorials anymore. The way of reading is also different. Everything is changing!
Paolo: Well, the fact that editorials are disappearing also frees up editors’ time, so you no longer lose time writing an editorial that nobody will ever read. What worries me a bit is the decrease of serendipity in academic research. But first, a premise: I am one of those scientists who, despite a high environmental awareness, still likes to print out papers, take notes and highlight passages on them. Some of my colleagues make fun of me, but I think reading papers digitally is quite another thing than reading them on paper. It’s not only a different feeling; it’s a different way to “study”, to get the paper’s content into your mind and, possibly, to use it in your own research.
That said, when you looked at a journal’s editorial in your field in the past, you also checked the table of contents. You certainly found material related to your own research, but you could also browse articles on other topics, for instance the flight trajectories of a certain butterfly species. Although this might now seem a waste of time, it was actually enriching: it stimulated cross-fertilization in research, “side-thinking” and the ability to make connections across topics. Scientists today are so specialised partly because they have fewer opportunities to learn what others are doing.
Sofia: Do you mean that nowadays researchers are behaving “badly”?
Paolo: No, not at all. But today’s researchers, especially younger ones, look at the Impact Factor and publication time when choosing a journal in which to publish their research. So it’s “fast science”, in which editorials, as well as papers offering personal opinions, commentaries and ideas, cannot survive. On the other hand, I think “fast science” can easily lead to careless peer review.
It takes time and care to run a good peer-review process, while today researchers can opt to pay open-access journals to get their research online. This is not good for scientific research. Furthermore, it is obvious that a molecular researcher will attract many more citations than a scientist working on a Himalayan beetle.
First, the size of the academic community around a topic affects the Impact Factor and the number of citations. There are perhaps 1,000 dendrochronologists in the entire world, so our publications will never reach a very high citation count. But is our research less relevant only because of this?
Secondly, and more importantly, how do we judge the quality of research? We may argue that research on the Himalayan beetle is relevant in its own right. The Impact Factor quantifies utility; it is not a proof of quality, and even less a measure of how relevant the research is. Rather, other measures, complementary or alternative to the Impact Factor, should be developed that account for the topic and the particular characteristics of the academic domain. Similarly, the Impact Factor should not be used, or at least not solely, in hiring decisions: recruiting a new person into a research group is not only about choosing “Impact Factor stars”, people with many top publications; it also takes the sensitivity to weigh other aspects, for instance whether the person has made other contributions (media, software, events, and so on) and whether her or his personality would fit the research group overall. But these aspects are, of course, not as easily quantifiable as the number of citations.
Sofia: Do you think that there is an “antidote”?
Paolo: As I said, the Impact Factor should be considered one possible measure of scientific research, and one that may not apply to all scientific disciplines. Other measures should be developed that are domain-specific and, at best, also include qualitative assessments of the research. So I urge scientists to work on this and to propose alternatives to Garfield’s Impact Factor, which was created to evaluate journals based on their utility.
Sofia: Thank you for your time and for sharing your experience, Paolo.
Paolo: My pleasure!
Disclaimer: The views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of Slow Science.