13 Aug 2023 - Community
They have been sitting on our desk for several weeks. In the corner, just over there. We know that they are waiting for us. Reminders keep popping up in the mailbox: the deadline is coming soon. That’s the same story every time. We know we want to have them done on time. We know it is a matter of priorities. We know we decided to be more proactive. And yet, two days to go and there are five of them waiting to be scrutinised. We will have to block out the day tomorrow to work on our paper reviews. In the pool of all our research activities, peer-reviewing papers is one of those background tasks towards which we often demonstrate ambivalent feelings. Seeing knowledge progress keeps us eager to pursue our research journey. After all, it is pretty exciting to participate in the construction of research that moves forward through the confrontation of ideas. At the same time, we know that peer-reviewing (whether of a paper, code, a dataset, etc.) is time-consuming and can rapidly turn overwhelming. So where should we draw the line? This article shares some thoughts on how we may want to engage more effectively in this task. For the sake of clarity, it focuses on the case of paper peer-review.
Research is a lot about communicating. Communication is essential to share ideas and perspectives, and to keep an open mind on our own work. The traditional way to communicate in research is through the publication of so-called papers. Papers are (highly) specialised written pieces of work. They typically take the form of articles that follow a set of field-specific rules. On the one hand, the rules define the nature and type of content in the paper. For instance, the paper can be a presentation of an initial idea about a problem, e.g., there is potential in using technology X to reduce the energy needed to operate a datacentre and here are the arguments why. It can also be the report of a fully-fleshed solution to a specific problem, e.g., we use technology X to create a system that reduces the energy cost by Y% when operating a datacentre and that is easy to deploy. On the other hand, the rules dictate editorial constraints. For instance, papers often come with a length limit, either in terms of number of words or number of pages. Papers are made available to the interested audience as articles published by specialised publishers. Publishers are just the end point of a long process orchestrated by peers in the research community. The goal of this process is to decide whether a paper should be accepted for publication or not, and this decision is taken as the result of a peer-review phase. Peer-review consists of soliciting, for a given paper, contributors in the relevant field of knowledge whose interests, experience and/or expertise relate to the focus of the paper. The objective is to get an informed view from the community on the quality of the reported work.
Quality is in essence a broad term that covers rigour (e.g., the findings are evidence-based, the construction of ideas follows a valid methodology, etc.); novelty (i.e., the paper reports something that contributes to enriching knowledge); as well as presentation and expression (i.e., the paper can be read – a non-negligible aspect…). In practice, peer-review is an efficient way to foster the validation and exchange of ideas. It helps better shape our understanding of a problem. It has however a crucial facet: it operates on a benevolent basis. In other words, there are no financial incentives for participating in a peer-review process. Peer-reviewers are thus facing a paradoxical situation. We are on one hand happy to get involved in the construction of healthy research. On the other hand, we are trapped in a mechanism that capitalises on de facto altruism to sustain an ecosystem that treats the enrichment of knowledge as a (free) service. We therefore need to set limits, for ourselves and for the rest of the community, on what we can decently agree to commit to.
It might sound trivial, but it is nevertheless important, upon receiving a solicitation for review, to decide whether we should accept it or not.
Let’s make it clear. It takes time (and effort) to do a good review. It does take time. The job includes 1) critically reading the work of others and understanding whether their arguments make sense; and 2) formulating constructive feedback and synthesising remarks in a clear and structured way. We should not forget that the end goal (from researchers’ eyes) is to help peers (i.e., the authors of the paper in this case) to correct any potential weaknesses. The ability to read an often complex piece of work while keeping an open and yet critical mind, and putting meaningful recommendations into words, is not something that can be done in two minutes.
So before getting highly enthusiastic about the idea of helping the community, it seems reasonable to spend a bit of time deciding whether we should accept a review invitation.
Accepting a review comes with responsibilities. Can we honestly do a good job? The reviews we provide should be the reviews we would like to receive. If there is a chance we will not be able to commit more than a few minutes, let’s just be honest, and decline.
Besides, there is no point in accepting to review a paper on a topic for which we have rather limited knowledge. Not only are we unlikely to provide any meaningful feedback, we are also prone to rapidly feel discouraged and unable to complete the review on time.
Reviews are just one step in the long process of publication. When committing to provide a review, we should remember that any delay in returning our assessment can have an impact on the rest of the process. In particular, it can mean extra workload for the people involved in collecting and orchestrating these reviews (see organising an event), who would need to chase after late returners to get everything ready.
It just takes a few minutes of our time to be realistic about our current workload and capability. There is no harm in saying no to an invitation.
Every community has an ageing factor. The longer one is in it, the more exposed one becomes. As we mature in our field of research, we tend to receive more invitations to peer-review papers.
The ability to provide good reviews comes with experience. As such, it is definitely good practice for “young” researchers to be trained to develop critical thinking and to produce constructive, succinct feedback. In that respect, there is clear value in delegating a review to a less experienced colleague.
However, delegation is not necessarily a straightforward thing, especially in the context of peer-review, where responsibility is at stake.
If the review is delegated to a less experienced researcher, it is essential to ensure that the person has the competence to work on it independently. If not, we have the responsibility to provide guidance, and keep an eye on the final version of the review text. This aspect should be factored in.
If the review is delegated to a peer (with the same or a higher level of experience) or to an independent researcher, it is important to trust that the person can do a good job. After all, given that the invitation first came to us, it seems wise to redirect it to someone we believe could have been contacted in the first place.
In either case, it should be systematic for the person who did the review to be directly credited. We know that doing a good review requires skills, efforts and time. It is therefore logical to have the person recognised for the work done. More musings about being credited below.
It is evident that our ability to review a paper very much depends on how comfortable we feel with the reviewed topic.
Let’s take a paper and look at the range of reviews the work received. There is a high chance of overlap between the comments from different reviewers. At the same time, it is also very likely that the paper got a diversity of views. By soliciting a pool of reviewers, different aspects of the work can be scrutinised.
While each review carries its own significance, it is sensible to weight the different points of view based on how confident a reviewer feels about the topic. A very confident reviewer has the ability to evaluate a paper in depth, while a reviewer with limited knowledge of a topic is less likely to be able to comment on the novelty of an idea, for instance.
The issue is that it is typically up to the reviewer to self-assess her/his level of expertise on a topic (ranging from novice to expert, through limited knowledge and knowledgeable – or derivatives of these broad qualifiers). This practice is by design subjective, since a number of factors can directly impact how one is likely to rate her/his competence.
There are of course safeguards to limit biases in self-assessment. The most obvious one is to solicit a diverse pool of reviewers. Another is to introduce a discussion phase among reviewers, which gives everyone a chance to rethink their initial assessment. A further level of control can come from the committee that coordinates the reviews and is responsible for making the final decision.
Different research communities are likely to implement different methods of expertise assessment. The bottom line is that, as reviewers, we should do our best to strike the right balance between being overly confident and too timid with respect to the experience we have already acquired.
The objective of peer-review is to help everyone get feedback on their work. It helps sanitise the development of ideas by confronting points of view and perspectives.
It is in our ethos as researchers to contribute to enriching knowledge. Our perspectives can be different. Our ideas can be contradictory. Our thinking can be wrong. Reviews are an opportunity to make things better. In that regard, let’s respect each other’s time and efforts.
As reviewers, we hold in our hands the result of weeks, months, maybe years of work. We may not believe that the paper is good enough. We may think that it has too many weaknesses. Whatever its quality, we need to remember that people put time and effort into creating something. We should be respectful of this work and 1) provide feedback that is relevant; 2) present the comments in a decent form (it costs little to phrase a comment courteously); and 3) have the diligence to do our best to understand the paper.
We, reviewers, do our job in a benevolent fashion. We give our time to help others. So it can be highly irritating to keep receiving daily reminders that the review deadline is coming soon. Reminders are fine as long as they do not become intrusive.
Another mark of the utmost disrespect is to close the submission of reviews before the initially communicated deadline. If we are given a deadline, we have by definition up to that deadline to return the review. It is highly impolite to solicit someone and not wait for the person’s feedback.
Finally, it may sound rather obvious, but it is disrespectful to discard someone else’s opinion. Again, research is about the confrontation of points of view. Reviews realise part of this vision.
Research can have a tendency to over-emphasise what is good for personal development. It seems easy to forget that being a researcher is a job. As with any job, it matters to be rewarded for what we do (and not only with personal gratification).
In the outside world, people can call themselves food critics, film journalists, novel critics, etc. The essence of their job is to review others’ work.
Reviews are central to how research operates today. But there is a limit to how much of the system can rest on contributors’ best effort, whether in terms of the number of solicitations, the workload associated with each invitation, or the imposed timeline.
Again, peer-reviewing a paper requires specific skills, time and effort. We need a system that can concretely reward our contributions. It is somehow agreed that researchers have to participate in peer-review activities. That’s fair enough, but is this valued outside the small circle of our solicitors? Yes, a thank you is nice, but an accumulation of thank-yous is, after all, just a thank you. Tell me if I am wrong, but no one would buy much in this world with a thank you.
Some systems have been developed in recent years to give formal recognition for the reviews we do (see for instance Publons, https://publons.com/about/home/). It remains however unclear how these systems can be used as validators of competence within and outside the research arena.
Paper peer-review forms an ecosystem within the ecosystem of 21st-century research. Relying on volunteering and self-engagement to keep this system going is however questionable. At a time when sustaining a research career has become challenging, we collectively need to have an open discussion on the role and mechanisms of reviews.