Scientists shouldn’t be allowed to recommend their own peer reviewers

Leo Tiokhin, PhD
Feb 10, 2022

When scientists submit their research to academic journals for publication, journals typically ask them to recommend peer reviewers. An analogous process often occurs for grant applications to science funders, where applicants are asked to nominate potential reviewers for their grant proposal. Journals and funders solicit reviewer recommendations because scientists often know of other scientists in their field who are experts in relevant topics and can critically evaluate the research.¹

If you’re thinking, “wait a second, aren’t the people who get to recommend reviewers the same people who benefit most from receiving positive reviews?” then you’re not alone.

Several studies have looked into whether reviewers nominated by authors or grant applicants are more likely to provide positive evaluations.

One analysis of peer review in biomedical journals by Sara Schroter and co-authors found that author-nominated reviewers were more likely to give positive recommendations (“accept” or “revise and resubmit”) than editor-nominated reviewers, but found no difference in review quality as measured by the Review Quality Instrument.

Another study by Anna Severin and co-authors looked into grant applications at the Swiss National Science Foundation (SNSF).² Potential reviewers for SNSF grant applications are nominated either by the grant applicants themselves or by external parties, such as the Swiss National Research Council and the SNSF administrative offices.³

The figure below shows the results.

[Figure: Frequency distributions of external evaluation scores by source of nomination of the reviewer, ranging from 1 (poor) to 6 (outstanding).]

Applicant-nominated reviewers gave higher scores than reviewers nominated by external parties (the category “SNSF-nominated” lumps together all referees nominated by external parties). Partly because of this finding, the SNSF no longer allows applicants to nominate reviewers.

Another analysis of grant applications, by Herbert Marsh and co-authors, focused on applications to the Australian Research Council (ARC) in 1996. Potential reviewers for ARC grant applications were nominated by one of two sources: the applicants themselves or the external ARC funding panel.

The figure below shows the results.

[Figure: Ratings of grant applications as a function of whether grant reviewers were nominated by the grant applicant or by an external panel.]

Again, applicant-nominated reviewers gave higher ratings than panel-nominated reviewers, an effect that held across all disciplinary panels.

This study also found that 1) applicant-nominated reviewers gave higher ratings to the quality of the research team listed on grant proposals, 2) applicant-nominated reviewers provided reviews that had a weaker correlation with the final integrative ARC assessment, and 3) controlling for applicant-nominated reviews led to more reliable ratings of applications.

The authors conclude:

Applicant-nominated assessor ratings of ARC grant proposals are biased, inflated, unreliable, and invalid, leading the ARC to abandon use of applicant-nominated assessors.

None of this should be surprising.⁴ When individuals are given a low-cost opportunity to increase their chances of obtaining a desirable outcome, they will take it. Scientists want to get their research published and obtain funding for their grant proposals. Scientists are given a low-cost opportunity to influence the review process. And they take it.

There is an alternative possibility for why applicant-nominated reviewers provide more positive reviews: applicant-nominated reviewers may be more familiar with a research area and be better able to determine whether the research is important or impactful.⁵

It would be interesting for a study to evaluate these competing explanations.⁶ For example, one could analyze whether author-nominated reviewers can more accurately predict the future impact of papers or grant proposals. Note that the two explanations are not mutually exclusive: applicant-nominated reviewers could both have conflicts of interest and more relevant expertise. In an ideal world, we’d have this information in a database somewhere and select reviewers based on a combined evaluation of their expertise and potential for bias.
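For what it’s worth, here is a minimal sketch of how such an analysis might look. It is my own illustration, not code from any of the studies above; the dataset, the column names, and the choice of “later impact” measure (say, citations five years on) are all hypothetical.

```python
# Illustrative sketch only: synthetic stand-in data with one row per review.
# A real analysis would use journal or funder records instead.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

df = pd.DataFrame({
    "applicant_nominated": rng.integers(0, 2, n),  # 1 = reviewer nominated by the applicant
    "score": rng.integers(1, 7, n),                # review score, 1 (poor) to 6 (outstanding)
    "later_impact": rng.normal(10.0, 3.0, n),      # hypothetical outcome, e.g. 5-year citations
})

# Conflict-of-interest signature: do applicant-nominated reviewers give
# higher scores on average?
print(smf.ols("score ~ applicant_nominated", data=df).fit().params)

# Expertise signature: do applicant-nominated reviewers' scores track later
# impact more closely? A positive score:applicant_nominated interaction
# would point in that direction.
print(smf.ols("later_impact ~ score * applicant_nominated", data=df).fit().params)
```

Because the two explanations are not mutually exclusive, real data could show both signatures at once: an average positivity offset and a tighter link between scores and later impact.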

But I am skeptical that the positivity bias among applicant-nominated reviewers is better explained by reviewer expertise than by conflicts of interest.

First, it’s not clear how to reconcile the expertise hypothesis with the finding that applicant-nominated reviewers’ ratings were less correlated with the ARC final panel assessment. Marsh and colleagues describe this as follows:

…the best single index of the quality of a proposal is the final panel rating, which is based on a critical review of assessments by all assessors, a critical reading of the proposal by panel members, and a subsequent rejoinder to the external assessments of the proposal by authors of the proposal.

…final panel ratings were systematically less correlated with applicant-nominated assessors than with the three panel-nominated assessors. These results suggest that applicant-nominated assessors are less valid than panel-nominated assessors, in relation to final panel ratings.

Second, if applicant-nominated assessors were better at estimating application quality, then we’d expect a distribution of ratings closer to the true underlying distribution of application quality. Maybe a slightly right-skewed distribution where most applications are good but few are exceptional, as has been found in other studies. Maybe a normal distribution where most applications are nothing special. Maybe even a bimodal distribution, where many applications are poor, a few are outstanding, and the rest are scattered in between.

But a distribution where, for the SNSF, the most common rating is 6 out of 6? Where 80% of grant applications receive a rating of “outstanding” or just below outstanding?

I doubt it.
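As a purely illustrative aside (my own simulation, not data from the SNSF paper), here is roughly what that contrast looks like: a ratings distribution centered in the middle of the 1–6 scale versus one with the bulk of the mass piled on 5 and 6, as described above.

```python
# Illustrative simulation: compare a mid-centered ratings distribution with
# one where most of the probability mass sits at 5 and 6.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical "honest" ratings: roughly normal around the middle of the scale.
honest = np.clip(np.round(rng.normal(3.5, 1.0, n)), 1, 6)

# Ratings resembling the pattern described for applicant-nominated reviewers:
# assumed probabilities, chosen so that ~80% of ratings are a 5 or a 6.
inflated = rng.choice(np.arange(1, 7), size=n,
                      p=[0.01, 0.02, 0.05, 0.12, 0.30, 0.50])

for name, scores in [("honest", honest), ("inflated", inflated)]:
    print(f"{name}: share rated 5 or 6 = {np.mean(scores >= 5):.0%}")
```

The exact numbers don’t matter; the point is that a distribution with most ratings at the very top of the scale is hard to square with reviewers simply being better calibrated.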

Scientific funders and journals are not completely naive to the problem of biases introduced by author-recommended reviewers. Funders and journals also seek out reviewers themselves and often have policies to minimize conflicts of interest, such as not allowing reviews from scientists who have recently co-authored papers with the authors or applicants. And it’s promising that some funders have stopped using applicant-nominated reviewers due to the obvious conflict of interest.

Still, at least for academic journals, editors are busy, underpaid (or not paid at all), and overwhelmed with submissions. So I suspect that editors rely on authors’ recommendations more than they’d like to admit. And if this is the case, then it’s not difficult to come up with ways to game the system. Off the top of my head, this could include recommending reviewers who 1) you had fun drinks with at that conference, 2) are friends with you, 3) used to be in the same department as you, 4) subscribe to the same theoretical paradigm or epistemological assumptions as you do, 5) play on your sports team, 6) play in your D&D league, 7) have their work positively cited in your paper, 8) liked your paper on Twitter, and so on.

That last one — recommending reviewers who liked your paper on Twitter — was something that I heard about at a dinner a few years ago. A reasonably well-established scientist mentioned that they used this approach to come up with reviewer recommendations. I don’t remember my exact reaction, but at the time, my internal monologue went something like “but….but….you. can’t. do. that.”

The scientist justified their strategy by arguing that “liking” a paper on Twitter meant that someone was interested in it and so would be more likely to accept a review request. Sure. Probably. The problem is that the person who liked your paper on Twitter is also more likely to like like your paper. Not a great way to get unbiased reviews.

It seems to me that using reviewer recommendations to receive favorable reviews has become so commonplace that scientists just accept it as part of the game. Sure, it undermines a mechanism of quality control in science. Sure, it corrupts a mechanism for ensuring fairness (all papers from all authors receive equal evaluation), even though that fairness is precisely what prevents the type of individual-level competition that harms group outcomes. Sure, it risks reducing public trust in science. But hey, that’s just the world we live in, right? Something something, incentives, something something.

At the beginning of my academic career, I remember at least trying to recommend reviewers in a fair and honest way: scientists who I thought were qualified, likely to provide feedback that improved the paper, and so on. I don’t remember if I ever deliberately recommended antagonistic reviewers, but I also wasn’t recommending my buddies. After a few years, I moved towards recommending a balanced portfolio of reviewers: one likely favorable, one somewhat-but-not-too-critical, and one neutral. I started doing this after a more senior scientist mentioned that they used this approach.

But now that I have a better sense for how the game works, I think I’ll start recommending my friends, colleagues, and Twitter followers. And if that works out — and I very much think it will — the next step will be to just recommend myself.

1. Asking for recommendations also saves journals and funders a bit of time, relative to having to find all qualified reviewers by themselves.

2. Severin and colleagues also looked into other factors that could influence grant evaluation, such as the country affiliation of the reviewer and the gender of the applicant. Reviewers based outside Switzerland gave higher scores than reviewers based in Switzerland, and male applicants received higher scores than female applicants.

3. Reviewers could also be recommended by other reviewers who themselves declined an invitation.

4. Keeping in mind that everything is obvious once you know the answer.

5. Severin and colleagues note this as a potential alternative explanation:

“Reviewers who were nominated by applicants via the ‘positive list’ on average tended to award higher evaluation scores than reviewers nominated by SNSF administrative offices, referees or other reviewers. This effect can be interpreted in several ways. First, applicant-nominated reviewers may award more favorable evaluation scores because they know the applicants personally and/or have received positive evaluations from the applicant in the past (Schroter, 2006). This would mean a conflict of interest. Second, applicants may nominate reviewers who are experts within their field and therefore might be particularly familiar with their research and will recognize the impact and importance of their grant application.”

6. A third possibility, suggested to me by Karthik Panchanathan, is that authors or grant applicants nominate reviewers who share their aesthetic preferences, such as preferring research that comes from a certain disciplinary perspective or uses particular methods.

