Lutz Prechelt, prechelt@inf.fu-berlin.de
Review Quality Collector (RQC) is a procedure
to be introduced into the scientific publishing system, by which
researchers can be formally recognized for their performance as peer reviewers
(initially for journals and conferences, later perhaps also for funding agencies).
The first part of its software implementation has been online since September 2016.
Read up on the concept here:
https://reviewqualitycollector.org
This page (which was once quite long) has now been cut down to the
not-yet-quite-outdated parts, which may or may not be of interest,
but certainly only after you have visited https://reviewqualitycollector.org.
This section collects (very, very incompletely)
links to other initiatives that aim to improve the reviewing process in various ways.
If you know of other good such material, please drop me a note (with a web link) at prechelt at inf.fu-berlin.de.
Various issues of review quality
JAMA 280(3), July 1998, contained a number of studies related to review quality.
Providing review quality feedback to reviewers
Callaham et al. (2002) found in a controlled experiment that providing basic quality feedback (just a 1-to-5 point rating) confidentially from the editor to average and below-average reviewers did not improve the quality of their subsequent reviews.
Many journals send all reviewers of a paper a copy of the editor's reply to the authors (including the text of all reviews).
Recognizing reviewers' work publicly
Publons is a service for counting and (if you want to and are allowed to) publishing your reviews. Journals can also nominate reviews as "good", but the grounds for doing so are not transparent, and not all journals make use of this option.
ReviewerCredits is another service for publicly counting your reviews. Its core feature is awarding "credits" to reviewers, which can be redeemed for services. Some journals participate in the RCI index, which is based on review length, speed, and agreement with the editor; reviewers can then publish their index in their profile.
F1000Research is an open-access journal founded by Vitek Tracz, the founder of BioMed Central. It performs post-publication open reviewing (plus commenting, author-reviewer dialog, and article revisions): "All submissions to F1000Research will go through a rapid initial check by the in-house editorial team before being published with the status 'Awaiting Peer Review'." Reviewing results become part of the article metadata; only positively reviewed articles are indexed.
ScienceOpen also uses a publication style of post-publication open reviewing and commenting.
A few journals (sometimes) publish the reviewer reports along with accepted articles, for instance
Nature trialed "open peer review" in 2006: Authors were asked to agree to have their submitted articles posted publicly in order to receive non-anonymous, public review comments. Only 5% of the authors agreed to participate with their article. Only 54% of those articles received any comment at all (most of them only one). The editors judged only 4% of the comments to have the quality of normal reviews.
Frontiers uses a format called "collaborative peer review" and has very detailed review process guidelines.
According to the "What do we know about peer review" presentation on the BMJ reviewer resources page, three controlled trials found that signing a review did not improve its quality. (But the references are vague: Godlee et al., 1998; van Rooyen et al., 1998; van Rooyen et al., 1999.)
Blogosphere: A lot is written about open peer review in various blogs. ScienceOpen collects such material in the ScienceOpen Blog and on the ScienceOpen Facebook page. Or see these good articles:
PRE-Score assigns a quantitative score to a journal based on various characteristics of that journal's reviewing process, with the intention of allowing authors, readers, and libraries to identify journals where submitted work is "properly vetted".
In Spain, in order to be promoted to a professorship, a researcher needs to bring a letter from each journal for which the researcher claims to have reviewed, attesting to his or her reviews. These letters are collected by a central agency, ANECA, which is responsible for nationwide researcher quality assessment.
Furthermore,
some journals let reviewers pick manuscripts for review (rather than having an editor ask them to review a particular one), for instance
Casrai.org standardizes research administration information.
ASAPbio proposes to separate the evaluation function of peer review from the gatekeeping function by installing a peer feedback step before the journal review step.
"Are non-monetary rewards effective in attracting peer reviewers? A natural experiment" (Zaharie and Seeber 2018) shows that review quality evaluation can damage intrinsic motivation if framed wrongly.
A review article about research on peer review (Birukou et al. 2011)
A fixed questionnaire, called RQI, to serve as a measurement instrument for review quality (van Rooyen et al. 1999)
"Many [reviewers] wanted to see feedback on their own performance, perhaps an end of year review. Individuals who had received an acknowledgement or some form of feedback in the past implied they were more pre-disposed to review for that journal." (Mulligan, 2004)
"It was felt that many of the perceived problems afflicting the refereeing process could be remedied if the review was formally recognised. Such a system could take the form of accreditation to a journal, society or publisher. It might identify the number of times an individual reviews in different prestigious journals; such an indicator could in part be a measure of that individual's contribution to science." (Mulligan, 2004) (RQC covers this idea -- and goes far beyond it.)
A survey (Sense About Science, 2009) and focus groups (Mulligan, 2004) found the following:
The dominant motivators for reviewers today are participating in the scientific process (whether for the joy of it or out of a sense of duty), helping the authors, and interest in the content of the articles. (So RQC can create an additional source of motivation.)
Editors find it increasingly hard to find and keep good reviewers.
Editors perceive the nature and quality of the reviews they receive as fluctuating widely.
The online consultation "scientific publication system" generated some comments on reviewing: too many publications, often sloppy reviews. See Section "Results 2", point 6, in this summary.
A survey of senior authors and reviewers determined various attitudes towards reviewing; these indicate predominantly (64%) satisfaction, but also the feeling that improvements are possible (only 32% think they are not). (Ware 2009)
2015 Springer survey on reviewer motivation, 1400 responses: "Reviewers were generally against a points-based system or rewards based on number of reviews and instead preferred being rewarded and recognized based on the quality of the review"
John Bohannon: "Who's afraid of peer review?: A spoof paper concocted by Science reveals little or no scrutiny at many open-access journals", Science 342(6154):60-65, Oct 4 2013.
Peerage of Science is a service that centralizes reviewing: reviewers sign up for submissions and provide reviews, and these reviews become available to all participating journals, which can then react.
APE (Academic Publishing in Europe) is a conference attended mostly by the publishing industry. APE 2019
Force11 "is a community of scholars, librarians, archivists, publishers and research funders that has arisen organically to help facilitate the change toward improved knowledge creation and sharing [...] through the effective use of information technology."