Peer review of stem cell research
I heard an interesting story on the radio this morning about stem cell research. It's also reported on the BBC news website, although strangely enough I couldn't find it reported anywhere else in the media.
Apparently, a group of prominent stem cell researchers have written an "open letter" to journal editors complaining that their excellent research is being blocked from appearing in prestigious journals by some kind of peer review mafia. The "open letter" doesn't seem to be very open, as I couldn't find it anywhere on the web. Anyway, according to the BBC, the letter alleges that top journals such as Nature and Cell are using a clique of peer reviewers in stem cell research who are deliberately blocking research by rival scientists. This is, of course, hotly denied by the journals, which claim that their peer review process is entirely fair.
I find it impossible to judge the rights and wrongs of this case given how little information is available (in fact the only information seems to be from the BBC, who don't link to any verifiable sources). Maybe the scientists who are complaining that their work hasn't been published have simply not submitted research of any quality and are just whingeing, or maybe they have an entirely valid point and their excellent research has been unfairly blocked. I just don't know.
There is, however, an entirely valid point behind all this, which is that the allegations about unfair peer review certainly could be true. Peer review is a deeply flawed process, which relies on human beings applying considerable skill in a perfectly fair and impartial manner. How likely is that to work all the time? Some peer reviewers (probably the majority, in fact) certainly do an excellent job: they use their expertise to pass reasonable judgements on the papers they review and offer sensible, constructive criticism that helps improve those papers. However, some don't.
One bizarre feature of peer review, given that it is fundamental to the scientific literature and that science is based on empirical evidence, is that there is practically no evidence that it is effective. A Cochrane review published in 2007 concluded "At present, little empirical evidence is available to support the use of editorial peer review as a mechanism to ensure quality of biomedical research", although the authors also noted that this should not be construed as evidence that it is ineffective.
What we do know, however, is that it depends on human beings, and that human beings cannot always be relied upon to act fairly and without bias. In fact, one of the most fascinating pieces of research I have ever seen on peer review, now over 30 years old, looked specifically at the effect of cognitive biases in peer reviewers. Mahoney reported an experiment in 1977 in which he gave 75 peer reviewers a fictitious paper to review. In all cases the methods were identical, but different reviewers got different results. Sometimes the results agreed with the reviewer's preconceived ideas about what the results should be, and sometimes they disagreed. Where the results agreed with the reviewer's preconceived ideas, the reviewers rated the quality of the methods higher than where they didn't. This is despite the methods being identical in all cases, and it is a classic example of what psychologists call confirmation bias: the tendency to be more inclined to believe something that fits in with what you think you know already.
We write a lot of papers for publication at Dianthus Medical, so we get to see a lot of peer reviewers' comments. Most are fair and reasonable, but it's not uncommon to find some that are really not. One particularly common bugbear is when a clinical reviewer starts criticising statistical methods, even though that reviewer is not qualified to do so and is usually wrong in their criticism. Although that's annoying, it's not usually a big problem for us, as we simply explain why the criticisms are wrong, and a reasonable journal editor will listen. However, I do worry about what a relatively inexperienced researcher without access to statistical support might do in such circumstances.
Peer review is held out to be the ultimate guarantee of the quality of scientific publications, but a closer look reveals that it really can't be relied upon to provide that guarantee consistently. In a way, peer review is a bit like democracy. Most people know that democracy is a lousy way to run a country (just look at how this country is being run at the moment!), but no-one has yet come up with an alternative that isn't even worse.