Breast cancer screening and peer review
I've been thinking some more about the paper on breast cancer screening that I blogged about last week.
Just to recap, a paper was published last week claiming that the benefits of breast cancer screening comfortably outweigh the harms. The paper was picked up by the media, who reported its conclusions almost entirely without critical evaluation, simply taking the authors' conclusions as established fact.
However, as I previously pointed out, the conclusions rest on some extremely precarious calculations of how often breast cancer screening leads to harm. IMHO, they simply can't be trusted.
The paper is, in fact, so deeply flawed that one might be tempted to ask how it got through peer review.
I have a theory about how that happened.
The calculations of the harms of screening are based on some complicated mathematical modelling. I didn't understand how the authors arrived at the equations they used. I showed the paper to three of my colleagues (including another statistician), all of whom also failed to understand where the equations came from. So the logic of the modelling is far from clearly explained.
My guess is that the peer reviewers of the paper did not follow the logic leading to the equations, but didn't like to admit it. There is some evidence that when people are faced with impressive-sounding but actually meaningless information, they are often reluctant to admit that they don't understand it. Perhaps that happened here? The first equation in the paper contains an obvious typo (the right-hand side of the equation is a negative quantity, which is clearly impossible for an incidence rate), so if the peer reviewers had been following the maths, they would surely have spotted it.
So my theory, for which I admit I have no proof, is that the paper was not peer reviewed in any meaningful sense: the reviewers simply couldn't follow the logic of the paper, didn't like to admit it, and so stayed silent.
We all know that peer review is a deeply flawed process. This may be another example of how it can go wrong.
I think you are onto something here. I have just reviewed a paper, and afterwards I saw the other two reviews. The reviewers clearly hadn't understood the mathematics at all. It wasn't altogether clear to me at first either, so I went through the paper and re-derived the equations for myself (they were right). Of course, this took a day or two. Given the vast number of papers being submitted, it would be impossible for every paper to be reviewed so thoroughly, even if you could find reviewers capable of it.
I suspect that a large fraction of biologists are verging on the innumerate, but they have a well-developed system for disguising it.
You could well be right about innumeracy among biologists. And it's certainly not just biologists.
In my spare time, I'm currently studying economics with the Open University. Here is a piece of advice they gave me about reading papers in the peer-reviewed economics literature:
"When you read articles, remember that skipping the mathematical bits is OK – even professional economists do it!"
Now, are we surprised that the economy is in a bit of a mess?
Your story about the OU course is interesting and slightly depressing.
I was slightly surprised to find that even professional statisticians don't like seminars with a lot of mathematics. Even they can find it hard to follow during the usual high-speed PowerPoint talk. It takes time to sit down and work your way through the maths, and the current academic system might as well have been designed to leave you little time to think.
The real culprit is the "publish or perish" culture and the "impact agenda". Their effect is to encourage high-throughput, low-thought work. It is ironic that research councils and senior academics are responsible for decreasing the quality of science.
Well, speaking as a professional statistician, I can certainly confirm that I don't like seminars with a lot of maths. I can rarely follow what's going on. As you say, it often takes time to follow complex maths, and that time is simply not available during a presentation.
Moreover, in my experience, many people who present a lot of maths during statistics seminars are not skilled presenters, which doesn't help.
As for your comments on the "publish or perish" culture and the effect it has on the quality of science, I couldn't agree more!