Bias in papers about bias
I have just read a paper describing how Evil Big Pharma manipulates the medical literature so that they can make more money from selling their drugs, no matter what the science says. That paper made me grumpy.
Well, if you are going to write a scientific paper criticising someone for introducing bias into the scientific literature, would it be too much to ask that you should do it in an unbiased way? What makes me grumpy is when people write papers about how evil and biased the pharmaceutical industry is (and this is certainly not the first such paper), but then themselves distort the facts to make a point.
Now, I’m not saying that the pharmaceutical industry is populated entirely by saints. It is true that some of the things that some people in the industry have done have indeed distorted the scientific literature. That is undeniably a bad thing. However, I do think it’s important to stick to the facts, and not to embellish them to make things sound worse than they are. If a paper misrepresents some facts, how can I trust anything else in the same paper? There may be some important points to be made, but by mixing them up with distortions and untruths, the whole message is weakened.
And that’s a shame, because the integrity of the medical literature is important.
So, which paper am I talking about, and why do I think it’s distorted? The paper, by Joel Lexchin, is entitled “Those who have the gold make the evidence: how the pharmaceutical industry biases the outcomes of clinical trials of medications”. Not that that’s an emotive title or anything.
I haven’t extensively fact-checked the paper, but it did cite a couple of pieces of evidence with which I was already familiar, and I couldn’t help but notice that those pieces of evidence had been badly misrepresented.
One of the things we are told in the paper is that industry-sponsored papers have a discordance between their results and conclusions, and that results are “spun” to yield favourable conclusions. The evidence cited to support this is a paper published in the BMJ in 2007 by Yank et al. That is what Yank et al concluded, but sadly their methods were deeply flawed, as I pointed out at the time. They assessed spin with a single reviewer, who was aware of the study hypothesis, and was not blinded to whether papers were sponsored by industry or not. The potential for bias in a study like that is huge, but Lexchin made no attempt to acknowledge the limitations of Yank et al’s study, presumably because it supported his argument.
In fact there is some other evidence, not cited in Lexchin’s paper, that argues against the idea that industry papers “spin” the results in their conclusions (at least not any more so than anyone else). A far more careful study by Boutron et al did not find any difference between industry-sponsored studies and independent studies in the amount of spin in their conclusions (this wasn’t reported in the paper, for some strange reason, but was clarified in subsequent letters).
Interestingly, one of the ways in which it’s possible to spin results is to write a paper that pretends that a favourable secondary outcome was the primary outcome all along. Why did Lexchin not mention that? Could it be that a recent paper by Bourgeois et al found that a mismatch between pre-specified and reported outcomes was significantly less likely in industry-sponsored studies? That’s a highly relevant piece of information if you’re trying to answer the question of whether the industry are any worse than anyone else at spinning results, and yet it was entirely left out of Lexchin’s paper. Ignoring evidence that doesn’t fit with your preconceived notion of what the results should be is itself a pretty serious form of spin.
The paper also talks about ghostwriting. As regular readers of this blog will know, that’s something of a specialist subject of mine. What really annoys me is when people write about ghostwriting and don’t make the distinction between ghostwriting and legitimate, transparent assistance from professional medical writers. Guess what? Lexchin doesn’t once mention the latter. Much of the discussion about ghostwriting is therefore rather confused.
But one bit struck me as a particularly heinous crime against accuracy: a claim that “evidence points to [ghostwriting] being widespread and systematic”. The “evidence” (and trust me, I use that term loosely) cited in support of that claim was a paper by Ross et al published in JAMA in 2008.
So, how widespread exactly did Ross et al find that ghostwriting was? Well, I don’t know, because Ross et al did not present any quantitative data whatsoever on the prevalence of ghostwriting. They found, using a very broad definition of ghostwriting (which again, confuses ghostwriting with legitimate and transparent writing assistance), that “some” papers were ghostwritten. Now, perhaps as a statistician I have greater expectations than most people that data should come with numbers, but I don’t see how anyone can conclude that something is “widespread” just because it happens “sometimes”. Oh, and Ross et al’s paper looked only at publications on a single drug sponsored by a single company, so even if it had presented quantitative data, it would be pretty bizarre to try to extrapolate its results to the entire pharmaceutical industry.
When Lexchin cites such mind-crushingly weak evidence to make his points, it’s clear that either he has made no attempt whatsoever to critically evaluate the literature he cites, or he is deliberately misrepresenting it just to fit his agenda. I honestly don’t know which would be worse. Either way, it makes it very hard to believe anything I read in the rest of the paper.
I could go on about this paper for much longer, but I imagine you’re getting slightly bored of reading this by now, so I’ll stop there.
All that remains is to say that the integrity of the medical literature is hugely important. It is literally a matter of life or death. We know that much of the medical literature is biased or otherwise imperfect, and that’s something that deserves to be taken extremely seriously. It’s certainly possible that the pharmaceutical industry may be responsible for biasing the literature, although as I’ve argued elsewhere, the evidence that the pharmaceutical industry does any worse than anyone else in this regard is pretty weak. It is important that we all—industry or otherwise—make an effort to improve standards in reporting of clinical trials, and highlighting any shortcomings in the way things are done at the moment is an important part of that.
But if we’re going to look at shortcomings, it is essential to do so in an unbiased and scientific manner. Distorting the evidence to fit a political agenda is not helpful. You would hope that people who write papers about other people distorting the evidence would know better.