Many papers have been published that compare clinical trial publications sponsored by the pharmaceutical industry with those not sponsored by industry. Last week, the Cochrane Collaboration published a systematic review by Lundh et al of those papers. The stated objectives of the review were to investigate whether industry sponsored studies have more favourable outcomes and differ in risk of bias, compared with studies having other sources of sponsorship.
There are some rather extraordinary things about this review.
The most extraordinary thing is a high level of discordance between the results and the conclusions. This is a little odd, since one of the outcomes they investigated was whether industry studies were more prone to discordance between results and conclusions, so you’d have thought Lundh et al would understand the importance of making them match.
But nonetheless, they don’t seem to. The conclusions of the review state “our analyses suggest the existence of an industry bias”. In their results section, however, they investigated various items known to be associated with bias, such as randomisation and blinding. They found that industry studies had a lower risk of bias than non-industry studies. I’ve written before about bias in papers about bias, and this seems to be another classic example of the genre. This is disappointing in a Cochrane review. Cochrane reviews are supposed to be among the highest quality sources of evidence that there are, but this one falls a long way short.
It appears that they drew this conclusion because they found that industry sponsored trials were more likely to produce results or conclusions favourable to the sponsor’s product than independent trials (although that finding may not be as sound as they think it is, for reasons I’ll explain below). They therefore concluded that industry-sponsored trials must be biased, because they’re systematically different from independent trials. That does not make logical sense. At least three explanations are possible: industry trials are biased towards favourable results, independent trials are biased towards the null, or the two types of trial investigate systematically different questions. Any of those is possible, and Lundh et al have not presented any evidence that allows us to distinguish between them. Moreover, given that where they did measure bias they found less of it in industry studies, the conclusion that the difference must be a result of industry sponsorship seems hard to support.
Another of Lundh et al’s conclusions was that industry-sponsored trials are more likely to have discordant results and conclusions: for example, claiming in the conclusions that a result was favourable when the results don’t support that claim (I know, it’s hard to imagine anyone could do that, isn’t it?). This is stated as fact, despite the little drawback that their meta-analysis estimate of the difference between industry and non-industry studies did not reach statistical significance. Also, there is one study I happen to be aware of that seems directly relevant to this analysis (Boutron et al 2010), as it investigated “spin” in conclusions, which seems to me to be exactly the same concept as discordance between results and conclusions. That study was not included in their analysis, for reasons not explained in the paper. It can’t be because they didn’t know about it: they cited it in their discussion (and, incidentally, misrepresented its results when they did so). Boutron et al found no significant difference between industry and non-industry studies in the prevalence of spin in conclusions, so if it had been included it could have weakened their results further.
I mentioned above that I was not totally convinced by their conclusion that industry-sponsored studies are more likely to have results favourable to the sponsor’s products than independent studies. Oddly enough, until I read this systematic review, I had taken that assertion as established fact. I have seen various papers that found that result, and had felt that the finding was robust. However, I now have my doubts.
One of the big challenges for any systematic review is the problem of publication bias. This is the tendency of positive studies to be published and negative studies to be quietly forgotten. This is a big problem, because if you look at all published studies in a systematic review, you are actually looking at a biased subset of studies, usually those with positive results.
A good systematic reviewer will investigate the extent to which this is a problem. The Cochrane handbook, the instruction manual for Cochrane systematic reviews, recommends that reviewers investigate publication bias by means of funnel plots or statistical tests. The idea behind such methods is that large studies are likely to be published whatever the results, as so much has been invested in them that the final stage of publication is unlikely to be overlooked, whereas small studies may well be unpublished if they are negative, but are more likely to be published if they are positive. If you see a correlation between study size and effect size, with smaller studies showing larger effects than larger studies, that is strongly suggestive of publication bias. For those who are not familiar with these concepts, Wikipedia has a good explanation.
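To see how this plays out, here’s a quick toy simulation of my own (entirely made up, nothing to do with Lundh et al’s data, and the publication probabilities are just illustrative assumptions). Every simulated study estimates a true log relative risk of zero, but significant “favourable” results always get published while most of the rest quietly disappear:

```python
import numpy as np

# Toy simulation of publication bias (my own invention, not data from
# Lundh et al).  Every study estimates a true log relative risk of zero;
# significant "favourable" results are always published, while only a
# fraction of the remainder ever see print.
rng = np.random.default_rng(42)

n = 10_000
ses = rng.uniform(0.05, 0.5, size=n)            # large SE = small study
effects = rng.normal(0.0, ses)                  # each study's estimate
favourable = effects / ses > 1.96               # nominally significant
published = favourable | (rng.random(n) < 0.1)  # 10% of the rest appear

mean_small = effects[published & (ses > 0.3)].mean()
mean_large = effects[published & (ses < 0.15)].mean()
print(f"mean published effect, small studies: {mean_small:.3f}")
print(f"mean published effect, large studies: {mean_large:.3f}")
```

Even though every study is estimating a true effect of zero, the published small studies show a markedly inflated average effect compared with the published large ones, and that asymmetry is exactly what a funnel plot is designed to reveal.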
However, despite the recommendation in the Cochrane handbook that reviewers should investigate publication bias, Lundh et al seem to have largely overlooked it. They note that a small number of studies published only as conference abstracts or letters gave similar results to the main analysis, and conclude on that basis that publication bias is unlikely. That is a very superficial examination of publication bias, falling well short of what should happen in a Cochrane review.
Fortunately, they present their data in full, so it is easy enough for anyone reading the review to do their own test for publication bias. So I did this for their primary analysis: comparing industry and non-industry studies for their probability of producing favourable results. The results are strongly indicative of publication bias. This is what the funnel plot looks like:
As you can see, there is striking asymmetry here, with most small studies (those towards the bottom: the y scale is actually the reciprocal of the standard error of the relative risk, but this is strongly related to study size) having much larger effects than larger studies, and no small studies showing smaller effects. This is very strongly suggestive of publication bias. I also did a statistical test for publication bias (the Egger test, one of those recommended in the Cochrane handbook): the regression coefficient of effect size on standard error was 2.3, which was statistically significant at P = 0.026.
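For anyone who wants to try this on their own data, here is a rough sketch of the Egger test in Python (my own quick implementation, run on made-up illustrative numbers rather than the review’s data, so treat it as a sketch rather than a validated tool):

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Egger's regression test for funnel plot asymmetry.

    Regresses the standardised effect (effect / SE) on precision (1 / SE).
    The intercept of this regression equals the coefficient of a weighted
    regression of effect size on standard error; an intercept far from
    zero suggests small-study effects such as publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    res = stats.linregress(1.0 / ses, effects / ses)
    t = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t), df=len(effects) - 2)
    return res.intercept, p

# Made-up illustrative data: 20 log relative risks in which the smaller
# studies (larger SEs) report systematically larger effects.
rng = np.random.default_rng(1)
ses = np.linspace(0.05, 0.5, 20)
effects = 0.1 + 1.5 * ses + rng.normal(0, 0.02, size=20)

coef, p = egger_test(effects, ses)
print(f"asymmetry coefficient = {coef:.2f}, P = {p:.4f}")
```

The intercept of this regression of standardised effect on precision is mathematically equivalent to the coefficient of a weighted regression of effect size on standard error, which is the form of the figure I quoted above.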
So there is clear evidence that these results were subject to publication bias. It is therefore highly likely that their estimate of the difference between industry and non-industry studies was overstated. Maybe there isn’t really a difference at all. It’s very hard to tell, when the literature is not complete.
I could go on, as there are other flaws in the paper, but I think that’s long enough for one blog post. So to sum up, this Cochrane review had methods that fell short of what is expected for Cochrane reviews. Lundh et al found that industry sponsored studies, when assessed using well established measures of bias, were less likely to be biased than independent studies, and yet drew the opposite conclusion, based on nothing but speculation. This, in a study which investigated discordance between results and conclusions, is bizarre. Their main finding, that industry sponsored studies were more likely to generate favourable results than independent studies, appears to have been affected by publication bias, which makes it considerably less reliable than Lundh et al claim.
I am normally a great fan of the Cochrane Collaboration, which usually produces some of the best quality syntheses of clinical evidence that you will ever find. To see such a biased review from them is deeply disappointing.