Research on industry vs non-industry publications
An interesting paper by Florence Bourgeois and colleagues was published in the Annals of Internal Medicine last week (sadly behind a paywall, but the abstract is available here).
The paper looked at outcomes of trials registered on clinicaltrials.gov, and compared those funded by the pharmaceutical industry with those funded by other sources. The results make interesting reading.
Their primary objective was to look at whether source of funding was associated with favourable published outcomes. Not surprisingly, it was: this paper adds to what is already quite an extensive literature showing that papers funded by industry are more likely to be favourable to their products. What we don't know, however, is why. This paper gives us some limited insight into the reasons, but does not really give us a definitive answer.
One possibility could be that most phase III trials, which are more likely to be positive than earlier, exploratory trials, are industry funded. However, Bourgeois et al.'s paper seems to speak against that possibility, as they found similar results when controlling for phase of study.
Another possible reason is that industry just does better trials, and that many non-industry trials are negative simply because they are not sufficiently well designed or conducted to detect a benefit of the study intervention even if it exists. We get a small glimpse into that possibility, as it turns out that industry-sponsored trials were significantly more likely than independent trials to recruit the planned number of patients. You would expect trials that met their recruitment target to be more likely to get a positive result, so that could possibly go some way to explaining the difference. Unfortunately, that possibility doesn't seem to have been explored in the paper. I've left a rapid response on the journal website asking for more information on that, and will report back here if it's forthcoming.
One of the most plausible reasons is that the pharmaceutical industry is just more picky about which trials to fund, and only funds those trials that have a reasonable prospect of success, but there is nothing in the paper to give us any information about that either way.
Another oft-postulated explanation is that publication bias is more prevalent in the pharmaceutical industry, and that negative results from industry are more likely to be unpublished, leading to an artificially high proportion of favourable studies in the subset of studies that get published.
Bourgeois et al.'s paper did not specifically investigate that possibility, although they did look at how many papers were published, and those results give us some interesting information. In the abstract, they report "Rates of trial publication within 24 months of study completion ranged from 32.4% among industry funded trials to 56.2% among nonprofit or nonfederal organization-funded trials without industry contributions (P = 0.005 across groups)". Well, that would suggest that publication bias is greater in industry funded studies.
However, it's not as simple as that. As well as reporting the number of studies published within 24 months of study completion, they also report the number of studies published at all, even if it took longer than 24 months. There the figures are very different, and we find industry funded studies are more likely to be published than government funded studies, albeit slightly (but not statistically significantly) less likely than non-profit, non-government studies. So although the industry seems to be slower to publish their studies (it's not clear whether that's because industry is just slower or because complex, multicentre studies, which are more likely to be industry sponsored, take longer to publish no matter who is doing it), they do get there in the end. If you add the number of studies that are unpublished, but have an electronic results synopsis available (which you may or may not consider to be a form of publication, but that's a discussion for another day), then industry-sponsored studies have by far the highest overall rate of publication, at 88%.
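As an aside, that "P = 0.005 across groups" comparison of publication rates is the sort of thing you can test with a simple chi-square test on a contingency table of published versus unpublished trials per funding source. The counts below are entirely hypothetical (the paper reports percentages, not the underlying numbers I'd need to reproduce its P value), so this is just a sketch of the method:

```python
# Hypothetical counts for illustration only -- the paper reports percentages
# (e.g. 32.4% of industry trials published within 24 months) but this sketch
# invents the sample sizes, so the resulting statistic is not the paper's.
groups = {
    "industry": (110, 230),                 # (published within 24 months, total)
    "government": (60, 110),
    "nonprofit_no_industry": (45, 80),
}

# Build a 2 x K contingency table: published vs not published, per funder.
published = [p for p, n in groups.values()]
unpublished = [n - p for p, n in groups.values()]

# Pearson chi-square statistic, computed by hand from expected counts.
col_totals = [p + u for p, u in zip(published, unpublished)]
row_totals = [sum(published), sum(unpublished)]
grand = sum(col_totals)

chi2 = 0.0
for row, row_total in zip((published, unpublished), row_totals):
    for obs, col_total in zip(row, col_totals):
        expected = row_total * col_total / grand
        chi2 += (obs - expected) ** 2 / expected

# With K = 3 funding groups the table has (2-1)*(3-1) = 2 degrees of
# freedom; the 5% critical value is 5.991, so chi2 above that would mean
# publication rates differ across funders by more than chance would explain.
print(f"chi-square = {chi2:.2f} on 2 df")
```

With real counts in place of the made-up ones, the same calculation (or `scipy.stats.chi2_contingency`, which also returns the P value) is how a "P across groups" figure like the paper's would be obtained.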
So, given the high rate of publication in industry studies, the publication bias argument is looking weak, although it probably can't be ruled out altogether.
One other interesting finding from this study is that industry-sponsored studies have a significantly lower rate of publications with primary outcomes inconsistent with the trial registration (9%, compared with 40% for government studies, 18% for non-profit with some industry contribution, and 33% for pure non-profit). That would speak against the argument that industry studies are more likely to be positive because they fiddle the results by, for example, picking a significant secondary outcome and pretending it was primary when the primary outcome was non-significant.