More on industry vs non-industry publications
Yesterday I wrote about a recent paper in Annals of Internal Medicine, comparing publications funded by the pharmaceutical industry with those funded from other sources. The main focus of that post was on the reasons why industry publications were more likely to report favourable results.
Today I'd like to write a bit more about the other finding of that study, which appeared in the conclusions of the paper's abstract (actually before the conclusion relating to the primary outcome, which is a little strange) as "those [drug trials] funded by industry were less likely to be published within 2 years of study completion".
On the face of it, you might think that fewer industry trials get published. However, that would be wrong, and the abstract is not telling the whole truth: although fewer industry-funded trials were published within 24 months, if you take a longer time horizon, that finding no longer holds.
So you might be tempted to conclude that industry sponsors are slower at getting round to publishing their trials, even if they do catch up in the end. But I'm not sure that even that is a sound conclusion.
It is not clear why a cutoff point of 2 years was chosen: the paper does not explain it, nor does it tell us whether the cutoff was chosen prospectively. If it was chosen after seeing the results, that choice would be highly questionable. It would actually have made more sense to use a cutoff of 3 years, as the authors state that their search strategy ensured that at least 3 years elapsed between trial completion and the search for published results.
But 2 years should be long enough, shouldn't it? If a study isn't published in 2 years, surely that's evidence that someone isn't really trying, isn't it?
Not necessarily.
Granted, if you are really making publication a priority, it should always be possible to publish within 2 years. But in the real world, it often takes longer to do things than it should in an ideal world. Let's look at some realistic timelines for a large, multicentre study. Once the study is completed, it could easily take up to 6 months for all the data management to be complete and the statisticians to analyse the data. Yes, it can be quicker, but often it isn't. Perhaps it then takes another 3 months to write the clinical study report (and it's certainly not uncommon for it to take longer).
You are then ready to start writing the paper. At least you should be ready. In real life, it may only be at that stage that you really start to think seriously about writing the paper, as all your efforts up to now have been focussed on analysing the data and getting the study report written. So perhaps you then need some time to agree which of the investigators and which of the sponsor's personnel will be authors on the paper. That should have been agreed in advance, of course, but often it isn't, and for a multicentre study with many investigators and a large sponsor project team, it can sometimes be tricky to agree on an author list. You also have to agree on which journal to submit to. So perhaps another 3 months have gone by before you actually start writing the paper.
We are now a whole year after the end of the study.
Writing a paper itself, of course, is quick. At Dianthus, we generally expect to write a paper in about a week. But writing is only one small part of the process when a paper has many authors: you then have to review drafts and make sure everyone agrees on the content. In fact, you should really agree on the content before you start writing (usually by agreeing on a manuscript outline), which means you can't start writing immediately. So realistically, agreeing on the content, writing the paper, reviewing it, revising it, and getting everyone to sign off on a final version could easily take another 6 months.
We are now 18 months after the end of the study, and we are ready to submit the paper to our journal. Many journals can take 6 months to publish something, which takes us up to 2 years.
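Just to check the arithmetic, here is a quick back-of-the-envelope tally in Python of the stages sketched above. The durations are the illustrative assumptions from this post, not data from any real trial or from the Annals paper.

```python
# Back-of-the-envelope tally of the publication timeline sketched above.
# All durations are illustrative assumptions from this post, not real data.
stages_months = {
    "data management and statistical analysis": 6,
    "writing the clinical study report": 3,
    "agreeing authorship and choice of journal": 3,
    "agreeing content, writing, reviewing and revising the paper": 6,
    "journal peer review and publication": 6,
}

total = sum(stages_months.values())
for stage, months in stages_months.items():
    print(f"{stage}: {months} months")
print(f"Total: {total} months ({total / 12:.0f} years)")
```

Nothing in that list requires anyone to be dragging their feet; the months simply add up.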
Now of course, all those steps can go much faster than that if everyone is committed to making the publication an urgent priority. But those timings are pretty realistic for large multicentre studies, and if unexpected delays intervene, or the first-choice journal rejects the paper, it could easily take longer than 2 years for a paper to be published, even when reasonable efforts are made to publish the study as soon as the trial is finished.
So a cutoff of 2 years tells you something about how quickly trials are published, but it would be wrong to conclude that a trial not published within 2 years is evidence of unreasonable delay.
Anyway, given that it takes longer to publish the results of a large multicentre trial than those of a smaller trial, it is important to note that industry-funded trials were much more likely to be multicentre and to have larger sample sizes. The conclusion that industry-funded trials are less likely to be published within 2 years may therefore simply be an artefact of the larger studies funded by industry.
It would be interesting to investigate the relationship between study size and time to publication, and to do a multivariate analysis to see if industry funding is still a significant predictor of slower publication when study size is controlled for. I shall post another rapid response on the Annals of Internal Medicine website shortly to ask whether that could be done.
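For what it's worth, here is a rough sketch in Python of the kind of analysis I have in mind, using the statsmodels package and entirely made-up, synthetic data (nothing here comes from the Annals paper). It simply illustrates how one could test whether funding source still predicts publication within 2 years once study size is adjusted for.

```python
# A sketch of the multivariate analysis suggested above, run on entirely
# synthetic data (no real trial data are used). In this toy example only
# study size, not funding source, drives publication within 2 years.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical number of trials

# Industry-funded trials are simulated as tending to be larger.
industry = rng.integers(0, 2, size=n)
sample_size = np.where(industry == 1,
                       rng.lognormal(6.0, 0.8, size=n),
                       rng.lognormal(5.0, 0.8, size=n)).round()

# In this simulation, publication within 2 years depends only on size.
p_published = 1 / (1 + np.exp(-(2.0 - 0.003 * sample_size)))
published_2y = rng.binomial(1, p_published)

df = pd.DataFrame({"industry": industry,
                   "sample_size": sample_size,
                   "published_2y": published_2y})

# If funding is only a proxy for study size, the industry coefficient
# should be close to zero once sample size is in the model.
result = smf.logit("published_2y ~ industry + sample_size", data=df).fit()
print(result.summary())
```

Whether the coefficient for industry funding really does shrink towards zero once study size is in the model is, of course, exactly the empirical question that would need to be answered with the real data.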
In case anyone is interested, the response I wrote yesterday is now online here.