More zombie statistics
There is an oft-quoted figure that 50% of all clinical trials are never published. It's surprisingly popular for a figure with no evidence to support it, as I've written about before. And since I wrote that post, another study has been published showing disclosure rates of 89%.
So I was a little dismayed when I saw an article in Nature News with the headline "Half of US clinical trials go unpublished". My first thought was that it was simply repeating the same old zombie statistic, but on reading the article, it turned out to be reporting a new study (this one, to be specific).
The only problem is that the study did not show that half of US clinical trials go unpublished. To show that, it would need to look at a random sample of US clinical trials. Instead, it looked at a sample of 600 clinical trials that had already had results posted on clinicaltrials.gov, and it did indeed find that half of that sample had not been published.
That is really not the same thing as saying half of all trials go unpublished. This was a selected sample of trials whose results had already been disclosed on a public website. So although it's true that half of them had not been published in a peer-reviewed journal, 100% of them had had their results disclosed in the public domain.
Some may argue that there is no need to publish results in a journal as well if they are in the public domain anyway. I'm not entirely sure I agree with that argument, but that's a subject for another day. However, it is entirely plausible that some of those trialists who had posted their results on clinicaltrials.gov considered their job of disclosure already done, and so gave publication in a peer-reviewed journal less priority than those trialists who had not posted results on clinicaltrials.gov.
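To see concretely how conditioning on disclosure can distort the headline number, here is a toy simulation in Python. All the probabilities are made up purely for illustration; none of them come from the Riveros et al. study. It simply assumes that trialists who post summary results are somewhat less likely to also pursue journal publication:

```python
import random

random.seed(42)

# Toy model: each trial may post summary results on clinicaltrials.gov,
# and may also be published in a journal. We assume (purely for
# illustration) that posting results lowers the chance of also pursuing
# journal publication.
N = 100_000
published_total = 0
posted_total = 0
posted_and_published = 0

for _ in range(N):
    posted = random.random() < 0.4       # assume 40% post summary results
    p_journal = 0.5 if posted else 0.8   # assumed journal-publication rates
    journal = random.random() < p_journal
    published_total += journal
    if posted:
        posted_total += 1
        posted_and_published += journal

print(f"Unpublished overall:            {1 - published_total / N:.0%}")
print(f"Unpublished among posters only: {1 - posted_and_published / posted_total:.0%}")
```

Under these made-up numbers, about 50% of the trials that posted results are unpublished in journals, yet only about 32% of all trials are unpublished. A sample conditioned on disclosure tells you very little about the overall rate.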
Anyway, given that the latest study tells us nothing about what proportion of studies overall have their results published, can we please knock that "half of all trials go unpublished" statistic on the head with a large baseball bat before it rises up and goes on its zombie-like way?
Update 5 December 2013, 7.15 am:
Paul Ivsin (who has also left a perceptive comment below) has written a more thorough blogpost about this study than my few brief notes above. I'd encourage you to read it.
Your denialism is getting pretty desperate.
The best currently available evidence, published in June 2013, looks at trials on clinicaltrials.gov: the most recent trials posted, a three-year slice, with three years of follow-up:
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0068409
"The majority of the trials (54.8%) had no evidence of results, based on either linked result articles or basic summary results."
Your energetic campaign against transparency, whilst claiming you support it, is getting very peculiar.
So to be clear, Ben, do you stand by your comment earlier today that the paper I describe above shows that "half of all trials are missing"?
Wow, did I mis-express myself in a tweet from a bus last week.
The evidence on the prevalence of missing results is extensive.
This paper (as I discussed on the AllTrials blog, in a press release, and with journalists) is a really important new finding. It shows - as argued at length in Bad Pharma - that journals are often a terrible place to report trial results. They permit primary outcome switching, flawed analyses, and so on. There are many trials ostensibly "published" but with their primary outcomes and details on methods still withheld. This is why AllTrials calls for registration, summary results, and a CSR where one has been produced.
I completely agree with you about CSRs, Ben. They are vastly more reliable than journal publications, and IMHO you are absolutely right to describe it as scandalous that they are not made routinely available, as I've written previously. Whether it's realistic or desirable for availability of CSRs to replace publications altogether is another question, of course, but I think we can agree that it's daft for publications to be the primary method of making results available.
However, that wasn't really the point of this blogpost, which was to pick up on the Nature News headline about half of trials being unpublished, which of course was not what the Riveros et al. study showed. And since you have apparently acknowledged that your tweet supporting the Nature News interpretation was mis-expressed (no biggie, we've all tweeted things that didn't seem very wise on mature reflection), presumably you'd agree that the headline was misleading. So I'm a little puzzled by your accusation of "denialism".
Also, if we're all agreed that publishing results on clinicaltrials.gov is more reliable than publishing them in journals, then even if it were true that half of trials were unpublished in journals, would it even matter?
Until we get away from the idea that journals should be the proper outlet for clinical trial results we won't get this fixed. Self publication is the only way we can make sure that responsibility for publication is not divided. See my comment to this effect in 'Bad Karma' http://www.ingentaconnect.com/content/maney/mew/2013/00000022/00000004/art00005
I find it hard to disagree with that, Stephen. I think it may take us a while to get there, given how firmly the whole system of peer-reviewed publications is embedded in science, but you're right, it does seem bizarre that we rely on the flawed system of journals, with all their capriciousness, for something as important as the dissemination of trial results.
The Bad Karma article is, sadly, paywalled. As you know, I've already read it, but for the benefit of those who haven't, do you think you might find the time to post it on your own website? It deserves a wider audience.
Adam,
Perhaps Ben's point is that we can keep the "50% missing" statistic alive indefinitely as long as we commit to continuously changing our definition of "missing"? And also ignoring bothersome studies that contradict our predefined conclusions, of course.
(On that last point, he does seem to be engaged in a campaign to redefine "best currently available evidence" as "the study that I personally deem best." He made the same gaffe in a comment on my blog - again ignoring contradictory data and only acknowledging selected studies. That doesn't strike me as a particularly honest or respectful way to treat the data, but I admit it does save a lot of time and effort and thought - never mind the comprehensive review, just ask Ben.)
Considering the totality of the evidence, it seems reasonably clear that publication of trial results has improved consistently over the course of the last decade, with some real acceleration in the rates of public posting of data since FDAAA in 2007. That would seem to be a good thing, but sadly, "consistent improvement" does not sell books or speaking engagements. So I suspect the zombie statistic is going to live on for quite some time.
I totally agree with Stephen - this idea that information is not in the public domain until it is published in a journal is ludicrous. A lot of trials don't show amazing new and innovative findings; they are just adding to the pool of data out there, but it is nearly impossible to get a paper published that doesn't show anything new and exciting - the journals aren't interested. So putting the information out there in a different way (e.g. through clinicaltrials.gov or another website) is the only way forward.
I'd be very interested to read Stephen's article on this as well (sadly, my enthusiasm does not reach the $48 level required to breach the paywall). Here's hoping that it finds a public venue.
Many thanks to Stephen for posting the Bad Karma article. See the next post on this blog (Good Pharma) for the link.
Having now looked at this paper, I just can't see what Goldacre's gripe is. The conclusion of the paper is that would-be meta-analysts should go to ClinicalTrials.gov to get their data, since the data will be more likely to be found there and will be more extensive and useful. It does not conclude: "what a scandal, only 50% of trials are published". In any case, Goldacre himself has conceded elsewhere that the publication obligation can be fulfilled by publishing on the web.
The technical criticism would be that we need a 'reverse study': one in which we looked at published papers to see how many of them also have results posted on ClinicalTrials.gov.
Of course, neither of these two studies would answer the question of what proportion of studies are eventually published, although, in a sense, what is important here is a Kaplan-Meier type analysis of time to publication. Such a paper would require a sample of initiated trials.
In any case, I think this is rather a good study of the question it attempts to answer. (Well above the usual EBM methodology standard.) I have one criticism: I think that their figure 2, which conditions on trials that were registered and published, would have been more usefully replaced by a K-M type curve for all trials.
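For anyone unfamiliar with the idea, here is a minimal sketch of that kind of time-to-publication analysis, using the Python lifelines library and entirely invented data (a real analysis would need a sample of initiated trials, as noted above). Trials still unpublished at the end of follow-up are treated as censored observations:

```python
# A rough sketch of a Kaplan-Meier "time to publication" analysis.
# The data below are invented purely for illustration.
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter

# Months from trial completion to journal publication; a 0 in
# `published` means the trial was still unpublished when follow-up
# ended (i.e. the observation is censored).
months    = [12, 18, 24, 30, 36, 36, 36, 9, 15, 36]
published = [1, 1, 1, 0, 0, 0, 1, 1, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(durations=months, event_observed=published, label="time to publication")

# The survival function here reads as "proportion of trials still
# unpublished at each time point".
ax = kmf.plot_survival_function()
ax.set_xlabel("Months since trial completion")
ax.set_ylabel("Proportion still unpublished")
plt.show()
```

(The same curve could be drawn with any survival-analysis package; lifelines is just a convenient choice.)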
It would not surprise me to learn that the authors disapprove of the way it is being interpreted.
The criticism 'only 50% of trials are published' is just silly. Good luck in your fight against the living dead.