Misleading statistics from Sense About Science
I'm normally a huge, huge fan of Sense About Science. They do fantastic work in raising public awareness and understanding of scientific issues. In a world where people are bombarded with pseudoscientific nonsense from politicians, pedlars of quack 'alternative' treatments, and the like, their work is necessary, important, and usually very well executed.
So they, of all people, should understand the importance of careful use of statistics.
Today, however, they have fallen short of their usual standards. Today, they have launched a campaign which aims to ensure that all clinical trials are reported: undoubtedly a worthy aim. However, their campaign is greatly diminished by the fact that they use an out-of-date, cherry-picked, and misleading statistic to kick it off.
The first sentence of their announcement states "Over half of all clinical trials never publish results". The evidence for this is a paper by Ross et al, published in 2009, which studied clinical trials completed up to 2005. That paper did indeed find that only 46% of trials were published, although the authors limited their literature search to Medline, so the actual publication rate may have been higher had they used a more complete search including other databases such as Embase.
However, that's not the main problem with that paper.
The main problem is that it is out of date. Over the last few years, the problem of publication bias has become very well known, and publication practices have changed. It is worth noting that the first guideline recommending that pharmaceutical companies publish all their data, regardless of outcome, was published as recently as 2003. Uptake of those guidelines was slow at first, but the publication of GPP2 in 2009 gave the initiative a new lease of life.
Most big pharmaceutical companies now have policies committing them to publish the results of all their trials. GSK's policy is fairly typical. Those policies simply didn't exist a few years ago.
So when looking at completeness of publication, it is crucially important to look at up-to-date research, and Sense About Science seem to have failed dismally on that score.
So what does more up-to-date research tell us?
Sadly, I'm not aware of a huge amount of bang-up-to-date research, but I am aware of two papers more recent than the one by Ross et al quoted by Sense About Science. Bourgeois et al published a study on completeness of publication in 2010. Even their research is not wonderfully up to date, including only studies completed up to 2006. However, they found that 362/546 studies (66%) were published in peer-reviewed journals, and a further 75 had results disclosed on a website, giving a total of 437 studies (80%) with disclosed results.
In fact, Ross et al themselves have published a more up-to-date study, in 2012. That study looked at trials completed up to 2008, and found that 68% were published. That figure may well be an underestimate, for two reasons. First, they didn't include results disclosed on clinicaltrials.gov. While some may argue that disclosing results on a website is not the same thing as publication, it does get the results into the public domain, which is the important thing. Second, they restricted their analysis to trials sponsored by the NIH. Bourgeois et al found that government-funded research had the lowest rate of disclosure, at 55% (the highest was research funded by the pharmaceutical industry, at 88%, contrary to the popular myth that incomplete publication is primarily an industry problem).
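For anyone who wants to check the arithmetic, here is a trivial calculation of the Bourgeois et al disclosure figures, using only the raw counts quoted above:

```python
# Raw counts from Bourgeois et al (2010), as quoted above.
journal = 362   # trials published in peer-reviewed journals
website = 75    # trials with results disclosed only on a website
total = 546     # trials in the cohort

print(f"Published in journals: {journal / total:.0%}")              # 66%
print(f"Disclosed anywhere:    {(journal + website) / total:.0%}")  # 80%
```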
We don't know what has happened more recently, but it does seem clear that the assertion that fewer than half of trials are published is simply no longer tenable.
So does this mean that the campaign to ensure all clinical trial data are reported is a waste of time?
No.
Despite Sense About Science's misuse of statistics, completeness of publication of clinical trial results is still an important issue. While we don't know what the current rate of publication is, even if it is over 80%, as suggested by the research of Bourgeois et al, it's almost certainly still less than 100%, which is where it should be. Sense About Science also make the perfectly valid point that even trials conducted in the past, back in the days when it may well have been true that fewer than 50% were published, are still relevant. Any attempts to get that massive backlog of unpublished trials published retrospectively would certainly be welcome.
But nonetheless, I have a real problem with an organisation like Sense About Science misusing statistics in this way. The arguments for complete publication of clinical trial data are strong enough on their own merits without having to exaggerate the numbers for dramatic effect.
If Sense About Science want to retain their credibility as an authoritative voice on scientific matters, it is crucially important that they ensure their own use of statistics is beyond reproach. I fear that by their careless use of out-of-date statistics, Sense About Science are guilty of exactly the sort of pseudoscientific behaviours that they would rightly be quick to criticise from others.
Update 10 January, 13.45:
In response to this blogpost, Sense About Science have now updated their website. They now claim "Around half of all clinical trials have not been published", and instead of the previous citation of Ross et al's 2009 study, they now cite a 2010 systematic review.
A systematic review is better evidence than a single study, of course, but I'm still not sure their claim is supported. The systematic review doesn't seem to report a summary statistic for the proportion of trials reported (though it's quite a long review, and it's possible I missed it), but the claim of "around half" seems to be broadly consistent with some of the numbers in the data tables in the paper.
However, it is still based on old data. Many of the studies included in the systematic review date from the 1990s. Practices have changed a lot since then, and even if it were true that only half of clinical trials done in the 1990s were ever published (which it may well have been), those statistics do not apply to trials done more recently. The fact is we do not have good data on the proportion of "all clinical trials" that have been published, so I still think it is misleading to make a claim that implies we do.
While I appreciate Sense About Science taking the trouble to update their website, I am still not convinced that their claims are backed up by good quality evidence.
This seems to be a very odd piece. I am unsure how you can say Sense About Science are misleading, when the overwhelming evidence points to substantial problems with trial access. See the evidence outlined below.
Are you saying that providing access to clinical data is a bad thing? Or asking patients what they think about publishing their data is a bad thing? Are you saying the European Ombudsman and the EMA are wrong?
Therefore, what is the alternative? Not to publish in full?
I suspect if you ask patients, particularly those who participate in clinical trials, they would be horrified to know that their data was being withheld.
If you want to know the up-to-date position then I suggest you look at the Cochrane Tamiflu review, which shows the current situation is no better.
1) Overall, 362 (66.3%) trials had published results. Industry-funded trials reported positive outcomes in 85.4% of publications, compared with 50.0% for government-funded trials and 71.9% for nonprofit or nonfederal organization–funded trials (P < 0.001). Rates of trial publication within 24 months of study completion ranged from 32.4% among industry-funded trials to 56.2% among nonprofit or nonfederal organization–funded trials without industry contributions (P = 0.005 across groups).
Bourgeois FT, Murthy S, Mandl KD. Outcome reporting among drug trials registered in ClinicalTrials.gov. Ann Intern Med. 2010 Aug 3;153(3):158-66.
2) Among 635 clinical trials completed by 31 December 2008, 294 (46%) were published in a peer reviewed biomedical journal, indexed by Medline, within 30 months of trial completion.
Despite recent improvement in timely publication, fewer than half of trials funded by NIH are published in a peer reviewed biomedical journal indexed by Medline within 30 months of trial completion. Moreover, after a median of 51 months after trial completion, a third of trials remained unpublished.
Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ. 2012 Jan 3;344:d7292.
3) We characterized the 79,413 registry and 2178 results of trial records available as of September 2010. From a sample cohort of results records, 78 of 150 (52%) had associated publications within 2 years after posting.
ClinicalTrials.gov provides access to study results not otherwise available to the public. Although the database allows examination of various aspects of ongoing and completed clinical trials, its ultimate usefulness depends on the research community to submit accurate, informative data.
Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov Results Database — Update and Key Issues. N Engl J Med. 2011 Mar 3;364:852-60. doi:10.1056/NEJMsa1012065.
4) Clinical trials registration has the potential to contribute substantially to improving clinical trial transparency and reducing publication bias and selective reporting. These potential benefits are currently undermined by deficiencies in the provision of information in key areas of registered records.
Viergever RF, Ghersi D. The Quality of Registration of Clinical Trials. PLoS ONE. 2011;6(2):e14701. doi:10.1371/journal.pone.0014701.
Carl Heneghan
Director, Centre for Evidence-Based Medicine. www.cebm.net
Thanks for your comment, Carl. I wonder how carefully you read my post? I thought I had explained these things, but I'll answer your points specifically in case it helps.
"I am unsure how you can say sense about science are misleading"
Because they made a very specific claim, which happens not to be true. They claimed "Over half of all clinical trials never publish results" (which, in response to my blogpost, they've now changed to "Results from over half of all clinical trials have not been published", although I'm not sure that's any better). As the research from Bourgeois et al 2010 and Ross et al 2012 shows, that doesn't appear to be true for recent studies.
"Are you saying that providing access to clinical data is a bad thing?"
No, and I'm really not sure why you'd think that I did. I specifically said that I thought the goal of publishing all trials was a "worthy aim".
"Or asking patients what they think about publishing their data is a bad thing? Are you saying the European Ombudsman and the EMA are wrong?"
I don't think I made any comment on either of those questions. Is there a specific part of my blogpost that you interpret as doing so?
"If you want to know the upto date position then I suggest you look at the Cochrane tamiflu review. Which shows the current situation is no better.
I'm not sure what the relevance of that is. I was making a point about data from clinical trials in the last few years. Weren't most of the Tamiflu studies done many years ago? How does that tell us about the "current situation"?
Though since you bring up the subject of Tamiflu, I left a comment on a blogpost you wrote about that subject last month in which I asked you some questions, to which I have yet to receive a reply. If you get a chance to answer my comment there, that would be appreciated.
But anyway, that's a digression which has little relevance to the subject of my current blogpost. If you want to continue that particular discussion, I think your own blog would be a better medium (at least until I write my own blogpost about Tamiflu, which I may well be doing in the not too distant future).
"Overall, 362 (66.3%) trials had published results."
Quite. So do we now agree that the claim that over half of trials never publish results is misleading?
"Moreover, after a median of 51 months after trial completion, a third of trials remained unpublished"
So again, if a third of trials remain unpublished, the claim that over half of trials remain unpublished really doesn't stack up, does it?
"From a sample cohort of results records, 78 of 150 (52%) had associated publications within 2 years after posting."
And since we know that some studies take longer than 2 years to publish, you are citing yet more data that makes my point for me.
Let me repeat what I said in my original blogpost, as it doesn't seem to have been understood. I totally agree that 100% publication of clinical trial results is what we should be doing. I just think that any campaign to promote that worthy goal should stick to the facts. When you misrepresent data just to make your case sound stronger, you risk losing the moral high ground.
Adam, while I respect your stance, my personal view is that our energy would be better spent supporting and getting fully behind this initiative rather than knocking the way it is communicated.
Fair point, Ryan.
I certainly do support the initiative, and have signed the petition. In addition, in my capacity as a GAPP member, I've also co-authored a paper setting out some practical suggestions for how we might get more trials published.
It's partly because I think it's such an important initiative that I feel so strongly it should be based on honest statistics!
It's always fair to challenge a figure, but as you know, in order to challenge a figure based on what is, after all, a documented and repeatable methodology, you need to cite a figure with an equally sound basis. Why not repeat the review for trials since 2009?
Mind you, I can point you to one place where the publication rate is 0%. Step forward the Burzynski Clinic...
Good question, Guy.
The problem is that it's actually quite difficult to get bang-up-to-date statistics on publication rates, because it often takes longer than it should for papers to be published (which is another problem in its own right, of course). It's not at all uncommon for a paper to take more than two years to appear.
So the papers I cited in the blogpost, which show a 68-80% disclosure rate, are about the most up-to-date ones that I'm aware of, even though they are not as current as we'd like.
I'm actually toying with the idea of doing a study myself, in which we would look at trials going right up to the present and do some statistical modelling based on parametric survival models, to see if we can pin down how publication rates have changed over time. Now that we have databases like ClinicalTrials.gov and PubMed from which you can download XML data, a lot of this can be automated, which could make the project much more feasible than this sort of thing used to be, although some manual searching of databases will doubtless still be required. I haven't yet figured out whether the project I'd like to do is achievable within the amount of time I have to spend on it, but if it is, I'll be coming up with what I hope will be interesting statistics. A rough sketch of the core idea is below.
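To be concrete, here's a minimal, purely illustrative sketch of that idea: fit a Weibull model to time-to-publication data, treating trials that are still unpublished at the end of follow-up as right-censored observations rather than discarding them. The data below are entirely made up, and a real analysis would need covariates, sensitivity checks, and far more care:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: months from trial completion to publication.
# published == 0 means the trial was still unpublished at last follow-up,
# i.e. a right-censored observation.
months = np.array([14.0, 20.0, 9.0, 30.0, 27.0, 18.0, 36.0, 24.0, 12.0, 40.0])
published = np.array([1, 1, 1, 0, 1, 1, 0, 1, 1, 0])

def neg_log_likelihood(params):
    # Weibull model: published trials contribute the log-density,
    # unpublished (censored) trials contribute the log-survival function.
    log_k, log_lam = params  # optimise on the log scale so both stay positive
    k, lam = np.exp(log_k), np.exp(log_lam)
    z = months / lam
    log_pdf = np.log(k / lam) + (k - 1) * np.log(z) - z**k
    log_surv = -(z**k)
    return -np.sum(published * log_pdf + (1 - published) * log_surv)

res = minimize(neg_log_likelihood, x0=[0.0, np.log(24.0)], method="Nelder-Mead")
k, lam = np.exp(res.x)
median_months = lam * np.log(2) ** (1 / k)  # solve S(t) = 0.5 for a Weibull
print(f"shape = {k:.2f}, scale = {lam:.1f} months, "
      f"median time to publication = {median_months:.1f} months")
```

The appeal of the survival-analysis framing is that unpublished trials still carry information through the censoring term, rather than simply vanishing from the denominator, and the model can be extended with a completion-year covariate to estimate how publication rates have changed over time.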
Watch this space.
Yes, that would be interesting. It's also a sign to others in the industry that you can't rebut actual figures with personal opinion, however well-founded that might be.
Personally I'd like to see all medical science published. One of the problems facing patients today is that there is a veritable mountain of junk freely available which punts the various forms of woo, and good science to balance it is all too often hidden behind paywalls. Worse, some in the industry deliberately skew the freely available parts of studies - the abstract in particular - to support their agenda, and without seeing the full text this is not easily checked.
I think it is Carl's comment that is odd. I interpret Adam as saying that we need accurate statistics when making these claims. To argue with that simply because the cause is noble is to take the position that the end justifies the means* - that inaccurate statistics can be used to support a good cause. However, if we go down that path then in the end distrust in statistics simply grows. Surely, the standards we require for evidence about the efficacy of medicines are the same standards we should use in describing and evaluating the evidence base itself. I presume that Carl would not want it said of the Centre for Evidence-Based Medicine that it was quite happy to use inaccurate data provided the cause it supported was good.
Stephen
* Of course it is the standard error that justifies the mean :-)
My declaration of interest is here
http://www.senns.demon.co.uk/Declaration_Interest.htm