Dianthus Medical Blog Archive

Misleading statistics from Sense About Science

I'm normally a huge, huge fan of Sense About Science. They do fantastic work in raising public awareness and understanding of scientific issues. In a world where people are bombarded with pseudoscientific nonsense from politicians, pedlars of quack 'alternative' treatments, and the like, their work is necessary, important, and usually very well executed.

So they, of all people, should understand the importance of careful use of statistics.

Today, however, they have fallen short of their usual standards. Today, they have launched a campaign which aims to ensure that all clinical trials are reported: undoubtedly a worthy aim. However, the campaign is greatly diminished by the fact that they use an out-of-date, cherry-picked, and misleading statistic to kick it off.

The first sentence of their announcement states "Over half of all clinical trials never publish results". The evidence for this is a paper by Ross et al published in 2009, which studied clinical trials completed up to 2005. That paper did indeed find that only 46% of trials were published, although they limited their literature search to Medline, so the actual publication rate may have been higher had they used a more comprehensive search including other databases such as Embase.

However, that's not the main problem with that paper.

The main problem is that it is out of date. Over the last few years, the problem of publication bias has become very well known, and publication practices have changed. It is worth noting that the first guideline recommending that pharmaceutical companies publish all their data, regardless of outcome, was published as recently as 2003. Uptake of those guidelines was slow at first, but the publication of GPP2 in 2009 gave the initiative a new lease of life.

Most big pharmaceutical companies now have policies committing them to publish the results of all their trials. GSK's policy is fairly typical. Those policies simply didn't exist a few years ago.

So when looking at completeness of publication, it is crucially important to look at up-to-date research, and Sense About Science seem to have failed dismally on that count.

So what does more up-to-date research tell us?

Sadly, I'm not aware of a huge amount of bang-up-to-date research, but I am aware of two papers more recent than the one by Ross et al quoted by Sense About Science. Bourgeois et al published a study on completeness of publication in 2010. Even their research is not wonderfully up to date, including studies completed only up to 2006. However, they found that 362/546 studies (66%) were published in peer-reviewed journals, and a further 75 had results disclosed on a website, giving a total of 437 studies (80%) with disclosed results.
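The arithmetic behind those percentages is easy to verify. A minimal sketch, using only the counts quoted above from Bourgeois et al:

```python
# Counts reported by Bourgeois et al (2010), as quoted above.
total = 546        # trials in the cohort
published = 362    # published in peer-reviewed journals
web_only = 75      # further trials with results disclosed only on a website

journal_rate = published / total
disclosed_rate = (published + web_only) / total

print(f"Published in journals: {journal_rate:.0%}")    # 66%
print(f"Results disclosed:     {disclosed_rate:.0%}")  # 80%
```

Note that the 80% "disclosed" figure counts web-only disclosure as well as journal publication, which matters for the comparison with the Ross et al figures below.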

In fact, Ross et al themselves have published a more up-to-date study, which they published in 2012. That study looked at studies completed up to 2008, and found that 68% were published. That figure may well be an underestimate, for two reasons. First, they didn't include results disclosed on clinicaltrials.gov. While some may argue that disclosing results on a website is not the same thing as publication, it does get the results into the public domain, which is the important thing. Second, they restricted their analysis to trials sponsored by the NIH. Bourgeois et al found that government-funded research had the lowest rate of disclosure, at 55% (the highest was research funded by the pharmaceutical industry, at 88%, contrary to the popular myth that incomplete publication is primarily an industry problem).

We don't know what has happened more recently, but it does seem clear that the assertion that fewer than half of trials are published is simply no longer tenable.

So does this mean that the campaign to ensure all clinical trial data are reported is a waste of time?

No.
Despite Sense About Science's misuse of statistics, completeness of publication of clinical trial results is still an important issue. While we don't know what the current rate of publication is, even if it is over 80%, as suggested by the research of Bourgeois et al, it's almost certainly still less than 100%, which is where it should be. Sense About Science also make the perfectly valid point that even trials conducted in the past, back in the days when it may well have been true that fewer than 50% were published, are still relevant. Any attempts to get that massive backlog of unpublished trials published retrospectively would certainly be welcome.

But nonetheless, I have a real problem with an organisation like Sense About Science misusing statistics in this way. The arguments for complete publication of clinical trial data are strong enough on their own merits without having to exaggerate the numbers for dramatic effect.

If Sense About Science want to retain their credibility as an authoritative voice on scientific matters, it is crucially important that they ensure their own use of statistics is beyond reproach. I fear that by their careless use of out-of-date statistics, Sense About Science are guilty of exactly the sort of pseudoscientific behaviours that they would rightly be quick to criticise from others.

Update 10 January, 13.45:

In response to this blogpost, Sense About Science have now updated their website. They now claim "Around half of all clinical trials have not been published", and instead of the previous citation of Ross et al's 2009 study, they now cite a 2010 systematic review.

A systematic review is better evidence than a single study, of course, but I'm still not sure their claim is supported. The systematic review doesn't seem to report a summary statistic for the proportion of trials reported (though it's quite a long review, and it's possible I missed it), but the claim of "around half" seems to be broadly consistent with some of the numbers in the data tables in the paper.

However, it is still based on old data. Many of the studies included in the systematic review date from the 1990s. Practices have changed a lot since then, and even if it were true that only half of clinical trials done in the 1990s were ever published (which may well have been the case), we still have the problem that those statistics do not apply to trials done more recently. The fact is we do not have good data on the proportion of "all clinical trials" that have been published, so I still think it is misleading to make a claim that implies we do.

While I appreciate Sense About Science taking the trouble to update their website, I am still not convinced that their claims are backed up by good quality evidence.

