Bad Pharma: Chapter 1
I recently wrote about some of my thoughts on Ben Goldacre's new book, Bad Pharma. As I mentioned in that post, I have quite a lot to say about the book, and today I'd like to share my thoughts on chapter 1.
Chapter 1 of Bad Pharma is entitled “missing data”, and tells us about the problem of incomplete publication of clinical trials. The overall message of this chapter could be summarised as follows: it is not possible for doctors to practise evidence-based medicine if the evidence is not available to them, and the evidence is frequently not available.
That message is an important one, and it is sound. However, much of the detail in this chapter is questionable. That’s a shame. When the overall message is so important and really not in doubt, I don’t understand why Goldacre feels the need to embellish it the way that he does. The facts are serious enough without sexing them up.
So let’s look at some of the detail of what Goldacre says.
I mentioned in my last post on this subject that we only have to get as far as page 2 to read our first cherry-picked statistic, where Goldacre presents an almost certainly unrepresentative, but scary sounding, statistic from a secondary analysis of a single paper, rather than quoting from a systematic review.
On page 19, Goldacre explains the importance of systematic reviews, and tells us “I am giving you a clean overview of the literature, because I will be explaining that evidence using systematic reviews”. That’s a worthy aim: citing single papers can be misleading, as any given paper may have found unusual and unrepresentative results by some kind of fluke. Systematic reviews, which take account of the entirety of the literature, are much more likely to give an honest impression. But unfortunately, Goldacre doesn’t live up to that promise.
On the very same page as he promises to use systematic reviews to make his case, he tells us about a single study—not a systematic review—that investigated publication bias. Publication bias is indeed a serious problem. This is the tendency of positive studies to be published, while negative studies are more likely to remain unpublished. This means that the published literature often gives an overly optimistic view of any given intervention, which is clearly a problem.
The study Goldacre cites to illustrate this is Turner et al 2008. Oddly, Goldacre doesn’t give a citation to this study: I only know this because I happen to be fairly familiar with the literature on this kind of thing. This study found truly scary results: 37 of 38 positive studies were published, compared with 14 of 36 negative studies, giving an odds ratio of 58.
Wow, that’s shocking, isn’t it?
Maybe, but it’s also rather unusual, and not representative of publication bias as a whole. After Goldacre had just promised us he was going to quote systematic reviews, he instead homed in on one particularly scary paper rather than quoting a perfectly good systematic review. That systematic review found odds ratios of 2.73 for inception cohort studies and 5.00 for regulatory cohort studies (the same methodology that Turner et al used).
So if you look at the systematic review, we still have clear evidence that positive studies are more likely to be published than negative studies, and that’s a bad thing. However, it isn’t nearly as quantitatively bad as Goldacre’s cherry-picked statistics would have us believe.
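If you want to see where the figure of 58 comes from, it can be reproduced directly from the counts quoted above (37 of 38 positive trials published, 14 of 36 negative trials published) using the standard 2×2 odds-ratio formula. This is just an illustrative sanity check on the arithmetic, not anything from the book itself:

```python
def odds_ratio(pub_pos, unpub_pos, pub_neg, unpub_neg):
    """Odds ratio for publication of positive vs negative trials:
    (odds that a positive trial is published) /
    (odds that a negative trial is published)."""
    return (pub_pos / unpub_pos) / (pub_neg / unpub_neg)

# Counts from Turner et al 2008 as quoted in the text:
# 37 of 38 positive trials published, 14 of 36 negative trials published.
or_turner = odds_ratio(37, 38 - 37, 14, 36 - 14)
print(round(or_turner))  # 58, matching the figure quoted in the book
```

Plugging in the systematic review's figures instead (odds ratios of roughly 2.73 and 5.00) shows just how much of an outlier the Turner et al result is.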
Some of the language used is very misleading as well. On page 27, we are told, in the context of unpublished clinical trial data, that “data is withheld from everyone in medicine, from top to bottom”. That’s simply not true. Even when clinical trials of licensed drugs are unpublished, they are still made available to the regulators. Goldacre is quite correct, of course, to say that they should be available to everyone else as well, but it’s very misleading to say that they’re “withheld” from everyone, because they’re just not.
I also have a bit of a problem with the word “withheld” in this context, a word Goldacre uses repeatedly. It implies an active process of trying to hide data, for which there is simply no evidence. If a clinical trial is not published, it’s far more plausible that it’s simply because no-one could be bothered to get round to publishing it, rather than that it was actively hidden. Cock-up is always more likely than conspiracy.
Then we have data presented in misleading contexts. For example, on page 44, we are told of a study which found that a third of all trials failed to reach their recruitment target. The implication is that it’s those evil pharma companies again, not doing things properly. But the problem is that that study doesn’t look at pharma-sponsored trials at all: it looks at trials funded by the UK Medical Research Council (MRC) and the Health Technology Assessment (HTA) Programme. A reader who doesn’t bother to go and check the references would never know that, and would assume that it was the pharmaceutical industry up to no good.
We also have plain old factual inaccuracies. In the context of reporting adverse events in clinical trials to the regulator, Goldacre tells us on page 59 that “you only have to tell the regulators about side effects reported in studies looking at specific uses for which the drug has a marketing authorisation”, which is simply not true: pharmaceutical companies are obliged to report adverse events from any indication. On page 61 he tells us that you don’t have to tell European regulators about adverse events in trials outside Europe, but in reality, you do. In fact, bizarrely, Goldacre even acknowledges on page 61 that you do have to tell regulators about side effects in unlicensed indications, despite what he said on page 59. You have to wonder why he didn’t go back and edit page 59 at that point, or indeed why his editor didn’t (he did have an editor, didn’t he?)
There’s a whole lot of rather strange things written about access to the data about Tamiflu, but that would take a whole blogpost in itself to talk about, so I won’t go into that today. Maybe I’ll write that other blogpost sometime soon.
Anyway, despite the many flaws in this chapter, Goldacre also makes some good and valid points. Publication bias is a real problem, and although he exaggerates some of the statistics on it, it’s certainly something that deserves to be widely known, so Goldacre deserves credit for publicising it in this way.
He is also careful to point out, correctly, that publication bias is not just a problem of the pharmaceutical industry, and that it’s a big problem for academic research as well.
One of the most important points Goldacre makes in this chapter, and with which I agree wholeheartedly, is that there is a deeply entrenched and unjustifiable culture of secrecy surrounding the regulatory process. Pharmaceutical companies submit large amounts of very detailed data on their trials to regulators, but the regulators then keep the data secret. As Goldacre rightly says on page 28, “This is an extraordinary and perverse situation”. It’s hard to disagree with that.
Clinical study reports submitted to regulators are vastly more detailed than publications in peer-reviewed journals. Most doctors would probably never bother to read them. However, for those doctors who do want to read about the evidence on a drug thoroughly, clinical study reports are an invaluable resource. I have to agree with Goldacre 100% when he says that it’s scandalous that doctors and patients don’t routinely have access to them.
So all in all, there are some very important messages in chapter 1 of Bad Pharma. Much of the evidence base on many of the drugs in common use is not available to doctors and patients, and that is unacceptable. This story is a powerful enough one, and I think it’s a shame that Goldacre felt the need to embellish it in the way he did. That embellishment, to my mind, is problematic, but I’ll write more about that another day.