Bad Pharma: Chapter 1
I recently wrote about some of my thoughts on Ben Goldacre's new book, Bad Pharma. As I mentioned in that post, I have quite a lot to say about it, and today I'd like to share my thoughts on chapter 1.
Chapter 1 of Bad Pharma is entitled “missing data”, and tells us about the problem of incomplete publication of clinical trials. The overall message of this chapter could be summarised as follows: it is not possible for doctors to practise evidence-based medicine if the evidence is not available to them, and the evidence is frequently not available.
That message is an important one, and it is sound. However, much of the detail in this chapter is questionable. That’s a shame. When the overall message is so important and really not in doubt, I don’t understand why Goldacre feels the need to embellish it the way that he does. The facts are serious enough without sexing them up.
So let’s look at some of the detail of what Goldacre says.
I mentioned in my last post on this subject that we only have to get as far as page 2 to find our first cherry-picked statistic, where Goldacre presents an almost certainly unrepresentative, but scary-sounding, figure from a secondary analysis of a single paper, rather than quoting from a systematic review.
On page 19, Goldacre explains the importance of systematic reviews, and tells us “I am giving you a clean overview of the literature, because I will be explaining that evidence using systematic reviews”. That’s a worthy aim: citing single papers can be misleading, as any given paper may have found unusual and unrepresentative results by some kind of fluke. Systematic reviews, which take account of the entirety of the literature, are much more likely to give an honest impression. But unfortunately, Goldacre doesn’t live up to that promise.
On the very same page as he promises to use systematic reviews to make his case, he tells us about a single study—not a systematic review—that investigated publication bias. Publication bias is indeed a serious problem: it is the tendency for positive studies to be published, while negative studies are more likely to remain unpublished. This means that the published literature often gives an overly optimistic view of any given intervention, which is clearly a problem.
The study Goldacre cites to illustrate this is Turner et al 2008. Oddly, Goldacre doesn’t give a citation to this study: I only know this because I happen to be fairly familiar with the literature on this kind of thing. This study found truly scary results: 37 of 38 positive studies were published, compared with 14 of 36 negative studies, giving an odds ratio of 58.
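For anyone who wants to check that figure, the odds ratio follows directly from the counts just quoted (a quick back-of-the-envelope check on my part, with a little rounding):
odds of publication for a positive study = 37/1 = 37
odds of publication for a negative study = 14/22 ≈ 0.64
odds ratio = 37/0.64 ≈ 58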
Wow, that’s shocking, isn’t it?
Maybe, but it’s also rather unusual, and not representative of publication bias as a whole. Having just promised us that he was going to quote systematic reviews, Goldacre homed in on one particularly scary paper instead of quoting a perfectly good systematic review. That systematic review found odds ratios of 2.73 for inception cohort studies and 5.00 for regulatory cohort studies (the same methodology that Turner et al used).
So if you look at the systematic review, we still have clear evidence that positive studies are more likely to be published than negative studies, and that’s a bad thing. However, it isn’t nearly as quantitatively bad as Goldacre’s cherry-picked statistics would have us believe.
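To give a feel for what that difference means in practice, here is a purely illustrative calculation (the 80% publication rate for positive studies is my own assumption, chosen only to make the comparison concrete): if 80% of positive studies are published, their odds of publication are 4. With an odds ratio of 2.73, negative studies would then be published about 59% of the time (odds 4/2.73 ≈ 1.5); with an odds ratio of 5.00, about 44% of the time (odds 0.8). With an odds ratio of 58, only about 6% of negative studies would be published (odds 4/58 ≈ 0.07).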
Some of the language used is very misleading as well. On page 27, we are told, in the context of unpublished clinical trial data, that “data is withheld from everyone in medicine, from top to bottom”. That’s simply not true. Even when clinical trials of licensed drugs are unpublished, they are still made available to the regulators. Goldacre is quite correct, of course, to say that they should be available to everyone else as well, but it’s very misleading to say that they’re “withheld” from everyone, because they’re just not.
I also have a bit of a problem with the word “withheld” in this context, a word Goldacre uses repeatedly. It implies an active process of trying to hide data, for which there is simply no evidence. If a clinical trial is not published, it’s far more plausible that no-one could be bothered to get round to publishing it than that it was actively hidden. Cock-up is always more likely than conspiracy.
Then we have data presented in misleading contexts. For example, on page 44, we are told of a study which found that a third of all trials failed to reach their recruitment target. The implication is that it’s those evil pharma companies again, not doing things properly. But the problem is that that study doesn’t look at pharma-sponsored trials at all: it looks at trials funded by the UK Medical Research Council (MRC) and the Health Technology Assessment (HTA) Programme. A reader who doesn’t bother to go and check the references would never know that, and would assume that it is the pharmaceutical industry up to no good.
We also have plain old factual inaccuracies. In the context of reporting adverse events in clinical trials to the regulator, Goldacre tells us on page 59 that “you only have to tell the regulators about side effects reported in studies looking at specific uses for which the drug has a marketing authorisation”, which is simply not true: pharmaceutical companies are obliged to report adverse events from any indication. On page 61 he tells us that you don’t have to tell European regulators about adverse events in trials outside Europe, but in reality, you do. In fact, bizarrely, Goldacre even acknowledges on page 61 that you do have to tell regulators about side effects in unlicensed indications, despite what he said on page 59. You have to wonder why he didn’t go back and edit page 59 at that point, or indeed why his editor didn’t (he did have an editor, didn’t he?).
There are a whole lot of rather strange things written about access to the Tamiflu data, but that would take a whole blogpost in itself, so I won’t go into it today. Maybe I’ll write that blogpost sometime soon.
Anyway, despite the many flaws in this chapter, Goldacre also makes some good and valid points. Publication bias is a real problem, and although he exaggerates some of the statistics on it, it’s certainly something that deserves to be widely known, so Goldacre deserves credit for publicising it in this way.
He is also careful to point out, correctly, that publication bias is not just a problem of the pharmaceutical industry, and that it’s a big problem for academic research as well.
One of the most important points Goldacre makes in this chapter, and with which I agree wholeheartedly, is that there is a deeply entrenched and unjustifiable culture of secrecy surrounding the regulatory process. Pharmaceutical companies submit large amounts of very detailed data on their trials to regulators, but the regulators then keep the data secret. As Goldacre rightly says on page 28, “This is an extraordinary and perverse situation”. It’s hard to disagree with that.
Clinical study reports submitted to regulators are vastly more detailed than publications in peer reviewed journals. Most doctors would probably never bother to read them. However, for those doctors who do want to read about the evidence on a drug thoroughly, clinical study reports are an invaluable resource. I have to agree with Goldacre 100% when he says that it’s scandalous that doctors and patients don’t routinely have access to them.
So all in all, there are some very important messages in chapter 1 of Bad Pharma. Much of the evidence base on many of the drugs in common use is not available to doctors and patients, and that is unacceptable. This story is a powerful enough one, and I think it’s a shame that Goldacre felt the need to embellish it in the way he did. That embellishment, to my mind, is problematic, but I’ll write more about that another day.
Thank you for writing this balanced piece. It is vitally important that everyone is held to a high standard of reporting, including Ben, and it is disappointing to hear of some cherry-picking.
On balance, however, his book is doing everyone a big service by exposing some real problems. I think you've done a great job of being both critical and complimentary.
Hi there
Always good to have other people's thoughts; I'm always keen to update the book and have it as clean and clear as possible.
This is a very strong allegation of cherry picking on publication bias. But the book describes the systematic review data on publication bias and individual studies. It's entirely reasonable to discuss individual studies, and go on to explain what the evidence overall finds in systematic reviews, especially since the systematic reviews are *extremely* (!) well signposted as being the most reliable evidence. In fact there's no other way to do it: the methods and results of individual studies need to be explained to a lay audience, otherwise a reader cannot know what kind of research is being included in the systematic review.
"[Turner] found truly scary results: 37 of 38 positive studies were published, compared with 14 of 36 negative studies… it’s also rather unusual, and not representative of publication bias as a whole"
The Turner paper covers all antidepressants approved over a decade or so. As such it's an extremely important (and very well cited) paper, highly relevant to everyday clinical practice, on a widely prescribed class of drugs which are prescribed to millions. It would almost be perverse not to discuss it. Overall the HTA systematic review from 2010 finds that studies with positive results are about twice as likely to be published, as discussed in the book.
"Some of the language used is very misleading as well. On page 27, we are are told, in the context of unpublished clinical trial data “data is withheld from everyone in medicine, from top to bottom”. That’s simply not true. Even when clinical trials of licensed drugs are unpublished, they are still made available to the regulators."
It's bizarre to claim that the book doesn't cover this. Information given to regulators can be withheld from doctors and patients. This is one of the key themes of the book, discussed repeatedly, and one of the most widely discussed themes since it came out: it's not enough that data are given to regulators and then withheld - to greater and lesser extents in different circumstances - from doctors, researchers, and patients. This isn't glossed over: it's a central argument of the book.
"We also have plain old factual inaccuracies. In the context of reporting adverse events in clinical trials to the regulator, Goldacre tells us on page 59 that “you only have to tell the regulators about side effects reported in studies looking at specific uses for which the drug has a marketing authorisation”, which is simply not true: pharmaceutical companies are obliged to report adverse events from any indication. On page 61 he tells us that you don’t have to tell European regulators about adverse events in trials outside Europe, but in reality, you do. In fact, bizarrely, Goldacre even acknowledges on page 61 that you do have to tell regulators about side effects in unlicensed indications, despite what he said on page 59, and you have to wonder why he didn’t go back and edit page 59 at that point, or indeed why his editor didn’t (he did have an editor, didn’t he?)"
I'm sorry to say that this seems like an almost wilfully perverse reading of the book, to me, but if that sentence isn't clear enough already then I imagine a single word change will make it even clearer; I'll happily have a look at doing that.
On page 59 the book explains that GSK were able to withhold data from the regulator for off-label uses of paroxetine because of a loophole in the legislation. (This was a very widely documented loophole, and the reason why - after a 3 year inquiry - the MHRA concluded that they couldn't press charges against GSK for withholding data, as explained in that very section of the book). On page 61 I explain that this loophole was closed after the GSK/paroxetine scandal.
There's no inconsistency: on page 59 the book explains the loophole, on page 61 the book explains that the loophole gets fixed. I would imagine that Adam Jacobs knows this story (and the 3 year MHRA investigation) well, so his misunderstanding seems peculiar.
Long story here:
http://www.mhra.gov.uk/Howweregulate/Medicines/Medicinesregulatorynews/CON014153
Shorter story here:
http://www.mhra.gov.uk/home/groups/comms-po/documents/news/con014162.pdf
[[QUOTE]]
* "The MHRA has concluded its four year investigation into Glaxosmithkline and its antidepressant drug Seroxat. The investigation focused on whether GSK had failed to inform the MHRA of information it had on the safety of Seroxat in under 18’s in a timely manner. The investigation was undertaken with a view to a potential criminal prosecution for breach of drug safety legislation. It was the largest investigation of its kind in the UK, and included the scrutiny of over 1 million pages of evidence.
* "The decision taken by Government Prosecutors, based on the investigation findings and legal advice, is that there is no realistic prospect of a conviction in this case, and that the case should not proceed to criminal prosecution. The legislation in force at the time was not sufficiently strong or comprehensive as to require companies to inform the regulator of safety information when the drug was being used for, or tested outside its licensed indications.
…
* "Professor Kent Woods, MHRA Chief Executive, said: “I remain concerned that GSK could and should have reported this information earlier than they did. All companies have a responsibility to patients, and should report any adverse data signals to us as soon as they discover them. This investigation has revealed important weaknesses in the drug safety legislation in force at the time. Subsequent legislation has partially addressed the problem, but we will take immediate steps to ensure the law is strengthened further, so that there can be no doubt as to companies’ obligations to report safety issues.” "
[[END QUOTE]]
Ben Goldacre
www.badscience.net
Many thanks for taking the time to reply, Ben.
Just a couple of points:
In fact there’s no other way to do it: the methods and results of individual studies need to be explained to a lay audience, otherwise a reader cannot know what kind of research is being included in the systematic review.
Yes, that's a perfectly reasonable position. But since you happened to pick the one study with by far the most extreme results as your first example of an individual study, I stand by my use of the phrase "cherry picking".
Information given to regulators can be withheld from doctors and patients.
Indeed it can, and as I acknowledged in my post, we're all agreed that that's a bad thing. However, in the book you say that data are withheld from everyone. That's not true if the data are disclosed to regulators, is it?
if that sentence isn’t clear enough already then I imagine a single word change will make it even clearer, I’ll happily have a look at doing that.
Yes, I think it would. Just change "have" to "had", and it would all make perfect sense. Many thanks for looking at that.
There’s no inconsistency: on page 59 the book explains the loophole, on page 61 the book explains that the loophole gets fixed.
I think there is an inconsistency as written, but if page 59 is changed to make it clear that the loophole is no longer present, that problem would go away. However, you still have the claim on page 61 that the EMA don't need to know about trials outside Europe. Perhaps that was true at some time in the past for all I know, but it isn't now, and page 61 is most definitely written in the present tense.
I agree that the Song et al meta-analysis is a very useful summary of the data on publication bias. However, in my view, it does not escape the errors in interpretation that Goldacre and others have made in looking at editorial bias or lack of it. Such authors have implicitly assumed that negative and positive studies submitted to journals were of equal quality. This is what I have called the Q hypothesis. However, if (as would be logical) authors submit to journals based on probability of acceptance, what I have called the P hypothesis (see http://f1000research.com/articles/1-59/v1 and http://f1000research.com/articles/2-17/v1 ), we would expect to see similar acceptance rates but a difference in the quality of negative and positive papers submitted. This is in fact what Lynch et al found (but they don't seem to have appreciated its significance). See the abstract below, which is extremely revealing. Goldacre cites this paper (p34), but did he read it?
My declaration of interest is here:
http://www.senns.demon.co.uk/Declaration_Interest.htm
Reference
Lynch JR, Cunningham MR, Warme WJ, Schaad DC, Wolf FM, Leopold SS. Commercially funded and United States-based research is more likely to be published; good-quality studies with negative outcomes are not. J Bone Joint Surg Am. 2007;89(5):1010-8. Epub 2007/05/03.
Abstract: BACKGROUND: Prior studies implying associations between receipt of commercial funding and positive (significant and/or pro-industry) research outcomes have analyzed only published papers, which is an insufficiently robust approach for assessing publication bias. In this study, we tested the following hypotheses regarding orthopaedic manuscripts submitted for review: (1) nonscientific variables, including receipt of commercial funding, affect the likelihood that a peer-reviewed submission will conclude with a report of a positive study outcome, and (2) positive outcomes and other, nonscientific variables are associated with acceptance for publication. METHODS: All manuscripts about hip or knee arthroplasty that were submitted to The Journal of Bone and Joint Surgery, American Volume, over seventeen months were evaluated to determine the study design, quality, and outcome. Analyses were carried out to identify associations between scientific factors (sample size, study quality, and level of evidence) and study outcome as well as between non-scientific factors (funding source and country of origin) and study outcome. Analyses were also performed to determine whether outcome, scientific factors, or nonscientific variables were associated with acceptance for publication. RESULTS: Two hundred and nine manuscripts were reviewed. Commercial funding was not found to be associated with a positive study outcome (p = 0.668). Studies with a positive outcome were no more likely to be published than were those with a negative outcome (p = 0.410). Studies with a negative outcome were of higher quality (p = 0.003) and included larger sample sizes (p = 0.05). Commercially funded (p = 0.027) and United States-based (p = 0.020) studies were more likely to be published, even though those studies were not associated with higher quality, larger sample sizes, or lower levels of evidence (p = 0.24 to 0.79). CONCLUSIONS: Commercially funded studies submitted for review were not more likely to conclude with a positive outcome than were nonfunded studies, and studies with a positive outcome were no more likely to be published than were studies with a negative outcome. These findings contradict those of most previous analyses of published (rather than submitted) research. Commercial funding and the country of origin predict publication following peer review beyond what would be expected on the basis of study quality. Studies with a negative outcome, although seemingly superior in quality, fared no better than studies with a positive outcome in the peer-review process; this may result in inflation of apparent treatment effects when the published literature is subjected to meta-analysis.
Thank you so much for that contribution, Stephen. I must admit that it hadn't occurred to me to think of things in that way, but having read your comment here and the papers you linked to, I agree entirely with your conclusions. It does indeed seem implausible that existing evidence on editorial bias, which claims to show that editors are not biased, can be taken at face value.