Worrying scientific illiteracy among our elected representatives
Thanks to the wonders of Twitter, I have just found out (via @bengoldacre and @DrEvanHarris) that one of our esteemed elected representatives, David Tredinnick MP, has tabled 3 Early Day Motions singing the praises of homoeopathy.
These EDMs are based on 3 published papers in the peer reviewed literature, which claim to show homoeopathy is effective. As anyone who has taken the workshop that I run for EMWA on critical reading of medical literature will know, just because something is published in a peer reviewed journal does not mean it is true.
So let's look at the papers.
The first is a randomised, double-blind, placebo-controlled study of homoeopathy in the treatment of insomnia. It appears to show a significant benefit for homoeopathy over placebo. However, there are a number of problems with the study. First, it is based on a small sample size (N = 30), so even if the results are legitimate, the difference could very easily be a statistical type I error (ie obtaining a P value of < 0.05 just by chance, which happens 1 time in 20 even when there is no real effect). But there are also some deficiencies in the paper that make the results less credible. For one thing, although they report results from 30 patients, 33 patients started the study and 3 dropped out and weren't analysed. We are not told what happened to those 3 patients (2 of whom were in the homoeopathy group). OK, it's only 3 patients, but in a study of this size, excluding them from the analysis could easily skew the results, turning a non-significant result into an apparently significant one.
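To see how much damage excluding just 3 patients can do in a trial this small, here is a quick Monte Carlo sketch. The numbers are invented (the paper doesn't tell us what happened to the dropouts): 33 simulated patients with no real treatment effect, from which we then drop the 2 worst responders in the "homoeopathy" arm and the best responder in the "placebo" arm before analysing.

```python
# Monte Carlo sketch with invented numbers: how excluding a few
# unfavourable patients from a tiny null trial inflates false positives.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims = 5000
p_all, p_dropped = [], []

for _ in range(n_sims):
    # 33 patients, no real treatment effect: 17 "homoeopathy", 16 "placebo"
    homeo = rng.normal(0.0, 1.0, 17)
    placebo = rng.normal(0.0, 1.0, 16)
    p_all.append(stats.ttest_ind(homeo, placebo).pvalue)

    # now exclude 3 patients in the least flattering way:
    # the 2 worst responders on homoeopathy, the best responder on placebo
    homeo_kept = np.sort(homeo)[2:]       # 15 patients left
    placebo_kept = np.sort(placebo)[:-1]  # 15 patients left
    p_dropped.append(stats.ttest_ind(homeo_kept, placebo_kept).pvalue)

rate_all = np.mean(np.array(p_all) < 0.05)          # ~5%, as it should be
rate_dropped = np.mean(np.array(p_dropped) < 0.05)  # well above 5%
```

With all patients analysed, the false-positive rate sits at its nominal 5%; after the selective exclusions it is inflated several-fold, despite there being no treatment effect at all. That is why unexplained dropouts in an N = 30 trial matter.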
The statistical methods are also a little opaque. It seems odd that they used the Kruskal-Wallis test, a test generally used for comparisons of 3 or more groups, when only 2 groups were compared. OK, the Kruskal-Wallis test can be used to compare 2 groups, but generally it isn't, so its use here starts to ring alarm bells about the level of statistical expertise behind the paper.
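To be fair, nothing is actually wrong with the choice, just unusual: for 2 groups the Kruskal-Wallis test is mathematically equivalent to the two-sided Mann-Whitney U test (in its asymptotic form, without continuity correction), which is the test one would normally reach for. A quick check on some made-up data:

```python
# Illustrative data only: for two groups, Kruskal-Wallis gives the same
# P value as the asymptotic two-sided Mann-Whitney U test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(0.0, 1.0, 15)
group_b = rng.normal(0.5, 1.0, 15)

p_kw = stats.kruskal(group_a, group_b).pvalue
p_mw = stats.mannwhitneyu(group_a, group_b, alternative="two-sided",
                          use_continuity=False, method="asymptotic").pvalue
# p_kw and p_mw agree to floating-point precision
```

So the result itself isn't suspect on that account; it's the unconventional choice that makes one wonder who was driving the statistics.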
There is also a lot of emphasis placed on significant improvements from baseline within the homoeopathy group. That's not surprising: patients enrolled in a clinical trial generally get better, thanks to placebo effects and regression to the mean, whatever treatment they are given. The important statistical test is between the homoeopathy group and the placebo group at endpoint, and although that's also significant, it is less so, and receives less emphasis in the paper.
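A small simulation (with assumed numbers, not the trial's data) shows why within-group improvement is so uninformative. Give both arms an identical genuine improvement from baseline, so the "treatment" does nothing: the within-group test comes out significant most of the time, while the between-group test stays at its nominal 5% false-positive rate.

```python
# Sketch with assumed effect sizes: within-group change-from-baseline tests
# are routinely "significant" even when treatment and placebo are identical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_sims, n = 2000, 15
within_sig = between_sig = 0

for _ in range(n_sims):
    # change from baseline: the same true improvement (1 SD) in both arms
    change_homeo = rng.normal(1.0, 1.0, n)
    change_placebo = rng.normal(1.0, 1.0, n)
    if stats.ttest_1samp(change_homeo, 0.0).pvalue < 0.05:
        within_sig += 1   # "significant improvement from baseline"
    if stats.ttest_ind(change_homeo, change_placebo).pvalue < 0.05:
        between_sig += 1  # the comparison that actually matters

within_rate = within_sig / n_sims    # high: baseline tests nearly always "work"
between_rate = between_sig / n_sims  # ~5%: no real difference between arms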
But perhaps most worryingly, the extent of data presentation is very limited, so there is no way to know whether the authors' conclusions are plausible. It would be usual to present some measure of variation of the outcomes (for example a standard deviation), but this does not appear anywhere in the paper. All we are shown is an average value, and we are not even told whether those average values are means or medians.
When the quality of reporting is that poor, it is hard to trust the analysis presented.
The second paper is a double-blind randomised trial of homoeopathy vs fluoxetine in the treatment of moderate to severe depression. This has already been discussed comprehensively on Michael Grayer's blog. The big problem with the study is that it was designed as a non-inferiority study, and such a design means that you need to be absolutely sure that the comparator treatment is effective. Well, there are no doubt some patients who benefit greatly from fluoxetine, but in unselected patients, most antidepressants are not all that much more effective than placebo. There was no attempt to exclude placebo responders from the study, so what we are probably seeing is that there was simply a large placebo effect in both groups. Another problem with the study is that there were substantial numbers of dropouts in each group, and the way in which the missing data was handled is unclear in the paper, but likely to be of great importance.
The third study is an in-vitro study of the activity of a homoeopathic preparation against breast cancer cells. Well, that's been discussed pretty comprehensively elsewhere (here and here), and I have little to add to that, except to say that I too am dismayed that a paper with no statistical analysis whatsoever somehow managed to slip through peer review.
Let's just hope that most of our elected representatives are not taken in by all this daft mumbo-jumbo.