More evidence of Burzynski's dishonesty
I've been following the story of Stanislaw Burzynski, fake cancer doctor and proven fraudster, for a couple of years now. Regular readers of this blog may recall that I've written about him on more than one occasion before. However, I've just spotted something new about the dodgy way in which he disseminates his research findings. I'm surprised I hadn't noticed it before, but in case you haven't either, I'm going to share it with you.
Although Burzynski has registered a great many trials of antineoplaston treatment, publications of those trials are extremely thin on the ground. Indeed, of the 61 trials he has registered, he has yet to publish the results of a single completed one.
However, he does have one clinical trial publication from 2006, which presents data from 18 patients cobbled together from 4 trials. I recently used this as an example in a workshop I was teaching on critical evaluation of medical literature. It was a pretty good paper, if your criterion for judging a paper is having plenty of useful teaching points when you're trying to teach people how to spot flaws in published papers. It would be hard to think of another criterion by which it would be judged a good paper.
Perhaps the biggest teaching point from this paper (though certainly not the only one) was the importance of accounting for all the patients in the study. A good paper should tell you how many patients were considered for inclusion in the study, how many patients were actually included in the study, and how many patients were included in the analysis. If (as is often the case) the number of patients analysed is less than the number of patients included in the study, then a good paper should carefully report reasons why patients were excluded from the analysis.
Burzynski's paper failed spectacularly on that count. It simply reported that from 4 clinical trials, 18 patients were evaluable. It gave no information whatever on how many patients were originally included in the studies, nor on the reasons why patients didn't make it into the final 18. No-one in the workshop believed for a minute that the total number of patients recruited into 4 clinical trials was 18. I'm pretty sure that the workshop participants came away from the day's training having learned something about why accounting for all patients is so important.
Now, I always think that it's a poor training session where the trainer doesn't also learn something, and thanks to the discussion we had of Burzynski's paper, I learned something about it that I hadn't noticed before. I found out not only that the paper was rather misleading about where the patients came from, but also that the way in which patients had been lost from the analysis was far worse than I had imagined.
We are told in the paper that one of the 4 studies which contributed patients, the CAN-1 study, contributed only a single patient. But the paper also says that the CAN-1 study had been completed and had been published previously. It even gives a citation. Here is the paper.
The first thing that struck me as odd is that it describes CAN-1 as a retrospective study. Reading the 2006 paper, one is left with the impression that all the patients came from prospective clinical trials. Obviously that's not true if one of the 4 studies was retrospective.
But here's what really surprised me. As mentioned above, the 2006 paper says that only one patient came from the CAN-1 study. But according to the publication of the CAN-1 study itself, it included 43 patients, of whom 36 were evaluable.
So why does the 2006 paper include only 1 of the 36 evaluable patients from the CAN-1 study? And how many patients were recruited to the other 3 studies, but not included in the paper?
Something very fishy is going on here.