Dianthus Medical Blog Archive

Breast cancer screening part 2

I blogged yesterday about how the latest breast cancer screening research had hit the news even though it had not yet been published. Later in the day I noticed huge numbers of tweets about the study, almost all of which seemed to say that it had now been "proven" that breast cancer screening did more good than harm. It's disappointing to see so many people uncritically believing what they hear in the media.

Anyway, the research has now been published, so as promised, here are some thoughts.

Bottom line: I'm not convinced.

There are two parts to the paper: one estimates the benefits, and one estimates the harms. The part that estimates the benefits seems reasonable up to a point, although there are some problems. Using data from a randomised controlled trial of screening, they calculate that 323 women would need to be screened every 2-3 years for 7 years to prevent one death from breast cancer. That figure is probably OK, given that it comes from a randomised trial. However, they then go on to estimate that extending the screening period from 7 years to 20 years would have a proportionally greater effect, and hence prevent one breast cancer death for every 113 women screened. That's called extrapolating beyond the limits of your data, and it's a bad thing: it assumes, without any evidence, that the benefit scales in proportion to the duration of screening.
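
To make the extrapolation concrete, here is a minimal sketch of the proportional scaling involved. This is my reconstruction of the arithmetic, not the paper's actual calculation, but it reproduces the published figure:

```python
# Numbers needed to screen (NNS) to prevent one breast cancer death,
# from the randomised trial: 323 women screened every 2-3 years for 7 years.
nns_trial = 323
years_trial = 7
years_extended = 20

# The extrapolation assumes the benefit grows in proportion to the
# duration of screening, so the NNS shrinks proportionally. That
# proportionality is the untested assumption.
nns_extended = nns_trial * years_trial / years_extended
print(round(nns_extended))  # 113, matching the figure in the paper
```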

It's also worth noting that they only look at breast cancer deaths. That's probably a reasonable measure of the benefits, but it would be more convincing if they could show a benefit in overall mortality. Classifying the cause of death isn't always as simple as it should be.

The harms from breast cancer screening, as I mentioned yesterday, result from false-positive diagnoses: in other words, being told you have breast cancer when in reality you don't. This can lead to anxiety, distress, and possibly unnecessary treatment. It is not a trivial matter. So the authors of the new paper tried to estimate how common false-positive diagnoses were.

Sadly, when we look at how they estimated the risk of false-positive diagnoses, the problems really start.

I found their methods totally unconvincing. They did not directly measure false-positive diagnoses (which, to be fair, would be quite hard to do), but estimated them by comparing the number of cases actually diagnosed with the number they would have expected to be diagnosed. As you can imagine, a great many assumptions are involved. How do you know how many cases are "expected"? Some of the assumptions were probably reasonable, but they were nonetheless guesses. And the real problem is that the final answer was highly sensitive to the assumptions they used.
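
In outline, the estimation works something like this. The sketch below is a toy with invented numbers, not the paper's model, but it shows the shape of the calculation:

```python
# Toy illustration of the observed-vs-expected logic. Both numbers
# below are invented, and the "expected" figure in particular rests
# on a stack of modelling assumptions.
observed_cases = 1383   # hypothetical diagnoses actually seen in the screened group
expected_cases = 1350   # hypothetical diagnoses expected in the absence of screening

# The excess is taken as the estimate of false-positive diagnoses.
estimated_false_positives = observed_cases - expected_cases
print(estimated_false_positives)  # 33
```

Notice that the answer is a small difference between two much larger numbers. That shape is exactly what makes the estimate so fragile.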

A technique often used in mathematical modelling in medical statistics is sensitivity analysis. This is where you allow your estimates to vary over plausible ranges and see what effect that has on your conclusions. They didn't do this.

So I've had a go at doing it for them. And the results are not encouraging.

One of the quantities they estimate is the relative incidence of breast cancer 7 years after the randomised trial started compared with at the beginning, taking into account age and time trends. They estimate it as 1.35. Well, it turns out that the precise value used has a huge effect on the outcome, assuming their equations are correct (which I'm not too sure about anyway, as I found it hard to follow the logic behind them, and the obvious typos in their first equation didn't help). If the real value were 1.25 instead of 1.35, you would calculate about 4 times as many false-positive diagnoses. When the results are that sensitive to the inputs, it's foolish in the extreme to rely on them. And that's just one example: there are other assumptions involved as well.
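
To show the kind of sensitivity I mean, here is the toy model from above with the relative incidence varied over a plausible range. The numbers are still invented and this is not the paper's actual equation, but it illustrates how an estimate built as a small difference between two large numbers can swing several-fold when one input shifts slightly:

```python
# Sensitivity of the toy observed-vs-expected model to the
# relative-incidence estimate; all input numbers are invented.
baseline_cases = 1000   # hypothetical incidence at the start of the trial
observed_cases = 1383   # hypothetical diagnoses seen 7 years later

for relative_incidence in (1.25, 1.30, 1.35):
    # Expected cases scale with the assumed age/time trend.
    expected_cases = baseline_cases * relative_incidence
    excess = observed_cases - expected_cases
    print(f"relative incidence {relative_incidence:.2f}: "
          f"estimated false positives {excess:.0f}")

# relative incidence 1.25: estimated false positives 133
# relative incidence 1.30: estimated false positives 83
# relative incidence 1.35: estimated false positives 33
# A change of 0.10 in one input produces roughly a four-fold change.
```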

Now, don't get me wrong. I'm not saying that breast cancer screening is necessarily a bad thing. But it undoubtedly has harms as well as benefits, and despite all the positive stories in the news yesterday, I don't believe we are even remotely any wiser about how common those harms are.

I shall never be invited to attend breast cancer screening myself, for obvious reasons. But if I were a woman, I genuinely don't know whether I would accept an invitation or not. I hope some better research will be done that will make the decision easier for those who are faced with it. Breast cancer screening has been around for a long time, and it's shocking that we still know so little about its outcomes.
