Can ethics committees help tackle publication bias?
In my last blogpost, which was inspired by Ben Goldacre's latest book, Bad Pharma, I explained why I thought Goldacre was wrong about interim analyses. This blogpost is also inspired by the same book, but in the interests of balance, I'm going to talk about another area where I think Goldacre was absolutely right (this may not be my last post based on the book: watch this space).
One problem Goldacre describes in his book is the non-publication of research. If studies are done in human subjects but not published, those studies might as well not have been done at all. Worse still, if the studies that go unpublished are systematically different from the ones that are published (and the evidence is pretty clear that they are: positive studies are more likely to be published than negative ones), non-publication of research can seriously bias the literature. This is known as publication bias.
The problem of publication bias has been on the radar of many people in recent years, although because of the long lag times in researching such things, it's not clear to what extent recent efforts to fix the problem have been successful. Nonetheless, it's probably safe to assume that the problem has not been 100% fixed, and that some studies are still done in human subjects and do not get published. To ensure the completeness of the medical literature, it is therefore important to ensure that such efforts continue.
A suggestion Goldacre makes in his book (on page 47) is that research ethics committees should not approve studies from researchers with a history of non-publication of their previous research. Specifically, he says:
- No person should be allowed to conduct trials in humans if a research project they are responsible for is currently withholding trial data from publication more than one year after completion. Where any researcher on a project has a previous track record of delayed publication of trial data, the ethics committee should be notified, and this should be taken into account, as with any researcher guilty of research misconduct.
- No trial should be approved without a firm guarantee of publication within one year of completion.
There are some problems with definitions and details here, but in the main I think they are solvable problems, and in principle, I wholeheartedly support these recommendations.
I am a member of an NHS research ethics committee, and at the meeting I attended last week, I put these suggestions to the committee. It's probably fair to say that they were sympathetically received, although there was some lively discussion about how they might be achieved in practice.
To some extent, we are already following recommendation No 2. We always insist that investigators commit to publishing their results (although we don't specify a time limit). The problem is that we currently have no mechanism for enforcing that commitment, which is why we really need to introduce recommendation No 1 as well.
So how might it work in practice?
This is where it gets tricky, although probably not so tricky that it couldn't be made to work. The first problem is how we would know whether a researcher had previously been involved in research that they had not published. We would need to have a record of all other projects for which the researcher had been responsible. I don't know whether the National Research Ethics Service (NRES) keeps a central database of all their applications, and no-one else on the committee seemed to know either. However, if they do, then it should be a reasonably simple matter to query the database to provide a list of all other applications the researcher has made previously. If they don't, then it could be trickier.
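If such a central database did exist, the lookup itself would be simple. Here is a minimal sketch of the idea in Python with SQLite; the table name, columns, and data are entirely hypothetical and do not reflect any real NRES system:

```python
import sqlite3

# Hypothetical schema: nothing here reflects any real NRES database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE applications (
        researcher      TEXT,
        project_title   TEXT,
        completion_date TEXT,   -- ISO date, NULL if study still ongoing
        publication     TEXT    -- citation, NULL if unpublished
    )
""")
conn.executemany(
    "INSERT INTO applications VALUES (?, ?, ?, ?)",
    [
        ("Dr Smith", "Trial A", "2010-06-01", "J Med 2011;1:1-10"),
        ("Dr Smith", "Trial B", "2011-01-15", None),  # completed, unpublished
        ("Dr Smith", "Trial C", None,         None),  # still ongoing
    ],
)

# List completed studies by this researcher with no publication on record.
unpublished = conn.execute(
    "SELECT project_title FROM applications "
    "WHERE researcher = ? "
    "AND completion_date IS NOT NULL "
    "AND publication IS NULL",
    ("Dr Smith",),
).fetchall()

print(unpublished)  # [('Trial B',)]
```

The hard part, of course, is not the query but whether the data exist and are kept up to date in the first place.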
And of course, we would only know about studies conducted in England. If an investigator had worked on studies in other countries, it's unlikely we would know about them.
But provided we can find out which projects investigators have previously been involved with, we could simply ask them to submit a list of citations to the published papers with their application. I'm sure most investigators have a list of their own publications easily to hand, so this should not be too onerous a task for them.
However, one thing we discussed on the committee is how you would determine whether a researcher was responsible for non-publication. What if a researcher had been involved in a project as a sub-investigator, and had changed jobs before it would have been reasonable to expect the project to have been published? In this case, non-publication would probably be the fault of the researcher's previous boss. Would it be reasonable to penalise the junior researcher for that? Probably not. The trouble is, when you start to think about it, you get into all sorts of grey areas. I'm not sure you could come up with a set of hard and fast rules about when to hold someone responsible for non-publication. Maybe you'd just have to consider things on a case-by-case basis. But that's probably no big deal. Ethics committees are pretty much used to doing that.
There's also the question of what time interval would be a reasonable one, after which you would say that publication has been delayed. Goldacre suggests 1 year. In an ideal world, that might be reasonable, but in practice it's actually quite difficult to publish results that quickly, particularly for large multicentre studies. It could easily take a couple of months or more for data management activities to be completed: entering data from case report forms, generating queries, getting investigators to respond to queries, QC of the database, and so on. Then the data need to be analysed. Provided the statistician is immediately available and has written all the analysis programs in advance, this needn't take long, but in real life statisticians often have multiple projects and may not be able to drop everything when a study completes, so perhaps allow another couple of months here. Then you need to write the study report, which can easily take a couple of months, and often longer if many people are involved in review cycles. So writing the paper may not even start until at least 6 months after the study finishes, and given that some journals have a 6 month lead time on publication, you may not get the study published within a year even if you write the paper in a morning. Which you won't.
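The arithmetic of that timeline can be laid out explicitly. The durations below are the rough, illustrative figures from the discussion above, not measured averages:

```python
# Rough, illustrative durations in months, taken from the discussion above;
# real studies vary widely.
stages = {
    "data management (entry, queries, QC)": 2,
    "statistical analysis": 2,
    "study report and review cycles": 2,
    "journal lead time": 6,
}

total = sum(stages.values())
print(total)  # 12 months, before allowing any time at all to write the paper
```

Even with zero time allowed for actually writing the paper, these stages alone consume a full year, which is why a rigid 1-year deadline looks unrealistic for larger studies.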
I doubt that a one-size-fits-all approach would be reasonable here. Perhaps 1 year is a sensible time limit for single investigator studies, but a longer interval would probably be needed for multicentre studies to take account of the extra complexity. Maybe one solution would be to require investigators to explain how far along the route to publication they are for any study that finished more than 1 year previously, and then to refuse approval for new studies once 2 years had elapsed without publication.
Anyway, although this is not a completely simple idea to implement, I do believe that all these problems are solvable. I am led to understand that NRES are interested in these ideas, and I am hopeful that denying approval to researchers who have previously failed to publish research may become NRES policy before too long.
And needless to say, dear reader, if you have any thoughts on how this idea could best be implemented, then please let me know via the comments form below.