An economic analysis of which journal to choose for your publication
I saw a fascinating question asked on Twitter the other day about choosing a journal for submission of your latest research paper. The question was asked by @deevybee (aka Prof Dorothy Bishop from Oxford University), who had been discussing the best target journal with a colleague (let’s call him “Al”, as we all know that a great many co-authors on papers seem to have that name).
Dorothy wanted to submit to PLoS One, whereas Al wanted to submit to a more prestigious journal. The advantage of publishing in PLoS One is that they have a very high acceptance rate and are fast (oh boy, are they fast: they took less than 2 minutes to reject my latest paper), so the paper is likely to be published quickly. Submitting to a more prestigious journal may achieve a higher-value result in the end (if you believe that the value of a publication is related to the prestige of the journal), but the paper may be rejected, causing delays while another journal is found, and even if accepted it is likely to take longer than the speedy PLoS One. Dorothy wanted to know how to convince Al that the advantages of rapid publication outweighed any potential extra kudos of a higher-ranked journal.
So, who is right? I don't know the answer to that, as it depends on various assumptions, as we shall see below, but it occurred to me that we can borrow some tricks from economists to arrive at a rational answer to the question of which journal is the better choice.
The important economic concept here is that of discounting. Sometimes people find this confusing once you start thinking of it in mathematical formulas, but the underlying concept is remarkably straightforward and intuitive. Discounting is a way of measuring how we attach less value to events that happen in the future compared with the same events that happen now. It’s easy to illustrate the concept. Suppose I asked you whether you would prefer, other things being equal, to have your paper published today or next year. That would be a no-brainer. Discounting is simply a way of measuring the strength of that preference.
A discount rate, normally expressed as a percentage, is the decrease in value associated with a unit of time (normally a year). If you had decided to use a discount rate of 10%, and I asked you whether you would rather have your paper published in a reasonably prestigious journal 1 year from now, or in a journal with only 90% of that prestige, but published today, then in theory you should have no preference between those options. If you did have a preference, then you have probably chosen the wrong discount rate.
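For anyone who prefers to see that in code rather than prose, here is a minimal sketch in Python, using the same illustrative numbers as the example above:

```python
# A minimal sketch of the discounting idea, using the illustrative numbers above.
# With a 10% annual discount rate, a paper published one year from now in a journal
# worth 100 "prestige points" has the same present value as a paper published today
# in a journal worth 90 points.

discount_rate = 0.10

value_published_in_a_year = 100 * (1 - discount_rate)   # discounted back to today
value_published_today = 90

print(value_published_in_a_year, value_published_today)  # 90.0 90 -- so we are indifferent
```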
Let us assume that we have 2 options: we can submit our paper directly to PLoS One, or we can submit to a more prestigious journal, and then subsequently submit to PLoS One if the first journal rejects it. We can calculate a net present value (NPV) for each of those courses of action, and the strategy with the higher NPV is the rational choice.
(In practice, we could submit to another journal of intermediate prestige if we are rejected by our first-choice journal, and only submit to PLoS One if rejected there as well. That would make the calculations more complicated, but no different in principle. For the sake of simplicity, I have assumed that we're only submitting to one other journal before reverting to PLoS One.)
Now, to calculate the NPV, we need some data. Some data are reasonably objective and easy to measure; others are more a matter of judgement. For a start, we need to decide what discount rate to use. NICE uses a discount rate of 3.5% when looking at cost effectiveness over time in its health technology appraisals, but I suspect that is far too low for our purposes. The discount rate probably depends on how rapidly moving a particular scientific field is: in some fields, a paper may be almost worthless if it is delayed by a year, and then it would be appropriate to use a very high discount rate. Determining the appropriate discount rate is not easy; it is subjective, and it depends on individual circumstances. I'm going to use a discount rate of 20% here, but I should stress that that figure is no more than a completely arbitrary guess.
The next thing we need to do is to rate the value attached to the different publications. Let’s assign an arbitrary value of 100 to PLoS One. The number you would attach to the more prestigious journal will be more than 100, and again is a matter of judgement. How much value do you attach to being published in a “better” journal? Please don’t tell me that you can measure it by impact factor, as that is a very crude and imperfect measure of the worth of a journal. It really comes down to individual judgement. For the sake of illustration, let’s say our other journal has a value of 150. But in reality, that is a difficult number to pin down, and probably the most subjective part of this whole calculation. I expect if you asked 5 researchers to judge the value of the same journal you’d get 5 completely different answers.
The other bits of data are easier to determine: we need to know the time to either rejection or publication in each journal, and the probability of acceptance. For simplicity, I shall assume that the probability of acceptance in PLoS One is 100%, although in practice it is less (I know it’s less than 100%, because they rejected my paper!) Let’s also assume that PLoS One takes one month to publish. Finally, let’s assume that our more prestigious journal takes 9 months from submission to publish a paper if it accepts it, and 3 months to make a decision to reject, and that we have a 20% chance of being accepted.
If we plug in all those numbers, we see that if we submit to PLoS One our paper is worth 92 points in year 1 (not discounted, but we only have it published for 11 months of the year), 80 points in year 2 (published for the whole year, but discounted by 20%), 64 points in year 3, etc. If we add the numbers over a 5-year time horizon (let's assume that everyone will have forgotten about our paper in 5 years' time) we get a total NPV of 328 points.
If our paper is submitted to, and accepted by, the better journal, it is worth 37.5 points in year 1 (worth 150 points for a whole year, but only published for 3 months of that year), 120 points in year 2 (150 points discounted by 20%), etc., and worth 392 points over the full 5 years. However, if it is rejected, it is worth 67 points in year 1 (published in PLoS One for 8 months of the year), and then the same as if it had been published in PLoS One all along for subsequent years, giving a total of 303 points. Since the probability of acceptance is 20%, we take a weighted average of those figures (0.2 × 392 + 0.8 × 303), giving an overall NPV of 321 points.
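For anyone who would like to check that arithmetic, or plug in their own guesses, here is a rough sketch in Python that reproduces those figures; all the parameter values are just the arbitrary assumptions above, not recommendations.

```python
# A rough sketch reproducing the worked example above. All the parameter values are
# the arbitrary guesses from the text; substitute your own judgements.

DISCOUNT_RATE = 0.20      # annual discount rate
HORIZON_YEARS = 5         # time horizon over which the paper retains any value
VALUE_PLOS = 100          # value assigned to a PLoS One publication
VALUE_PRESTIGE = 150      # value assigned to the more prestigious journal
P_ACCEPT = 0.20           # probability the prestigious journal accepts the paper

def npv(value, months_until_published, horizon=HORIZON_YEARS, rate=DISCOUNT_RATE):
    """Sum the paper's discounted value over the time horizon. Year 1 is undiscounted
    and only counts the months the paper is actually published; each later year is
    discounted by a further factor of (1 - rate)."""
    total = value * (12 - months_until_published) / 12       # year 1, pro rata
    for year in range(2, horizon + 1):
        total += value * (1 - rate) ** (year - 1)            # years 2 to horizon
    return total

# Strategy 1: submit straight to PLoS One (published after 1 month)
npv_plos = npv(VALUE_PLOS, months_until_published=1)                  # about 328

# Strategy 2: try the prestigious journal first, fall back to PLoS One if rejected
npv_if_accepted = npv(VALUE_PRESTIGE, months_until_published=9)       # about 392
npv_if_rejected = npv(VALUE_PLOS, months_until_published=3 + 1)       # about 303
npv_prestige = P_ACCEPT * npv_if_accepted + (1 - P_ACCEPT) * npv_if_rejected  # about 321

print(round(npv_plos), round(npv_prestige))  # 328 321, so PLoS One wins on these numbers
```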
There are, of course, a number of simplifications in that calculation, and more importantly, some wild guesses about the numbers to input into it. However, if those wild guesses turn out to be right, then the rational strategy for Dorothy et Al should be to submit their paper to PLoS One.
If anyone would like to play around with their own figures for looking at this, I have an Excel spreadsheet which I’ll be happy to email to you if you get in touch.
Hi Adam.
This is an interesting analysis. I think a few other factors may also have a bearing on which journal to go for:
1. As well as being potentially less prestigious (depending on which other journal you are comparing it to), PLOS One charges a publication fee of $1350, which is really quite expensive
2. However, on the flip side, it is open access, has fast publication times and is growing in credibility.
Therefore the decision is a difficult one. That said, if you are an academic author, would you go for a journal with a high impact factor (e.g. The Lancet) or a lower one (e.g. PLOS One)? Given that so much prestige and status is still placed on impact factors for academics, I think this will continue to be a no-brainer until the attitude towards impact factors in academia changes.
Thanks for the blog.
Ryan
Thanks, Adam for giving such detailed consideration to my question. I had not thought of this in terms of delay of gratification and I really liked your analysis.
There are, however, additional factors that come into the equation.
The cost issue mentioned by Ryan is relevant for some people, but not if you are required to make your work Open Access. My funding comes from Wellcome Trust who insist on this – and cover the costs. I think most big funders are moving in that direction. Other subscription journals will typically charge more than PLOS journals if you require Open Access. So though it seems a lot of money, for Open Access it is on the cheap side. And there are considerable benefits to the author as well as to the general science community for having work Open Access – greater accessibility leads to more citations.
Sometimes (and in my current case) the paper describes work that is relevant for a grant proposal you are writing. If it is accepted for publication you can treat it as work in the public domain, whereas if it is delayed you won't be able to do more than describe it in an appendix, and reviewers will give it less weight. So timing can be critical if you want to get funding to build on the work you are trying to publish. In other cases, it may be critical for someone's career to have a paper appear in time for an application for a job or fellowship. I guess this would mean introducing some kind of time threshold into Adam's model.
But my main reasons for favouring PLOS have more to do with ideology than strategy. First I dislike the publishing model whereby scientists do research with public funds, then supplicate to publish the work in journals which are then sold back to Universities at huge profit to the publisher. Second, I am concerned by the bias introduced into science by selective acceptance of 'interesting' results. I have done a lot of journal editing in my time, and early on I was told that a good way to decide if a paper was acceptable was to read just the introduction and methods, to determine whether the authors had identified an important question and set up a methodologically sound study to investigate it. Whether or not the results were 'interesting' should not play a part if we take scientific logic seriously. In this regard, the ideology of PLOS journals appeals to me: http://www.plos.org/journals/index.php. Third, I like the idea that a piece of work is evaluated in terms of its own merits, rather than the company it keeps. The notion that a paper must be better because it appears in Nature Neuroscience than PLOS One strikes me as a form of snobbery. But I appreciate I am in a much stronger position to take a stand than more junior scientists who have yet to establish their reputations, and who may work in situations where publications in big name journals are seen as essential for career development.
You will be interested to hear that, after raising these issues on the PLOS One blog, and subsequently discussing more specific questions about suitability of PLOS One as an outlet for psychology papers, I've now been invited to join the editorial board. So I've landed myself with more work, but I will get a chance to see how well the journal works in practice.
Many thanks for the thoughtful comments, Ryan & Dorothy.
My analysis was rather simplistic, and you've both identified important limitations. I hadn't included costs in the analysis, and that would need to be done if you were going to do it properly: not only any publication fees charged by the journal, but also the cost of your time in going through the submission process again if you are rejected the first time.
I also made an implicit assumption that the discount rate implies a constant, smooth diminishing of the value of the paper with time, and as Dorothy rightly pointed out, that assumption is likely to be wrong. If there are important events that increase the value of the paper, such as grant applications or job applications that happen at a fixed point in time, then the value of the paper will diminish suddenly once those points pass. So you'd need to use a more sophisticated model of the relationship between the value of the paper and time than the simple constant-rate discounting I described.
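Something like the following toy sketch might capture that idea; the base value, the bonus and the deadline are all made-up numbers, just to illustrate the shape of the problem rather than anyone's real situation.

```python
# A purely illustrative sketch of the point above: the value of the paper need not
# decline smoothly; it can drop sharply once a fixed deadline (say, a grant
# submission date) has passed. The base value, bonus and deadline are invented.

DISCOUNT_RATE = 0.20

def value_if_published_at(months_from_now, base_value=100,
                          deadline_bonus=50, deadline_month=6):
    """Discounted value of publication, plus a bonus that is only realised
    if the paper appears before the deadline."""
    discounted = base_value * (1 - DISCOUNT_RATE) ** (months_from_now / 12)
    bonus = deadline_bonus if months_from_now <= deadline_month else 0
    return discounted + bonus

print(value_if_published_at(5))   # just before the deadline: the bonus still counts
print(value_if_published_at(7))   # just after it: the value drops suddenly
```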
I ignored the philosophical advantages of open access in my analysis above, but I agree with every word that Dorothy said about it. I think it's just a far more ethical model of publishing. But again, my career also doesn't depend on which journal my papers are published in, so it's understandable that others may think differently. It's just disappointing that we seem to exist in a system that rewards scientists more for the extremely dubious honour of publishing in high prestige journals.
In fact it's worth mentioning that I teach a workshop on critical reading of medical literature for EMWA, and we use a couple of papers in the workshop as examples of poor-quality research to pull to pieces. I deliberately choose papers from high-ranked journals (I think it was one from BMJ and one from JAMA last time), so that participants come away learning that just because a paper is published in a "good" journal doesn't mean that it's believable.
Of course, if you do believe that journals such as PLoS One have intrinsic advantages anyway, then you can easily incorporate that in my model when you assign the value to the different journals, by giving a higher value to PLoS One than to a journal with a different philosophy. Whether you can put an objective value on the advantages of open access using the tools of economics is a question I shall leave to proper grown up economists: it's probably not a question for a humble statistician who merely dabbles in economics as an amateur.
So, clearly a proper, sophisticated, usable analysis of which is the best journal to submit to is going to be a serious research paper in its own right.
Now, which journal should we submit it to?
I highly appreciate your analysis. The sad part with respect to publication is that most of the papers from third world countries, particularly Middle Eastern ones, are indiscriminately rejected without any comments. Most of the time acceptance is based on the name of the institution and the co-authors.
I wish people would revise this attitude and judge the paper for the quality of its work.
Thank you for giving this opportunity to express our views. To be honest I really like PLoS ONE, but I cannot publish because of the above reasons.
I pray and hope that the scientific community changes their attitude.
I'm afraid LK brings up an important issue regarding publication bias.
May I say that I learned my English as a second language both at home and at school, hold a CEP from the University of Cambridge ESOL examinations, and have spent more than one year as a research and clinical fellow at several Medical Schools in the USA, one of them as Int'l Guest Scholar of the AMCS. I frequently speak, listen to and read English, both in person and through printed and electronic media. I have published more than 10 papers in high impact factor journals in my field, all in English, most of which were written completely, or almost completely, by myself.
To be fair, I feel that, say, 40% of my submissions were dealt with regardless of their provenance from a small country in the Southern Cone of South America. However ... I have perceived (well, yes, this is rather subjective) in other cases that my credibility or priority was "second class", particularly if no prestigious American sponsor appeared somehow to be underwriting my work.
While one reviewer praised one of my submitted articles as "clearly written", another from a different journal suggested a complete rewriting "by somebody who is familiar with Basic English". I slightly changed no more than 2 sentences and a few words, and it was promptly accepted and published.
Reviewers' ignorance of elementary facts of other countries' geography or culture shows up now and then, sometimes in an almost funny, even ridiculous way. For instance: an article analyzing the intensity and duration of cold intolerance in severely injured digits, sustained by people living in the southernmost state of Brazil, was initially rejected by a prestigious Northern Hemisphere journal on the grounds that Brazil is a tropical country, so how could the author assess exposure to low temperatures? (Unless by ethically borderline experiments, I presume.) The reviewer had to be informed that in southern Brazil, winter can in fact be quite chilly, with temperatures below 40° Fahrenheit for about 3 months, or longer.
In sum: I know this is not the main point of Dr. Adam Jacobs' text. I have taken due notice of his use of discounting applied to scientific publishing and will use it in another context, another journal, another language, giving him credit for this interesting insight.
On the other hand, I felt compelled to add to KL's relevant point about publication bias. But fear not: I am not pleading for an "affirmative discrimination" system applied to scientific publishing. In Spanish, it would be a case of "la enmienda fue peor que el soneto" (the amendment turned out worse than the sonnet, i.e. the fix would be worse than the problem).
Dear all,
I found this blog after reading a recent paper in PLOS One on publication bias by Daniele Fanelli (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0010271#pone.0010271-Jennions1) and I must admit that, as a Psychology undergraduate aiming for a career in academia, it has left me feeling more than a little disconcerted.
The online advice given by Alex Wood of Manchester University for securing an academic post is, to put it simply, to aim to have as many papers published as possible, preferably in prestigious journals such as 'Psychological Bulletin'.
However, given Dr Fanelli's finding that publication pressure increases scientific bias, one is left wondering if the traditional journals are really so prestigious or even if the research findings published are really as robust as we are led to believe.
Finally, although these findings have left me feeling a little worried, I am somewhat consoled by Professor Bishop's acknowledgement that considering a paper to be better because it is published in a traditional journal is akin to academic snobbery. I just hope that in the near future more and more of those involved in recruiting academic staff weigh the merits of a candidate's publications on their own quality, rather than the 'quality' of the journal in which they are published.
Thanks for your thoughts, Deborah. The advice to have as many papers published as possible is probably good advice in the sense that it's the way to game the system as it currently is, but it's terrible advice in another sense as it encourages academics to prioritise quantity over quality.
I do worry about the future of academia when success is measured in such a flawed manner.