A vanity publication

Current Biology used to be an independent outfit until, probably due to its relative success, it was taken over by Elsevier. Nevertheless Geoff North, its seasoned editor, has managed to keep some autonomy from its better-known sibling journals, particularly the Cell family, in the way it is run and the topics it picks for publication. It is not a bad journal and is much appreciated as part of the second tier to which many authors turn when their papers are rejected from the higher end of the market we have created. Geoff North has some things in common with other editors, but in other respects, most notably his long experience in the trade, he is different. However, it is clear that, as in many other spheres, one has to defend the realm, and a few days ago he published, in his own journal, a signed editorial saying, amidst much food for thought, that blogging is 'vanity publishing' (http://bit.ly/13KavrE). I do not know what he was reacting to (though I have been told that some internet criticism of some Current Biology papers might have been a trigger) but this was surprising and, perhaps, one step too far in a direction which we must redress. Who are these people?

It is not the first time that an editor writes an editorial or a review in their own journal (G. North does it regularly in Current Biology) and, while there is nothing wrong with this (other than the small fact that if you tried to publish the same thing in that journal you might have a hard time), it is what he says, and the manner in which he says it, that is alarming and dangerous. He is using his own journal, with all the power that we have given him and it through our work, to criticize our right to express ourselves in the media that technology and the times have created for us. The act of blogging, by now well established in all aspects of life, is being called 'vanity publication' by someone who is using, as Casey Bergmann put it on Twitter, his own pulpit (which is not open to people) to deny others the use of an open pulpit. This is more serious than it sounds because it shows to what degree we are allowing journals to run our professional lives. Not all journals are the same, not all editors are the same, but perhaps people should realize the degree of trouble we are in here. We are already patronized in the editorial rejection letters that come back 24/48 hours after submission to those journals; we are patronized when our papers are rejected after two or three rounds of review because we have failed to do one more experiment suggested by a fourth reviewer; we are patronized at meetings when we are told how good or bad our work is and how many more experiments we need to do for a submission; we are patronized on how to present our data, write our papers, do our experiments, carry out our work… Increasingly, journals are telling us how to do Science and what science to do. And now we are told that the only vehicle for our thoughts and science is The Journals, Those Journals. How much more of our intellectual independence are we prepared to give up?

I do not believe in a Science without Journals. At this moment such a thing is a Utopia which it would be foolish to try to implement, as it would break the community into two: those who go for it because, either intellectually or economically, they have to (remember that, increasingly, in order to publish you need to pay, for which you need grants, which you only get if you publish… in the right journals) and those who do not; in other words, the rich get richer. What can we do? This is food for thought, but maybe we need to teach the journals whose show it is. We need to start educating the journals and reverse the process which they have inadvertently established over the last twenty-odd years. How? Think carefully about where you submit your papers, refuse to undergo more than one round of reviews, remember DORA and use it.


First thoughts on SF DORA (http://am.ascb.org/dora/)

The following was posted on The Node as a response to a call for comments on the SF Declaration on Research Assessment (http://thenode.biologists.com/san-francisco-declaration-on-research-assessment/news/)

The declaration is a very important step forward to untangle the situation we have got ourselves into. However, the size and the direction of the step will be determined by how, by whom and how firmly the resolutions are implemented. Some habits are difficult to quit, and one can see people not mentioning the impact factor (IF) explicitly but using it implicitly, valuing publications in the same journals as a proxy for 'quality' and, basically, perpetuating the situation. As the Leopard says in the famous novel of that name: 'everything has to change for everything to remain the same'. There have already been some voices of scepticism in this direction. However, let us use the concern to guide us moving forward and take the opportunities that lie within San Francisco's DORA.

DORA is, above all, a commitment of individuals and institutions to a common-sense principle: that the value of a piece of scientific work cannot be determined by a publisher, that scientists should be the arbiters of their own work and that judgement should be made in a transparent and considerate manner. The history of Science shows many errors of judgement (the list, while interesting, would be too long to discuss here) but we are not talking, for the most part, of world-changing discoveries but of something simpler: the piecemeal and steady increase in knowledge and the use that this has in allowing people to pursue it. And here is where we, as a group, have allowed publishers to use us to create a tangled web of control over our activity that stems from their power and position.

One often hears the question "have you seen the latest paper of so-and-so in Cell/Nature/Science (CNS)?" And when you ask what it is about, after a few mumbles we are told that IT IS IN Cell/Nature/Science, as if this were the seal of authority for the validity of the publication, whatever its content. It is like those experiments in which the colour of the water determines its taste. DORA suggests that this should end and that we should begin to bother to read and consider the works on their own merits: water is water, whether pink or blue. Like many of you, I have often heard that when one is faced with a large number of applicants for a position or a fellowship, the 'publication record' (and we know what this means) is a quick way (the best?) to sort people out. I am not going to deny that very good science is published in high-IF (HIF) journals, but I also know that this is not the best measure of Science. I have suggested before, and do here again, that a good practice would be to ask applicants to list their three or four best publications and write a paragraph explaining their impact and potential. Not so much the selection as its justification will tell you a great deal about the applicant and put the rest of the application into perspective. It is, I believe, suggestions like this which, in the spirit of DORA, will change the attitude of individuals and allow us to return to the values of the past (which rest firmly on the insight and projection of a piece of research) while facing the challenges of the future (increased technical and data-gathering abilities and the growth of the scientific population).

As for the journals, DORA gives them responsibilities and we shall see if they rise to them. Here I have less faith. Nature magazine has not signed DORA and has explained why (http://bit.ly/17AFlYF). It has also been pointed out that Nature (and others) have been encouraging care and thought about the IF for years; this is well known and welcome. Personally, I take their refusal to sign as a statement of honesty: they recognize that agreeing to DORA should have some consequences for the scientific publishing world and they are not prepared to meet them (they express it in a different manner, but this is essentially it). After all, Nature is a commercial enterprise and what it has to do is maximize profit and market value, both of which depend on its IF, and DORA, implicitly, bears the potential to undermine these (they have recently claimed that it costs them around £30K to publish a paper). So, why should they adhere to it? They have made it clear that the IF is something created by the community which benefits them/us, and so, quite rightly, they say that it is not of their doing and wonder what gain there is for them in ignoring it. They point out that many of the statements in DORA have a very broad sweep, and they might be right, but this is good. Let me put forward an example of something implicit in DORA for publication policies.

I have often heard editors say that the reason to keep rejection rates high is to boost the IF of their journals (there is no point in denying this; it is a fact). If the IF is no longer a measure of the merit of a publication, there is no reason to keep rejection rates so high. I do hear people saying that this is not true, that what they want with the rejection rate is quality and that they will keep their rejection rates… If this happens, and in the short term IT WILL happen, we shall be subject to IF by stealth. But still, let us continue with the argument. PLoS, as I have discussed before (http://bit.ly/Zi7OKH), is a very good example of the failure of this policy. PLoS Biology was born with many ideals in mind, but one of them was to create an OA journal that would compete with CNS. The OA bit has succeeded (and we are all grateful for that); the competition with CNS has not, and this despite their very high rejection rates. For example, compare the April 2013 articles with those of another high-profile PLoS journal: PLoS Biology, 19 publications (IF 11.45); PLoS Genetics, 69 publications (IF 8.69). They can talk to me about quality, but it is actually not justifiable that two sibling e-journals have such disparate publication rates. The data also show the poor relationship between IF and rejection rate. Why the difference? Because they (and many others) believe that increasing the rejection rate will increase the IF; it does not work like that. But… one of the consequences of DORA should be that, since they no longer have to pursue the HIF value, one would expect a rise in the publication rate of some of these journals. A good reason for them would be that they might want to capture the papers that will make a real impact (which might not be the ones that are presumably doing this at the moment…). Let me repeat it: not doing this will be IF by stealth.
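As a back-of-envelope check on those figures, one can multiply each journal's monthly article count by its IF (which is, roughly, the mean number of citations per article) to estimate the total citation 'mass' of a month's output. This is only an illustrative sketch using the numbers quoted above; the arithmetic, and the idea of a citation 'mass', are mine, not PLoS's:

```python
# Rough sketch: articles published in April 2013 and the impact factors
# quoted above. Since IF is roughly "mean citations per article",
# articles * IF crudely estimates the total citations a month's
# output will attract. Purely illustrative.
journals = {
    "PLoS Biology":  {"articles": 19, "impact_factor": 11.45},
    "PLoS Genetics": {"articles": 69, "impact_factor": 8.69},
}

for name, data in journals.items():
    citation_mass = data["articles"] * data["impact_factor"]
    print(f"{name}: ~{citation_mass:.0f} citations' worth of output")
```

On these crude numbers, the lower-IF journal contributes nearly three times the citation mass of its higher-IF, higher-rejection sibling, which is precisely the sense in which rejection rate and IF are poorly related.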
NB: I am not talking about ‘metrics’ here, I am talking about IF and how it is understood, evaluated and perceived.

Along these lines, let me spell out what I see as a most important consequence of this declaration and how it can be harnessed for an all-round win. If we waive the value of the IF in the assessment of careers and science, we get back our rightful position of running the publication of our work and have the journals compete for our toilings, rather than the way it is now, which is the other way around: we compete with each other for space in the journals. Think.

What is the incentive for sending your next piece of work to a journal with a high IF? Let me be more specific in the context of the interests of The Node. When you have a good piece of work, why would you send it to one of the Cell Press journals, Nature, Science or even PLoS Biology, where you know ahead of time that, if sent to review, it will linger through several rounds of review for anything between 8 months and a year, without a guarantee that it will be published in the end and, in the case of Cell Press, with the threat that it will be rejected if a related piece of work is published in the meantime? In the process, the professional editors at these journals will meddle with YOUR science and will decide WHAT experiments YOU NEED TO DO; this will make you spend time and money improving a paper by 10-20%, all within the cloak of anonymity of a reviewing process which everybody admits is seriously flawed. Why don't you send your work to Development, Dev Biology or, in terms of generality, EMBO J or maybe even eLife? In these journals, though there are similarities in the general process, you will have a fair viewing, your work will be dealt with by people in your field and you will have it out soon with a minimum of hassle. If the IF is no longer an issue, you should be confident of your Science and make sure that it is out (after a fair, transparent and helpful peer review) as fast and efficiently as possible so that it can be judged by your peers. So, DORA offers an opportunity for you, and also for journals like Development to catch up and regain some territory from others which have been sailing on the wave of the HIF. By increasing their acceptance rates (and this does not mean decreasing the quality of the publications) Development, Dev Biol and EMBO J can capture a lot of very good pieces of work. This will entail changes in their editorial policies, but perhaps DORA means change for everybody and we should face this together.
The real question here is how confident journals are in their prospects. I have always wondered why Development is happy being a journal that trails in the wake of, say, Dev Cell, or, as they say, a 'community journal'. If they want to be the latter, they should make sure that they rally the community with them. I think DORA is presenting them, and others, with a tremendous opportunity to lead. I suspect that Nature magazine knows this, and we should respect that they do not want to play ball. In making this statement, they are making their colours clear.

This is the time for us to choose, for example in developmental biology, between Development/Dev Biol and Dev Cell. The CoB should be aware of this and play their cards with imagination and courage.

For us, the question is: shall we seize this opportunity? As Leslie Vosshall said (and I cannot repeat this enough because it is SO right and goes to the heart of the problem): "scientific publishing is an enterprise handled by scientists for scientists, which can be fixed by scientists" (http://bit.ly/Q52zvY). DORA is an opportunity and the question is: are we bold enough?

In response to a comment on the relationship between rejection rates and quality

Let us not be naïve on the issue of rejection rates and scientific quality.

There are 20% rejection rates and there are 80% rejection rates. The first is clearly likely to lower the quality of the science that a journal publishes; the second is more interesting. Imagine an aspiring 'glamour journal' (we are not allowed to talk about IF, so let us not) with an 80% rejection rate, publishing very good science (it is possible with this percentage) but which, for reasons that we might not get into (becoming more elitist, increasing its glamour factor, etc.), might want to increase its rejections to 90% (this happens, believe me). Given current trends, it is likely that the 20% of papers which could be published are very good, and I can assure you that which papers fall in the 10% that would now be rejected (after the change of policy) has nothing to do with quality but rather with perceived impact or, simply, arbitrary decisions. So, in this case, keeping the rate at 80% would not lower the quality of what is published and, frankly, would not change the glamour factor that much.
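The arithmetic of that policy change is worth making explicit. A minimal sketch, with a hypothetical submission count chosen only to make the percentages concrete:

```python
# Hypothetical numbers to make the rejection-rate argument concrete.
submissions = 1000

# Integer arithmetic keeps the counts exact.
published_at_80 = submissions * (100 - 80) // 100  # 200 papers published
published_at_90 = submissions * (100 - 90) // 100  # 100 papers published

# Tightening from 80% to 90% rejection discards half of the publishable
# pool; the argument above is that nothing about scientific quality
# distinguishes the surviving 100 papers from the discarded 100.
discarded = published_at_80 - published_at_90
print(published_at_80, published_at_90, discarded)  # 200 100 100
```

The point of the sketch is that the 100 discarded papers are drawn from a pool the journal itself judged publishable at 80%, so the extra cut cannot be a quality filter.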

In this context, the comparison between PLoS Genetics and PLoS Biology to which I have often alluded (see above) is a good example of this. Do you think the papers in PLoS Genetics are of lesser scientific quality than those in PLoS Biology? I don't. Do you really think that papers in Dev Cell are better than those in Development? I don't. In fact, what I often find is that 'a Dev Cell paper' is a hyped version of a Development paper, and many papers which end up in Development come from journals like Dev Cell where, after (sometimes a few rounds of) review, they are rejected because… reviewer 3, or simply the editor, was not 'convinced'. And the Development papers tend to be good. All this refers to rejections after review, not editorial rejections, for which I suggest people read a recent article (http://www.sciencedirect.com/science/article/pii/S0169534713001262) and remember that what it says is not restricted to Ecology and Evolution. Editorial rejections (which cannot be on the basis of the science because they do not involve peer review, though in some journals they do involve an interaction with the editorial board) are, in general, the most common ones.

So, in the age of e-publishing, quality should indeed be the criterion, but it seems to me that it is not possible to justify publication rates below 10%, which is what glamour journals often maintain.

Personally, I am fine with this, as long as it is made clear that the decision is a matter of opinion and is not justified on scientific grounds. I have never understood why, on the occasions when there are two glowing reviews and one negative one, the negative wins.

DORA opens the door to many new possibilities but for them to be realized we are going to have to leave behind many hang-ups.

On ambition

In science, ambition has always been understood as the intention to do something interesting, either by solving a difficult problem that will provide a general insight into the workings of Nature (gravity, the atomic structure of matter, evolution…) or by creating something that will be generally useful (a microprocessor, a steam engine, the Golden Gate Bridge or the ability to perform in vitro fertilization). However, increasingly, at least in the life sciences, the term has adopted a new meaning: 'where you want to publish'. You are ambitious if you want to publish in one of those journals that we shall now call glamour journals (rather than high-impact; see DORA) and you are not ambitious if you want to publish in less glamorous journals. You certainly lack ambition if you want to publish in a journal like PLoS ONE. This attitude places the challenge not in the scientific problem but in the difficulty of publishing how far one has got towards the solution, i.e. independently of its real significance, the value of the achievement will be given by where it is published. There is danger here, and we are deep into this path.

For most people the challenge is no longer to solve a scientific problem but to get our work past editors and reviewers; and, as we all know, it is some challenge. By and large, at the moment, scientific discovery is just the token for a publication, which holds the key to jobs and reputations. The 'best science', as some of these publications like to call what they publish, is determined by the relationship between the author and the editors and by whether the author is prepared to jump through the hoops, sometimes expensive hoops, that will be placed before them. Quality and significance of content, really, are not the main issue.

Are we really that insecure? Do we need our science branded by the journals we publish in rather than by our peers or the meaning of our work? It could be argued that peer review achieves this, the judgement by peers, but we all know that the process of peer review has become the price we have to pay to have our work published and only rarely helps as much as we would like. DORA, we are told, will turn the focus onto the work rather than onto where it is published. It will take time for this to take hold, and much will depend on whether the younger generations want to continue weaving the complicated net that we have laid out for ourselves or whether they are really ambitious and prize content over cover.

In the short term, it would be good if we recovered real ambition and, rather than bootstrapping sophisticated techniques and dressing the product in the manner that the journals ask of us, dealt with real problems. At a time of maximal technical possibilities, we are not being very imaginative or ambitious about the questions that we ask with them.

Orwell’s principle of Peer Review: all authors are equal in the eyes of the editors but in High Impact Factor journals, some Authors are more equal than others.

It happened again. Talking to someone about things of the trade (papers; publishing in the life sciences, actually), I happened to mention a piece of work I had just seen in Cell. "Did that get published?…" my companion jumped. "Really?… I rejected that work!… But well, it is so-and-so and it is Cell…" Indeed; the topic is what they call 'hot', the authors are well known in the field and, one presumes, the editors are easily impressed by names, trendy topics and technologies… Who cares about content or rigour: plus ça change… Peer review, which for most of us is a complicated and treacherous passage, is, in other instances, a formality which has to be dealt with but which the author, sorry, THE AUTHOR, can choose to comply with or not, because the work will be published anyway. In some rare but noticeable instances, the decision has been taken before submission.

It is difficult to know the numbers, but I am sure you know stories like this. Contrast them with the many in which the famous third reviewer is used by the editor against you. The letter that would say to one of those privileged Authors "…you will see that reviewer 3 has raised some issues and if you can answer some of them, we shall be happy to consider the paper for publication…" becomes, for many others, "…you will see that reviewer 3 has raised substantial concerns which preclude us from considering your paper for publication…". The difference between the two statements is not the manuscript, nor the reviews, probably not the editor either, but the author and that unmeasurable, all-important quantity: the relationship between the editor and the author. There is little point in denying that this happens. An interesting interview with an editor ("10 things you need to know about the publishing process", http://elsevierconnect.com/10-things-you-need-to-know-about-the-publishing-process/) makes clear how much influence editors have on decisions and suggests that the important thing, particularly in those journals perceived as 'influential', is not the science but using the data to, in collaboration with reviewers and editors, find a way to 'tell a story'.

These situations do not make life easy and say much about the times in which we live and work, about research in the life sciences and the way it moves. It could be argued that similar developments are taking place in other areas of Life, but the difference is that Science, we are told, is objective, fair, based on facts, with quality assured through well-established criteria and procedures. All of these principles (which remain true) have become relative, and the journals, rather than the scientists, are running the Science and the scientists. The clever thing is that they have engaged us in this game so we cannot complain, but… we can change things.

While Open Access and the use of Impact Factors (read DORA) are important issues that need addressing (and are being addressed), a thorough revision of Peer Review is the one issue that can have the most direct impact on our individual efforts to pursue science. At the moment, the process of Peer Review is an ugly tangled web which is not going to be easy to disentangle. Nonetheless, we need to try.

Enough whinging. Over the last few weeks, in related discussions, I have had two suggestions to write some blueprint for 'my ideal journal'. I am in the process of doing this and, if you are interested, stay tuned; if you have some thoughts, write to me.




Can we think of something important to say?

The death of François Jacob (1920-2013) has triggered many deserved tributes and comments, as well as bringing back the memories of a time and a place when Science was different, certainly more focused and the realm of a few.

The first time I heard about Jacob was, of course, in the context of Jacob and Monod, in undergraduate Genetics in Madrid. This was a landmark moment, as it showed me what Biology was capable of, that Biology could be beautiful, that one could find logic and order in what otherwise would be a mere collection of facts and that one could do it by thinking about the data. The genetics of the lac operon and also of λ became a fixation: that one could infer regulatory interactions from the analysis of mutant phenotypes was intriguing and remarkable, and I got caught. How not to? Together with Monod, Brenner, Crick and Benzer, Jacob is one of the great heroes in the adventure that laid the foundations for molecular biology and which is so beautifully and intensely told by HF Judson in "The Eighth Day of Creation". In a time-honoured French tradition, Jacob was also an intellectual, and his books (The Logic of Life; The Statue Within; Of Flies, Mice and Men; The Possible and the Actual) are inspiring and beautifully written, with alluring imagery, a wonderful sense of history and of the interplay between society and science. They played an important role, particularly "The Logic of Life", the only one from my student days, in fixing and shaping my interests in biology. As time passes and I teach his science to undergraduates, it is difficult not to have a sense of awe at what he and his colleagues achieved with so little.

This is the moment for memories, anecdotes, perspective. It is surprising that young people do not know much about these pioneers, but perhaps there is nothing wrong with this; after all, it is already History and History is for the historians. However, there is value in looking at and reflecting on History, particularly in thinking how far we have come from those days when scientists did Science and did not have to worry about its marketing and consequences. In this regard, one anecdote has come up in some of the news titbits and obituaries of F. Jacob (see e.g. http://bit.ly/14HLqT0). It refers to the occasion on which he had gone to the cinema with his wife and, leaving the theatre, said to her: "I think I have just thought of something important". It was the possibility that prophage induction and the regulation of the lac operon had a common regulatory mechanism based on the binding of a protein to DNA. He was right; it was important.

Today many students, postdocs and PIs have, at times, the feeling of having found something 'cool', something worth a pilgrimage through the reviewers' demands towards an HIF publication, but is this something important? More to the point: do we think? Are we able to distinguish the important from the urgent? Visiting a lab a couple of years ago, a postdoc was busily telling me about his findings and what he needed for submission to a journal, the experiments that were required, what the reviewers would say. The work was interesting. At some point I began to get tired and had to say to him: can we stop talking about the publication and think about the problem for a while? I realized that it was difficult. It is a worry that we have buried important questions (which exist) in mountains of experiments and data. While we can say a lot about genes, I keep wondering whether we are capable of thinking of something important.

The passing of F. Jacob reminds us of that generation which laid down the foundations of what we are doing. We should be doing the same for the next generation but, for the most part, we are too worried about publications and IFs, and OA, and too many things that clutter our minds and do not allow us to go beyond data.

I believe that there are important questions waiting to be asked, answers waiting to be thought.


How to evaluate our output

It is very good to see that, slowly, the OA battle is being won. There are still a few rough edges to be smoothed out, particularly in the US, but the battle is being won, everybody is aware of the problem and the solutions and, best of all, progress is being made. Now we can perhaps turn the heat onto a situation which probably does not have an easy solution but which, increasingly, is a cause of aggravation and intellectual discrimination: the current mechanisms of peer review and the meaning and use of the Impact Factor (IF).

I have written about this before, but it is important to continue doing so to create awareness and discussion and, frankly, to get some momentum on these issues. Two publications together provide a great deal of perspective on the problem: Ron Vale's "Evaluating how we evaluate" (Mol. Biol. of the Cell, 2012, PMID: 22936699) and Leslie Vosshall's "The glacial pace of scientific publishing: why it hurts everyone and what we can do to fix it" (FASEB J, 2012, PMID: 22935905). I have mentioned the second one here (http://amapress.gen.cam.ac.uk/?p=1022) and both on Twitter. It is high time that we address the matters so clearly raised in these articles, accept our global responsibility for the situation and begin to do something about it. Both articles make some suggestions and I have added a few more. It is not going to be easy or fast, but something has to give.

One of the common topics in conversations with people, and increasingly on the web, is how to evaluate a piece of work: what can substitute for the IF? We all know the impact that a publication in CNS (as they are called) can have when applying for a grant or a job, as we all know that this is a con and that, while there is little doubt that these publications contain good science, sometimes exceptional science, they are more a measure of marketing ability/power on the side of the authors or, more specifically, the senior author, than of their science. Second-tier journals have equally good, and sometimes better, papers than these. It is often argued that the reason for this impact derives from a need to choose when faced with many equally good candidates and that, in these situations, the IF (the journal) is a good surrogate for the quality of the science. Really? But I do not want to whinge; the important thing is not just to point out the problem, far too easy here, but to offer solutions.

If the problem is how to judge the scientific potential of people, I would argue that applicants should be asked to submit, in addition to their full publication list, their choice of the three most significant pieces of work that they have to offer, with a short paragraph justifying each (which could even be word-limited). This will surely make for more interesting reading than a long (or short) list of publications which may or may not include multi-author papers in CNS or second-tier journals. And I feel that members of panels can see through shams. Then, after reading these three, one can look at the full list with some perspective on the applicant. Ron Vale points out that this is done in HHMI evaluations, and it seems to me that it should not be difficult to make it part of the procedure. Ah, and people (we, they) should not be afraid of disregarding the journal label and focusing on the work; is this not what we are supposed to do? It is not very difficult… or is it?

What surprises me most about the situation is that we have relinquished our judgement to the taste of certain journals…

More about the current state of scientific publishing and how to start changing it

Additional observations

Our lab has an ongoing, fruitful collaboration with AK Hadjantonakis at the Developmental Biology division of MSKCC in New York (www.mskcc.org/research/lab/anna-katerina-hadjantonakis) and a recent visit coincided with the publication of an article by Leslie Vosshall on the current state of scientific publishing and the effect it has on careers and, overall, on the field (www.fasebj.org/content/26/9/3589.full). Leslie kindly made time to see me and we had a good exchange during which we shared our views of the problem and of ways towards solutions. She proposes some in her article by making a rational appeal to the common sense of authors, reviewers and editors, and while I agree with much of what she says, my feeling is that what we need is a global, active engagement of the community to shape a peer review and publishing system that is more in tune with the times and stops the increasing deterioration of the essence of our profession. As Leslie puts it, "scientific publishing is an enterprise handled by scientists for scientists, which can be fixed by scientists". It is strange that, for all that we complain about the system, we do very little to implement actual change. We need to act, and to do this by getting actively engaged in, first and foremost, giving our views on the system. A global engagement is important because only in this manner can we implement change.

Any discussion of the current state of scientific publishing revolves around four interrelated issues: Open Access, Impact Factor, the mechanics of Scientific Publishing itself and, of course, Peer Review. I am tempted to comment on all of them but, because of my conversation with Leslie, will keep the focus on the last two.

It is clear that we have forgotten that scientific publishing is all about OUR work and that, while it is fine that somebody profits from it, what cannot happen is that these people determine the rules of the game. I have written before about Nature setting up rules about authorship and contributions (What’s in an asterisk: the power of Nature), but this is not restricted to Nature, and it surprises me that we just go along with it even though Nature, on its website, is not very clear about this and works by stealth. But this is just one example of how far we are allowing the system to go, how far we are allowing the publishers to play us. There are other examples. For instance, how is it that, in the age of e-publishing, many journals have an acceptance rate of less than 10%? How can this be justified? Do they not agree that there is a further 10% -or even 20%- of manuscripts that could make it…if they had space? And they have space, e-space! What is the real reason for these quotas? The reason is not to help the science, and certainly not the scientists. The reason that comes to mind is that such a quota plays into the hands of the Impact Factor (IF): it ensures that publication in those journals is artificially (and I mean artificially) difficult, so that the journal will receive papers of higher and higher quality -though this does not necessarily mean that they are good science. This, of course, has a circular effect on the impact of the IF and on the scientists. The arbitrariness of this situation, and what could be done about it, is made clear by a comparison of the research articles published in the March 2013 issues of PLoS Genetics (78) and PLoS Biology (14). They are both journals from the same publisher, both pride themselves on serving the community, have similar IFs (11.45, on 180 articles in 2011, for PLoS Biology vs 8.69, on 548 articles in 2011, for PLoS Genetics), and both look at themselves as flagships of a scientist-based movement.
What is the reason for this difference? It could be claimed that PLoS Biology was set up to compete with the higher end of the market, but I would have thought that the ambition would be to change the way scientific publishing works and not to be changed by it. Is it that PLoS Genetics is not an ‘elite’ journal? What kind of elitism does PLoS Biology aspire to? The reason for the difference is, of course, the editors and their policies. One, PLoS Genetics, understands the role of the journal; PLoS Biology, on the other hand, has become a victim of what it tried to avoid. And this is the point: many journals set up an arbitrary barrier to publication because publishers believe that a way to raise the IF is to reject more papers. It is clear that this is not the case: the difference in IF between the two PLoS journals is not commensurate with their different acceptance rates, i.e. the impact factor of PLoS Biology is not 5.5 times higher than that of PLoS Genetics; in fact, it is 1.3 times higher. Is this worth it? What keeping acceptance levels below 10% really achieves is to make life difficult for us, the scientists. It does not improve the IF of a journal. A paper is not a grant: within a grant committee there is a fixed pot of money and therefore a limit to what you can fund; many good grants cannot be funded, and the decision is agreed to be difficult and often arbitrary. This is not the case for an e-journal. There is room. There is no justification not to publish good papers.
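The arithmetic behind this argument can be checked in a few lines, using only the figures quoted above (the March 2013 article counts and the 2011 impact factors); this is a back-of-the-envelope sketch, not official journal data:

```python
# Figures quoted in the text (assumed accurate as stated there).
plos_biology = {"impact_factor": 11.45, "march_2013_articles": 14}
plos_genetics = {"impact_factor": 8.69, "march_2013_articles": 78}

# If rejecting more papers drove the IF proportionally, PLoS Biology's IF
# would be about 5.5 times that of PLoS Genetics...
selectivity_ratio = (plos_genetics["march_2013_articles"]
                     / plos_biology["march_2013_articles"])

# ...but the actual IF ratio is only about 1.3.
if_ratio = plos_biology["impact_factor"] / plos_genetics["impact_factor"]

print(f"article-count ratio: {selectivity_ratio:.1f}")  # ~5.6
print(f"IF ratio: {if_ratio:.2f}")                      # ~1.32
```

The mismatch between the two ratios is the whole point: publishing roughly a fifth as many papers buys only a ~30% higher impact factor.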

All these are facts to ponder and act upon by forcing journals to change their habits. How do we do this? By moving to journals that are willing to listen to us and that are actually there to do their job: help us put out our findings in an efficient, transparent manner.

The real problem right now, the one we need to address urgently, is, as highlighted by Leslie Vosshall, what she calls the ‘glacial pace’ of the reviewing process. Here we are all in it together. This is a state of affairs (for another succinct review of where we are on this and how we have come to it, read the second half of the recent posting by Mike Eisen: www.michaeleisen.org/blog/?p=1346) which involves all of us as a community. We are all victims and perpetrators of the crime because, let us face it, the way we handle papers as reviewers is close to a ‘criminal’ activity masterminded by the editors, whom we allow to use us to perpetrate the ‘crimes’. I shall state up front that I believe in peer review. I also believe in free commenting on papers after publication (and not just in journal clubs, but in blogs and journals), and that the point is not, as advocated in many sites and places these days, to remove all barriers to publication, but to adapt the system to the times and to get it to work for us, not against us. Remember: the peer review system is us and we need it, but not the way we are doing it now. The mystery of the situation is how we have allowed journals to use us to make life so difficult for ourselves. While the practices that we all know too well are most prevalent in high-IF (HIF) journals, they increasingly operate in aspiring HIF journals, and things are not getting better. But rather than ranting about this, it is good to state what we want: a fair, expeditious, transparent reviewing process. Some journals, most notably EMBO J and eLife, are taking steps in this direction and it is difficult to understand why other journals do not want to follow. As I have said before, many of the practices of EMBO J and eLife should be adopted by other journals.

There are other actions that we should aim for. Something that we need to consider is waiving the anonymity of the reviewing process. I have heard many reasons for keeping this as it is, some of them worth considering, but it seems to me that in most cases the driving force behind keeping the reviews anonymous is the one behind any such activities: being anonymous allows one to be unreasonable and, sometimes, to slander the author to the editor (and this can be done in a scientific manner). Why is it that we are not capable of standing up for what we say, that we are not capable of signing our statements? I have mentioned before an important difference between PLoS Genetics and PLoS Biology; here is another one germane to the matter we are discussing. PLoS has this important and interesting notion of the academic editor, someone who interfaces with the non-academic editor in the reviewing process. In PLoS Biology the academic editor is made known to the authors (and the public) ONLY upon successful publication of the work. Interestingly, in PLoS Genetics the academic editor is made known to the authors whether the manuscript is successful or not. I have never heard people complain about this, and for this reason it is another argument against anonymity. Furthermore, there are many journals that are academically led and where the authors know the editors. I have never heard that this causes big problems with authors; I have never heard of arguments and discussions. People do not refuse to be editors in journals. Why, then, the anonymity with the reviewers? Anonymity will (and should) always raise suspicions. It is not going to be easy to remove this from the process, there is too much inertia, but a discussion of this issue should definitely be part of the transparent process. If we are not to waive anonymity, we need to change and improve the way the reviewing process works, and here there is no substitute for good editorialship!

What else can we do? Actions

The call to responsibility that Leslie Vosshall has made is very good, but we also need to move into action, some of it structural, some of it more personal. This is going to be one long slog but, like many other people, it seems to me that we need to start acting. Here I would like to expand on Leslie’s suggestions with some aspects of the process we need to recognize and, below, some actions that we can begin to take individually:


We should avoid journals that have long decision times, multiple rounds of review and artificially low acceptance rates (there is no reason for any of this in today’s e-world) and submit our work, preferentially, to journals with a strong academic base or with a transparent review process a la EMBO J and eLife.

It would be good if people started posting reviews on their websites. EMBO J does it routinely, but only with papers that are accepted. One hears a lot about unfair reviews; OK, let us see them. One note, though: check with the journal that you can do this so that you do not get into legal wrangles; journals control even what we can do with the feedback we receive from them.

Refuse to review papers more than once; let us make sure that editors do their job properly and make decisions without hiding behind the reviews.

The issue of prepublication is an interesting one that is being discussed much at the moment (more out of desperation, I would think, than anything else), but the notion of pre-publishing a paper for comments before or while it is being peer reviewed is a good one that works in other fields and might begin to place the impact where it belongs.

At a broader level:

Remember that the decision to accept or reject a paper is not in the hands of the reviewer but in the hands of the editor. So, when you write reviews do not act as an editor. NB this has been said many times in different ways and we keep on forgetting it.

The role of the reviewer is not to improve the paper but to assess how reasonably the conclusions of the paper follow from the data. Then the editor has to earn her/his salary. Most reviews these days are just long lists of experiments and reasons why the work is not complete rather than an actual appraisal of the paper. The often conflicting reviews that we all receive on an interesting piece of work are a good indictment of this. Editors use the ‘third reviewer’ to reject the paper.

The decision for publication should lie with the editor and this is very important because there is no reason in a sensible world why there should be more than one round of review. See EMBO J and eLife.

It would be good to have a ranking of editorialship in which we rate editors and journals on the way they handle manuscripts.

Something for editors: scientists should be treated scientifically. Some editors don’t do this and, in some instances, fall back on the reviewers and are incapable of an articulate defense of their decision. I had a series of interesting exchanges with a senior editor from an “aspirational HIF journal” in which, in response to every point I tried to raise, all I was given in exchange was that the academic they were being advised by knew a lot more than I did about the field……ah, and that they (the editors) were confident of their decision-making process…..this after allowing me a 25-page rebuttal of a review!

Remember, as Vosshall said: this “is an enterprise handled by scientists for scientists, which can be fixed by scientists”.

What’s in an asterisk: the power of Nature

A recent decision by Nature to restrict the number of joint first authors to three and not to allow joint senior authors (see *) is another step by this (and other HIF) journals to accumulate power over the running of science. They already influence decisions about positions and grant income through what, and whom, they allow to publish; they also determine the content and the timing of publications through the lengthy review process. It could be said that these effects are indirect and that we contribute to the mechanisms that foster this meddling in our affairs, but their new policy is unilateral, an editorial policy, and can have very direct effects on the careers of young scientists. It seems to me that we should not allow this to happen.

As the requirements for a publication in HIF journals become increasingly unreasonable (there are increasing comments on the web about this), a manuscript often requires a blend of diverse skills and techniques, sometimes from different groups. In these cases, the assessment of the individual contributions of different authors becomes a challenge for the senior authors. Over the years it has become customary to use asterisks to call attention to the fact that “these authors have made equal contributions to the work”. As the complexity of the work and the number of rounds of review increase, it is not unusual that the number of equal contributions increases proportionally, and it is not uncommon, particularly in HIF publications, to see three, four or five asterisks. The value of this to the authors, usually postdocs on the verge or in the midst of a job application, is that in their CVs they can place their name first in the publication reflecting their work. Needless to say, until we change our scale of values, in a paper in Nature this is important. The decision on how many asterisks, and the order, falls to the PIs in discussion with the team.

It is with surprise that I see Nature decide to meddle in a matter that is not within its power. Who are they to impose upon us how we evaluate the contributions of members of our labs to our work? Why should we allow this? What Nature should do is publish, and dictate neither the content nor the authorship of what it publishes. This is one step too far in a process that we, as the authors who provide the content of their business, should control more closely. We have already allowed the evolution of a deeply flawed peer review process and, through ill-judged greed, have allowed the development of a hierarchy which rewards publication branding over content. If we do nothing and allow this policy on the number of asterisks to stand, we shall continue to grant these journals increasing powers that in the long term we shall regret. The content and authorship of a paper are not matters for the journal, and we should not relinquish our responsibilities here and give journals more power over issues that are so important to us.

The same applies to senior authors. Again, Nature is not going to allow joint senior authorship, and this will harm junior group leaders and faculty members in publications with more established scientists.

This development on the side of Nature is dangerous. How much power are we willing to give to the journals? I do not, in principle, have anything against publishing in them, but what I believe is that it should be US who determine their policies and not THEM who determine ours. They live off our toiling and therefore WE SHOULD TELL THEM how we want the system to be and not the other way around.

I have said it before and will repeat it here: all the buzz about Open Access, important as it is, is drawing attention away from the issues of peer review and the influence of journals on careers, which are destructive and morale-grinding.

* this is Nature’s statement:

Nature requires authors to specify the contribution made by their co-authors in the end notes of the paper (see section 5.5). If authors regard it as essential to indicate that two or more co-authors are equal in status, they may be identified by an asterisk symbol with the caption ‘These authors contributed equally to this work’ immediately under the address list. If more than three co-authors are equal in status, this should be indicated in the author contributions statement. Present addresses appear immediately below the author list (below the footnote rule at the bottom of the first page) and may be identified by a dagger symbol; all other essential author-related explanation is in the acknowledgements.

More on this to follow but any comments will be appreciated.


There have been some comments on Twitter concerning my posting on Nature’s authorship policy (http://amapress.gen.cam.ac.uk/?p=928). Here I want to clarify two things rapidly.

1) The original quotation about authorship is taken from Nature’s website: section 5.2 in www.nature.com/nature/authors/gta/index.html. So, while the source was not cited, I would have thought Nature knows its own policies. A second statement appeared online which supports my understanding: www.nature.com/news/in-search-of-credit-1.12117: “our policy is to allow no more than three authors in first and last positions on a paper”.

2) The second statement also claims that they will allow three joint senior authors, which is contrary to what I have heard. I await clarification on this matter.

All this having been established, I remain concerned that we allow publishers to determine credits and authorship for our work.

On Scientific Publishing 2

I have been very surprised by the scant response Jordan Raff’s editorial has generated. The reason is that, wherever one goes, one always finds people ready to spend a fair amount of time talking about the issues of publishing and peer review, so the lack of a response cannot be due to a lack of interest in the subject. Yes, the web has, as Peter Lawrence points out, many comments on the matter, but few are from what one could call the grass roots of the scientific community. The Node has opened an opportunity for this community, but there has been no response.

I can only think of two possible explanations. The first one is that, as a community, we just do not believe that we have a say in how the system that we depend on works. Basically, that we are resigned to, and thereby accepting of, the current situation. I suspect that there is a lot of this, but it is not good and it would be a small achievement if we could begin to speak up. It is a matter of ‘speak up or shut up’. The second possibility is that we have not caught up with the way ‘things’ work today. This is a pity because the journals and the science policy makers have caught up with web 2.0 and related means of communication, and this leaves us without an effective voice against theirs. It would be good if people would see that postings and all the paraphernalia that goes with the internet are a way of bringing a voice to the people and, in science, to the base community that provides the essence of the system. In any event, it is very disappointing, but this should not stop some of us from trying to stir up some activity.

Over a year ago Michael Eisen, in his blog (it is not junk), published a piece (http://www.michaeleisen.org/blog/?p=694) in which he gave his views on the then emerging problem. The piece was good but, as some of the respondents said, it became a bit of drum-beating for PLoS and, while some of the PLoS journals are fine (nobody will deny the impact of PLoS ONE), others have become competitors and imitators of that which they set out to oppose, i.e. they do what the big journals do: use the community to develop the PLoS business model rather than serve the original aim. Still, I would agree that some PLoS journals (and I like PLoS Genetics for reasons that I should explain here some day) do make a good contribution to the community, if not to the debate. But on the issue of peer review, which is very much at the heart of many of the publishing troubles, they are no better than their competitors (and are showing all the symptoms of catching the NCS disease). I continue to suggest that they should look up to and adopt the policies of EMBO J.

All this is a long preamble to say that the piece did not go very far. It stimulated a healthy discussion of Michael Eisen’s views (and this is probably the point of a blog) but it did not go further.

It is time that we find a voice that allows our ideas to change the system.