I have been considering publishing my next article in #eLife
I was not very convinced by their new #publishing method, but the more I think about it, the more I like it.
What convinced me was thinking about the way I review papers myself. I won't ever reject a paper unless there is something majorly wrong, e.g. from an ethical point of view. Instead, I would rather spend time giving constructive and realistic feedback to improve the study.
This is for two reasons:
1. If the study idea, methodology, etc. is good but maybe missing some key experiment, I think the authors must have put a lot of effort, time and money into producing it. I have been through the "your work is not fancy enough for our prestigious journal" crap enough times that I will not engage in that. Ever. There is no reason your paper should not be published for reporting negative results if the study is well done.
Also, people's jobs and mental health depend on that, which is way more important.
Also, there are plenty of papers in "fancy journals" that are just piles of bs, so I really won't buy into shiny names (I have just spent an entire day trying to run code from several papers published in high-IF journals to no avail...).
2. If the study is poor, it is easy to say: "This is cr*p, straight reject". This just means the authors will submit elsewhere, hoping the next reviewer won't bother reading the paper in depth and will let it through. Even worse, this plays into the hands of #predatory journals. I would rather say the paper can be accepted after all of these major revisions.
The authors get useful feedback on how to improve their study; they might choose not to act on it, but at least I have done my part.
I would be interested in hearing other views on this.
@nicolaromano This is refreshing to hear. I agree that we need to get real about peer review and its limitations. Our papers could all do with some deflation too. Right now any flaws are removed/minimised for fear that this will jeopardise the chance of publication (or negative data is simply left in the drawer). None of that is good for science.
Spot on.
To further add: presently, scientific evaluation for grants, recruitment and career promotion is entangled with paper publishing: everybody claims not to have enough time to read the papers and instead uses the journal name as a proxy.
Now with #eLife there is a one-paragraph assessment that distils, using some controlled vocabulary, what the reviewers, reviewing editor and senior editor (4 or 5 experts in the field) thought of the work, on two axes: strength of evidence (accuracy) and significance of findings (impact).
What's not to like!
Disclosure: I'm a senior editor at eLife - @eLife
#academia #ScientificPublishing #ReviewedPreprints
@albertcardona @nicolaromano @eLife If that is the goal of the one paragraph assessment, we can save ourselves the trouble with the papers. So it sounds to me like another certificate for the work.
@ecological_fallacy @nicolaromano @eLife
If committee members aren't going to read the papers, I'd prefer they read a succinct summary statement by experts who read it carefully rather than use journal names as a proxy.
@ecological_fallacy @albertcardona @eLife Honestly, I don't care too much for the short assessment... however, I think we also have to be realistic and realise that grant and job application panels won't stop looking at publications any time soon. And they are more likely to read 100 paragraphs than 100 papers.
@albertcardona @nicolaromano @eLife Based on the reviewers I've been getting, I question whether these people writing the summary will actually be experts.
Given that, unlike at most other journals, reviewers’ names are disclosed to each other and to the reviewing and senior editors, there’s quite some amount of peer pressure and cross-evaluation to produce sensible, constructive reviews.
In my own personal experience as an author publishing in eLife, I’ve always received useful feedback. Even if at times I didn’t like it, the reviews uncovered biases and blind spots of mine, and for that I am grateful.
@nicolaromano I completely agree. It is also frustrating if reviews are very negative but incorrect in some way; with this method, authors will have a chance to respond and explain their rationale when challenged, rather than just having an editor reject.
@nicolaromano just fyi, eLife still desk rejects a large percentage, which I've heard has gotten higher with the change to the new system.
@notoriousiptg Interesting and maybe not surprising (?).
Yes, but (my understanding was that) in the new world, the #eLife desk rejection would not be about "quality" but rather about "did eLife have anything to say about the paper (good OR bad)", which means (theoretically) that the eLife imprimatur is no longer a signal of "importance" and the desk reject shouldn't be meaningful.
Perhaps @behrenstimb can confirm.
@adredish @nicolaromano @behrenstimb regardless of goals/stated optics, i can't imagine them not refusing review on most submissions without overloading everyone.
Do you know if this is a higher percentage or just more in absolute numbers (because there are now more submissions)? If it is a higher percentage, then there seem to be (at least) two possible explanations:
1) The new policy is encouraging more substandard submissions that warrant outright rejection
2) The new policy has implicitly (or maybe even explicitly) changed editor behavior: editors now think, gosh, if I let this go through it is in some sense published, so I had better shift my criterion for deciding to move to the next step
3) As you mention in another post, editors realize they would overload everyone, so they have shifted their criterion to prevent that
In any case, in the context of the percentage of desk rejects possibly having gone up, I am going to resurface (again) my op-ed on this issue: "Gatekeeping Without Peer Review"
https://drive.google.com/file/d/1xBHB40f1a78s1JS6s-SOR1OH9vXNdjo6/view?usp=share_link
My major point is that more transparency is needed in desk rejections: for example, a clear set of criteria for a given journal, which editors should adhere to, and clear feedback to authors about which criteria were not met when a submission is desk rejected. [I will mention again that this short op-ed was desk rejected at Nature, but they provided a clear reason that was entirely fair. However, eLife, The Chronicle, and Inside Higher Ed did not do so - either no justification or a spurious justification (e.g., the editor just disagreed)]
@albertcardona had suggested eLife desk rejections would only be because of obvious "crank" submissions or clearly flawed submissions. Not because of novelty, impact, etc. Is that the case?
@mtarr @nicolaromano @albertcardona i don't "know" much; all second hand. But I'm sure it's a combination of the points you mention. Perhaps someone who knows things firsthand can comment.
@mtarr @notoriousiptg @nicolaromano
First, let’s wait for data.
Second, here is an example of a desk rejection I just handled: 2 BREs and myself read it in depth, enough to have useful feedback for the authors (even comments with line numbers), so the “desk reject” went along with what is, in all but name, a peer review.
If anything, eLife is doing two levels of peer review: one by 3 or more members of the editorial board (all of us practising scientists), and then one by 2 or 3 external reviewers.
@albertcardona @notoriousiptg @nicolaromano
Agreed. Data would be good; would love to see it. But maybe the desk rejections themselves (anonymized) would also be good to see. You highlight what sounds like a constructive desk rejection (and presumably your feedback was the detail of why the paper wouldn't survive wider peer review). But I still insist this isn't always happening and that claims about lack of novelty etc. are sometimes involved. What is wrong with greater transparency, which would seem to incentivize editors to behave as you describe?
@mtarr @notoriousiptg @nicolaromano
Greater transparency, as long as it is driven by the authors, would indeed be a good thing. As in, give the authors the option to make it public that they submitted a paper to the journal and what the editorial assessment prior to peer review was. As far as I am concerned, authors are free to distribute and publicise the feedback they receive from me as an editor: these aren't secrets.
If transparency on editorial desk rejects were driven and enforced by journals, I can foresee some authors finding it embarrassing or problematic in some way ("not being good enough for X"), which would discourage submissions to the journal. Privacy here enables authors to apply any constructive feedback they receive and resubmit.
@albertcardona @notoriousiptg @nicolaromano agreed. Might be interesting to see how often authors would be willing to make these public. Of course CS does this already with open review for conference papers. I am pretty sure I would make everything public: either it was constructive feedback, so good for us to address, or it was weak feedback and I would want to shine a light on it. Keep up the good fight for good editing!
@albertcardona @notoriousiptg @nicolaromano
And just to emphasize my point on pushing for more transparency: an editor desk rejected my op-ed and then refused to let me use any of their feedback (which consisted of their own personal disagreements with my op-ed, nothing supported or factual) in any further communications or writing.🧐
@mtarr @notoriousiptg @nicolaromano
That seems like a bizarre experience, and I am not sure the editor has a right to ban you from using their feedback. What journal was that?
@nicolaromano Strongly agree! Journal "prestige" is a racket, pure and simple.
@jmtheodor @nicolaromano Read it, loved it, boosted it, replied to it :-)
@mike @nicolaromano oops, accidental post.
@nicolaromano In principle it's a fine system. In practice, they "desk reject" unusual papers, so the gatekeeping is still fierce; but instead of being done by reviewers who are experts and need to talk to each other (which was the great innovation of eLife), it is now in the hands of an editor who may not read the paper thoroughly, may not be a domain specialist, and does not answer to anyone.
@MatteoCarandini True, but that is a problem with essentially all journals. I have also thought of just putting things on bioRxiv and leaving them there, full stop. I don't know if the academic world is ready for that though 😞
@nicolaromano sorry for the late-boosting, but I followed a hashtag and found this convo about eLife - I’m curious, what did you end up doing?
@elduvelle well... things got a little more complicated than I hoped, then the new semester started, so the paper is still in the making, but it's 70% done. And I'm still thinking of sending it to eLife. Hopefully early next year, if nothing else gets in the way!
@nicolaromano fingers crossed for you!
@nicolaromano I guess that with the old model, desk rejection was lower and rejection at a later stage, *informed by reviewers actually taking some time to read the papers*, was higher. When you remove the later stage (no rejection after review, as eLife did), you *necessarily* need to increase desk rejection if you don't want to publish everything and be essentially a preprint server with added reviews. I doubt such a system is better than the old one.
@jorgeapenas Yes, I guess that is another way of viewing it. However, I've also seen editors reject papers without considering the referee reports, sometimes after long and massively expensive rounds of review, so... I'd rather have a desk rejection and be done with it. But that's my personal take.
@nicolaromano I like the approach; it is actually a logical continuation of preprint servers and thus allows for published peer review. I'll have to take a look at some articles.