
'Imagine my surprise when I received reviews on a submitted paper declaring that it was the work of ChatGPT. One reviewer wrote that it was “obviously ChatGPT”, and the handling editor vaguely agreed, saying that they found “the writing style unusual”. Surprise was just one emotion I experienced; I also felt shock, dismay and a flood of confusion and alarm. Given how much work I put into writing, it was a blow to be accused of being a chatbot — especially without any evidence.'

nature.com/articles/d41586-024

@cyrilpedia

Wow. I disagree with the decision by journals that authors can't use ChatGPT to help convey their scientific discoveries more clearly.

I don't understand how it's any different from hiring an editor--something many journals recommend to authors of poorly written articles. Sure, ChatGPT might make something up, but a scientific editor can similarly misunderstand the original draft and write something nonsensical.

Either way, it's up to the author to validate the product.

@cyrilpedia

I wonder if the reviewer comment 'this was written by ChatGPT' is going to replace 'get a native English speaker to edit it'.

Reviewers, please, don't do either.

I get it, some papers are badly written. As a reviewer, you want to help the author convey their science, but either phrase assumes something that might not be true.

Just be factual in your review, 'I struggled to understand the conclusions because the writing was unclear.'

@cyrilpedia

I see some reviewers being very condescending to authors.

In addition to accusing authors of not speaking English, I've seen reviewers say that the writing is 'sloppy' or so bad as to be insulting to the reader.

Reviewers, please don't do that!

It's unnecessary and hurtful. It may make the editor and author less likely to accept your review as unbiased.

I'd advise sticking to the facts: 'I struggled to understand,' 'I was confused,' 'I felt the word choice made it hard to read.'

@cyrilpedia

In my reviews, I often try to give examples of specific points that were unclear, or recurring writing patterns in the manuscript that made it hard to follow.

'On line # it was difficult for me to understand if there was more of X in sample Y compared to Z when reading the text.'

'Throughout the text, new terms are used without first being defined, for example on line # ...'

This takes effort, and I understand some reviewers just want to summarize, but summarize with facts, not interpretations.

@MCDuncanLab As a former editor, I'd add that this is also a failure of journal editors - who should review reviews (and reviewers).

@MCDuncanLab A good editor will step in when the reviewers are taking personal shots, making inappropriate comments, etc. Sometimes this can even undermine a review that makes important points about the work. There's a good case in this episode of the podcast, where then @embojournal editor Karin Dumstrei & I discussed one such case with the authors. embo.org/podcasts/the-band-and

@MCDuncanLab @embojournal This was a case where the tone of the reviewer undermined valid concerns - that could have been phrased in a constructive manner. Thanks to the transparent review files, you can dig into how it played out (I think these are great case studies to discuss with students and postdocs, perhaps even have them re-write certain comments as a workshop exercise). embopress.org/doi/full/10.1525

@cyrilpedia @embojournal

Absolutely, but from my experience as an author and as a reviewer, not all editors are doing that.

The mud-slingers may be big enough names that the editor doesn't want to chastise the reviewer. It's also possible that the editor is too overworked to be handling manuscripts; it takes time and mental energy to ask a reviewer to tone it down. It's also possible that the editor sees nothing wrong with mudslinging.

@MCDuncanLab There are certainly cases where both editors and reviewers fail at their respective roles. I'm a fan of transparent reviews; one experience that came up often at Review Commons was authors and reviewers praising the more positive overall tone of comments compared with conventional journal-based peer review.

@MCDuncanLab I wish I could recall the original source, but someone had commented a while back that it starts from the hypercritical approach nurtured at many lab meetings & journal clubs - the comment was something along the lines of "we are training pitbulls and then are shocked when they tear manuscripts apart".

@MCDuncanLab I'm a big believer in peer review, and having worked at both ends of the process, as a researcher and as an editor, I'd say the majority of reviewers do a difficult, time-consuming job (for free) in the right spirit.

@cyrilpedia @MCDuncanLab

I think this partly springs from an overcorrection - often students, as undergrads, come with an attitude that just because someone said it in a peer-reviewed paper, it must be true. In an attempt to get them to apply more critical analysis to the article in front of them, we reward them for being negative. It takes time and maturity to reach a balanced position where neither positive nor negative is the right answer every time.

How to teach this well is the question.

@IanSudbery @MCDuncanLab Peer review workshops making use of real-world reviews are a good starting point (there is always the individual mentoring, but this is entirely up to each PI). One initiative that I heard from a few labs & really liked is that they discuss preprints in lab journal clubs and then write up the comments to send to the authors.

@cyrilpedia @IanSudbery

I like the idea of using preprints to teach how to evaluate a paper's strengths and weaknesses.

I agree with Ian that one problem with the traditional journal club in teaching critical analysis is that published papers selected for discussion are often excellent; therefore, students often have to nit-pick to find a problem.

@MCDuncanLab @cyrilpedia

Dunno, my lab seems to have a knack for picking papers that are considerably less than stellar, and we often find at least one major flaw. That does not do much to dispel the idea that the purpose of peer review is to demolish papers.

@MCDuncanLab @cyrilpedia I try to finish journal clubs by asking "Despite the flaws we found, do we think we have learnt something new, that we have confidence in being correct, or do those flaws mean that we can't trust any of the conclusions?"

@IanSudbery

I really like this question to show what we can take away from the conversation. One question my PhD supervisor would ask in journal club after we pointed out a potential flaw in a paper was, "How could that lead to a systematic difference between conditions or groups?"

I liked it because there may be many things you can quibble with, but this brought me back to the things that could plausibly change the outcome, and in what way.

@IanSudbery @cyrilpedia

I think a lot of students get this from course-based journal clubs. And in our department, at least, the faculty seem to select important papers in the field--which are generally going to be less flawed.

@cyrilpedia

I am meh about transparent reviews.

From a reader's perspective, I don't think they add much to the actual transparency of the process. Without knowing the manuscript and related literature in detail, reading the reviews and responses does not give me much clarity about whether reviewer #2 raised valid points and was overruled by chums of the author.

From an author's perspective, they don't abolish nasty reviews, and I don't want those reviews publicly available.

@cyrilpedia

Why don't I want nasty reviews publicly available?

For people who might be working on lightning-rod topics, or due to a non-science reason are personally a lightning-rod, I don't want a hyper-critical review to be easily available to bad actors who might quote it out of context to hurt the reputation of the author.

@MCDuncanLab @cyrilpedia I agree with you. I had a particularly nasty review of a manuscript at Review Commons that I did not want permanently associated with the paper because of potential negative effects for the first author down the road. We started the process over at a specialized journal where the reviewers appreciated the work.

@memerman @cyrilpedia

Thank you for sharing, I'm glad you were able to distance the manuscript from that review.

I feel for authors who might feel trapped into publishing in certain journals because of their impact factor, but I disagree with a blanket policy of transparent review.

I feel like I'm a lone voice on this side of the debate.

@MCDuncanLab @cyrilpedia "In addition to accusing authors of not speaking English, I've seen reviewers say that the writing is 'sloppy' or so bad as to be insulting to the reader.
Reviewers, please don't do that!
It's unnecessary and hurtful."

I would go a step further and say that comments like that veer toward a breach of professional ethics, especially when the reviewer alludes to "non native" language use or similar othering.

@MCDuncanLab @cyrilpedia I can think of lots of reasons not to let the Stochastic Parrot anywhere near the scientific publishing system. With all the litigation about copyright infringement by ChatGPT in their unethical scraping of the web, I can imagine editors would want to steer well clear of any future legal issues, as do I!

@MCDuncanLab @cyrilpedia we don’t allow plagiarism as a strategy to help convey our scientific discoveries more clearly. To me ChatGPT is much more similar to plagiarism than to hiring an editor—conflating the two ignores the fundamentally extractive and exploitative nature of how ChatGPT was built. Plus there is a real risk of plagiarizing with ChatGPT! Of course ChatGPT makes stuff up as you note, but it can also just spit out training data, aka other people’s words.

@MCDuncanLab @cyrilpedia and as a human being I hate the thought of the “ChatGPTification” of our writing and communication styles. ChatGPT is wordy, bland, and lacking insight. It may be fine for mimicking corporate-speak in mundane emails, but I don’t want that anywhere near the creative and scholarly process of academic writing.

@askennard @cyrilpedia

I am more concerned about the rampant plagiarism of ideas and ignoring prior work in the field than some struggling author who describes their novel findings using words first assembled by another author.

The former does actually hurt the victim.

I fail to see the hurt of reusing phrases such as 'Macroautophagy, hereafter referred to as autophagy'

It's just that it's easier to prove that someone used words without attribution than to prove that someone stole an idea.

@askennard @cyrilpedia

That defense is about protecting butts, not conveying science more clearly.

I am pro-conveying science more clearly, and if AI is a good tool* to do that then yay!

*that's a big if. Considering the possibility of inadvertent plagiarism, I wouldn't recommend authors use it at this point beyond getting inspiration for a particularly hard-to-convey concept.

@MCDuncanLab @cyrilpedia I disagree that it is solely about protecting companies from liability. The folks whose writing was used without consent to build the training corpus for ChatGPT have a legitimate interest in holding OpenAI accountable. I hope they succeed, and if so then I hope that publishers do the responsible thing and avoid liability. This is interest convergence!

@MCDuncanLab @cyrilpedia "Wow. I disagree with the decision by journals that authors can't use ChatGPT to help convey their scientific discoveries more clearly."

I haven't checked Nature policy but at #PLOS we don't have that rule.

"Either way, it's up to the author to validate the product."

That's the gist of our policy: journals.plos.org/digitalhealt

#ChatGPT #AI #ScientificPublishing

@sfmatheson @cyrilpedia

IP issues aside, I think we agree.

I also agree with @askennard that authors might be opening themselves up to inadvertent plagiarism, facilitating the exploitative practices most (or all) AI companies used to build their LLMs, and, depending on future litigation, putting themselves in legal jeopardy.

However, assuming those issues are resolved, I think authors should be able to use AI tools to improve their writing.

@MCDuncanLab @sfmatheson @cyrilpedia @askennard - thanks for the reminder that these are the central issues… I just don't believe that they can be resolved, and so using these AI tools (for any reason) is unethical and perhaps illegal.

@jpaulgibson @sfmatheson @cyrilpedia @askennard

I think they are resolvable.

I can easily imagine that a company might legally collect enough material for AI tools--e.g. by adopting X's model, where the terms of service allow your posts to be used for AI training. Imagine Microsoft, or Apple, or Google doing this.

Or scientists could donate their works to a non-profit that builds an AI model for scientific publications.

@MCDuncanLab @jpaulgibson @cyrilpedia @askennard Agree the issues are resolvable. And I love the idea of a non-profit that builds AI models for scientific writing!

#PLOS has always supported and provided fully open access, including both the ability and facility for mining of text and data. Fun fact: plos.org is one of the biggest sources of info in Common Crawl, which was used (I'm not clear on the details) in training (ongoing?) of ChatGPT.
