'Imagine my surprise when I received reviews on a submitted paper declaring that it was the work of ChatGPT. One reviewer wrote that it was “obviously ChatGPT”, and the handling editor vaguely agreed, saying that they found “the writing style unusual”. Surprise was just one emotion I experienced; I also felt shock, dismay and a flood of confusion and alarm. Given how much work I put into writing, it was a blow to be accused of being a chatbot — especially without any evidence.'

nature.com/articles/d41586-024

@cyrilpedia

Wow. I disagree with the decision by journals to bar authors from using ChatGPT to help convey their scientific discoveries more clearly.

I don't understand how it's any different from hiring an editor, something many journals recommend to authors of poorly written articles. Sure, ChatGPT might make something up, but a scientific editor can similarly misunderstand the original draft and write something nonsensical.

Either way, it's up to the author to validate the product.


@MCDuncanLab @cyrilpedia I can think of lots of reasons not to let the Stochastic Parrot anywhere near the scientific publishing system. With all the litigation about copyright infringement by ChatGPT in their unethical scraping of the web, I can imagine editors would want to steer well clear of any future legal issues, as do I!

@MCDuncanLab @cyrilpedia we don’t allow plagiarism as a strategy to help convey our scientific discoveries more clearly. To me ChatGPT is much more similar to plagiarism than to hiring an editor—conflating the two ignores the fundamentally extractive and exploitative nature of how ChatGPT was built. Plus there is a real risk of plagiarizing with ChatGPT! Of course ChatGPT makes stuff up as you note, but it can also just spit out training data, aka other people’s words.

@MCDuncanLab @cyrilpedia and as a human being I hate the thought of the “ChatGPTification” of our writing and communication styles. ChatGPT is wordy, bland, and lacking insight. It may be fine for mimicking corporate-speak in mundane emails, but I don’t want that anywhere near the creative and scholarly process of academic writing.

@askennard @cyrilpedia

I am more concerned about the rampant plagiarism of ideas and the ignoring of prior work in the field than about some struggling author who describes their novel findings using words first assembled by another author.

The former does actually hurt the victim.

I fail to see the hurt of reusing phrases such as 'Macroautophagy, hereafter referred to as autophagy'

It's just that it's easier to prove that someone used words without attributing sources than to prove that someone stole an idea.

@askennard @cyrilpedia

That defense is about protecting butts, not conveying science more clearly.

I am pro-conveying science more clearly, and if AI is a good tool* to do that then yay!

*that's a big if. Considering the possibility of inadvertent plagiarism, I wouldn't recommend authors use it at this point beyond getting inspiration for a particularly hard-to-convey concept.

@MCDuncanLab @cyrilpedia I disagree that it is solely about protecting companies from liability. The folks whose writing was used without consent to build the training corpus for ChatGPT have a legitimate interest in holding OpenAI accountable. I hope they succeed, and if so then I hope that publishers do the responsible thing and avoid liability. This is interest convergence!

Qoto Mastodon