At the NAS Journal Summit last week, I talked to editors who were afraid that reviewers would start using ChatGPT and similar tools to write their reviews.

It seemed unlikely to me. Why would you accept a review and then cheat on it when you could just decline?

I still don't know the answer, but apparently their fears may be well-founded.

h/t @aidybarnett

fediscience.org/@ukrio@mstdn.s

@ct_bergstrom @aidybarnett Sadly, as "doing reviews" counts toward promotion, I expect it will happen. On the upside, predatory journals can save the effort of sending predatory emails — simply get an LLM to write and review their own papers.

Well damn.

@KiwiskiNZ just pointed out a pitfall that LLMs pose for scholarly publishing that I hadn't thought about before.

Predatory journals will be able to easily fake peer review.

My guess is that authors legitimately won't be able to tell whether their work was reviewed by a not-very-thoughtful person, or by a bot. And there are plenty of both out there!

Because peer review is anonymous, the predatory publishers have perfect cover for using these systems.


@KiwiskiNZ @ct_bergstrom
It could spin off a whole self-contained ecosystem: solicit by email, invent the data, write the paper, write the reviews, present at predatory conferences.
Would it matter if any humans were involved?

Maybe there’s already an invisible virtual publishing world churning away in silico.
