maybe this could be avoided if academic papers came with some sort of short summary. if this were provided by the authors, then we wouldn't need to rely on expensive and unreliable tools to generate them. think how convenient it would be if the authors put these summaries right near the start of the paper!!

sigarch.org/the-role-of-llms-i

@regehr So many times when people point out the high error rates, the response is basically “well, our choices are AI or make no changes at all. And AI is better than nothing.” That is, supposedly there is no alternative (like structuring the work differently, compensating people instead of relying on unpaid labor, etc.).

But look at how many words this person has to use to say “it’s better than nothing.”

“The idea of reviewers using LLMs raises legitimate concerns. Chief among them is the fear that reviewers will rely on AI to write reviews, rather than reading the paper carefully. That undoubtedly can be problematic. But the alternative often produces worse outcomes: shallow or disinterested reviews that authors and reviewers alike regret.”

@regehr The other terrible thing that article does (and many others do) is make vague claims about “the alternative being worse” without ever getting into any specifics on “worse.” What are we comparing? They never compare error rates of people to machines. They just insinuate that error-prone machines are still somehow better than people.

@paco @regehr god the more carefully I read this the dumber it gets

"Thinking ahead, as AI agents become better, I believe a lot of the work of “herding” reviewers (ensuring reviews come in on time) can be done by AI agents. Imagine an AI helper for PC chairs that automatically emails reviewers when reviewers are late, or urges reviewers to come to a conclusion in online discussions. "

Like you can fucking do all of this already and you don't need AI for it, but the article treats it as if it's the hard part, after doing the easy part of ensuring that reviews are high quality.
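Just to make the point concrete, here is roughly what that "AI helper" boils down to. This is a hypothetical sketch, not anything from the article; the CSV layout, SMTP host, and addresses are all invented for illustration.

```python
# Hypothetical sketch: nag late reviewers with a plain cron job, no AI involved.
# Assumes a CSV like: email,paper_id,due_date,submitted  (all names invented here).
import csv
import smtplib
from datetime import date
from email.message import EmailMessage

def nag_late_reviewers(csv_path: str, smtp_host: str = "localhost") -> None:
    today = date.today()
    with open(csv_path, newline="") as f, smtplib.SMTP(smtp_host) as smtp:
        for row in csv.DictReader(f):
            overdue = date.fromisoformat(row["due_date"]) < today
            if overdue and row["submitted"].lower() != "yes":
                # Plain templated reminder email; a cron entry handles the scheduling.
                msg = EmailMessage()
                msg["From"] = "pc-chair@example.org"
                msg["To"] = row["email"]
                msg["Subject"] = f"Reminder: review for paper {row['paper_id']} is overdue"
                msg.set_content("Friendly reminder that your review is past its deadline.")
                smtp.send_message(msg)

if __name__ == "__main__":
    nag_late_reviewers("reviewers.csv")
```

Run something like that from cron once a day and you have the "herding" agent, no LLM required.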


@ricci @paco @regehr Really annoying that now everything is branded AI. Look! You can automatically send emails!! AI!!111

My main issue with LLM use in reviewing or marking is that those are not tasks LLMs were ever built for. Sure, you can use a hammer to put jam on your toast, but the end result won't be that great... I've tried asking ChatGPT to review one of my papers.
Apart from the annoying sycophancy, the suggestions were either very generic to the point of being useless, outright wrong, or OK but not really that relevant within the context of the study.

I'm all for using AI (which, by the way, is not just LLMs...) in situations where it actually helps. Reviewing papers is not one of them.


@nicolaromano @ricci @paco I haven't tried getting an LLM to review one of my papers, but I did use Google's service that turns a paper into a podcast. It was, as you say, both sycophantic and superficial.
