In which the author experiments with ChatGPT as a reviewer.
"However, when asked to suggest more specific improvements, it fails and starts what is often described as hallucinating, the process by which the LLM provides a confident sounding response that is false or unsubstantiated."
#PeerReview
https://www.thelancet.com/journals/laninf/article/PIIS1473-3099(23)00290-6/fulltext
@cyrilpedia … which, to be fair, human peer-reviewers also do :-) :-(
Seeing whether an LLM can follow your work is not the same as having an LLM do your work for you. If an LLM can't follow your reasoning, there's little chance your audience will be able to follow it consistently either.