I haven't checked this thoroughly, but another way to tell real from hallucinated sources in #ChatGPT may be to look for DOIs.

A quick scan of the experiment I did yesterday shows a mixture of references with and without DOIs. The DOIs I checked link to legitimate articles. Whether the articles actually support the point they were cited for is another matter.

None of the hallucinated references, by contrast, seem to list DOIs.

So that might be something.

#AcademicMastodon

@yasha

Not how it works. Try specifically asking it for a DOI for a non-existent article. You'll get one. You'll also get a PubMed ID. Or a link to the article on the publisher's site.

Sorry.

@boris_steipe

I'm sure that if I asked it to provide DOIs for its invented references, it could make something up. That wasn't my point. Rather, when not instructed otherwise, it produced a mixture of references, but only provided DOIs for legitimate sources.

Would that be a surefire way to detect ChatGPT-generated text by students? Of course not. But it might help narrow down which sources to spot-check.
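
If you want to automate that spot-check, here's a rough sketch (assuming Python's requests library; the helper name and the sample DOIs are just illustrative, though 10.1000/182 is the real DOI of the DOI Handbook). The doi.org resolver answers a HEAD request for a registered DOI with a redirect, and a 404 for an unregistered one:

```python
# Sketch: check whether DOIs resolve at doi.org.
# A registered DOI gets a 3xx redirect from the resolver;
# an unregistered one gets a 404.
import requests

def doi_resolves(doi: str) -> bool:
    """Return True if doi.org has a registration for this DOI."""
    resp = requests.head(
        f"https://doi.org/{doi}",
        allow_redirects=False,
        timeout=10,
    )
    return resp.status_code in (301, 302, 303)

# Example DOIs (placeholders for whatever ChatGPT produced):
for doi in ["10.1000/182", "10.9999/not-a-real-doi"]:
    status = "resolves" if doi_resolves(doi) else "does not resolve"
    print(f"{doi}: {status}")
```

Of course, a resolving DOI only proves the identifier is registered. You still have to read the article to see whether it supports the claim it was cited for.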
