I haven't checked this thoroughly, but another way to tell real from hallucinated sources in #ChatGPT may be to look for DOIs.
A quick scan of the experiment I did yesterday shows a mixture of references with and without DOIs. The DOIs I checked link to legitimate articles. Whether the articles actually support the point they were cited for is another matter.
None of the hallucinated references, by contrast, seem to list DOIs.
So that might be something.
#AcademicMastodon
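For anyone who wants to run the check themselves: a minimal sketch, assuming Python with the requests library. The public doi.org resolver answers registered DOIs with a redirect to the publisher's landing page and unregistered ones with a 404, so a HEAD request is enough to separate the two piles.

```python
import requests

def doi_is_registered(doi: str) -> bool:
    """Ask the public doi.org resolver whether a DOI exists.

    Registered DOIs answer with a redirect to the publisher's
    landing page; unregistered (possibly hallucinated) DOIs get a 404.
    """
    resp = requests.head(
        f"https://doi.org/{doi.strip()}",
        allow_redirects=False,
        timeout=10,
    )
    return resp.status_code in (301, 302, 303)

# Watson & Crick 1953 is a real DOI; the second one is hypothetical
# and should not resolve.
for doi in ("10.1038/171737a0", "10.0000/no.such.doi"):
    print(doi, doi_is_registered(doi))
```

Of course, a resolving DOI only tells you the identifier is registered, not that it belongs to the article the citation claims. That is where the spot-checking comes in.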
@boris_steipe
I'm sure that if I asked it to provide DOIs for its invented references, it could make something up. That wasn't my point. Rather, when not instructed otherwise, it produced a mixture of references, but only provided DOIs for legitimate sources.
Would that be a surefire way to detect ChatGPT-generated text in student work? Of course not. But it might help narrow down which sources to spot-check.
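For the spot-checking itself, a follow-up sketch under the same assumptions (Python, requests): Crossref's public REST API returns the metadata registered for a DOI, which can be eyeballed against the reference ChatGPT printed. Crossref covers most journal articles, but DOIs registered elsewhere (e.g. with DataCite) are real yet absent from this API, so a miss here isn't proof of hallucination on its own.

```python
import requests

def crossref_metadata(doi: str) -> dict | None:
    """Fetch the metadata Crossref has registered for a DOI.
    Returns None if Crossref has no record of it."""
    resp = requests.get(f"https://api.crossref.org/works/{doi.strip()}", timeout=10)
    if resp.status_code != 200:
        return None
    msg = resp.json()["message"]
    return {
        "title": (msg.get("title") or ["?"])[0],
        "journal": (msg.get("container-title") or ["?"])[0],
        "year": msg.get("issued", {}).get("date-parts", [[None]])[0][0],
        "authors": [a.get("family", "?") for a in msg.get("author", [])],
    }

# Compare this against the citation the model actually produced.
print(crossref_metadata("10.1038/171737a0"))
```

If the registered title and authors don't resemble the citation at all, the model has likely attached a real DOI to an invented reference, and that source goes to the top of the check-by-hand pile.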