Earlier I posted about using ChatGPT's propensity to fabricate citations wholesale as a short-term strategy for detecting journal submissions and classroom assignments that had been written by machine.
I've been playing with the system for the last couple of hours, and as best as I can tell, ChatGPT now does a much better job than it did when first released at only citing papers that actually exist.
The citations aren't perfect (DOIs can be wrong, and some references are still fabricated outright), but most point to papers that really exist.
@ct_bergstrom Shit, this is even worse. It also seems like a strong counterargument to the claim that ChatGPT's fabrications are harmless.
ChatGPT cites a paper (Gandini) to support the claim that there was no association between sunscreen use and melanoma. I couldn't find a single mention of sunscreen anywhere in the paper except the reference list (though I may have missed one, since it was a scanned copy).