Earlier I posted about using ChatGPT's propensity to fabricate citations entirely as a short-term strategy for detecting journal submissions and classroom assignments that had been written by machine.

I've been playing with the system for the last couple of hours, and as best as I can tell, ChatGPT now does a much better job than it did when first released at only citing papers that actually exist.

The citations aren't perfect: DOIs can be wrong, and some references are still fabricated, but most are not.

@ct_bergstrom I tried your example (on sunscreen), and it looks like the only improvement OpenAI made to ChatGPT is, as you implied, the ability to generate more convincing bullshit.

For example, the attached reference to the 1991 Diffey paper gives the impression that the paper focused on the damage caused by UV radiation. Instead, the paper examined the public health consequences of the divergence between perceived and actual sunscreen protection.


@ct_bergstrom Shit, this is even worse. This seems like a good counterargument to the idea that ChatGPT's fabrications don't cause harm.

ChatGPT cites a paper (Gandini) to support the argument that there was no association between sunscreen use and melanoma. I couldn't find a single mention of sunscreen in the paper (though I might have missed it because it was a scanned version), other than in the references.

Qoto Mastodon
