“Described as hallucination, confabulation or just plain making things up, it’s now a problem for every business, organization and high school student trying to get a generative AI system to compose documents and get work done.” https://fortune.com/2023/08/01/can-ai-chatgpt-hallucinations-be-fixed-experts-doubt-altman-openai/
@lilithsaintcrow And it's going to get worse very rapidly as the public internet is contaminated with this wordwooze, and new generations of LLM are trained on skimmed internet text contaminated with this semantic diarrhoea (feeding GAN output back into a GAN rapidly causes it to degenerate). Our LLMs may already be at their zenith in terms of quality.
@mwl @cstross @lilithsaintcrow That's the first time I've seen someone say they can't find information because of AI-generated noise swamping the signal. I'm sure it won't be the last.
Out of interest, how do you tell the difference? I'm sure some of it makes elementary errors, but some presumably doesn't, while still not being trustworthy. Or is that the problem?