AI “hallucinates” (it makes up information, including citations). I’m wondering how this is different from humans creating conspiracy theories.

@garyackerman It's different in that LLMs have no mind to speak of. They construct plausible sentences based on linguistic probabilities. They're not aware of the meaning of what they construct. People may not fully understand the topics they talk about, but they're at least aware of their immediate meaning.

@dragfyre Yes, but the fact that humans are “aware of their immediate meaning” doesn’t make me feel better about humans who spout quackery. :)
