I find the use of the term ‘hallucinate’ in relation to #ChatGPT curious. As an LLM, GPT doesn’t experience, let alone hallucinate, anything. As far as I can tell, the term is used to mean simply making things up, but since that’s all ChatGPT ever does, alignment with or divergence from the truth is merely a statistical matter based on its training corpus.

Or am I missing something and there is some other sense in which GPT’s inventions are analogous to hallucinations? #hallucination


@keithwilson There is, I think, indeed a philosophical question lurking there. ChatGPT’s "understanding" is **implicit**, derived from the understanding of its training data. I think this kind of superposition is not something we have considered deeply before. Hope to get around to writing a bit on that at the project ... sentientsyllabus.substack.com

@boris_steipe Interesting. Are you suggesting that ChatGPT possesses a kind of ‘understanding’ inherited from the understanding encoded in its training data? A sort of understanding-once-removed, as it were? I’m starting to wonder to what extent we can really attribute truth or falsity to GPT’s remarks, or whether we should treat everything it produces as fiction, some of which happens to coincide with the actual world.
