There is, I think, indeed a philosophical question lurking there. ChatGPT's "understanding" is **implicit**, derived from the understanding embodied in its training data. I think this kind of superposition is not something we have considered deeply before. I hope to get around to writing a bit on that at the #SentientSyllabus project ... https://sentientsyllabus.substack.com
@boris_steipe Interesting. Are you suggesting that ChatGPT possesses a kind of ‘understanding’ inherited from the understanding encoded in its training data? A sort of understanding-once-removed, as it were? I’m starting to wonder to what extent we can really attribute truth or falsity to GPT’s remarks, or whether to treat everything it produces as fictional, some of which happens to coincide with the actual world.