#ChatGPT-likers, especially those of you using it as a learning tool (cc @simon) – how do you square the fact that it's clearly capable of putting out "correct-looking/intelligent-sounding, but wrong" content?
I can't see how you'd ever be able to fully trust that what it's telling you is accurate, unless you're already a domain expert on the topic – at which point, why bother?