I agree that confabulation/hallucination/lying is a huge problem with LLMs like ChatGPT, Bard etc.

But I think a lot of people are underestimating how difficult it is to establish "truth" around most topics

High quality news publications have journalists, editors and fact checkers with robust editorial processes... and errors still frequently slip through

Expecting an LLM to perfectly automate that fact-checking process just doesn't seem realistic to me

What does feel realistic is training these models to be MUCH better at providing useful indications as to their confidence levels
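As a rough sketch of what I mean (purely illustrative, not how any of these products currently work): if a model API exposed per-token log probabilities, you could fold them into a simple confidence label shown next to the answer. The numbers and thresholds here are made up for illustration.

```python
# Minimal sketch: turning per-token log probabilities (if the model API
# exposes them) into a rough, human-readable confidence signal.
import math

def mean_confidence(token_logprobs):
    """Average per-token probability as a crude confidence proxy."""
    probs = [math.exp(lp) for lp in token_logprobs]
    return sum(probs) / len(probs)

# Hypothetical logprobs for the tokens of one generated answer
example_logprobs = [-0.02, -0.15, -1.9, -0.4, -0.08]

score = mean_confidence(example_logprobs)
if score > 0.9:
    label = "high confidence"
elif score > 0.6:
    label = "medium confidence"
else:
    label = "low confidence - verify before trusting"

print(f"confidence score: {score:.2f} ({label})")
```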

The impact of these problems could be greatly reduced if we could somehow counteract the incredibly convincing way these confabulations are presented

I also think there's a lot of room for improvement here in terms of the way the UI is presented, independent of the models themselves

@simon they’re entirely *generative* no? (They construct any answer as they go, is that right?) That doesn’t seem even lined up for truth, which on most accounts has required at least some element of “checking to see”. 🤔

@carlton it's weird how good they are at "truth" though - and how much they've improved. GPT-4 makes things up far less frequently than GPT-3 in my experience

Turns out statistics can get you a really long way!


@simon @carlton but statistics cannot give you intent. And without intent, it is just chatter
