Question for the Fediverse hive mind. Is there any evidence that #AI models' hallucination rates are getting ANY better over time?

I'm wondering if a 5 to 15% hallucination rate may just be the nature of the beast with LLMs and an unsolvable problem.

@tchambers They absolutely are getting better, but also yes, it is an unsolvable problem... mainly because they aren't designed to be fact machines, so we need to stop treating them like they should be.

@LouisIngenthron @tchambers Even the most intelligent human can honestly reply to a query with "I don't know", which is something an LLM can't (honestly) do.

It's not that anyone is expecting "fact machines"; they're expecting "honest machines", which is impossible.

@JustinH @tchambers Fwiw, they absolutely could be trained to respond that way, but "I don't know" doesn't fit the specs of the people buying it. That's not a restriction of the algorithm; it's a business restriction.

@JustinH @tchambers But more importantly, answering "I don't know" first requires you to know if you know, and these things don't know that, or anything else. They're simple pattern matchers. Trying to act like they know *any* facts (even facts about what they know) is giving them more credit than they're due.
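To make that point concrete, here is a toy Python sketch of confidence-gated abstention, the kind of tuned "I don't know" behavior described above. It is purely illustrative: `answer_or_abstain` and the probability dictionaries are hypothetical, not any real model's API, and the threshold is only a statistical proxy rather than genuine self-knowledge, which is exactly the objection in the previous post.

```python
# Toy illustration (not any real model's API): abstain when the
# model's own answer distribution is too flat to pick a clear winner.

def answer_or_abstain(answer_probs: dict[str, float], threshold: float = 0.7) -> str:
    """answer_probs maps candidate answers to model-assigned probabilities."""
    best_answer, confidence = max(answer_probs.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        # Low confidence is just a weak statistical signal, not the
        # model "knowing that it doesn't know".
        return "I don't know"
    return best_answer

# A peaked distribution answers; a near-uniform one abstains.
print(answer_or_abstain({"Paris": 0.92, "Lyon": 0.05, "Nice": 0.03}))  # -> Paris
print(answer_or_abstain({"1947": 0.34, "1948": 0.33, "1949": 0.33}))   # -> I don't know
```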
