💻 **Rates Of Hallucination In AI Models From Google, OpenAI On The Rise**

"_In a recent study, it was found that two recent OpenAI models, o3 and o4-mini, hallucinated in 33% and 48% of answers, respectively, according to The Times of London. These percentages are more than double those of previous models._"

🔗 finance.yahoo.com/news/rates-h.

@ai

@bibliolater I feel it would be better to stop using the term "hallucinations" when we're dealing with very large, though always restricted, stochastic models.

Anthropomorphizing these technologies only reinforces the propaganda.

@bibliolater Good question...

Deficient modelling? Pretended intelligence? Non-factual production? Empty form?

@bibliolater Fabrications is a good one - that resonates with "fake" and "ersatz".


All these effects can be traced to mechanistic emulations of things we may observe in the real world, and a wealth of research into stochastic modelling exists, waiting to be used by decision-makers and by those who educate the public. Philosophy also has a huge reservoir of insights into the limitations of language and meaning, and psychology can help us understand the effects of consuming fabricated text.

@bibliolater we shouldn't forget, however, that 100% of LLM output is fabrication, or, to use a loaded German term, Ersatz.

@tg9541 I think the stumbling block will be wider acceptance and use. I personally like "algorithmic fabrications", but trying to get reporters to use such a term may be difficult.


@odd @tg9541

So true, we are assigning value judgements such as truth and falsehood to a system that has no way of judging such matters.
