'The “hallucinations” of large language models are not pathologies or malfunctions; rather they are direct consequences of the design philosophy and design decisions that went into creating the models. ChatGPT is not behaving pathologically when it claims that the population of Mars is 2.5 billion people — it’s behaving exactly as it was designed to. By design, it makes up plausible responses to dialogue based on a set of training data, without having any real underlying knowledge of things it’s responding to. And by design, it guesses whenever that dataset runs out of advice.'

undark.org/2023/04/06/chatgpt-

@cyrilpedia

As AI is added to drone technology, there's a reasonable concern that it will "hallucinate" enemies that aren't there or target imaginary weaponry.

@cyrilpedia It's spewing *bullshit* just as it would if the stupidity were genuine (human arrogance) rather than artificial. Which ought to prove that no one fucking needs it, since we have plenty of the genuine variety.
