‘The “hallucinations” of large language models are not pathologies or malfunctions; rather, they are direct consequences of the design philosophy and design decisions that went into creating the models. ChatGPT is not behaving pathologically when it claims that the population of Mars is 2.5 billion people — it’s behaving exactly as it was designed to. By design, it makes up plausible responses to dialogue based on a set of training data, without having any real underlying knowledge of things it’s responding to. And by design, it guesses whenever that dataset runs out of advice.’
https://undark.org/2023/04/06/chatgpt-isnt-hallucinating-its-bullshitting/
@cyrilpedia
As AI is added to drone technology, there’s a reasonable concern that it will “hallucinate” enemies that aren’t there or target imaginary weaponry.