**(Ir)rationality and cognitive biases in large language models**

"_First, the responses given by the LLMs often display incorrect reasoning that differs from cognitive biases observed in humans. This may mean errors in calculations, or violations to rules of logic and probability, or simple factual inaccuracies. Second, the inconsistency of responses reveals another form of irrationality—there is significant variation in the responses given by a single model for the same task._"

Macmillan-Scott, Olivia and Musolesi, Mirco. 2024. (Ir)rationality and cognitive biases in large language models. R. Soc. Open Sci. 11: 240255. https://doi.org/10.1098/rsos.240255

@ai

@bibliolater @ai minor comment: the LLM data are not being compared to multiple responses by a single person on the same task, as that is not a general feature of the primary human experimental literature involved. So, as far as I can make out, the levels of human self-consistency are simply imputed/assumed. Doesn't mean the difference isn't there, just that the empirical basis seems somewhat anecdotal.
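For readers wanting to see what a per-task self-consistency measurement looks like in practice, here is a minimal sketch in Python. `query_model` is a hypothetical stand-in for any LLM API client (it is not the paper's actual harness, and here it merely simulates stochastic answers), and the modal-agreement rate is one illustrative metric among several reasonable choices.

```python
import random
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call.
    For demonstration it simulates a stochastic model that
    usually answers 'A' but sometimes gives other responses."""
    return random.choice(["A", "A", "B", "C"])

def self_consistency(prompt: str, n_samples: int = 20) -> float:
    """Sample the same prompt repeatedly and return the fraction of
    responses matching the modal answer. 1.0 means perfectly
    consistent; values near 1/k (for k answer options) suggest
    near-random responding."""
    responses = [query_model(prompt) for _ in range(n_samples)]
    modal_count = Counter(responses).most_common(1)[0][1]
    return modal_count / n_samples

if __name__ == "__main__":
    print(self_consistency("Is the 'bank teller' option more probable?"))
```

Repeating the same prompt at the model's default temperature makes the "significant variation" claim directly measurable; the corresponding human baseline would require repeated trials per participant, which, as noted above, the primary experimental literature rarely provides.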


@UlrikeHahn @ai I agree with you that such assumptions need an empirical basis.
