
(Ir)rationality and cognitive biases in large language models

First, the responses given by the LLMs often display incorrect reasoning that differs from the cognitive biases observed in humans. This may mean errors in calculation, violations of the rules of logic and probability, or simple factual inaccuracies. Second, the inconsistency of responses reveals another form of irrationality: there is significant variation in the responses given by a single model for the same task.

Macmillan-Scott, Olivia and Musolesi, Mirco. 2024. (Ir)rationality and cognitive biases in large language models. R. Soc. Open Sci. 11: 240255. https://doi.org/10.1098/rsos.240255

@ai

@bibliolater @ai minor comment: the LLM data are not being compared to multiple responses by a single person on the same task, as that is not a general feature of the primary human experimental literature involved. So, as far as I can make out, the levels of human self-consistency are simply imputed/assumed. That doesn't mean the difference isn't there, just that its empirical basis seems somewhat anecdotal.
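For concreteness, the model-side measurement under discussion could look something like the sketch below: pose the same task repeatedly and measure how often the answers agree. This is only an illustration of the idea, not the paper's actual procedure; `query_model` is a hypothetical stand-in for whatever LLM API was used, and here it just simulates an inconsistent responder.

```python
import random
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call.

    For illustration, it simulates a model that answers the same
    question inconsistently across repeated queries.
    """
    return random.choice(["A", "A", "A", "B", "C"])

def self_consistency(prompt: str, n_samples: int = 20) -> float:
    """Fraction of repeated responses matching the modal answer.

    1.0 means the model always gives the same answer; values near
    1/k (for k distinct answers) mean it is effectively guessing.
    """
    answers = [query_model(prompt) for _ in range(n_samples)]
    modal_count = Counter(answers).most_common(1)[0][1]
    return modal_count / n_samples

if __name__ == "__main__":
    prompt = "A bat and a ball cost $1.10 in total. ..."  # classic CRT-style item
    print(f"Self-consistency: {self_consistency(prompt):.2f}")
```

The point of the comment above is that the human baseline for this quantity is rarely measured the same way, since participants in the primary literature typically answer each task only once.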

@bibliolater @ai the other thing maybe worth noting is that taking participant responses in the highly constrained pragmatic context of a psych experiment to reflect the full breadth of human responding is a bit misleading. It's an interesting question how LLMs without pretraining or finetuning respond to experimental questions, but if "breadth of answer" is of interest, a better comparison might be with data from asking those questions of random shoppers at a mall.

@UlrikeHahn @ai Once again in agreement: the sterile conditions of an academic setting do not always best represent the wide breadth of human responses.

@UlrikeHahn @ai In agreement with you that such assumptions need an empirical basis.
