<strong>(Ir)rationality and cognitive biases in large language models</strong>
"_First, the responses given by the LLMs often display incorrect reasoning that differs from cognitive biases observed in humans. This may mean errors in calculations, or violations to rules of logic and probability, or simple factual inaccuracies. Second, the inconsistency of responses reveals another form of irrationality—there is significant variation in the responses given by a single model for the same task._"
Macmillan-Scott, Olivia and Musolesi, Mirco. 2024. (Ir)rationality and cognitive biases in large language models. R. Soc. Open Sci. 11: 240255. http://doi.org/10.1098/rsos.240255
#OpenAccess #OA #Research #Article #DOI #AI #ArtificialIntelligence #LLM #LLMS #Bias #Academia #Academic #Academics @ai
@bibliolater @ai Minor comment: the LLM data are not being compared to multiple responses by a single person on the same task, as that is not a general feature of the primary human experimental literature involved. So, as far as I can make out, the levels of human self-consistency are simply imputed/assumed. That doesn't mean the difference isn't there, just that the empirical basis seems somewhat anecdotal.
@UlrikeHahn @ai Once again in agreement: the sterile conditions of an academic setting do not always best represent the wide breadth of human responses.
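For anyone wanting to make the self-consistency point concrete: the kind of measurement at issue can be sketched in a few lines. This is not the authors' code; the `query_model` stub, the task wording (a Linda-style conjunction question, one of the classic bias tasks the paper draws on), and the sample count are all illustrative. The idea is simply to pose the same task to one model many times and report modal agreement.

```python
import random
from collections import Counter

def query_model(prompt: str) -> str:
    """Stand-in for a real LLM API call (hypothetical; swap in an
    actual client). Here it simulates a model that answers the same
    task inconsistently across runs."""
    return random.choice(["bank teller", "bank teller", "feminist bank teller"])

def self_consistency(prompt: str, n: int = 50) -> float:
    """Ask the same question n times and return the fraction of
    responses matching the most common answer (modal agreement).
    1.0 = perfectly self-consistent; lower values = more variation."""
    answers = [query_model(prompt) for _ in range(n)]
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / n

prompt = ("Which is more probable: Linda is a bank teller, "
          "or Linda is a feminist bank teller?")
print(f"Modal agreement over 50 runs: {self_consistency(prompt):.2f}")
```

The same statistic could in principle be collected from repeated human responses, which is exactly the comparison the thread notes is missing from the primary literature.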