Gary Ackerman

Humans have the capacity to be aware of their own thoughts while understanding that others are also thinking. This capacity is key to functioning in social groups. #theoryofmind

Katy Elphinstone

"Theory of mind"

And why it isn't all it's cracked up to be.

A thread.

🧵

#Autistic #ActuallyAutistic #Neurodivergent #AuDHD #ADHD #Neurodiversity #TheoryOfMind #Psychology #DoubleEmpathy

Mar 11, 2025, 07:18
Universität Stuttgart

🤩 Good news for international post-docs. In his new role as a Henriette Herz #Scout of the @HumboldtStiftung, @abulling is nominating three early-career researchers for his research group @collaborativeai at #UniStuttgart.
👩‍💻 In Germany and across Europe, Bulling's group is one of the few doing interdisciplinary research in human-machine interaction (#MenschMaschineInteraktion) and cognitive modelling (#KognitivenModellierung). 🤖 The goal: intelligent assistance systems (#Assistenzsysteme) that can put themselves in their counterpart's position and thus work with us in a human-like way in everyday life, in medical diagnostics (#Diagnostik), or in care (#Pflege). #TheoryofMind
🌐 A highlight of computer science (#Informatik) in Stuttgart: direct collaboration with the excellence clusters (#Exzellenzclustern) #SimTech and #IntCDC as well as @stuttgart.
👉 sohub.io/y4z0

Dec 05, 2024, 09:40
TerryB

@flourn0 Does Sally know that Anne knew where the marble was, or that she had moved the marble? If Sally thinks that others simply know what she knows, Sally would assume that Anne thinks the marble is still in the drawer. #TheoryOfMind
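For anyone unfamiliar with the false-belief setup being discussed, here is a minimal toy sketch of the underlying logic (the names, locations, and helper functions are illustrative only, not taken from the thread): an agent's belief is updated only by events that agent actually witnesses, so a reasoner without theory of mind projects its own belief onto others, while one with theory of mind tracks each agent's separate belief.

```python
# Toy sketch of false-belief reasoning (illustrative placeholders only).
world = {"marble": "drawer"}                      # where the marble really is
beliefs = {"Sally": "drawer", "Anne": "drawer"}   # what each agent last saw

# Anne moves the marble to the box while Sally is out of the room,
# so only Anne's belief tracks the change.
world["marble"] = "box"
beliefs["Anne"] = "box"

def project_own_belief(reasoner, other):
    """No theory of mind: assume the other agent knows whatever I know."""
    return beliefs[reasoner]

def model_other_belief(reasoner, other):
    """Theory of mind: predict from what the other agent actually witnessed."""
    return beliefs[other]

print(project_own_belief("Sally", "Anne"))  # drawer -- Sally projects her own belief
print(model_other_belief("Sally", "Anne"))  # box    -- tracks Anne's own experience
```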

Michael Gisiger :mastodon:

"Here we compare human and #LLM performance on a comprehensive battery of measurements that aim to measure different theory of mind abilities, from understanding false beliefs to interpreting indirect requests and recognizing irony and faux pas."

Testing theory of mind in large language models and humans | Nature Human Behaviour

nature.com/articles/s41562-024

#AI #TheoryOfMind

skua

@3TomatoesShort
2/2
It appears that some of these groupings find it very difficult to see things from the perspective of some of the other groups, and instead try to declare that their own approach is the only correct one.

#SituationallyDefined #TheoryOfMind

isws

Do #LLMs have something corresponding to a grounding in the "real" world? Does world knowledge emerge out of linguistic structure?
Does language correspond to what it talks about?
Can generative models have a theory of mind?

Important questions, raised but not fully answered by Aldo Gangemi in his keynote presentation at #ISWS 2024.

#generativeAI #theoryofmind

Anthony
A large language model being trained is "experiencing" a Skinner box. It receives a sequence of inscrutable symbols from outside its box and is tasked with outputting an inscrutable symbol from its box. It is rewarded or punished according to inscrutable, external forces. Nothing it can experience within the box can be used to explain any of this.
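(For concreteness, here is a minimal sketch of what that training loop typically looks like, assuming a standard next-token-prediction objective; the model, vocabulary, and data below are toy placeholders, not any particular system.)

```python
# Toy sketch of the "symbols in, symbol out, reward or punish" loop:
# next-token prediction trained with cross-entropy (PyTorch); all values are placeholders.
import torch
import torch.nn as nn

vocab_size, seq_len, batch = 100, 16, 8

model = nn.Sequential(                     # stand-in for a real transformer
    nn.Embedding(vocab_size, 64),
    nn.Flatten(),
    nn.Linear(64 * seq_len, vocab_size),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    # A stream of symbols from outside the "box" (random ids in this sketch).
    tokens = torch.randint(0, vocab_size, (batch, seq_len + 1))
    context, target = tokens[:, :-1], tokens[:, -1]

    logits = model(context)                              # the model's guess at the next symbol
    loss = nn.functional.cross_entropy(logits, target)   # the "reward or punishment" signal

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```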

If you placed a newborn human child into the same situation an LLM experiences during training and gave them access to nothing but this inscrutable-symbol-shuffling context, the child would become feral. They would not develop a theory of mind or anything else resembling what we think of as human-level intelligence or cognition. In fact, evidence suggests that if the child spent enough time being raised in a context like that, they would never be able to develop full natural language competence at all. It's very likely the child's behavior would make no sense whatsoever to us, and our behavior would make no sense whatsoever to them. OK, maybe that's a bit of an exaggeration (I don't know enough about human development to speak with such certainty), but I think it's not controversial to claim that the adult who emerged from such a repulsive experiment would bear very little resemblance to what we think of as an adult human being with recognizable behavior.

The very notion is so deeply unethical and repellent we'd obviously never do anything remotely like this. But I think that's an obvious tell, maybe so obvious it's easily overlooked.

If the only creature we're aware of that we can say with certainty is capable of developing human-level intelligence, theory of mind, or language competence could not do so while experiencing what an LLM experiences, why on Earth would anyone believe that a computer program could?

Yes, of course neural networks and other computer programs behave differently from human beings, and perhaps they have some capacity to transcend the sparsity and lack of depth of the context they experience in a Skinner-box-like training environment. But think about it: this does not pass a basic smell test. Nobody is doing the hard work to demonstrate that neural networks have this mysterious additional capacity. If they were, and were succeeding at all, we would be hearing about it daily through the usual hype channels, because that would be a Turing-award-caliber discovery, maybe even a Nobel-prize-caliber one, and an extraordinarily profitable capability. Yet in reality nobody is really acknowledging what I just spelled out above. Instead, they're throwing these things into the world, doing (what I think are) bad-faith demonstrations of LLMs solving human tests, and then claiming this proves the LLMs have "theory of mind" or "sparks of general intelligence", or that they can do science experiments or write articles or who knows what else.

#AI #AGI #ArtificialGeneralIntelligence #GenerativeAI #LargeLanguageModels #LLMs #GPT #ChatGPT #TheoryOfMind