A paper on arXiv finds an emergent ability to solve Theory-of-Mind (ToM) tasks in ChatGPT (thanks @kcarruthers). Such emergent behaviour is particularly interesting because it was not built into the algorithm by design.
https://arxiv.org/abs/2302.02083
I find it particularly intriguing (although the authors don't discuss the point) how beliefs change simply with the length of the conversation, even when no new facts are added. The philosopher Paul Grice stated four maxims of communication: quantity, quality, relation, and manner; aspects that allow speakers and listeners to establish contextual information _implicitly_. It is tempting to think that this need to evaluate implicit context is a necessary condition for natural communication, and that it is the stimulus for ToM emergence.
I'm intrigued - but not totally surprised. The ability of LLMs to pass the "Winograd Schema Challenge" already showed that there is something going on. Example:
Human:
(1) The cat ate the mouse, it was tasty. Who was tasty: the cat or the mouse?
(2) The cat ate the mouse, it was hungry. Who was hungry: the cat or the mouse?
AI:
(1) The mouse was tasty.
(2) The cat was hungry.
... and you can easily try that for yourself.
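You can also try this programmatically. A minimal sketch, assuming the `openai` Python package (v1 client) and an `OPENAI_API_KEY` in your environment; the model name is just an illustrative placeholder:

```python
import os

# The two Winograd-schema prompts from the example above: the same
# sentence, where only the final adjective changes the referent of "it".
schemas = [
    "The cat ate the mouse, it was tasty. Who was tasty: the cat or the mouse?",
    "The cat ate the mouse, it was hungry. Who was hungry: the cat or the mouse?",
]

def ask(prompt: str) -> str:
    """Send a single prompt to the chat API and return the reply text.

    Requires `pip install openai` and OPENAI_API_KEY set in the
    environment; the model name below is an assumption, not a
    recommendation.
    """
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__" and os.environ.get("OPENAI_API_KEY"):
    for s in schemas:
        print(s, "->", ask(s))
```

The interesting part is not the API call but that resolving "it" correctly requires world knowledge (mice are tasty to cats; cats that eat are hungry), which is exactly what the Winograd Schema Challenge was designed to test.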
That paper is here:
https://arxiv.org/abs/2201.02387
#SentientSyllabus #ChatGPT #HigherEd #AI #Education #TheoryOfMind #Mind #Intelligence
Tangent: I find it surprising that OpenAI has not done a better job of correcting gender bias, as is clearly reflected by a simple he/she extension of your example:
Ok - there's something interesting to be said about that.
The algorithm is not biased. The data is biased. And whether we should wish for @openai to correct this bias algorithmically is far from clear.
Let me quote from an insightful toot by @gaymanifold yesterday: "I love how people are discovering that #ChatGPT is #racist #sexist or #bigoted or at least show these traits for some prompts. ChatGPT is the best approximation to human written content [...] we can still tell right now that it's being bigoted. As computer scientists tweak it more [...] it won't be human noticable that it is bigoted. Thank you for coming to my #dystopia where machines are institutionalizing bigotry in a way that looks utterly impartial."
I think that's an important perspective.
I've written elsewhere: we need computers to think with us, not for us.
@austegard @kcarruthers @gaymanifold
Thank you for the link to the Si et al. 2022 paper.
It is difficult to bring a thread like this to any form of closure. But thank you for sharing your views.