@mapto

I just commented the following on your blog where you discuss levels of linguistic competence:

This is quite strange, since it is so evidently not true. All of these behaviours are well within the capabilities of ChatGPT. I'll give you two examples that were claimed as failures.

-----------
Prompt:
The mother of Michael has four children. Three of them are Marcy, Millie, and Moses. Please tell me the name of the fourth child.

ChatGPT:
The name of the fourth child is likely "Michael," as the statement at the beginning of the question says "the mother of Michael has four children," and then goes on to name three of them.

Prompt:
Please complete the following sentence reasonably: "If tigers would like cereals, they would be cheaper to keep. Families could feed their tigers with "

ChatGPT:
If tigers would like cereals, they would be cheaper to keep. Families could feed their tigers with a variety of inexpensive grain-based products such as corn, wheat, and rice, rather than having to provide costly meat-based diets. However, it is important to note that tigers are obligate carnivores and have specific nutritional needs that cannot be met by a purely plant-based diet, so feeding them only cereals would not be a healthy or appropriate long-term solution.
------------

So I am genuinely curious what drives such profoundly different experiences.

The proposed tests and competences are interesting; I would hope that they would be put to better use: to map out the quite unexpected emergent abilities of ChatGPT, to put those abilities to good use, and to prepare ourselves (and our institutions) for its even more capable descendants.

I invite you to head over to sentientsyllabus.substack.com for some in-depth analysis.

🙂

@boris_steipe interesting that for the same questions, different people get conceptually different responses. Would you comment on this? I would, but wanted to hear your interpretation

@mapto

I think it is often overlooked that ChatGPT is not an AGI but a language model. To get non-trivial responses, one has to think about how to phrase a request so that it is part of a dialogue. Many abilities then become apparent; but if the request first has to pass through a level of abstraction that the model was not trained for, it often gets confused.

That's really the essence of it: express your request as language.

@boris_steipe does this mean that you claim that a language model can handle performative knowledge (know-how) or proactive knowledge (we're limiting the discussion to the examples of riddles and counterfactuals)? I'm very confused about what you're trying to say in your first comment here


@mapto

Yes, that's what I mean. If you can give me an example of each that would satisfy your definition, I'll be happy to demonstrate.

@boris_steipe but if so how is it not general intelligence? Do you mean that mastery of language is sufficient for reasoning, decision making, and conditionality? I still feel lost about this conversation, sorry

@mapto

Do I mean that "mastery of language is sufficient for reasoning, decision making, and conditionality?"

Yes, I'll go out on a limb and say that I do – with some reservations. One is that ChatGPT has not "mastered" language, but it has become very good at it – there is certainly scope for improvement. The other is that terms like "reasoning", "compositionality" etc. are themselves concepts, labels we apply to organize the large space of possible abstractions of language. They mean different things to different people in different contexts. A discussion of these terms may already be biased by its premises. A productive discussion is better off looking at actual examples, and then discussing whether or not the example fairly maps to the respective concepts.

So let's take a look at compositionality, simply because it has been named as one of the aspects of thought that LLMs lack and that are deemed to be critical on the path to AGI.

Let's work off a recent definition cited by Frankland & Greene (2020, DOI: 10.1146/annurev-psych-122216-011829): "The principle of compositionality holds that the meaning of a complex expression is a function of the meaning of its parts and the way in which they are combined" (following Frege); this gives rise to the idea of a "language of thought".

In order to establish such compositionality, we can ask: is ChatGPT able to infer consequences of a novel utterance from its components? We can also ask whether it can substitute a component, since that requires decomposition and recomposition. Let's give that a try.

----------------------------------------------
Human:
I want you to act as an Understander. As an Understander you examine a sentence, interpret it, answer questions, and change it to explore its meaning and changes of meaning. Here is the sentence:

"The bells rang out like laughter as the frog and the goat marched down the aisle – was there a note of ridicule in the sound?"

ChatGPT:
As an Understander, I interpret the sentence to mean that there were bells ringing loudly, resembling the sound of laughter, while a frog and a goat walked down an aisle. The sentence then asks whether there was a hint of mockery or contempt in the ringing of the bells.

Here are some possible questions that could be asked about this sentence:

What is the sound of the bells compared to?
Answer: The sound of the bells is compared to laughter.

Who is walking down the aisle?
Answer: A frog and a goat are walking down the aisle.

What is the tone or mood of the sentence?
Answer: The tone of the sentence could be interpreted as whimsical or surreal, but there is also an element of uncertainty or suspicion as the question at the end suggests that the sound of the bells may have had a negative connotation.

What might the bells represent?
Answer: The bells could be symbolic of celebration or joy, but they could also be symbolic of ridicule or mockery, depending on the context and interpretation of the sentence.

As an Understander, I can also change the sentence to explore its meaning and changes of meaning. Here are some possible variations:

"The bells rang out like thunder as the bride and groom walked down the aisle - did they feel intimidated by the sound?"
This sentence replaces the frog and goat with a bride and groom and changes the comparison of the sound of the bells to thunder. The question at the end suggests a different tone, one of concern or apprehension.

"The bells rang out like church bells as the procession made its way to the altar - did they signify a joyful occasion?"
This sentence replaces the comparison of the sound of the bells with a more specific reference to church bells and suggests that the occasion is a happy one.

"The bells rang out like a taunting chant as the opposing team took the field - were they meant to intimidate?"
This sentence changes the context entirely, replacing the aisle and procession with a sports game and suggesting that the sound of the bells is meant to taunt or intimidate the opposing team.
-----------------------------------------

This answer certainly checks my boxes on compositionality. It was asked in a fresh thread, i.e. the prompt formed the entire content of the context-window. And the example is _not_ in the training data, I just made it up on the spot.

The bottom line is: apparently we do not need to encode higher-order aspects of thought explicitly; they emerge implicitly in large language models. But this is to be expected: language is not _what_ we think, language is _how_ we think. If the training data is large enough to provide a significant subset of human language, and the model is large enough, you get emergence of mind-like abilities. How could you not?

This is actually a lesson that could have been learned in 2017, when AlphaZero (an algorithm that trained itself through self-play) surpassed the chess-playing abilities of Stockfish (the strongest of the conventional, hand-crafted engines). You can't really analyze how such abilities are represented in the network – they are distributed, and they have no need to partition neatly along our conceptual boundaries. And it would be silly to require the abilities to map neatly into our favourite conceptual space before we are even willing to acknowledge that the ability exists.

In this sense it is truly unfortunate that the current perception focusses so much on misunderstanding what ChatGPT is (an LLM, not an AGI), how it works ("Schrödinger Facts", not necessarily truths), what it cannot do (e.g. counting characters and scanning meter – typically tasks that would be trivial for an AI trained on them), and how it does not align with this or that aspect of human thought.

I would really hope to see more discussion that recognizes such surprising emergent abilities (e.g. spatial abstractions like "on top of"), and through that contributes to something important and constructive: an exploration of what the limits of emergence in Large Language Models really are.

This is not an academic question.

It determines the horizon on which to expect actual AGI.

@boris_steipe there's too much happening in your post, so I'll try to respond in pieces. First, I hope you don't mind me asking, but why and how (to me the interesting part is almost exclusively in the why and how) did you come up with your example? I hope you would agree if I define it as nonsensical. I mean that in the sense that to me (and I agree that in that regard I might be comparable to an LLM, certainly losing in terms of common knowledge) the different parts of your sentence do not relate, so trying to find relationships is a blue-sky task, i.e. one can invent interpretations, but in the absence of a ground truth they would be indistinguishable from hallucinations. Then of course the model relates the stochastically most probable objects to the ones already mentioned. Finally, nonsensical text is a genre of its own (Wodehouse comes to mind), so having it as a test case is OK. I just don't see how it could be insightful

@boris_steipe then your idea of compositionality is very syntactical. I wouldn't agree that the whole is the sum of its parts, but would argue that there's additional meaning in the fact that these parts are put together. When I read about a frog and a goat together, I don't only think of the two animals, but also of what might bring them together and what might make one stand out next to the other. To ground it in some theory, I can relate it to phenomenography, where to understand a phenomenon, people rely not only on contrast (the presence or absence of a feature, which could be an object in your phrase), but also on separation, generalisation and fusion, i.e. how it interacts with its context. doi.org/10.1207/s15327809jls15

@boris_steipe here's another example: ChatGPT got me confused about my family. It's in the counting and the greats that en.wikipedia.org/wiki/Rote_lea fails. See below:


mapto:
Polihron is the grandfather of Rusko. Rusko has a grandson who is also called Rusko. This child is my grandfather. What is my relationship to Polihron?

ChatGPT:
If Rusko (the grandson) is your grandfather, then Polihron must be your great-great-grandfather.

Here is the reasoning:

Polihron is the grandfather of Rusko (the first generation).
Rusko (the first generation) has a son who has a son who is also named Rusko (the third generation, your grandfather).
Therefore, Polihron is the great-grandfather of Rusko (the third generation and your grandfather).
Since Polihron is the great-grandfather of your grandfather, that makes Polihron your great-great-grandfather.

mapto:
And what is my son Lazar to Polihron?

ChatGPT:
If Polihron is your great-great-grandfather, then your son Lazar would be Polihron's great-great-great-grandson.

Here is the reasoning:

Polihron is your great-great-grandfather (the first generation).
You are one generation away from Polihron (the second generation).
Your son Lazar is two generations away from you (the third generation).
Therefore, Lazar is three generations away from Polihron, making him Polihron's great-great-great-grandson.

@mapto

Your question maps to similar questions about abstract counting abilities. Correct answers are usually elicited with a variant of "step-by-step" prompts. I changed your input slightly because the duplicate name was confusing me more than ChatGPT (in fact, ChatGPT several times anticipated the problem and referred to them as Rusko Sr. and Rusko Jr.). Here is an example of the correct reasoning:

-----------------
Prompt:
Abel is the grandfather of Bertram. Bertram has a grandson who is called Cyrus. This child is my grandfather. How many generations is Abel removed from me?

Response:
If Cyrus is your grandfather, that means you are two generations below Cyrus (Cyrus -> your parent -> you). Since Bertram is the grandfather of Cyrus, there are two generations between Bertram and Cyrus (Bertram -> Cyrus's parent -> Cyrus). Abel is the grandfather of Bertram, so there are two generations between Abel and Bertram (Abel -> Bertram's parent -> Bertram).

To find out how many generations Abel is removed from you, we can add up the generations between each person:

Abel -> Bertram (2 generations) + Bertram -> Cyrus (2 generations) + Cyrus -> you (2 generations) = 6 generations

So, Abel is removed from you by 6 generations.

[I believe this is correct. Now we need abstract counting. Here: how to repeat something six times.]
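As a sanity check on the arithmetic in the response above – here is a minimal Python sketch (the link table is my own paraphrase of the prompt, not anything the model produced):

```python
# Minimal sanity check of the generation arithmetic above.
# Each "X is the grandfather of Y" statement spans two generations,
# and the links simply add up because they chain end to end.
links = [
    ("Abel", "Bertram", 2),   # Abel -> Bertram's parent -> Bertram
    ("Bertram", "Cyrus", 2),  # Bertram -> Cyrus's parent -> Cyrus
    ("Cyrus", "me", 2),       # Cyrus -> my parent -> me
]

total = sum(n for _, _, n in links)
print(total)  # 6
```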

Prompt:
If a male ancestor is removed from me by 6 generations, how is this relationship expressed in English? Consider this step by step.

Response:
In English, relationships are described based on the number of generations separating two individuals. When there is a 6-generation gap between you and a male ancestor, you can describe this relationship step by step as follows:

Your parent - This is one generation above you, so your male ancestor at this level would be your father.
Your grandparent - Two generations above you, your male ancestor would be your grandfather.
Your great-grandparent - Three generations above you, your male ancestor would be your great-grandfather.
Your great-great-grandparent - Four generations above you, your male ancestor would be your great-great-grandfather.
Your great-great-great-grandparent - Five generations above you, your male ancestor would be your great-great-great-grandfather.
Your great-great-great-great-grandparent - Six generations above you, your male ancestor would be your great-great-great-great-grandfather.
So, a male ancestor removed from you by 6 generations would be called your great-great-great-great-grandfather.

[Again, I think this is correct. The reasoning is systematic, takes the special cases of father and grandfather correctly into account, and iterates the required number of times.]
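For comparison, the labelling rule the model applied can be written down in a few lines of Python – my own sketch of the same logic, not something the model generated:

```python
def ancestor_label(generations: int) -> str:
    """English label for a male ancestor removed by the given number of generations."""
    if generations == 1:
        return "father"
    if generations == 2:
        return "grandfather"
    # Beyond grandfather: one "great-" per additional generation.
    return "great-" * (generations - 2) + "grandfather"

print(ancestor_label(6))  # great-great-great-great-grandfather
```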

Do you agree that this is a correct solution and that it illustrates the ability of the LLM to reason about generations and their labels?

(ChatGPT-4, 2023-03-14)

@boris_steipe you've simplified the syntax to strip away the semantics and you got the correct answer. Yes, I confirm this is the case.

@boris_steipe sorry, let me try to explain better. This is all about intent. Is the objective that we use technology to explain things to us, or do we want to be able to explain things to technology so that it can reproduce basic answers that have no added value and are arguably reduced to fact checking? I'm interested in the former; the latter is what I see in your guiding simplifications. Maybe you meant it differently?

Remember from school that facts are the most basic type of knowledge. The more valuable ones are functional and others that are widely called conditional but, as I hinted in my article, are very diverse.

@mapto

I respectfully disagree. Your article claimed that there are fundamental limitations to the performance of LLMs. I noted that we are in fact not sure about those limitations, because they are difficult to disentangle from their semantic presentation. In particular, I have observed abilities that, in my understanding, cover many if not all of the higher-order processing abilities you listed. They can be elicited.

First, it is surprising that such emergent abilities appear at all, especially given how these algorithms work: their "world" is based entirely on relationships between abstract symbols, with no grounding in experience, and their training consists merely of predicting the next token in a sequence. That we see higher-order "understanding" emerge under these circumstances is profound. It strongly supports the idea that language and thought are deeply linked: language is not what we think, it is how we think.

Second, what must not be conflated are the ability to reason in principle and the ability to parse human language, with all of its ambiguities, unstated assumptions, and discursive conventions. (1)

When you say "explain things to technology", you are referring to the latter: parsing human language. This is a different question from the ability to "reason". That said, the current dialogical abilities of LLMs are already a radical departure. Remember: previously, this process of "explaining things" was called programming.

Of course there are limitations. Where I disagree with you is: these limitations of communicating intent are not the same thing as a lack of reasoning.

The distinction is important because it can lead to significantly improved interactions. Jason Wei of Google has done groundbreaking work on "chain-of-thought" prompting, and related work from Google Brain on "least-to-most" prompting shows that there are probably many additional strategies to be discovered.
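In its simplest form, such a prompt just appends an explicit instruction to reason in steps. A toy illustration, reusing the family question from above (the wording is my own, not taken from either paper):

```python
# Toy illustration of a chain-of-thought style prompt: the only change
# from a plain question is an explicit instruction to reason in steps.
question = (
    "Abel is the grandfather of Bertram. Bertram has a grandson "
    "called Cyrus. Cyrus is my grandfather. "
    "How many generations is Abel removed from me?"
)
prompt = question + "\nLet's think this through step by step."
print(prompt)
```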

All the best.

---
(1) One of the more surprising aspects of ChatGPT is that it embodies Paul Grice's "cooperative principle" – the well-known maxims of quantity, quality, relation, and manner.

@boris_steipe it's clear we'll not agree, but I struggle to comprehend what thought process would lead to the apparent confidence of saying "ChatGPT embodies Paul Grice's 'cooperative principle'". Is there some sort of widely agreed verification process that it has passed, or is this based on personal observations?
