
So here's what I'm pondering this morning: are(n't) Bard and GPT-4 AGI?

They're certainly artificial.

They're general, in that they'll give you at least half-assed opinions on any subject at all.

And they're intelligent, by the I-think-fair criterion that if you showed a discussion with one to someone back before they were created, that person would say yep, both parties in this hypothetical discussion are intelligent.

The main reason to say they aren't AGI is, I think, that we always thought that AGI would be _more interesting_... :)

@ceoln no, because all it can do is words. It can't learn to play Angry Birds, it can't learn to lay bricks, it can't learn to dance or play the guitar. It can't even really draw very well.

@pre
Do we require those things of humans? People with no legs can't dance; do they not have general intelligence? Many people can't draw well at all; are they (we) therefore unintelligent?

These strike me as reasons made up after the fact to avoid counting them as AGI.

@ceoln we require it of humans in general. A specific human may not be able to.

But generality means more than words, yeah.

@pre
I would be interested in someone saying that prior to say 2017. :)

Personally I don't think AGI requires being able to dance, or anything else outside of language.

@ceoln dance probably doesn't count, maybe dance games do though. Definitely non-verbal tasks need to be possible for it to be AGI. It don't count till it can win at Diplomacy and FarmVille and Bomberman. Till then it ain't even a general game player

@pre
Those things are all done basically verbally, and an LLM will take a crack at them all. Does it really have to be able to win? Against how good a player? Can every human that we call intelligent win at diplomacy? I never have. Is it fair to set the bar higher than we do for humans? I think it's uncontroversial that your average human has [not-A]GI.

@ceoln I'm sure if you put a transformer on the job the transformer could learn to play Bomberman.

Maybe transformers + back propagation is some kind of general intelligence/learning system.

But GPT can't learn to do anything outside its current skill-set. It has a context window for memory, but other than that it can't learn at all, let alone learn to play the piano or whatever.

Needs some mechanism for running the back-propagation continually during operation as well as during training maybe.
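That "keep back-propagating during operation" idea can be sketched in a few lines. This is a hypothetical toy, assuming nothing about real LLM internals: a single-weight linear model stands in for the network, and each "interaction" triggers one gradient step, so prediction and learning are interleaved rather than separated into a training phase and a frozen deployment phase.

```python
# Toy sketch of continual back-propagation during operation:
# a one-weight model y = w * x that takes a gradient step on every
# new example it sees, instead of freezing after training.
# (Hypothetical illustration; a real system would fine-tune
# transformer weights on logged interactions, not a single scalar.)

def predict(w, x):
    return w * x

def online_update(w, x, y, lr=0.01):
    """One SGD step on squared error: d/dw of (w*x - y)^2."""
    grad = 2 * (w * x - y) * x
    return w - lr * grad

# Simulated stream of interactions where the true relation is y = 3x.
w = 0.0
for x, y in [(1, 3), (2, 6), (3, 9)] * 50:
    _ = predict(w, x)           # "operation": the model is in use
    w = online_update(w, x, y)  # "training": learn from the interaction

print(round(w, 2))  # w converges toward 3.0
```

The point of the sketch is only the loop structure: there is no separate training run, just use-then-update on each interaction, which is one reading of what "running back-propagation continually during operation" would mean.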

@pre
It can learn if you update the weights periodically with recent interactions, say. Or just put new stuff in the input window.

Interesting points, though. We might consider having a more (hm) inherent medium to long term memory, and being of a class (hm also) that can win at diplomacy (etc), as reasonably legitimate criteria that they don't have.

I wonder if anyone did list them (or anything else the LLMs can't do) as AGI requirements prior to LLMs actually showing up.

@ceoln @pre the world of studying intelligence is not limited to computer science (nor the recent ML approach where every problem is a benchmark).

Have a look at behavioral biology's study of intelligence in animals. Apart from rigorous experimental protocols (not easily adapted to text-only systems), one can find many insights on cognitive capabilities which are thought to be precursors to intelligence.

@ggdupont

Interesting thought! Thank you.

The lack of rigor in like 90% of what claims to be LLM research is for sure frustrating. "Every problem is a benchmark", exactly!

I wonder if the cognitive capabilities which are precursors to intelligence in animals are precursors definitionally, or only contingently. That is, are they necessarily true of any intelligent being, or are they just in fact true of animals (incl humans)?

Probably there's no right answer to that question. :) Our consensus on what constitutes intelligence is very rough.

@ceoln
> I wonder if the cognitive capabilities which are precursors to intelligence in animals are precursors definitionally, or only contingently.

Hard to know, and it likely depends on the definition of intelligence one chooses. At least in that area (animal behavior) the studies are trying to be rigorous and down to earth.

Trying to study intelligence as an abstract mathematical concept goes a bit beyond my capabilities.

@ceoln
I don't know. It's like mixing a bullshitter with photographic memory.

It would pass the Turing test with a little adjustment, but can it play chess? Unlike dancing, etc., winning at chess has long been a marker. Before computers played good games, it was thought that doing so would mean general intelligence.

It's a good question

@mollydot
Bullshitting was certainly thought to be something only humans did, until now. :)

As was, like you say, playing chess up until computers doing it.

I'm thinking that all this mostly shows that we don't have a strong consensus on what intelligence, or AGI, really is. Which is interesting in itself!

@ceoln
Very true re bullshitting!

I feel we're closer to knowing than when they thought chess was it, or that it was a year away.

@ceoln
I've thought of a text based test for it.

SHRDLU was a program that could hold conversations and obey orders about an internal set of blocks of varying shapes, colours and positions. So you could ask it to put the red cone on the big blue block, then ask it where the cone is.
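The core of what SHRDLU tracked can be sketched as a tiny state table. This is a hypothetical simplification (the real program did parsing, planning, and geometry; block names here are invented for illustration), but it shows why the "disappearing blocks" below are telling: a system that actually maintains state can't lose a block by answering a question.

```python
# Minimal blocks-world state: just "what rests on what", no geometry.
# (Hypothetical simplification of SHRDLU's internal world model.)

on = {"red cone": "table", "big blue block": "table"}

def put(block, support):
    """Obey 'put X on Y' by updating the tracked state."""
    on[block] = support

def where(block):
    """Answer 'where is X?' by reading the tracked state."""
    return f"The {block} is on the {on[block]}."

put("red cone", "big blue block")
print(where("red cone"))  # The red cone is on the big blue block.
```

An LLM answering the same questions has no such table unless the conversation history happens to pin it down, which is one way to frame the failures described next.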

@ceoln
It started well
but as soon as I asked for something impossible (due to physics), it started disappearing blocks

@mollydot
Interestingly bad! :) A friend gave it a task with shapes and things that required some physical world modelling and it did surprisingly well.

I think these point up the fact that we're used to just associating intelligence with a whole raft of vaguely related things that humans generally have, and when only some of them are present, in a pattern we aren't used to, we don't have a common or consistent understanding.

@ceoln
During the rest of the conversation, it still felt like it was intelligent, but that it was gaslighting me.
An alternative explanation could be that the blocks are real, and somebody is moving them when it's not looking.
I tried to check for that, asking in various ways whether it was the only one that could move blocks, but it kept answering about reality or the program

@ceoln
I don't know why it disappeared the green block.
After that, it was consistent with the blue and yellow, telling me they were cuboids, not cubes. I haven't included that part.
Note that this is the free interface, so ChatGPT 3, afaik.
I might try it again, letting it decide from the beginning what blocks there are.
I suspect it'll generally work better when the conversation matches Winograd's recorded one.

@mollydot
It definitely makes all sorts of mistakes like this; 4.0 would probably do better but might still be embarrassingly wrong.

Interesting to think about what kinds of errors make us think "not actually thinking" vs "not very smart" vs "well we all make mistakes". :)

@ceoln
Yeah. I'm also kind of fascinated by my reaction of "it's gaslighting me" or "it's trying to piss me off", rather than "it's doing its best, but it doesn't actually know".
But a human bullshitter could probably give me the same reaction. Except they'd probably get angry at my questions about who can move the blocks, or what did you do with the red cone.

@mollydot
Although they have done a lot to reduce it, ChatGPT can still get angry, tell the user never to speak to them again, etc. Pretty funny. :)

@ceoln
I've just had a much better conversation with it. I started by asking it to choose some blocks and tell me what they are.
There were some disappearing blocks, and it thinks you can put another block on top of a pyramid, but I was able to retrieve the missing ones by just asking it to put them on the table

@mollydot
These things can be SO INTERESTING to talk to.

I actually prefer the ones in pure "write some stuff that would follow this" mode, rather than Q&A mode. In some way it's a superset, and it doesn't lead to (falsely, I think) personifying the AI itself (it's more like you're co-writing a thing with the AI, which may have characters in it).

@ceoln
Perhaps I'm co-creating a table of blocks 🙂

Qoto Mastodon
