So here's what I'm pondering this morning: are(n't) Bard and GPT-4 AGI?
They're certainly artificial.
They're general, in that they'll give you at least half-assed opinions on any subject at all.
And they're intelligent, by the (I think fair) criterion that if you'd shown a conversation with one to someone back before they were created, that person would have said yep, both parties in this discussion are intelligent.
The main reason to say they aren't AGI is, I think, that we always thought that AGI would be _more interesting_... :)
@mollydot
Bullshitting was certainly thought to be something only humans did, until now. :)
As was, like you say, playing chess, up until computers started doing it.
I'm thinking that all this mostly shows that we don't have a strong consensus on what intelligence, or AGI, really is. Which is interesting in itself!
@ceoln
I've thought of a text based test for it.
SHRDLU was a program that could hold conversations and obey orders about an internal set of blocks of varying shapes, colours and positions. So you could ask it to put the red cone on the big blue block, then ask it where the cone is.
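To make that concrete, a toy version of the kind of state SHRDLU tracks might look like this (my own rough sketch, nothing like Winograd's actual implementation, which was in Lisp/MicroPlanner): each block rests on exactly one thing, moves never make blocks vanish, and physically impossible requests get refused.

```python
# Toy SHRDLU-style blocks world (hypothetical sketch, not Winograd's
# actual code): state is just a map from each block to whatever it's
# resting on, so blocks can move but never silently vanish.

class BlocksWorld:
    def __init__(self, shapes):
        # shapes: {"red cone": "cone", "big blue block": "cube", ...}
        self.shapes = dict(shapes)
        self.on = {name: "table" for name in shapes}

    def put(self, block, target):
        if target != "table":
            if self.shapes[target] in ("cone", "pyramid"):
                # Physics: nothing balances on a pointed top.
                raise ValueError(f"can't put the {block} on a {self.shapes[target]}")
            if target in self.on.values():
                raise ValueError(f"the {target} already has something on it")
        self.on[block] = target  # the block moves; it is never deleted

    def where(self, block):
        return f"the {block} is on the {self.on[block]}"


world = BlocksWorld({"red cone": "cone", "big blue block": "cube"})
world.put("red cone", "big blue block")
print(world.where("red cone"))  # -> the red cone is on the big blue block
```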
@ceoln
It started well, but as soon as I asked for something impossible (due to physics), it started disappearing blocks.
@mollydot
Interestingly bad! :) A friend gave it a task with shapes and things that required some physical world modelling and it did surprisingly well.
I think these point up the fact that we're used to just associating intelligence with a whole raft of vaguely related things that humans generally have, and when only some of them are present, in a pattern we aren't used to, we don't have a common or consistent understanding.
@ceoln
During the rest of the conversation, it still felt like it was intelligent, but that it was gaslighting me.
An alternative explanation could be that the blocks are real, and somebody is moving them when it's not looking.
I tried to check for that, asking in various ways whether it was the only one that could move blocks, but it kept answering about reality or about the program.
@ceoln
I don't know why it disappeared the green block.
After that, it was consistent with the blue and yellow, telling me they were cuboids, not cubes. I haven't included that part.
Note that this is the free interface, so ChatGPT 3, afaik.
I might try it again, letting it decide from the beginning what blocks there are.
I suspect it'll generally work better when the conversation matches Winograd's recorded one.
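If I do try again, a crude way to score it would be to replay the same commands into a ground-truth tracker (like the toy BlocksWorld sketched earlier) and flag divergences; ask_model here is just a hypothetical stand-in for whatever chat interface is being tested.

```python
# Hypothetical consistency probe: replay each command into a
# ground-truth world and flag answers that contradict it (e.g. a
# block the model has quietly "disappeared").

def probe(world, commands, ask_model):
    for block, target in commands:
        world.put(block, target)
        reply = ask_model(f"Where is the {block}?")
        # Crude check: the true supporting object should be mentioned.
        if target.lower() not in reply.lower():
            print(f"divergence on {block!r}: expected on {target!r}, got {reply!r}")

# Assumes the BlocksWorld toy class from the earlier sketch.
probe(BlocksWorld({"red cone": "cone", "big blue block": "cube"}),
      [("red cone", "big blue block")],
      lambda q: "The red cone is on the big blue block.")
```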
@mollydot
It definitely makes all sorts of mistakes like this; 4.0 would probably do better but might still be embarrassingly wrong.
Interesting to think about what kinds of errors make us think "not actually thinking" vs "not very smart" vs "well we all make mistakes". :)
@ceoln
Yeah. I'm also kind of fascinated that my reaction was "it's gaslighting me" or "it's trying to piss me off", rather than "it's doing its best, but it doesn't actually know".
But a human bullshitter could probably give me the same reaction. Except they'd probably get angry at my questions about who can move the blocks, or what they did with the red cone.
@mollydot
Although they have done a lot to reduce it, ChatGPT can still get angry, tell the user never to speak to them again, etc. Pretty funny. :)
@ceoln
I've just had a much better conversation with it. I started by asking it to choose some blocks and tell me what they are.
There were some disappearing blocks, and it thinks you can put another block on top of a pyramid, but I was able to retrieve the missing ones just by asking it to put them on the table.
@ceoln
I haven't recorded the conversation (yet?)
@mollydot
These things can be SO INTERESTING to talk to.
I actually prefer the ones in pure "write some stuff that would follow this" mode, rather than Q&A mode. In some way it's a superset, and it doesn't lead to (falsely, I think) personifying the AI itself (it's more like you're co-writing a thing with the AI, which may have characters in it).
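For instance (a rough sketch using the legacy, pre-1.0 openai Python package; the model names and parameters are just illustrative), the two modes are literally different request shapes:

```python
import openai  # assumes the legacy (pre-1.0) openai package

# Pure continuation mode: "write some stuff that would follow this".
# No roles or persona; you and the model are co-writing one text,
# which may itself contain characters.
story = openai.Completion.create(
    model="text-davinci-003",
    prompt="The red cone sat on the big blue block, until one day",
    max_tokens=60,
)
print(story["choices"][0]["text"])

# Q&A / chat mode: the prompt is framed as a dialogue with roles,
# which is what invites treating the model itself as a person.
answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Where is the red cone?"}],
)
print(answer["choices"][0]["message"]["content"])
```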
@ceoln
Perhaps I'm co-creating a table of blocks 🙂
@mollydot
Exactly! :)
@ceoln
Very true re bullshitting!
I feel we're closer to knowing what intelligence is than back when they thought chess was it, or that AGI was a year away.