So here's what I'm pondering this morning: are(n't) Bard and GPT-4 AGI?
They're certainly artificial.
They're general, in that they'll give you at least half-assed opinions on any subject at all.
And they're intelligent, by the I-think-fair criterion that if you showed a discussion with one to someone back before they were created, that person would say yep, both parties in this hypothetical discussion are intelligent.
The main reason to say they aren't AGI is, I think, that we always thought that AGI would be _more interesting_... :)
@pre
I would be interested in seeing someone say that prior to, say, 2017. :)
Personally I don't think AGI requires being able to dance, or anything else outside of language.
@ceoln dance probably doesn't count, maybe dance games do though. Definitely non-verbal tasks need to be possible for it to be AGI. It don't count till it can win at Diplomacy and FarmVille and Bomberman. Till then it ain't even a general game player
@pre
Those things are all done basically verbally, and an LLM will take a crack at them all. Does it really have to be able to win? Against how good a player? Can every human that we call intelligent win at Diplomacy? I never have. Is it fair to set the bar higher than we do for humans? I think it's uncontroversial that your average human has [not-A]GI.
@ceoln I'm sure if you put a transformer on the job the transformer could learn to play Bomberman.
Maybe transformers + back-propagation is some kind of general intelligence/learning system.
But GPT can't learn to do anything outside its current skill-set. It has a context window for memory, but other than that it can't learn at all, let alone learn to play the piano or whatever.
Maybe it needs some mechanism for running the back-propagation continually during operation as well as during training.
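Something like this sketch is roughly what I mean, just to make it concrete (GPT-2 from Hugging Face as a stand-in model; the one-gradient-step-per-interaction loop and the learning rate are my own assumptions, not anything the real systems actually do):

```python
# Sketch only (my assumption of what "back-propagation during operation"
# could look like): after every interaction, take one gradient step on the
# new text so the weights keep changing at inference time.
# GPT-2 from Hugging Face is used purely as a stand-in model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def interact_and_learn(user_text: str) -> str:
    # Generate a reply the normal way (no learning happens here).
    inputs = tokenizer(user_text, return_tensors="pt")
    reply_ids = model.generate(**inputs, max_new_tokens=40)
    reply = tokenizer.decode(reply_ids[0], skip_special_tokens=True)

    # Then one online training step on the interaction itself,
    # so the model is (slightly) different for the next exchange.
    model.train()
    batch = tokenizer(user_text + "\n" + reply, return_tensors="pt")
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    model.eval()
    return reply
```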
@pre
It can learn if you update the weights periodically with recent interactions, say. Or just put new stuff in the input window.
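The "new stuff in the input window" option is basically this, to make the contrast concrete (just a sketch; `complete()` is a placeholder for whatever LLM call you like, and the prompt format is made up):

```python
# Sketch of the "just put new stuff in the input window" option:
# no weight updates at all, the model "learns" only by seeing its own
# recent history in the prompt. complete() is a placeholder for any
# LLM call; the User/Assistant format here is an assumption.
history: list[str] = []

def chat(user_text: str, complete) -> str:
    history.append(f"User: {user_text}")
    prompt = "\n".join(history) + "\nAssistant:"
    reply = complete(prompt)  # one forward pass, frozen weights
    history.append(f"Assistant: {reply}")
    return reply
```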
Interesting points, though. We might consider having a more (hm) inherent medium-to-long-term memory, and being of a class (hm also) that can win at Diplomacy (etc.), as reasonably legitimate criteria that they don't satisfy.
I wonder if anyone did list them (or anything else the LLMs can't do) as AGI requirements prior to LLMs actually showing up.
@ceoln @pre the world of studying intelligence is not limited to computer science (nor to the recent ML approach where every problem is a benchmark).
Have a look at behavioral biology studying intelligence in animals. Apart from rigorous experimental protocols (not easily adapted to a text-only system), one can find many insights on cognitive capabilities which are thought to be precursors of intelligence.
Interesting thought! Thank you.
The lack of rigor in like 90% of what claims to be LLM research is for sure frustrating. "Every problem is a benchmark", exactly!
I wonder if the cognitive capabilities which are precursors to intelligence in animals are precursors definitionally, or only contingently. That is, are they necessarily true of any intelligent being, or are they just in fact true of animals (incl humans)?
Probably there's no right answer to that question. :) Our consensus on what constitutes intelligence is very rough.
@ceoln
> I wonder if the cognitive capabilities which are precursors to intelligence in animals are precursors definitionally, or only contingently.
Hard to know, and it likely depends on the definition of intelligence one chooses. At least in that area (the behavior of animals) the studies try to be rigorous and grounded.
Trying to study intelligence as an abstract mathematical concept goes a bit beyond my capabilities.
@ceoln we require it of humans in general, even if a specific human may not be able to do it.
But generality means more than words, yeah.