I find that so much of the disagreement between #AGI “alarmists” and AGI “sceptics” stems from basic differences in the definition of “AGI” and a lack of imagination on the part of the optimists.
🧵
AGI means human-level (or higher) intelligence.
The scenarios of AGI most people imagine are naïve amplifications of present-day narrow AIs plus some sleek robotics sprinkled on top.
Like, an AI that takes care of my house while I'm on holiday: watering the plants if it's hot and dry, and calling the police if someone breaks in. Or: one that fills in and submits my tax return on my behalf. Or: one that reads my blood tests and adjusts my diet accordingly. Or: we ask it to solve climate change, and it suggests the best course of action…
The thing is:
We already have all that! That's _not_ AGI!
Human- (or higher-)level intelligence means being knowledgeable, resourceful, independent, and creative enough to do all of that well, without being asked explicitly, _and much more_.
Even if it's “just” human-level, you should expect from it everything that you normally expect from natural (human) intelligences.
I don't think an AGI necessarily needs to do all of those things. General intelligence comes in lots of different forms. Not all humans have a desire to die, to lie, to hallucinate, to reproduce, or to do all those other things. Many animals have a different repertoire of mental skills, including some that humans lack.
I think AGI will follow the same path that many other technologies have. Airplanes don't flap their wings, yet they fly.