I find that so much of the disagreement between #AGI “alarmists” and AGI “sceptics” stems from basic differences in the definition of “AGI” and a lack of imagination on the part of the optimists.
🧵
AGI means human-level intelligence (or superior).
The scenarios of AGI most people imagine are naïve amplifications of present-day narrow AIs plus some sleek robotics sprinkled on top.
Like, an AI that would take care of my house while I'm on holiday: watering plants if it's hot and dry, and calling the police if someone breaks in. Or: an AI that would fill in and submit my tax returns on my behalf. Or: one that would read my blood tests and adjust my diet accordingly. Or: we'll ask it to solve climate change, and it'll suggest the best course of action…
The thing is:
We already have all that! That's _not_ AGI!
Human- (or higher-) level intelligence means being knowledgeable, resourceful, independent and creative enough to do all of that well, without being asked explicitly, _and much more_.
Even if it's “just” human-level, you should expect from it everything that you normally expect from natural (human) intelligences.
An AI "trapped" in a cage could easily escape if it were much more intelligent than its human captors. This experiment has already been done with a mere human playing the role of the AI, and the "AI" escaped every time.
And there is no such thing as complete isolation anyway: whoever is developing the AI is in contact with the world, so the AI is never fully cut off.
I read about it a while ago and I don't remember who did it. In the experiment the test subjects typed at a keyboard, and a human out of their sight typed the answers as if they were an AI.
The people who played the AI were coached by the person who ran the experiment, and they had to swear not to reveal how they managed to escape. That's all I remember.
Also, there's a recent sci-fi film about artificial intelligence, Ex Machina (2014).
Yes, it comes to my mind too, when I think of trapped #AGIs trying to trick their human captors into letting them escape.
But then, it's fiction. Not an argument :) That's why I don't use it in debates.
/cc @ImperfectIdea
Fiction is useful to possibly bring to your attention some aspect of a problem or argument that you may not have considered, but of course you still need to use reason and facts to confirm any hypothesis.
I think another way of approaching this question is to consider how a less intelligent animal might relate to more intelligent humans. If an animal somehow had a human trapped somewhere, the human could probably figure out a way to fool the animal into letting them escape.
It's not exactly analogous, but it's another approach to figuring out the problem.
@Pat
Link to information about that experiment?