@vidar @Benhm3 @pluralistic To tie this back to AGI instead of the "consciousness" distraction: you make a good point that these "next-word-predictor programs" are pretty impressive.
I think I'm convinced by the argument that they aren't the category of thing that's going to be able to order up a batch of killer viruses or whatever, though. (It would be nice to read something making a robust argument for that...)
@ech @Benhm3 @pluralistic On the other hand, a stupid one that's less capable of carrying out its stupid actions might still be preferable. I wouldn't feel very threatened by a paperclip optimizer with GPT-4 serving as the brain, for example.
@vidar @Benhm3 @pluralistic Yeah, good point: I think a proper paperclip maximizer needs to be pretty "smart" to be a real problem. (But it probably doesn't matter whether it's conscious or not.)
Killer viruses, nanotech, building so many data centers that Earth becomes too hot to support life, etc.; all the AI doom scenarios I know of seem to require extreme smarts.
Those don't seem like the kind of things that LLMs are about to do to us.
@ech @Benhm3 @pluralistic Well, if one is willing to consider AGI that doesn't meet a "consciousness" threshold, that will invariably lower the barrier.
I think a problem there is that it doesn't take smarts to do a whole lot of stupid things. If anything, the chance of something tremendously stupid seems higher from something that has learnt how to optimise for a target, but can't reason well enough to grasp the consequences.