@Benhm3 @pluralistic This rests on an assumption that we know what causes consciousness, and so know that the differences between "the next-word-predictor program" and humans fall on different sides of that divide. But that both massively exaggerates what we know about consciousness and massively downplays current AI. To start with, the "next-word-predictor programs" are Turing complete with just a basic loop around them.
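(For the curious, here's a rough sketch of what "a basic loop around them" means: feed the model's output back into its input until it decides to stop. The `predict_next_token` function is a hypothetical placeholder standing in for any LLM call, not a real API.)

```python
# Minimal sketch: a next-token predictor wrapped in a feedback loop.
# `predict_next_token` is a hypothetical stand-in for a real model call
# that maps the text produced so far to one more token.

def predict_next_token(context: str) -> str:
    """Placeholder for a real model; this dummy halts immediately."""
    return "<halt>"

def run(program_prompt: str, max_steps: int = 1000) -> str:
    tape = program_prompt              # the growing context acts as the machine's tape/state
    for _ in range(max_steps):
        token = predict_next_token(tape)
        if token == "<halt>":          # the model itself decides when to stop
            break
        tape += token                  # feed the output back in: this loop is what
                                       # turns one-step prediction into open-ended computation
    return tape
```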
@vidar @Benhm3 @pluralistic to tie this back to AGI instead of the "consciousness" distraction, you make a good point that these "next-word-predictor programs" are pretty impressive.
That said, I think I'm convinced by the argument that they aren't the category of thing that's going to be able to order up a batch of killer viruses or whatever. (It would be nice to read something that makes a robust argument for that, though...)
@ech @Benhm3 @pluralistic Well, if one is willing to consider AGI that does not meet a "consciousness" threshold, that will invariably lower the barrier.
I think a problem there is that it doesn't take smarts to do a whole lot of stupid things. If anything, I think the chance of something tremendously stupid is higher with something that has learnt how to optimise for a target but can't reason well enough to grasp the consequences.
@vidar @Benhm3 @pluralistic Yeah, I think a proper paperclip maximizer needs to be pretty "smart" to be a real problem, good point. (But it probably doesn't matter if it is conscious or not.)
Killer viruses/nanotech, building so many data centers that Earth becomes too hot to support life, etc.: all the AI doom scenarios I know of seem to require extreme smarts.
Those don't seem like the kind of things that LLMs are about to do to us.