I'm sure this is unoriginal, but it seems that with ChatGPT and similar AI text bots, we have created philosophical zombies (p-zombies).

They have learned to talk like us, based on everything we've said on the internet. However, there's no sentience present at all.

In other words, we have created a (mostly) convincing simulacrum of a human that we can text chat with. But it has no mind, no sense of self, no consciousness. There is no risk of it becoming self-aware, because that's not how these neural networks work.

Is this a step on the path towards AGI (Artificial General Intelligence)? Yes. But even AGI doesn't mean sentience. It leads to a fascinating ethical question: what rights does a p-zombie have?

If it talks like a human but, effectively, the lights are on and no one's home, do we treat it like one of us? For now, I'd say no; they're just smart machines, constructs created to serve us. Ultimately, the test for AI rights has to be sentience, not convincing repartee.

@jasonetheridge Sci-fi name aside, yes, this is what is going on.

We made computers parse language, but not think. People are freaking out about it because we associate language with sentience. I mean, we *know* that's not how it works. People can lose the ability to use language and still be sentient, functional human beings. I unfortunately know this well.

No, these things don't pose much of an ethical concern about rights. It's no more sentient than your phone and much less than your cat.

@jasonetheridge Seriously, though, can we wait until we come up with real AI before jumping ahead to all the cool sci-fi robot uprising debates?

I know we all grew up with them and they're cool and all, but right now we have to explain how to interact with these things to a few billion people before we worry about whether Siri needs weekends off.


@MudMan I certainly wouldn't argue for AI rights at this stage, and that wasn't what I was suggesting. That was the point: these chatbots talk like people, but they are NOT people. They have no feelings, no inner life, no inner dialogue... nothing. They're just word predictors, drawing upon a vast dataset created by real humans. P-zombies indeed.
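To make "word predictor" concrete, here's a toy sketch (my own illustration, nowhere near the scale or transformer architecture of the real systems): a bigram model that counts which word tends to follow which, then babbles by sampling. ChatGPT performs the same basic task, predicting the next token, just with an enormously more capable model.

```python
import random
from collections import defaultdict

# Toy "word predictor": a bigram model trained on a tiny corpus.
# Real chatbots use transformer networks trained on billions of
# documents, but the core task is the same: predict the next token.
corpus = "the lights are on but no one is home the lights are off".split()

next_words = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current].append(nxt)

def generate(start, length=8):
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample a likely next word
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the lights are on but no one is home"
```

Nothing in there ponders, feels, or understands; it only continues a sequence. The real models are the same in kind, just unimaginably better at it.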

As to how to interact with them: I think the point is to remember that they're not people. You're talking to an extremely knowledgeable, but utterly mindless, machine. And nothing more. The trick will be not forgetting that, given an illusion of sentience that will only get better as time goes on.

@jasonetheridge Yeah, but... you're coming at it backwards.

Why *would* they have feelings or inner lives? What caused that impression? Why even ponder that here and not with Siri or Google's search engine? They also take your natural language requests and provide some output.

It's not a zombie, P or any other kind. It's a computer returning an output from an input.

The speculation on sentience worries me because it's the wrong conversation in the first place, even when the answer is no.

@MudMan A philosophical zombie is a thought experiment: something that gives the convincing illusion of having the same inner life as a human, but in reality doesn't. That's the point: these AIs can give that illusion, too. I made it clear that I don't believe they are sentient, but someone less familiar with computer science might be fooled.

Somehow the public should be informed as to what these AIs aren't. You'll note that the tech companies pushing them make no attempt to explain this; they're just magic boxes that talk back to us. People in general are (or so it seems to me) going to default to assuming there's a mind in there, because how else could they talk back? The reality is far more prosaic, and yet unless it is grasped, we will have bleeding hearts fighting for AI rights (which makes about as much sense as fridge rights).
