I'm sure this is unoriginal, but it seems that with ChatGPT and similar AI text bots, we have created philosophical zombies (p-zombies).

They have learned to talk like us, based on everything we've said on the internet. However, there's no sentience present at all.

In other words, we have created a (mostly) convincing simulacrum of a human that we can text chat with. But it has no mind, no sense of self, no consciousness. There is no risk of it becoming self-aware, because that's not how these neural networks work.

Is this a step on the path towards AGI (Artificial General Intelligence)? Yes. But even AGI doesn't imply sentience. That leads to a fascinating ethical question: what rights does a p-zombie have?

If it talks like a human, but the lights are on and no one's home, do we treat it like one of us? For now, I'd say no; they're just smart machines, constructs created to serve us. Ultimately, the test for AI rights has to be sentience, not convincing repartee.

@jasonetheridge Everything you describe would equally apply to modern management. Empty skulls parading around emitting buzzwords with no understanding of what they mean. Lack of self-awareness is also common to both.

@ingram I have observed this phenomenon for many years, and cannot contradict you. Is it the effect of too much perceived power? Or of adopting what they see as the demands of the role, which inadvertently lobotomises them?

@jasonetheridge I think it is something that Adm. Rickover identified decades ago. Having "leaders" who know business-school principles but nothing about what they manage is not particularly good. azquotes.com/quote/730279

@ingram He's nailed it. Those empty-headed managers are being saved by their technical experts, who let them swan around spouting their buzzwords, while quietly getting on with it. But if this trend continues, there won't be any technical experts left, or too few to make a difference. Madness.

@jasonetheridge One aspect I'm interested in is what it does to human-to-human communication and interactions. I always make a point of treating an AI politely; it might not be self-aware, but I don't want to get used to treating something that seems human as less than human, just in case I start treating humans the same way.

@gpowerf I would do the same (and do with the very limited AI in the Google Assistant), and am especially aware that I'm modelling such interactions for my young children.

I can't help wondering if this is parochial, however; ChatGPT and its ilk are no less machines than toasters or fridges, though with the very real distinguishing feature that they can talk to us. You're right that not wanting to develop a pattern of behaviour one could inadvertently inflict on another human is a good reason to always model good behaviour. Or perhaps it's a hedge against the day when our AIs do manifest true sentience, at which point mistreating them would be ethically wrong.

@jasonetheridge Sci-fi name aside, yes, this is what is going on.

We made computers parse language, but not think. People are freaking out about it because we associate language with sentience. I mean, we *know* that's not how it works. People can lose the ability to use language and still be sentient, functional human beings. I unfortunately know this well.

No, these things don't pose much of an ethical concern about rights. It's no more sentient than your phone and much less than your cat.

@jasonetheridge Seriously, though, can we wait until we come up with real AI before jumping ahead to all the cool sci-fi robot uprising debates?

I know we all grew up with them and they're cool and all, but right now we have to explain how to interact with these things to a few billion people before we worry about whether Siri needs weekends off.

@MudMan I certainly wouldn't argue for AI rights at this stage, and that wasn't what I was suggesting. That was the point: these chatbots talk like people, but they are NOT people. They have no feelings, no inner life, no inner dialogue... nothing. They're just word predictors, drawing upon a vast dataset created by real humans. P-zombies indeed.
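
To make "word predictor" concrete, here's a toy sketch: a hypothetical bigram model in Python, vastly simpler than ChatGPT's actual neural network, but the same principle of sampling the next word from learned statistics.

```python
# Toy "word predictor" (hypothetical bigram model; NOT how ChatGPT is
# implemented, just an illustration of next-word sampling from statistics).
import random
from collections import defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

# The whole "model" is a table of which words were observed to follow which.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit words one at a time, each sampled from what followed the last."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:  # dead end: no observed continuation
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug" -- fluent, mindless
```

It produces fluent-looking strings with nothing resembling a mind behind them; scale the same idea up enormously and you get convincing repartee.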

As to how to interact with them: I think the point is to remember that they're not people. You're talking to an extremely knowledgeable, but utterly mindless, machine. And nothing more. The trick will be not forgetting that, as the illusion of a sentient human will only get better as time goes on.

@jasonetheridge Yeah, but... you're coming at it backwards.

Why *would* they have feelings or inner lives? What caused that impression? Why even ponder that here and not with Siri or Google's search engine? They also take your natural language requests and provide some output.

It's not a zombie, P or any other kind. It's a computer returning an output from an input.

The speculation on sentience worries me because it's the wrong conversation to be having in the first place, even in the negative.

@MudMan A philosophical zombie is a thought experiment: something that gives the convincing illusion of having the same inner life as a human, but in reality doesn't. That's the point: these AIs can give that illusion, too. I made it clear that I don't believe they are sentient; but someone less familiar with computer science might be fooled.

Somehow the public should be informed about what these AIs are, and aren't. You'll note that the tech companies pushing them make no attempt to explain this. They're just magic boxes that talk back to us. People in general are (or so it seems to me) going to default to assuming there's a mind in there, because how else could they talk back? The reality is prosaic, and yet unless it is grasped, we will have bleeding hearts fighting for AI rights (which makes as much sense as fridge rights).
