
My kids have brainwashed me into pronouncing Ninjago as "Nin-JA-go". Sigh.

I'm sure this is unoriginal, but it seems that with ChatGPT and similar AI text bots, we have created philosophical zombies (p-zombies).

They have learned to talk like us, based on everything we've said on the internet. However, there's no sentience present at all.

In other words, we have created a (mostly) convincing simulacrum of a human that we can text chat with. But it has no mind, no sense of self, no consciousness. There is no risk of it becoming self-aware, because that's not how these neural networks work.

Is this a step on the path towards AGI (Artificial General Intelligence)? Yes. But even AGI doesn't imply sentience. That raises a fascinating ethical question: what rights does a p-zombie have?

If it talks like a human, but the lights are on and no one's home, do we treat it like one of us? For now, I'd say no; they're just smart machines, constructs created to serve us. Ultimately, the test for AI rights has to be sentience, not convincing repartee.
