I'm sure this is unoriginal, but it seems that with ChatGPT and similar AI text bots, we have created philosophical zombies (p-zombies).
They have learned to talk like us, based on everything we've said on the internet. However, there's no sentience present at all.
In other words, we have created a (mostly) convincing simulacrum of a human that we can text chat with. But it has no mind, no sense of self, no consciousness. There is no risk of it becoming self-aware, because that's not how these neural networks work.
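To make that concrete, here's a minimal toy sketch (not how ChatGPT actually works at scale, and every name in it is my own invention): a language model is, at bottom, a learned probability distribution over the next token given the previous ones, and generating text is just sampling from that distribution over and over. Nowhere in the loop is there anything resembling a self.

```python
import random
from collections import Counter, defaultdict

# Toy "training" corpus; a real model learns from vast amounts of internet text.
corpus = "the lights are on but no one is home the model just predicts the next word".split()

# Count which word tends to follow which (a crude bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Autoregressively sample words, one at a time.
    There is no inner state here that could count as a mind."""
    word, output = start, [start]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        words, counts = zip(*options.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Scale that idea up by billions of parameters and you get convincing repartee, but the underlying operation is still "predict the next token", which is why self-awareness isn't lurking in there waiting to switch on.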
Is this a step on the path towards AGI (Artificial General Intelligence)? Yes. But even AGI doesn't mean sentience. Which leads to a fascinating ethical question: what rights does a p-zombie have?
If it talks like a human, but effectively the lights are on and no one's home, do we treat it like one of us? For now, I'd say no; they're just smart machines, constructs created to serve us. Ultimately, the test for AI rights has to be sentience, not convincing repartee.