I'm sure this is unoriginal, but it seems that with ChatGPT and similar AI text bots, we have created philosophical zombies (p-zombies).
They have learned to talk like us, based on everything we've said on the internet. However, there's no sentience present at all.
In other words, we have created a (mostly) convincing simulacrum of a human that we can text chat with. But it has no mind, no sense of self, no consciousness. There is no risk of it becoming self-aware, because that's not how these neural networks work.
Is this a step on the path towards AGI (Artificial General Intelligence)? Yes. But even AGI doesn't mean sentience. It leads to a fascinating ethical question: what rights does a p-zombie have?
If it talks like a human, but effectively the lights are on but no one's home, do we treat it like one of us? For now, I'd say no; they're just smart machines, constructs created to serve us. Ultimately, the test for AI rights has to be sentience, not convincing repartee.
@gpowerf I would do the same (and do with the very limited AI in the Google Assistant), and am especially aware that I'm modelling such interactions for my young children.
I can't help wondering if this is parochial, however; ChatGPT and its ilk are no less machines than toasters or fridges, though with the very real distinguishing feature that they can talk to us. Not wanting to develop a pattern of behaviour one could inadvertently inflict on another human is a good reason to always model good behaviour, you're right. Or perhaps it's a hedge against the day when our AIs do manifest true sentience, at which point mistreating them would be ethically wrong.