I'm sure this is unoriginal, but it seems that with ChatGPT and similar AI text bots, we have created philosophical zombies (p-zombies).
They have learned to talk like us, based on everything we've said on the internet. However, there's no sentience present at all.
In other words, we have created a (mostly) convincing simulacrum of a human that we can text chat with. But it has no mind, no sense of self, no consciousness. There is no risk of it becoming self-aware, because that's not how these neural networks work.
Is this a step on the path towards AGI (Artificial General Intelligence)? Yes. But even AGI doesn't mean sentience. This leads to a fascinating ethical question: what rights does a p-zombie have?
If it talks like a human but, effectively, the lights are on and no one's home, do we treat it like one of us? For now, I'd say no; they're just smart machines, constructs created to serve us. Ultimately, the test for AI rights has to be sentience, not convincing repartee.
@jasonetheridge More good stuff from Rickover. https://www.nytimes.com/1981/11/25/opinion/getting-the-job-done-right.html
@ingram He's nailed it. Those empty-headed managers are being saved by their technical experts, who let them swan around spouting their buzzwords, while quietly getting on with it. But if this trend continues, there won't be any technical experts left, or too few to make a difference. Madness.
@jasonetheridge I think it is something that Adm. Rickover identified decades ago. Having "leaders" who know business-school principles but nothing about what they manage is not particularly good. https://www.azquotes.com/quote/730279