
Wordle 608 3/6*

⬛⬛🟨⬛🟩
⬛🟩⬛⬛🟩
🟩🟩🟩🟩🟩

Wordle 607 3/6*

⬛⬛🟨⬛⬛
⬛🟩🟨⬛⬛
🟩🟩🟩🟩🟩

Wordle 606 3/6*

🟩⬛🟨⬛⬛
🟩🟩🟩🟨⬛
🟩🟩🟩🟩🟩

Wordle 605 4/6*

🟩⬛⬛⬛⬛
🟩⬛⬛🟩⬛
🟩⬛🟩🟩⬛
🟩🟩🟩🟩🟩

Wordle 604 3/6*

🟨⬛🟩⬛🟩
⬛⬛🟩🟨🟩
🟩🟩🟩🟩🟩

@hughster That's too indirect for me. I choose to interpret her beliefs and position based upon her own words, in her 2020 essay, and what she's said publicly. I understand this is an unfashionable position, and could be considered wrongthink, but I can't violate my own sense of integrity by doing otherwise.

How long has this been going on and they've been undetected or ignored?

I find myself hoping we're not pissing off some innocent alien tourists. 😀

bbc.com/news/world-us-canada-6

@hughster Is that the entire corpus of evidence? That she 'liked' the tweets of an individual with questionable views? What has she said publicly that could be considered transphobic?

Wordle 603 4/6*

⬛🟨🟩⬛⬛
⬛⬛🟩🟩🟩
⬛⬛🟩🟩🟩
🟩🟩🟩🟩🟩

@hughster It's certainly the zeitgeist. When I've looked at what she's said, the theme is consistently protecting the rights of natal women, particularly against natal men. She has no animus for trans people generally, and genuinely supports their human rights. I don't believe that makes her transphobic, but rather a feminist who won't sacrifice women's rights.

@MudMan A philosophical zombie is a thought experiment about something that gives the convincing illusion of having the same inner life as a human, but in reality doesn't. That's the point: these AIs can give that illusion, too. I made it clear that I don't believe they are sentient; but someone less familiar with computer science might be fooled. Somehow the public needs to be informed about what these AIs actually are, and aren't. You'll note that the tech companies pushing them make no attempt to explain this. They're just magic boxes that talk back to us. People in general are (or so it seems to me) going to default to assuming there's a mind in there, because how else could they talk back? The reality is obviously prosaic, and yet unless it is grasped, we will have bleeding hearts fighting for AI rights (which makes as much sense as fridge rights).

@MudMan I certainly wouldn't argue for AI rights at this stage, and that wasn't what I was suggesting. That was the point: these chatbots talk like people, but they are NOT people. They have no feelings, no inner life, no inner dialogue... nothing. They're just word predictors, drawing upon a vast dataset created by real humans. P-zombies indeed.

As to how to interact with them: I think the point is to remember that they're not people. You're talking to an extremely knowledgeable, but utterly mindless, machine. And nothing more. The trick will be not forgetting that, given an illusion of a sentient human that will only get better as time goes on.
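The "word predictor" idea can be sketched in a few lines of Python. This is only a toy bigram model I'm using for illustration (the tiny corpus and names are made up, and real LLMs use neural networks over vast datasets), but the basic task is the same: predict the next token, with no understanding involved.

```python
from collections import Counter, defaultdict

# Toy "word predictor": count which word follows which in a tiny
# corpus, then always emit the most frequent successor. No mind,
# no meaning; just statistics over text that humans wrote.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    # The most common word seen after `word` in the corpus.
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (follows "the" twice; "mat" and "fish" once)
```

Scale that up by billions of parameters and terabytes of text and you get fluent conversation, but the machinery is still prediction, not sentience.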

@gpowerf I would do the same (and do with the very limited AI in the Google Assistant), and am especially aware that I'm modelling such interactions for my young children.

I can't help wondering if this is parochial, however; ChatGPT and its ilk are no less machines than toasters or fridges, though with the very real distinguishing feature that they can talk to us. Not wanting to develop a pattern of behaviour one could inadvertently inflict on another human is a good reason to always model good behaviour, you're right. Or perhaps as a hedge against the day when our AIs do manifest true sentience, and mistreating them would at that point be ethically wrong.

@ingram He's nailed it. Those empty-headed managers are being saved by their technical experts, who let them swan around spouting their buzzwords, while quietly getting on with it. But if this trend continues, there won't be any technical experts left, or too few to make a difference. Madness.

I'm really glad to see that it is doing well in spite of the hysterical campaign by misguided tech journos, which actually helped promote it! All because @jk_rowling is mistakenly perceived to be transphobic. It's insane.

fortune.com/2023/02/08/hogwart

@ingram I have observed this phenomenon for many years, and cannot contradict you. The effect of too much perceived power? Adopting what they see as the needs of the role, that inadvertently lobotomises them?

I'm sure this is unoriginal, but it seems that with ChatGPT and similar AI text bots, we have created philosophical zombies (p-zombies).

They have learned to talk like us, based on everything we've said on the internet. However, there's no sentience present at all.

In other words, we have created a (mostly) convincing simulacrum of a human that we can text chat with. But it has no mind, no sense of self, no consciousness. There is no risk of it becoming self-aware, because that's not how these neural networks work.

Is this a step on the path towards AGI (Artificial General Intelligence)? Yes. But even AGI doesn't mean sentience. It leads to a fascinating ethical question: what rights does a p-zombie have?

If it talks like a human, but the lights are on and no one's home, do we treat it like one of us? For now, I'd say no; they're just smart machines, constructs created to serve us. Ultimately, the test for AI rights has to be sentience, not convincing repartee.

Wordle 602 4/6*

⬛⬛⬛⬛🟨
⬛⬛🟨⬛🟨
🟩🟩⬛⬛⬛
🟩🟩🟩🟩🟩

🦊FoxiMax #167 4/8
foximax.com/

🟩🟩🟩🟩🟩
🟩⬜🟩⬜🟩
🟩⬜🟩🟩🟩
🟩🟩🟩🟩🟩

Wordle 601 4/6*

⬛⬛🟩⬛🟨
⬛🟩🟩⬛🟨
🟩🟩🟩⬛🟩
🟩🟩🟩🟩🟩

Qoto Mastodon