techcrunch.com/2024/11/14/snap A creepy attempt to virtue signal that plays on the myth of stranger danger.

This is a *chat app*. It is no place for this creepy location tracking garbage. What does it say about a company that *this* is the sort of functionality that they cram into there?

Olives boosted

@Connewitz I followed that account a couple of weeks ago. That only resulted in an empty profile. I tried following it again just now and it still doesn't work.

Olives boosted

Due to popular demand, I chose a cool anime avatar.

Olives boosted

I tried the #Bluesky bridge and it doesn't work at all.

Apparently, Bluesky's services have been going down.

If people spoke about cars the way they speak about "AI":

In ten years, your car is going to turn into Optimus Prime.

Imagine if people spoke about bikes the way they speak about "AI".

Normal: The pedals aren't working.

AI: The pedals are disobeying his instructions.

Even if it were used in some sort of security research, the terms "scan" or "circumvent" would be preferable to "deceive" when talking about "AI".

It's not "the AI". It's not a person. It's a tool. It's an unreliable tool.

Bad Language: The AI is being deceitful.

Passable Language: This program can be used as a tool to assist in producing misleading messages. The chatbot might generate misleading messages.

Bluesky doesn't appear to thread posts well when a follow-up is posted a while after the first, such that other posts land in between.

I haven't had the time to look into this that deeply but these are my immediate thoughts on the matter.

techcrunch.com/2024/11/14/eu-a

1) What if there are computers ten, twenty, thirty years in the future with far more processing power than those now?

2) I haven't looked into this deeply but "risks" sounds awfully vague. It seems likely to drive censorship.

3) Some of these "risks" are hypothetical and appear to be inspired by Hollywood disaster films.

4) "having a tendency to deceive"

If you have an issue with hallucinations (or inappropriate uses like customer support), can you just say that straight-up instead of beating around the bush like this?

Also, once again, this leans into a fantastical framing of the "AI" as an intelligent agent that plots against people, rather than as an over-hyped tool.

"Woke" originally meant something like being socially conscious, but that was back in 2015 and I think that meaning has been forgotten.

I see a concern about how "mitigating AI risks" could lead to censorship. I haven't had enough time to look into this.

Qoto Mastodon

QOTO: Question Others to Teach Ourselves
An inclusive, Academic Freedom, instance
All cultures welcome.
Hate speech and harassment strictly forbidden.