
Two hangings in #Singapore this morning, of two men who’ve been on death row for drug offences for over a decade. When will these killings end? #deathpenalty

theregister.com/2024/11/03/asi
"[] is moving from what it calls Electronic Road Pricing (ERP) 1.0 to ERP 2.0. The first version used gantries – or automatic tolls – to charge drivers a fee through an in-car device when they used specific roadways during certain hours.

ERP 2.0 sees the vehicle instead tracked through GPS, which can tell where a vehicle is at all operating times."

techcrunch.com/2024/11/14/snap
A creepy attempt to virtue signal which plays on the myth of stranger danger.

This is a *chat app*. It is no place for this creepy location tracking garbage. What does it say about a company that *this* is the sort of functionality that they cram into there?

Apparently, Bluesky's services have been going down.

If people spoke about cars the way they speak about "AI".

In ten years, your car is going to turn into Optimus Prime.

Imagine if people spoke about bikes the way they speak about "AI".

Normal: The pedals aren't working.

AI: The pedals are disobeying the rider's instructions.

Even if it were used in some sort of security research, the terms "scan" or "circumvent" would be preferable to "deceive" when describing "AI".

It's not "the AI". It's not a person. It's a tool. It's an unreliable tool.


Bad Language: The AI is being deceitful.

Passable Language: This program can be used as a tool to assist in producing misleading messages. The chatbot might generate misleading messages.

Bluesky doesn't appear to thread posts well when the follow-up is made a while after the first, such that other posts land in between.

I haven't had the time to look into this very deeply, but these are my immediate thoughts on the matter.


techcrunch.com/2024/11/14/eu-a
1) What if, ten, twenty, thirty years in the future, there are computers with far more processing power than today's?

2) I haven't looked into this deeply but "risks" sounds awfully vague. It seems likely to drive censorship.

3) Some of these "risks" are hypothetical and appear to be inspired by Hollywood disaster films.

4) "having a tendency to deceive"

If you have an issue with hallucinations (or inappropriate uses like customer support), can you just say that straight-up instead of beating around the bush like this?

Also, once again, this leans into a fantastical framing of the "AI" being an intelligent agent, that plots against people, rather than being an over-hyped tool.

"woke" originally meant being socially conscious, but that was back in 2015 and I think that meaning has since been forgotten.

I see a concern about how "mitigating AI risks" could lead to censorship. I haven't had enough time to look into these claims.
