Literally every argument about AI risk is entirely made up from exactly nothing. All the terminology is fart-huffing. It has the same evidentiary basis as a story about floopflorps on Gavitron9 outcompeting nuubidons.

I can make up shit that's never happened about things that don't exist using language I invented to describe concepts without form about events I imagined that are important because everybody is going to die unless they listen to me.

Except that HAS happened. It's called a cult.


Look, I get it: watching Terminator 2 was a formative yet incredibly destructive childhood event for you.


@SwiftOnSecurity No offense, but I've seen the code quality of pretty much everything we put out these days. It's hard to have faith in anything AI going anywhere BUT completely catastrophic in light of that.

@Firehawke_R Only if we take the output seriously, and that’s the actual risk.

AI isn’t a risk to us. People delegating decision-making and doing stupid shit because “the algorithm said so” is the risk to us, and we can stop that by simply refusing to accept, societally, that “I just did what the computer said to do” absolves responsibility.

AI was partially responsible for the housing collapse. We already know what happens when people give up thinking about hard problems and just trust the model.
