It doesn't matter that the announcements of replacing workers with robots are false: what matters is that they strengthen capital against labor.
Cory Doctorow's debunking work is excellent, as usual.
In any case, self-driving trucks could not "fire those parcels at your door with a catapult".

Cory Doctorow  
Long before the current wave of #AIHype, we were being groomed for automation panics with misleading stories. Remember this one? "'Truck driver' is...

This is wrong on so many levels. @jeffjarvis there's no such thing as "an AI", piles of linear algebra ("mathy maths" as we call them on #MAIHT3k) are not the sort of things that have rights, and when they ingest large amounts of text to extract co-occurrence patterns of words >>

bird.makeup/@neilturkewitz/174

Wow, reading @jeffjarvis 's blog post and it gets even worse:

Synthetic text ("generative AI output") is text that represents no one's communicative intent. But it is well-formed and so people who encounter it interpret it.

>>

Long thread/13 

Now, these drivers aren't about to be replaced by AI - but that doesn't mean that AI won't affect their jobs. Commercial drivers are among the most heavily surveilled workers in the country. Amazon's drivers (whom Amazon misclassifies as subcontractors) have their *eyeballs* monitored by AI;

pluralistic.net/2022/04/17/rev

13/

Long thread/15 

And AI monitors the conduct of workers on temp-work apps. If a worker is dispatched to a struck workplace and refuses to cross the picket-line, the AI boss fires them and blacklists them from future jobs for refusing to robo-scab:

pluralistic.net/2023/07/30/com

Writing in *The Guardian*, #StevenGreenhouse describes the AI-enabled workplace, where precarious, often misclassified workers are monitored, judged, and fined by algorithms:

theguardian.com/technology/202

15/

Long thread/16 

Whether it's the robot that gets you disciplined for sending an email with the word "union" in it or the robot that takes money out of your paycheck if you take a bathroom break, AI has come for the workplace with a vengeance.

Here's a supreme irony: nearly all of the beneficial applications for AI require that AI be used to *help* workers, not replace them, which is absolutely not how AI is used in the workplace.

16/

Long thread/17 

An AI that helps radiologists by giving them a second opinion might help them find tumors on x-rays, but that's a tool that *reduces* the number of scans a radiologist processes in a shift, by making them go back and reconsider the scans they've already processed:

locusmag.com/2023/12/commentar

But AI's sales pitch is not "Buy an AI tool and increase your costs while increasing your accuracy." The pitch for AI is "buy an AI and save money by firing workers."

17/

Long thread/19 

For example, AI is a really good tool for fraud! Rather than paying people to churn out variations on a phishing email, you can get an AI to do it. If the AI writes a bad phishing email, it's OK, since nearly all recipients of even good phishing emails delete them. What's more, no one will fine you or publish an op-ed demanding that your board of directors fire you if you buy an incompetent AI to commit fraud. Fraud is a high-value, low-consequence environment for using AI.

19/

Long thread/20 

Another one of those applications is managing precarious workers who don't have labor rights. If the AI unfairly docks your worker's wages, or forces them to work until they injure themselves or others, or decides that their eyeball movements justify firing them, those workers have no recourse. That's the whole point of pretending that your employees are contractors: so you can violate labor law with impunity!

20/

Long thread/23 

The pseudoscientific cod-ergonomics of the 1900s was demeaning and even dangerous, but it wasn't *automated*, and if it increased worker output, this was incidental to the real purpose of making workers move like the machine-cogs their bosses reassured themselves they were:

pluralistic.net/2022/08/21/gre

23/

Long thread/25 

The "X-risk" of the spicy autocomplete chatbot waking up and using its newfound sentience to turn us all into paperclips is nonsense. Adding words to the plausible sentence generator doesn't turn it into a superintelligence, for the same reason that selectively breeding faster horses doesn't lead to locomotives:

locusmag.com/2020/07/cory-doct

25/

Long thread/26 

But there *is* a way that AI could destroy the human race! The carbon footprint and water consumption associated with training and operating large-scale models are significant contributors to the climate emergency, which threatens the habitability of the only planet in the known universe capable of sustaining human life:

forbes.com/sites/federicoguerr

26/

"...tech executives envision that, over time, bot teachers will be able to respond to and inspire individual students" - and even that AI could "look at the student’s facial expression and say: ‘Hey, I think you’re a little distracted right now. Let’s get focused'"... nytimes.com/2024/01/11/technol

Automated teaching machines are an old dream, constantly recurring in the fantasies of entrepreneurs, but there is little historical or current evidence to support them; the new chatbot tutors look more like a way of further privatizing public education through genAI plug-ins.

It sounds like a pious argument, but it is nothing of the sort. Father Benanti is comparing our situation with #SALAMI to one in which the law is ineffective and the ius necessitatis applies (see § 3 here, on the trolley dilemma: btfp.sp.unipi.it/it/2022/11/la). With #SALAMI, are we really in that situation, or is someone trying to instill in the public - Father Benanti included - a false sense of emergency?
