Long thread/5
They didn't channel hundreds of millions to election campaigns through #StrawDonations and other forms of campaign finance fraud. They didn't even open a crypto-themed hamburger restaurant where you couldn't buy hamburgers with crypto:
https://robbreport.com/food-drink/dining/bored-hungry-restaurant-no-cryptocurrency-1234694556/
5/
Long thread/6
They were amateurs. Their attempt to #MakeFetchHappen only succeeded for a brief instant. By contrast, the superpredators of the crypto bubble were able to make fetch happen over an improbably long timescale, deploying the most powerful reality distortion fields since Pets.com.
Anything that can't go on forever will eventually stop.
6/
Long thread/7
We're told that trillions of dollars' worth of crypto has been wiped out over the past year, but these losses are nowhere to be seen in the real economy - because the "wealth" that was wiped out by the crypto bubble's bursting never existed in the first place.
Like any #PonziScheme, crypto was a way to separate normies from their savings through the pretense that they were "investing" in a vast enterprise.
7/
Long thread/8
But the only real money ("#fiat" in cryptospeak) in the system was the hardscrabble retirement savings of working people, which the bubble's energetic inflaters swapped for illiquid, worthless #shitcoins.
We've stopped believing in the illusory billions. #SamBankmanFried is under house arrest. But the people who gave him money - and the nimbler Ponzi artists who evaded arrest - are looking for new scams to separate the marks from their money.
8/
Long thread/9
Take #MorganStanley, who spent 2021 and 2022 hyping cryptocurrency as a massive growth opportunity:
https://cointelegraph.com/news/morgan-stanley-launches-cryptocurrency-research-team
Today, Morgan Stanley wants you to know that #AI is a $6 trillion opportunity.
They're not alone. The CEOs of Endeavor, Buzzfeed, Microsoft, Spotify, Youtube, Snap, Sports Illustrated, and CAA are all out there, pumping up the AI bubble with every hour that god sends, declaring that the future is AI.
https://www.hollywoodreporter.com/business/business-news/wall-street-ai-stock-price-1235343279/
9/
Long thread/10
Google and Bing are locked in an arms-race to see whose search engine can attain the speediest, most profound #enshittification via #chatbot, replacing links to web-pages with florid paragraphs composed by fully automated, supremely confident liars:
https://pluralistic.net/2023/02/16/tweedledumber/#easily-spooked
10/
Long thread/11
Blockchain was a solution in search of a problem. So is AI. Yes, Buzzfeed will be able to reduce its wage-bill by automating its personality quiz vertical, and Spotify's "AI DJ" will produce slightly less terrible playlists (at least, to the extent that Spotify doesn't put its thumb on the scales by inserting tracks into the playlists whose only fitness factor is that someone paid to boost them).
11/
Long thread/12
But even if you add all of this up, double it, square it, and add a billion dollar confidence interval, it still doesn't add up to what #BankOfAmerica analysts called "a defining moment — like the internet in the ’90s." For one thing, the most exciting part of the "internet in the '90s" was that it had incredibly low barriers to entry and wasn't dominated by large companies - indeed, it had them running scared.
12/
Long thread/13
The AI bubble, by contrast, is being inflated by massive incumbents, whose excitement boils down to "This will let the biggest companies get much, much bigger and the rest of you can go fuck yourselves." Some revolution.
AI has all the hallmarks of a classic pump-and-dump, starting with terminology. AI isn't "artificial" and it's not "intelligent." "Machine learning" doesn't learn.
13/
Long thread/14
On this week's Trashfuture podcast, they made an excellent (and profane and hilarious) case that #ChatGPT is best understood as a sophisticated form of #autocomplete - not our new robot overlord.
https://open.spotify.com/episode/4NHKMZZNKi0w9mOhPYIL4T
We all know that autocomplete is a decidedly mixed blessing. Like all statistical inference tools, autocomplete is profoundly conservative - it wants you to do the same thing tomorrow as you did yesterday.
14/
Long thread/15
That's why "sophisticated" ad retargeting ads show you ads for shoes in response to your search for shoes. If the word you type after "hey" is usually "hon" then the next time you type "hey," autocomplete will be ready to fill in your typical following word - even if this time you want to type "hey stop texting me you freak":
15/
Long thread/16
And when autocomplete encounters a new input - when you try to type something you've never typed before - it tries to get you to finish your sentence with the statistically median thing that *everyone* would type next, on average. Usually that produces something utterly bland, but sometimes the results can be hilarious.
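(To make the "statistically median" idea concrete, here's a toy sketch - purely illustrative, with made-up names, and nothing like how a real keyboard app is actually built - of a bigram autocomplete that suggests whatever word most often followed your last word in your own history, and falls back to a table of everyone else's history when it has never seen your input before.)

```python
from collections import Counter, defaultdict

def build_bigrams(history):
    """Count which word followed which across a list of past messages."""
    table = defaultdict(Counter)
    for message in history:
        words = message.lower().split()
        for prev, nxt in zip(words, words[1:]):
            table[prev][nxt] += 1
    return table

def suggest(prev_word, personal, crowd):
    """Suggest the most frequent next word: your habit if known, the crowd's otherwise."""
    prev_word = prev_word.lower()
    for table in (personal, crowd):
        if table[prev_word]:
            return table[prev_word].most_common(1)[0][0]
    return None

# Toy data: your own texts vs. an aggregate of "everyone"
personal = build_bigrams(["hey hon", "hey hon on my way"])
everyone = build_bigrams(["hey there", "hey there how are you", "are you free today"])

print(suggest("hey", personal, everyone))  # -> "hon" (your habit wins)
print(suggest("are", personal, everyone))  # -> "you" (falls back to the crowd average)
```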
16/
Long thread/17
Back in 2018, I started to text our babysitter with "hey are you free to sit" only to have Android finish the sentence with "on my face" (not something I'd ever typed!):
https://mashable.com/article/android-predictive-text-sit-on-my-face
Modern autocomplete can produce long passages of text in response to prompts, but it is every bit as unreliable as 2018 Android SMS autocomplete, as @ThatPrivacyGuy discovered when ChatGPT informed him that he was dead.
17/
Long thread/18
It even generated a plausible URL for a link to a nonexistent obit in *The Guardian*:
https://www.theregister.com/2023/03/02/chatgpt_considered_harmful/
Of course, the carnival barkers of the AI pump-and-dump insist that this is all a feature, not a bug. If autocomplete says stupid, wrong things with total confidence, that's because "AI" is becoming *more* human, because humans also say stupid, wrong things with total confidence.
18/
Long thread/19
Exhibit A is the billionaire AI grifter Sam Altman, CEO of OpenAI - a company whose products are not open, nor are they artificial, nor are they intelligent. Altman celebrated the release of ChatGPT by tweeting "i am a stochastic parrot, and so r u."
https://twitter.com/sama/status/1599471830255177728
19/
Long thread/20
This was a dig at the #StochasticParrots paper, a comprehensive, measured roundup of criticisms of AI that led Google to fire @timnitGebru, a respected AI researcher, for having the audacity to point out the Emperor's New Clothes.
Gebru's co-author on the Parrots paper was @emilymbender, a computational linguistics specialist at UW, who is one of the best-informed and most damning critics of AI hype.
20/
Long thread/21
You can get a good sense of her position from @lizweil's *New York Magazine* profile:
https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html
Bender has made many important scholarly contributions to her field, but she is also famous for her rules of thumb, which caution her fellow scientists not to get high on their own supply:
* Please do not conflate word form and meaning
* Mind your own credulity
21/
Long thread/22
As Bender says, we've made "machines that mindlessly generate text, but we haven’t learned how to stop imagining the mind behind it." One potential tonic against this fallacy is to follow an Italian MP's suggestion and replace "AI" with "#SALAMI" ("Systematic Approaches to Learning Algorithms and Machine Inferences"). It's a lot easier to keep a clear head when someone asks you, "Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?"
22/
Long thread/23
Bender's most famous contribution is the "stochastic parrot," a construct that "just probabilistically spits out words." AI bros like Altman love the stochastic parrot, and are hellbent on reducing human beings to stochastic parrots, which will allow them to declare that their chatbots have feature-parity with human beings.
23/
Long thread/24
At the same time, Altman and Co are strangely afraid of their creations. It's possible that this is just a shuck: "I have made something so powerful that it could destroy humanity! Luckily, I am a wise steward of this thing, so it's fine. But boy, it sure is powerful!"
They've been playing this game for a long time.
24/
Long thread/25
People like Elon Musk (an investor in OpenAI, who is hoping to convince the EU Commission and FTC that he can fire all of Twitter's human moderators and replace them with chatbots without violating EU law or the FTC's consent decree) keep warning us that AI will destroy us unless we tame it.
25/
Long thread/26
There's a lot of credulous repetition of these claims, and not just by AI's boosters. AI *critics* are also prone to engaging in what Lee Vinsel calls #CritiHype: criticizing something by repeating its boosters' claims without interrogating them to see if they're true:
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5
26/
Long thread/27
There are better ways to respond to Elon Musk warning us that AIs will emulsify the planet and use human beings for food than to shout, "Look at how irresponsible this wizard is being! He made a Frankenstein's Monster that will kill us all!" Like, we could point out that of all the things Elon Musk is profoundly wrong about, he is most wrong about the philosophical meaning of Wachowski movies:
27/
Long thread/28
But even if we take the bros at their word when they proclaim themselves to be terrified of "existential risk" from AI, we can find better explanations by seeking out other phenomena that might be triggering their dread. As @cstross points out, corporations are #SlowAIs, autonomous artificial lifeforms that consistently do the wrong thing even when the people who nominally run them try to steer them in better directions:
https://media.ccc.de/v/34c3-9270-dude_you_broke_the_future
28/
Long thread/30
#TedChiang nailed this back in 2017 (the same year as the Long Island Blockchain Company):
> There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending.
30/
Long thread/31
> What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.
https://www.buzzfeednews.com/article/tedchiang/the-real-danger-to-civilization-isnt-ai-its-runaway
31/
Long thread/32
Chiang is still writing some of the best critical work on "AI." His February article in the *New Yorker*, "ChatGPT Is a Blurry JPEG of the Web," was an instant classic:
> [AI] hallucinations are compression artifacts, but—like the incorrect labels generated by the Xerox photocopier—they are plausible enough that identifying them requires comparing them against the originals, which in this case means either the Web or our own knowledge of the world.
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
32/
Long thread/33
"AI" is practically purpose-built for inflating another hype-bubble, excelling as it does at producing party-tricks - plausible essays, weird images, voice impersonations. But as Princeton's Matthew Salganik writes, there's a world of difference between "cool" and "tool":
https://freedom-to-tinker.com/2023/03/08/can-chatgpt-and-its-successors-go-from-cool-to-tool/
33/
Long thread/34
*Nature* can claim "conversational AI is a game-changer for science" but "there is a huge gap between writing funny instructions for removing food from home electronics and doing scientific research." Salganik tried to get ChatGPT to help him with the most banal of scholarly tasks - aiding him in peer reviewing a colleague's paper. The result? "ChatGPT didn’t help me do peer review at all; not one little bit."
34/
Long thread/35
The criti-hype isn't limited to ChatGPT, of course - there's plenty of (justifiable) concern about image and voice generators and their impact on creative labor markets, but that concern is often expressed in ways that amplify the self-serving claims of the companies hoping to inflate the hype machine.
35/
Long thread/36
One of the best critical responses to the question of image- and voice-generators comes from #KirbyFerguson, whose final #EverythingIsARemix video is a superb, visually stunning, brilliantly argued critique of these systems:
https://www.youtube.com/watch?v=rswxcDyotXA
36/
Long thread/37
One area where Ferguson shines is in thinking through the copyright question - is there any right to decide who can study the art you make? Except in some edge cases, these systems don't store copies of the images they analyze, nor do they reproduce them:
https://pluralistic.net/2023/02/09/ai-monkeys-paw/#bullied-schoolkids
For creators, the important material question raised by these systems is economic, not creative: will our bosses use them to erode our wages?
37/
Long thread/38
That is a very important question, and as far as our bosses are concerned, the answer is a resounding *yes*.
Markets value automation primarily because automation allows capitalists to pay workers less. The textile factory owners who purchased automatic looms weren't interested in giving their workers raises and shorter working days.
38/
Long thread/39
They wanted to fire their skilled workers and replace them with small children kidnapped out of orphanages and indentured for a decade, starved and beaten and forced to work, even after they were mangled by the machines. Fun fact: *Oliver Twist* was based on the bestselling memoir of Robert Blincoe, a child who survived his decade of forced labor:
https://www.gutenberg.org/files/59127/59127-h/59127-h.htm
39/
Long thread/40
Today, voice actors sitting down to record for games companies are forced to begin each session with "My name is ______ and I hereby grant irrevocable permission to train an AI with my voice and use it any way you see fit."
https://www.vice.com/en/article/5d37za/voice-actors-sign-away-rights-to-artificial-intelligence
Let's be clear here: there is - at present - no firmly established copyright over voiceprints.
40/
Long thread/41
The "right" that voice actors are signing away as a non-negotiable condition of doing their jobs for giant, powerful monopolists doesn't even exist. When a corporation makes a worker surrender this right, they are betting that this right will be created later in the name of "artists' rights" - and that they will then be able to harvest this right and use it to fire the artists who fought so hard for it.
41/
Long thread/42
There are other approaches to this. We could support the US Copyright Office's position that machine-generated works are not works of human creative authorship and are thus not eligible for copyright - so if corporations wanted to control their products, they'd have to hire humans to make them:
42/