
AI = Degenerative AI? [Article summary / best bits]

AI article summary / best bits / skimmed sentences...

From source: "Are We Watching The Internet Die?"
wheresyoured.at/are-we-watchin

+ [my added comments]

[About Reddit]
...millions of unpaid contributors made billions of posts so that CEO Steve Huffman could make $193 million in 2023 while laying off 90 people and effectively pushing third party apps off of the platform by charging exorbitant rates for API access,

Reddit also announced that it had cut a $60 million deal to allow Google to train its models on Reddit's posts, once again offering users nothing in return for their hard work.

[BUT LET'S FACE IT, PEOPLE WILL DO THINGS THEY LOVE FOR FREE AND CAPITALISM CAPITALISES ON THAT - A HARD ONE TO SOLVE, AS EVEN I DID OVERTIME FOR FREE AT WORK AND OFTEN DIDN'T LEAVE TILL LATE!! But not any more!]

...ultra-rich technologists tricked their customers into building their companies for free.

[mentioned] Cory Doctorow's Enshittification theory
en.wikipedia.org/wiki/Enshitti

Yet what's happening to the web is far more sinister than simple greed: the destruction of the user-generated internet,

...

And it's slowly killing the internet.

⭐ Degenerative AI [great sub-title]

Generative AI models are trained by using massive amounts of text scraped from the internet, meaning that the consumer adoption of generative AI has brought a degree of radioactivity to its own dataset. As more internet content is created, either partially or entirely through generative AI, the models themselves will find themselves increasingly inbred, training themselves on content written by their own models which are, on some level, permanently locked in 2023, before the advent of a tool that is specifically intended to replace content created by human beings.

... Jathan Sadowski calls "Habsburg AI,"...so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.

...as its models are trained on increasingly-identical content.
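The "Habsburg AI" feedback loop described above can be sketched with a toy simulation (my illustration, not from the article): a trivial "model" that learns nothing but a mean and a standard deviation, retrained each generation on its own output, steadily loses the diversity of the original human-written data.

```python
# Toy illustration of "Habsburg AI" / model collapse (an assumption-laden
# sketch, not the article's method): each generation trains only on the
# previous generation's synthetic output, and variety collapses.
import random
import statistics

random.seed(42)

def train(data):
    """'Train' a trivial generative model: fit a mean and standard deviation."""
    return statistics.fmean(data), statistics.stdev(data)

def generate(mean, std, n):
    """Sample n new 'documents' from the fitted model."""
    return [random.gauss(mean, std) for _ in range(n)]

# Generation 0: "human-written" data from a wide distribution.
data = generate(0.0, 1.0, 10)
_, std0 = train(data)

# Each subsequent generation is trained purely on synthetic output.
for _ in range(300):
    mean, std = train(data)
    data = generate(mean, std, 10)

_, std_final = train(data)
print(f"diversity (std): {std0:.3f} -> {std_final:.3g}")
```

With small training sets the estimated spread drifts downward generation after generation, so the final standard deviation ends up a tiny fraction of the original: the "inbred" model converges on ever-narrower, ever-more-identical output.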

...generative features... immediately fed back into Azure's OpenAI models; Azure's parent company Microsoft invested $10 billion in OpenAI in early 2023

Generative AI also naturally aligns with the toxic incentives created by the largest platforms.

Google makes up more than 85% of all search traffic and pays Apple billions a year to make Google search the default on Apple devices.

... a cottage industry of automation gurus are cashing in by helping others flood Facebook, TikTok and Instagram with low-effort videos that are irresistible to algorithms.

Generative AI is a perfect tool for soullessly churning out content to match a particular set of instructions — such as those that an algorithm follows — and while an algorithm can theoretically be tuned to evaluate content as "human," so can scaled content be tweaked to make it seem more human.

Google might pretend it cares about the quality of search results, but nothing about search's decade-long decline has suggested it’s actually going to do anything. Google's spam policies have claimed for years that scraped content (outright ripping the contents of another website) was grounds for removal from Google, but even the most cursory glance at any news search shows how often sites thinly rewrite or outright steal others' content.

As we speak, the battle that platforms are fighting is against generative spam, a cartoonish and obvious threat of outright nonsense, meaningless chum that can and should (and likely will) be stopped. In the process, they're failing to see that this isn't a war against spam, but a war against crap, and the overall normalization and intellectual numbing that comes when content is created to please algorithms and provide a minimum viable product for consumers.

We're watching the joint hyper-scaling and hyper-normalization of the internet, where all popular content begins to look the same to appeal to algorithms run by companies obsessed with growth.

... Quora, which now promotes ChatGPT-generated answers at the top of results...

I believe that their goal is to intrude on our ability to browse the internet, to further obfuscate the source of information while paying the platforms for content that their users make for free. Their eventual goal, in my mind, is to remove as much interaction with the larger internet as possible, summarizing and regurgitating as much as they can so that they can control and monetize the results as much as possible.

There's also no way to escape the fact that these hungry robots require legal plagiarism, and any number of copyright assaults could massively slow their progress.

It's incredibly difficult to make a model forget information, meaning that there may, at some point, be steps back in the development of models if datasets have to be reverted to previous versions with copyrighted materials removed.

Question everything they say. Don't accept that AI "might one day" be great.

Reject their marketing speak and empty fantasizing and interrogate the tools put in front of you, and be a thorn in their side when they try to tell you that mediocrity is the future.

You are not "missing anything." These tools are not magic; they're fantastical versions of autocomplete that can't help but repeat the same mistakes they've learned from the petabytes of information they've stolen from others.

= =

--- vs. - --
= Shallow-Understanding

Mastodon users?... Just u$ers?
