I'm getting tired of simplistic, indignant characterizations of generative AI like this one: social.ericwbailey.website/@er "a spicy autocomplete powered by theft that melts the environment to amplify racism and periodically, arbitrarily lie"

It's a tool like any other; it can be used for good as well as bad. Yes, the copyright issue is real, but we can presumably overcome it by using models whose developers are more scrupulous about their sources of training data, rather than by throwing out the whole thing.

I'll mention again a more balanced take from @danilo that I posted the other day: redeem-tomorrow.com/the-averag

I also like @simon's writing on generative AI.

@matt @danilo "And so the problem with saying “AI is useless,” “AI produces nonsense,” or any of the related lazy critique is that destroys all credibility with everyone whose lived experience of using the tools disproves the critique, harming the credibility of critiquing AI overall." 💯

@simon @matt @danilo The core problem here, and I don't know how to solve it, is extreme ignorance about information provenance among the people going by their "lived experience" with AI. What AI produces is no less nonsense than the output of a magic 8 ball: the process by which it's produced has nothing to do with the truth of the statement.

@dalias @matt @danilo that's not true. 90% of the output I get from LLMs is genuinely useful to me. Comparing it to a magic 8-ball doesn't work for me, at all.

@dalias @matt @danilo @maria Same way I do with random information I find on Google, or with stuff that a confident but occasionally confidently wrong teacher might tell me

@dalias @matt @danilo @maria I genuinely think that the idea that "LLMs get things confidently wrong, so they're useless for learning" is misguided

I can learn a TON from an unreliable teacher, because it encourages me to engage more critically with the information and habitually consult additional sources

It's rare to find any single source of information that's truly infallible


@simon @matt @danilo @maria @dalias

"Useless for learning" is a bit of a straw man.

More accurate perhaps is "actively dangerous for the lazy or gullible". As an example, I point to *multiple* instances of lawyers turning in phony case citations. These people should absolutely know better - yet it's happened multiple times, and will happen again.

The LLM is presented in the news as an AI - artificial intelligence - and a source of information. To most people, that brings to mind a trusted advisor or subject-matter expert - and when they say "provide 5 legal citations that support my argument" - boy, it sure sounds convincing, because the AI is generally incapable of saying "I don't know" - and that's the dangerous bit.

Lots of tools human beings make are both useful and dangerous. Fire, the automobile, a chainsaw. We generally don't hand those out to people without some sort of training or warning. We regulate their use. But the law and human society are still catching up here.

LLMs are useful in the right hands, very much so. But they need a wrapper preventing children, the gullible, and apparently lawyers from diving in without some warnings. You simply can't trust the output the same way you'd trust, say, a teacher of the subject.

@Biggles @matt @danilo @maria @dalias I agree, "actively dangerous for the lazy or gullible" is a good summary of where we are today

That's why I spend so much effort trying to counter the hype and explaining to people that this stuff isn't science fiction AI, it's spicy autocomplete - it takes a surprising amount of work to learn how to use it effectively
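"Spicy autocomplete" is a mechanical claim, not just a jab: an LLM generates text by repeatedly predicting a likely next token given everything written so far. Here is a deliberately tiny sketch of that loop - a word-bigram counter over a toy corpus, purely an illustration of the concept; real models use neural networks over subword tokens, but the shape of the loop is the same:

```python
# Toy "autocomplete": predict the next word from counts of what
# followed it in a tiny corpus, then append and repeat.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

following = defaultdict(Counter)  # word -> counts of words seen after it
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(prompt, max_tokens=8):
    words = prompt.split()
    for _ in range(max_tokens):
        candidates = following.get(words[-1])
        if not candidates:
            break  # never saw anything follow this word; stop
        # Greedy pick: the statistically likeliest continuation,
        # which is not the same thing as the *true* continuation.
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the cat"))
# -> "the cat sat on the cat sat on the cat"
```

The output is fluent and statistically plausible - and nothing in the loop checks it against the world, which is why plausibility and truth come apart.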

@Biggles @danilo @maria @matt @dalias I wish I could take credit for that one but I've seen it pretty widely used by AI skeptics - I think it's a great short description!
