I'm getting tired of simplistic, indignant characterizations of generative AI like this one: https://social.ericwbailey.website/@eric/111584809768617532 "a spicy autocomplete powered by theft that melts the environment to amplify racism and periodically, arbitrarily lie"
It's a tool like any other; it can be used for good as well as for bad. Yes, the copyright issue is real, but we can presumably overcome it by using models whose developers are more scrupulous about their sources of training data, rather than throwing out the whole thing.
I'll mention again a more balanced take from @danilo that I posted the other day: https://redeem-tomorrow.com/the-average-ai-criticism-has-gotten-lazy-and-thats-dangerous
I also like @simon's writing on generative AI.
@simon @matt @danilo The core problem here, and I don't know how to solve it, is the extreme ignorance about information provenance among people who go by their "lived experience" with AI. What AI produces is no less nonsense than the output of a Magic 8 Ball: the process by which it's produced has nothing to do with the truth of the statement.
@simon @matt @danilo @maria @dalias
"Useless for learning" is a bit of a straw man.
More accurate, perhaps, is "actively dangerous for the lazy or gullible". As an example, I point to *multiple* instances of lawyers submitting phony case citations. These people should absolutely know better - yet it keeps happening, and it will happen again.
The LLM is presented in the news as an AI - artificial intelligence - and as a source of information. To most people, that brings to mind a trusted advisor or subject-matter expert - and when they say "provide 5 legal citations that support my argument", boy, the output sure sounds convincing, because the AI is generally incapable of saying "I don't know". That's the dangerous bit.
Lots of tools human beings make are both useful and dangerous. Fire, the automobile, a chainsaw. We generally don't hand those out to people without some sort of training or warning. We regulate their use. But the law and human society are still catching up here.
LLMs are useful in the right hands, very much so. But they need a wrapper preventing children, the gullible, and apparently lawyers from diving in without some warnings. You simply can't trust the output the same way you'd trust, say, a teacher of the subject.
@Biggles @matt @danilo @maria @dalias I agree, "actively dangerous for the lazy or gullible" is a good summary of where we are today.
That's why I spend so much effort trying to counter the hype and explain to people that this stuff isn't science-fiction AI, it's spicy autocomplete - it takes a surprising amount of work to learn how to use it effectively.