It seems to me one fairly common use-case of AI is to absolve companies of responsibility for things. Example scenarios:

(1) "The AI denied all those insurance claims, not our kind and compassionate company."

(2) "The AI plagiarized your book/art, not our ground-breaking content-creation company. And anyway, many of the words/pixels in our version are different so you don't have any rights to it."

(3) "The AI wrote the lies in this legal document we created for you. You can't blame us for that."

@aebrockwell 1) I think the worst thing is when they believe the nonsense the machine puts out without thinking critically about it.

3) A lawyer got fined for filing documents with a court that cited a bunch of non-existent cases. That excuse didn't fly with the judge.

@olives @LouisIngenthron Good points from you both. Thankfully, accountability still holds in many cases.

But I guess these (bad) things are much easier to do in the first place if you believe in the infallibility of the LLM and you like what it is telling you.
