It seems to me one fairly common use case of #AI is to absolve companies of responsibility. Example scenarios:
(1) "The AI denied all those insurance claims, not our kind and compassionate company."
(2) "The AI plagiarized your book/art, not our ground-breaking content-creation company. And anyway, many of the words/pixels in our version are different so you don't have any rights to it."
(3) "The AI wrote the lies in this legal document we created for you. You can't blame us for that."
@aebrockwell 1) I think the worst thing is when the companies themselves believe the nonsense the machine puts out without thinking critically about it.
3) A lawyer got fined for filing court documents that cited a bunch of non-existent cases. That excuse didn't fly with the judge.
@olives @LouisIngenthron Good points from both of you. Thankfully, accountability still holds in many cases.
But I guess these (bad) things are much easier to do in the first place if you believe the LLM is infallible and you like what it's telling you.
@aebrockwell I don't think any of those apply, though. If a company uses non-AI software for those purposes, they're still liable for the results of that software. So, I don't see any reason why that changes when the software is ML-based.