@lauren Yeah, and a law should be passed making Microsoft fully responsible for any and all content created with Microsoft Excel. Period. No exceptions.
@LouisIngenthron False comparison. Not even close.
@LouisIngenthron Excel is, for all practical purposes, a calculator. Users can see all input data and how that data was used to formulate results. This is not the case for generative AI. The full scope of sources used, how those sources were used, and virtually all other aspects of the system are a black box to users. The AI firms want to create new content and then disclaim responsibility for it. Unacceptable.
@lauren Tbf, I've used some Excel spreadsheets that were pretty "black box" too.
But more importantly, the transparency of an algorithm has no bearing on the liability for speech resulting from its use. Nearly every video game is a black box. Should the publishers therefore become liable for user content (like online voice chat) as a result?
@LouisIngenthron The fundamental question is pretty simple. Let's say someone asks a generative AI system a question, it provides an inaccurate answer, and then someone is harmed or killed as a result of that answer. Who is responsible for that answer (which is original content created by that system) and the damage it caused? "Nobody" is not acceptable.
@lauren The person who asked is responsible. They used the system, after being warned about its inaccuracy multiple times during the onboarding process and *underneath every prompt* (see image), and then chose to use this potentially faulty information in a life-or-death situation.
I'm a pilot. If I choose to get my weather information from ChatGPT and end up crashing as a result, that's my own damn fault.
@LouisIngenthron Those disclaimers are there to satisfy the corp lawyers. They are not a license to spew dangerous misinformation to the public in a way that is specifically designed to foster confidence in those answers. I can pretty much guarantee that the amount of litigation that will be focused on this area will be immense. Especially when ads start running with those answers. The advertisers are gonna be just THRILLED having their ads running on wrong answers that end up hurting people. Oh yeah.
@lauren They're there as much for the lawyers as for the users. Just like "don't eat poison" labels.
And, yeah, I think there's nuance there. When a company decides to use a chatbot as customer service, to speak on their behalf, then it absolutely should be liable for the results.
But that's a far cry from a generalized chatbot with a "don't believe my bullshit" disclaimer that can be easily manipulated by the user.
@lauren Yeah, I think the search companies presenting AI responses as fact at the top of results, especially when the user has not opted in nor acknowledged the danger, could be one of the cases where a company has chosen to use the bot's speech as its own and therefore becomes liable for it.
But, again, I draw the distinction between that conduct and a chatbot with a disclaimer.
@lauren @LouisIngenthron I agree there's a problem here: end users don't understand LLMs, and as such take things at face value that they shouldn't. But should their ignorance limit me in using them to do very powerful things?
Tools come with risks, powerful tools even more so. But in the end I'm mostly curious about the court cases, to see how deep this rabbit hole goes.
@LouisIngenthron This is going to be a deep well of litigation as courts work through all this. My gut feeling is that it isn't going to turn out well for the firms.