@lauren Yeah, and a law should be passed making Microsoft fully responsible for any and all content created with Microsoft Excel. Period. No exceptions.
@LouisIngenthron Excel is, for all practical purposes, a calculator. Users can see all input data and how that data was used to formulate results. This is not the case for generative AI. The full scope of sources used, how those sources were used, and virtually all other aspects of the system are a black box to users. The AI firms want to create new content and then disclaim responsibility for it. Unacceptable.
@lauren Tbf, I've used some excel spreadsheets that were pretty "black box" too.
But more importantly, the transparency of an algorithm has no bearing on the liability for speech resulting from its use. Nearly every video game is a black box. Should the publishers therefore become liable for user content (like online voice chat) as a result?
@LouisIngenthron Regarding your chat example, no, that would be pretty clearly covered by Section 230 since it does not involve original content per se.
@lauren So you believe that the core issue here is that user-prompted content is first-party speech, not third-party speech? Even though the user can ask the system to repeat them verbatim (as I demonstrated above)?
@LouisIngenthron The question isn't prompts, the question is facts. If a user asks a straightforward fact-based question and is given a direct answer that turns out to be wrong and does that user harm, who is responsible for that answer?
@lauren So long as the provider has a "this might be bullshit" disclaimer, they're not dishing out "facts", so the user is responsible for improperly treating it as such.
@LouisIngenthron I don't think that's going to work in the long run. Courts have routinely ruled that various kinds of corporate disclaimers are invalid in various circumstances (e.g. gross negligence). There's a whole new world of negligence in these AI systems.
@lauren See, I disagree with that. The negligence is in the user treating an entertainment system like a fact machine. It's every bit as negligent as only getting your news from a comedy program, or consulting Reddit for legal advice.
@LouisIngenthron Ah! But my point is that users cannot be expected to understand this difference. Most of them barely understand the phones and laptops they're using. That's the bottom line.
Now, if in order to see an AIO, you need to click through a big banner that said, "THIS ANSWER MAY BE WRONG. CONSIDER IT ENTERTAINMENT ONLY. CLICKING THIS MEANS YOU UNDERSTAND THIS!" -- well, that *might* make a difference. Presentation matters.
@lauren
> But my point is that users cannot be expected to understand this difference.
That's some nanny-stateism there. If they're too stupid to understand it, then they shouldn't use it, like cars or kitchen knives or matches. It's not the state's job to ruin things because some people are too stupid to use them properly.
@LouisIngenthron I disagree. Many people are effectively forced to use these techs because the alternatives have been cut so far back or even eliminated, or purposely made difficult or insanely expensive to use. Billing systems, customer service, the list is long as firms and even the government push everything online and into apps. And I take exception to you calling these people "STUPID". Apparently you do not routinely get the kinds of questions and pleas for help that I get from smart nontech people who have been screwed by these firms through no fault of their own. The Google Account Recovery horror stories alone are nightmarish.
@lauren Who exactly is being forced to use AI?
@LouisIngenthron Increasingly anyone interacting with customer service, billing, help lines, on and on. And those are just the backend systems that are supposed to be invisible to users and callers. Apart from the systems like Google now pushing AI on every user of their search engine -- and the other search engines are going in the same direction -- without opt outs even being available.
@lauren I've already conceded, long ago, that companies that allow such systems to speak for them should be liable for the results.
But that's very different from a chatbot with a disclaimer.
Nobody is stupid for believing a corporate bot that lies to them about a sale. But they are absolutely stupid if they try to get facts from ChatGPT, ignoring all the disclaimers, and then later rely on those "facts" in a critical situation.
@LouisIngenthron My primary issue is with generation of purported fact-based content in the form of answers to fact-based questions, as in the way Google is doing AIOs conflated with traditional SERPs. However, I do believe that courts will be taking a more expansive view of firms' responsibilities in wider areas related to GAI as more dramatic cases of harm occur.
@LouisIngenthron The difference is that Google for many, many years has built a reputation as a source for finding accurate information. NOT as a comedian. THIS MATTERS.
@LouisIngenthron The fundamental question is pretty simple. Let's say someone asks a generative AI system a question, it provides an inaccurate answer, and then someone is harmed or killed as a result of that answer. Who is responsible for that answer (which is original content created by that system) and the damage it caused? "Nobody" is not acceptable.
@lauren The person who asked is responsible. They used the system, after being warned about its inaccuracy multiple times during the onboarding process and *underneath every prompt* (see image), and then chose to use this potentially faulty information in a life-or-death situation.
I'm a pilot. If I choose to get my weather information from ChatGPT and end up crashing as a result, that's my own damn fault.
@LouisIngenthron Those disclaimers are there to satisfy the corp lawyers. They are not a license to spew dangerous misinformation to the public in a way that is specifically designed to foster confidence in those answers. I can pretty much guarantee that the amount of litigation that will be focused on this area will be immense. Especially when ads start running with those answers. The advertisers are gonna be just THRILLED having their ads running on wrong answers that end up hurting people. Oh yeah.
@lauren They're there as much for the lawyers as for the users. Just like "don't eat poison" labels.
And, yeah, I think there's nuance there. When a company decides to use a chatbot as customer service, to speak on their behalf, then it absolutely should be liable for the results.
But that's a far cry from a generalized chatbot with a "don't believe my bullshit" disclaimer that can be easily manipulated by the user.
@LouisIngenthron Manipulation is straightforward to demonstrate from logs. Let's pin this down even more. Most people simply do NOT understand the differences between these systems and traditional search. There's no reason to expect them to, given how (for example) Google is pushing AI Overviews to the top of SERPs. Google clearly wants users to accept AIOs as THE answers. The disclaimers are meaningless in the real world in this context, except as an attempt at legal cover for the firm. This is the standard "victim blaming" that Google and other tech firms commonly use. I don't think it's going to fly in the current regulatory and political environment, and the firms have not internalized this fact yet.
@lauren Yeah, I think the search companies putting AI responses as fact at the top of results, especially when the user has not opted in nor acknowledged the danger, could be one of the cases where a company has chosen to use the bot's speech as its own and therefore becomes liable for it.
But, again, I draw the distinction between that conduct and a chatbot with a disclaimer.
@LouisIngenthron This is going to be a deep well of litigation as courts work through all this. My gut feeling is that it isn't going to turn out well for the firms.
@lauren @LouisIngenthron I agree that there's a problem here: end users don't understand LLMs and as such take things at face value that they shouldn't. But should their ignorance limit me in using them to do very powerful things?
Tools come with risks, powerful tools even more so. But in the end I'm mostly curious on the court cases to see how deep this rabbit hole goes.
@lauren @LouisIngenthron Side note: Google is making it harder to find actual web answers. A week ago I could look at the options immediately above the AI Overview and select WEB. Now my options are:
All
Images
Videos
Shopping
Forums
More
I need to take an extra step just to get to the web... and that change happened in the space of a week.
@LPerry2 @lauren @LouisIngenthron
Not sure what the difference is between "All" and "Web."
Anyway, I switched to DuckDuckGo and am much happier.
@lauren @LouisIngenthron@qoto.org There is an interesting congressional report on the interplay between generative AI and Section 230 of the CDA. It looks at some of these issues. Good to have in your reference library.
@lauren I think the next logical step is to apply this question to self-driving vehicles.
... though there's likely already precedent there: who is responsible when a malfunctioning autopilot crashes a plane?
@mark It's a different kind of question since it doesn't involve the creation of original content, but it is somewhat related, yes.
@LouisIngenthron @lauren I think you have uncovered the flaw.
The problem with generative "AI" is it is falsely advertised as resembling human intelligence. It does not. It mimics human speech patterns, thus giving the false impression that its "reasoning" is "intelligent". Its reasoning is low quality computer nerd crapola.
@LouisIngenthron @lauren Actually it mimics prose, not speech.
@LouisIngenthron @lauren That generative "AI" cannot POSSIBLY resemble human intelligence is obvious. The closest thing to human intelligence is chimpanzee intelligence, and it is very similar. Yet it obviously cannot be modeled by a language model, because chimpanzee brains have no language.
Language is an ADJUNCT capability.
@LouisIngenthron False comparison. Not even close.