@lauren Yeah, and a law should be passed making Microsoft fully responsible for any and all content created with Microsoft Excel. Period. No exceptions.
@LouisIngenthron False comparison. Not even close.
@LouisIngenthron Excel is, for all practical purposes, a calculator. Users can see all input data and how that data was used to formulate results. This is not the case for generative AI. The full scope of sources used, how those sources were used, and virtually all other aspects of the system are a black box to users. The AI firms want to create new content and then disclaim responsibility for it. Unacceptable.
@lauren Tbf, I've used some Excel spreadsheets that were pretty "black box" too.
But more importantly, the transparency of an algorithm has no bearing on the liability for speech resulting from its use. Nearly every video game is a black box. Should the publishers therefore become liable for user content (like online voice chat) as a result?
@LouisIngenthron Regarding your chat example, no, that would be pretty clearly covered by Section 230 since it does not involve original content per se.
@lauren So you believe that the core issue here is that user-prompted content is first-party speech, not third-party speech? Even though the user can ask the system to repeat them verbatim (as I demonstrated above)?
@LouisIngenthron The question isn't prompts, the question is facts. If a user asks a straightforward fact-based question and is given a direct answer that turns out to be wrong and does that user harm, who is responsible for that answer?
@lauren So long as the provider has a "this might be bullshit" disclaimer, they're not dishing out "facts", so the user is responsible for improperly treating it as such.
@lauren See, I disagree with that. The negligence is in the user treating an entertainment system like a fact machine. It's every bit as negligent as only getting your news from a comedy program, or consulting Reddit for legal advice.
@lauren
> But my point is that users cannot be expected to understand this difference.
That's some nanny-stateism there. If they're too stupid to understand it, then they shouldn't use it, like cars or kitchen knives or matches. It's not the state's job to ruin things because some people are too stupid to use them properly.
@LouisIngenthron I disagree. Many people are effectively forced to use these techs because the alternatives have been cut so far back or even eliminated, or purposely made difficult or insanely expensive to use. Billing systems, customer service, the list is long, as firms and even the government push everything online and into apps. And I take exception to you calling these people "STUPID". Apparently you do not routinely get the kinds of questions and pleas for help that I get from smart nontech people who have been screwed by these firms through no fault of their own. The Google Account Recovery horror stories alone are nightmarish.
@lauren Who exactly is being forced to use AI?
@LouisIngenthron Increasingly, anyone interacting with customer service, billing, help lines, on and on. And those are just the backend systems that are supposed to be invisible to users and callers. That's apart from systems like Google now pushing AI on every user of their search engine -- and the other search engines are going in the same direction -- without opt-outs even being available.
@lauren I've already conceded, long ago, that companies that allow such systems to speak for them should be liable for the results.
But that's very different from a chatbot with a disclaimer.
Nobody is stupid for believing a corporate bot that lies to them about a sale. But they are absolutely stupid if they try to get facts from ChatGPT, ignoring all the disclaimers, and then later rely on those "facts" in a critical situation.
@LouisIngenthron My primary issue is with the generation of purported fact-based content in the form of answers to fact-based questions, as in the way Google is conflating AI Overviews (AIOs) with traditional SERPs. However, I do believe that courts will be taking a more expansive view of firms' responsibilities in wider areas related to GAI as more dramatic cases of harm occur.
@LouisIngenthron The difference is that Google for many, many years has built a reputation as a source for finding accurate information. NOT as a comedian. THIS MATTERS.
@LouisIngenthron Ah! But my point is that users cannot be expected to understand this difference. Most of them barely understand the phones and laptops they're using. That's the bottom line.
Now, if in order to see an AIO you had to click through a big banner that said, "THIS ANSWER MAY BE WRONG. CONSIDER IT ENTERTAINMENT ONLY. CLICKING THIS MEANS YOU UNDERSTAND THIS!" -- well, that *might* make a difference. Presentation matters.