For Section 230 purposes, is AI-generated text third-party or first-party content? If a site sets up basically unsupervised or algorithmically supervised routine publication of #ChatGPT content, is it under current US law liable for what the robot says? Would OpenAI be? Would anyone at all be? (inspired by @emilymbender dair-community.social/@emilymb, though she presents a more traditionally publication-like case, so arguably one more likely to attract liability.) #Section230

@interfluidity @emilymbender The algorithm isn't a person. So, if a person at the organization posts the algorithm's content, it's first party. But if someone outside the organization posts it, it's third party.

We need to avoid the temptation to assign agency to these tools.

@LouisIngenthron @emilymbender If a corporation posts something, it’s a first party, but it’s not a person. Is OpenAI the first party? The intent of Section 230 was to encourage a diverse range of internet forums, in terms of participation and moderation. Section 230 shields even when in practical terms there is no first party to hold responsible, e.g. anonymous speech. AI tools are arguably an important new participant in online forums. Should they be uniquely perilous?

@interfluidity @emilymbender OpenAI is every bit as much the first party as Microsoft Word is.

Humans (including human collectives like orgs) are responsible for our own actions. Our tools are not responsible for how we use them.

AI tools are not "participants" any more than video games are. Both are just entertaining output of human-designed algorithms.

@LouisIngenthron @emilymbender But AI language models are a form of human expression. OpenAI is not the model. It’s the human organization that hired Kenyan workers to decide the model would be trained on this speech, not that, with this structure and these parameters, not those. That’s entirely unlike MS Word, which is neutral as to content. If anyone is to be responsible for ChatGPT speech, why shouldn’t it be the people who most determined its character?

@interfluidity @emilymbender That doesn't really matter.

You could use MS Word to write the Bible or Mein Kampf.
Likewise, ChatGPT can spit out something uplifting or something deeply racist, based on the prompt of the user.

The blacksmith should not be punished for the lunatic using his hammers to murder.

@LouisIngenthron @emilymbender But it’s not just the prompt of the user! MS has almost no role in determining the content of MS Word documents. OpenAI typically has done much more than the prompter to determine what speech any particular prompt will yield. I prompt, “Say something jiggly” and it spits out a graf. Who more “authored” that, OpenAI or me (or nobody)?

@LouisIngenthron @emilymbender If you encounter a person’s speech, even anonymous, that is defamatory, and you pass it along by forwarding emails, usually you’d be protected by #Section230. (This is one of EFF’s disingenuous talking points in favor of Section 230.) What if it turns out the anonymous speaker was #ChatGPT? Are you still protected? Do we deem it the speech of the prompter, or OpenAI, or whom?

@interfluidity @emilymbender It would be the speech of the prompter, the person using the tool.

@LouisIngenthron @emilymbender That’s an answer. I don’t think it’s necessarily wrong, but I don’t think it’s as clearly right as you do. We’ll have to collectively decide. It will come up soon, as orgs are publishing and conveying LLM outputs, and LLM outputs are very unpredictable. I’m bringing out the :popcorn: for how those controversies pan out. Both OpenAI and the prompter will argue that Section 230 means no one is accountable, just like anonymous speech.

@interfluidity @emilymbender I don't think the prompter will have an argument in that case. They chose to use the tool. They chose to write the prompt. They chose to publish the result. If the speech causes harm, their agency is at its root.

@LouisIngenthron @emilymbender It won’t look like that though. They will have built an algorithmic product that sends email or summarizes articles or whatever. No (prompter-side) human will have been in the loop, in products that would be uneconomical if one had to be. The argument will be the usual #Section230 claim: you’ll break the future if you make us take responsibility for this text we are conveying. 1/

@interfluidity @emilymbender That's still pretty simple: If there's no prompter on the client-side, then the server-side is just directly serving content they wrote and they'd therefore be liable.

@LouisIngenthron @emilymbender The prompter is the end user! A user writes a question on a help forum, and the firm presents that to an LLM that OpenAI mostly trained but the firm has customized. It replies to the user. Is the support-seeker the author of the reply she receives, for “using the tool” that is the vendor’s support forum? Under your “prompter is author” theory, she would be! Is the firm responsible because it customized the model? If so, why not OpenAI, whose training forms the bulk of it?

@interfluidity @emilymbender Our earlier hypothetical dealt with situations where the publisher and prompter were the same party (where I still stand by my old logic).

If the prompter and publisher are different parties, then I think the liability falls on the publisher for choosing to publish the bot's output as their own speech.

@LouisIngenthron @emilymbender These questions of who is the “prompter”, “publisher”, “creator”, “author” get very vague. A friend uses ChatGPT, gets a funny but defamatory response, forwards it to me privately by mail. I then publish it. Section 230 clearly protects me. Is my friend then liable?

@interfluidity @emilymbender I'm pretty sure S230 would not protect you in that situation, because, when you affirmatively made the decision to publish it, you became the first party. It does, however, protect the email system in hosting and delivering that speech to you.

@interfluidity @emilymbender However, had you not published it and the email you received from your friend became public by other means (let's say a hack, for example), then your friend would remain liable for the speech, since they would be both the prompter and the publisher of the speech.

@LouisIngenthron @emilymbender According to EFF, Section 230 protects you when you, say, forward an e-mail to a public list. (It's part of their disingenuous it-protects-you-and-me-not-just-big-firms spin.) Mail providers were protected before 230, because they didn't curate. If you retweet a defamatory tweet, you are not liable, even though you affirmatively chose to do so. Section 230 exists to shield discretionary decisions to publish or not; distribution without discretion was already protected. 1/

@LouisIngenthron @emilymbender (If 230 didn't exist would mail providers become liable on the theory that spam filtering is editorial discretion? That's an interesting question!) 2/

@interfluidity @emilymbender Yes, which was an impetus for the creation of 230 in the first place.

@LouisIngenthron @emilymbender It wasn't, at the time. That's retconning, I think. It was discussion forums that prompted 230, to encourage curation of harmful speech. (It's Section 230 of the Communications Decency Act.)

@interfluidity @emilymbender It's more like paraphrasing than retconning. Here are some direct quotes from Chris Cox:
"Ron and I were determined that good faith content moderation should not be punished"
"[One of S230's two key purposes is] incentivizing blocking and filtering technologies that individuals could use to become their own censors in their own households."

Those are close enough to describing spam filters to me.

Source:
realclearpolitics.com/articles

@LouisIngenthron @emilymbender I won't argue with your impressions of close enough, that's for you, but spam was not the problem it would soon become in 1996, provider-based spam filtering didn't exist, and "censors in their own households" had a pretty clear meaning in the context of the Communications Decency Act and the particular cases that gave rise to it. (Prodigy was punished for moderating objectionable content; CompuServe was immune because it didn't.)

@interfluidity @emilymbender The Stratton Oakmont v Prodigy case quotes the board manager, Charles Epstein, as listing "solicitation" as one of the major categories of pre-written reasons for the deletion of content.

Spam long predates the internet. I have no doubt that Wyden and Cox were including unsolicited advertisements in their definition of "objectionable content".

@LouisIngenthron @emilymbender Having been around at the time, I'm just going to say I don't think e-mail spam filtering was a meaningful impetus for CDA 230. It may have been on someone's radar, who knows, but e-mail spam wasn't the huge issue it soon became, and it's not meaningfully what provoked the law. Arguing about this doesn't seem like a great use of our time, though, if you want to disagree.

@LouisIngenthron @emilymbender However counterintuitive, at least under EFF's description, if I privately dish to you by e-mail, and you forward the defamatory speech to a big public mailing list, I am protected but you are liable. Defamation doesn't depend on an intent to publicize. Leaked private defamation is actionable if it's harmful, and in the digital realm all but the original defamer are often shielded. 3/

@LouisIngenthron @emilymbender Now we are saying that people who prompted ChatGPT must protect accurate reports of what it said from any leak, because whatever the eff ChatGPT said, it's as if they said it themselves, from a liability perspective. A bit weird. /fin

@interfluidity @emilymbender Can you cite a specific source on that? I think you may be getting the details wrong.

@interfluidity @emilymbender Yeah, that says that the forwarder would be protected, while the person who actually wrote the speech would remain liable, which is the way it should work, and the opposite of what you said above.

@LouisIngenthron @emilymbender No, that's exactly what I said. ChatGPT produces something. You privately forward it to me. I forward it to a big list. Under your theory, you then become liable for ChatGPT's speech. I, the forwarder, am not.

@LouisIngenthron @emilymbender Note also that the plain statutory language of CDA 230 doesn't contain any requirement for a responsible first party. I am immunized for "any information provided by another information content provider", which ChatGPT might reasonably be deemed to be.

@interfluidity @emilymbender Correct.

You had that backwards here: fosstodon.org/@interfluidity/1

And yes, I don't think forwarding an email counts as "republishing" so that seems legitimate.

@LouisIngenthron @emilymbender I think it's pretty dumb that a person who forwards an e-mail to a big list would be immune, while a person who expected it would be a private correspondence becomes liable. (I don't see where you think I have something backwards. The link took me to the very top of the thread?)

@LouisIngenthron Thanks. Yes, you are right, I wrote that backwards! I meant to say that the forwarder is protected but the originator is liable (which strikes me as wrong when the originator communicates much more privately than the forwarder).

@LouisIngenthron (I edited to append a note that I mistakenly got that one backwards, with a Thx to you for pointing it out!)

@interfluidity @emilymbender It works the exact same offline.

If I write you a letter defaming someone, and you show it to the newspaper, who runs the story about my defamation, I'm still the only one at fault for what I wrote.

We're all responsible for what we say, regardless of who we're saying it to.

@LouisIngenthron @emilymbender That's definitely the view enshrined in Section 230, whatever I think about it. (Offline, you'd have to be careful about the manner in which you pass it along. If you are very careful to report factually the third party's defamation, maybe you are immunized. But there's no blanket shield, your role in causing any damages might well be litigated.) 1/

@LouisIngenthron @emilymbender But you'd extend this duty of care to anything ChatGPT produced from your prompt. If I prompt "Tell me about Louis!" and ChatGPT bullshits, and I pass the whole convo along to a friend, and she forwards it, and what ChatGPT said was not accurate, I am solely responsible. Not OpenAI, not the forwarder. I gotta treat ChatGPT sessions like I authored all the bullshit that it says. /fin

@interfluidity @emilymbender Yes, correct. Think of ChatGPT as a big and very-freeform mad-libs generator. It may provide the framework, but you're still filling in the context to produce the end result. If you then choose to publish that, you do so under your name.

And, while that solution may not be 100% perfect, it's definitely leagues better than the inverse, where OpenAI's authors would be responsible for me asking their product to defame someone.

@LouisIngenthron @emilymbender Yeah. I don't think either solution is very satisfactory. In general, I think 230 has been harmful by taking a realm of law that needs a great deal of context and providing too broad a shield, making (non)liability too independent of the facts of cases. But, whatever you and I think should be, I don't think it's at all clear cut what will be. I can certainly see LLMs provided by third parties being deemed "information content providers", regardless of how they are prompted.
