For Section 230 purposes, is AI-generated text third-party or first-party content? If a site sets up basically unsupervised or algorithmically supervised routine publication of #ChatGPT content, is it under current US law liable for what the robot says? Would OpenAI be? Would anyone at all be? (inspired by @emilymbender dair-community.social/@emilymb, though she presents a more traditionally publication-like, and so arguably more likely liable, case.) #Section230

@interfluidity @emilymbender The algorithm isn't a person. So, if a person at the organization posts the algorithm's content, it's first party. But if someone outside the organization posts it, it's third party.

We need to avoid the temptation to assign agency to these tools.

@LouisIngenthron @emilymbender If a corporation posts something, it’s a first party, but it’s not a person. Is OpenAI the first party? The intent of Section 230 was to encourage a diverse range of internet forums, in terms of both participation and moderation. Section 230 shields even when in practical terms there is no first party to hold responsible, e.g. anonymous speech. AI tools are arguably an important new participant in online forums. Should they be uniquely perilous?

@interfluidity @emilymbender OpenAI is every bit as much the first party as Microsoft Word is.

Humans (including human collectives like orgs) are responsible for our own actions. Our tools are not responsible for how we use them.

AI tools are not "participants" any more than video games are. Both are just entertaining output of human-designed algorithms.

@LouisIngenthron @emilymbender But AI language models are a form of human expression. OpenAI is not the model. It’s the human organization that hired Kenyan workers and decided the model would be trained on this speech, not that, with this structure and these parameters, not those. That’s entirely unlike MS Word, which is neutral as to content. If anyone is to be responsible for ChatGPT’s speech, why shouldn’t it be the people who most determined its character?

@interfluidity @emilymbender That doesn't really matter.

You could use MS Word to write the Bible or Mein Kampf.
Likewise, ChatGPT can spit out something uplifting or something deeply racist, based on the prompt of the user.

The blacksmith should not be punished for the lunatic using his hammers to murder.

@LouisIngenthron @emilymbender But it’s not just the prompt of the user! MS has almost no role in determining the content produced with MS Word. OpenAI typically has done much more than the prompter to determine what speech any particular prompt will yield. I prompt, “Say something jiggly,” and it spits out a graf. Who more “authored” that, OpenAI or me (or nobody)?

@interfluidity @emilymbender You did.

If you go to Build-A-Bear, do you not build a bear just because the organization exercises some editorial control over the output of their system by choosing which parts to offer?
