What am I missing? The US Supreme Court is deciding whether content algorithms are considered content themselves. I completely support content algorithms being considered content. Otherwise no one is held responsible for the algorithm's decisions. These algorithms have contributed to genocides as far back as 2016, yet literally no one was held accountable and no meaningful changes occurred.

#Section230 #SCOTUS #SupremeCourt #USSupremeCourt #YouTube #Google #Meta #Facebook #TikTok #Instagram

@joeo10 None of those "if you said" statements in the article are relevant to my argument. I am pro-Section 230; I just believe that algorithms are content. Recommendation engines are optimized for short-term advertising revenue, and they have horribly perverse outcomes. Google and others created these tools with no regard for their long-term impacts. You understand what is at stake if manipulative algorithms are answerable to no one but short-term shareholders.
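To make "optimized for short-term advertising revenue" concrete, here's a minimal sketch of the objective I'm describing (hypothetical names and numbers, not any platform's actual code). Note that the ranking consults nothing except predicted engagement:

```python
# Minimal sketch of an engagement-optimized recommender (hypothetical,
# not any platform's actual code). The only objective is predicted
# short-term engagement, a stand-in for ad revenue; nothing in this
# objective measures long-term harm.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # proxy for ad impressions served

def rank_feed(candidates: list[Video]) -> list[Video]:
    # Sort purely by expected engagement; no other signal is consulted.
    return sorted(candidates, key=lambda v: v.predicted_watch_minutes, reverse=True)

feed = rank_feed([
    Video("calm gardening tips", 3.0),
    Video("outrage-bait conspiracy", 11.0),  # more engaging, so it wins
])
print([v.title for v in feed])
```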

@greg @joeo10 Enjoy the hellscape you seem so keen on creating.

"Content algorithms" drive search bars, spam filters, internet archives and more, while protecting essential sites like Wikipedia and news from being overrun by bad actors.

The internet will be a far shittier place if this goes through.

If you want something a bit more scholarly to read on the subject, check out this Amicus Brief that lays out the arguments well (and look who signed it):
supremecourt.gov/DocketPDF/21/

@LouisIngenthron @joeo10 I'm not suggesting getting rid of content algorithms; I simply want them held accountable for their outcomes. And currently there is no incentive for the creators of content algorithms to reduce the harm they cause.
Giving users control of the algorithms they use is one possible way to push the content burden back onto users. If the user controls the algorithms, then the platforms are protected under Section 230. There are solutions; what we currently have is unworkable.

@greg @joeo10 Making companies liable for the content of others *is* suggesting getting rid of content algorithms, whether you realize that or not. Google's not going to offer you an account if they're liable to be sued for whatever you post/upload.

Most users can't even figure out how to sign up to Mastodon, and you want them writing content algorithms? Those are written by some of the smartest people in the world. And if a company dumbs it down enough for the average user, guess what we get? Exactly what we have now, where people customize their content by choosing who to follow and like.

@LouisIngenthron @joeo10 I'm not suggesting making companies liable for the content of others; where did you get that idea? I'm arguing that recommendation engines *are* content produced by the platform. Therefore tech companies should be liable for those recommendations. They would still have protection for user-generated content. But if a company promotes content that encourages a genocide, then the platform should be held responsible. I don't understand why that's a controversial statement.

@greg @joeo10 "Those recommendations" that you think they should be liable for are comprised of the content of third parties.

And, in many cases, those recommendations are not driven by decisions made by people in the company, but rather by preferences expressed by the user.

So, no, an intermediary should not be legally liable for the content of one third party just because their algorithms show it to another third party who wants to see it.

@LouisIngenthron @joeo10 YouTube has recommended diets to people suffering from anorexia. The diets themselves aren't dangerous; it's the targeted recommendations that are dangerous. Taken to the extreme, this concept is literally responsible for terrorist attacks and genocides.
I don't understand why people are defending the horrible practices of these tech monopolies.
Recommendations *are* content. Hold tech companies responsible for their actions otherwise they have no incentives to change.

@greg @joeo10 Gee, could that possibly be because people suffering from anorexia frequently search for diet-related content? How is it YouTube's fault that they're just giving them more of what they've asked for?

It's not big tech's responsibility to manage anyone's addiction for them.

And the reason people object to your rhetoric is that you're ignoring the people who are actually creating the hurtful content to instead focus on killing the messenger. Your views are misguided and actively harmful to marginalized communities, as was explained in the amicus brief I linked above.

@LouisIngenthron @joeo10 Did you ever consider that recommending something someone has searched for previously isn't always a good idea?
I don't understand how you can morally defend the actions of these tech monopolies.
"Facebook owner Meta’s dangerous algorithms and reckless pursuit of profit substantially contributed to the atrocities perpetrated by the Myanmar military against the Rohingya people in 2017"
amnesty.org/en/latest/news/202

@greg @joeo10 How are they supposed to know if presenting a user with a certain bit of content is a "good idea"? Do you want them to collect everyone's medical records to make value judgements about which content is good for them or not?

Because that sounds dystopian to me.

It really sounds like you simply don't understand the scale of the problem or how these things work at all. The recommendations the algorithm provides do not inherently match the whims or goals of the company that published it. ChatGPT should be about as obvious an example of that as any.

Also, it wasn't Facebook's "pursuit of profit" that led to the Myanmar thing. It was a lack of non-English-speaking moderators. Facebook ran a study and found they made *more* money when they turned off their algorithm (because people scrolled past more content they didn't care about, which meant they were shown more ads in between).

So, even the "facts" you're starting from are lacking. Maybe consider that the conclusions aren't nearly as sound either.

@LouisIngenthron @joeo10
Dismissing my arguments as a lack of understanding is unfair & inaccurate.
I identified & researched this specific problem years ago: youtu.be/yCtx-MDaiYw
Data is my craft & I have a patent on removing bias from algorithms: patents.google.com/patent/US20
The UN, Amnesty International, & many others disagree with your interpretation of Facebook's involvement in the Rohingya genocide. Either way, there are many other examples of algorithms pushing individuals to extremes.

@greg And how much success have you had applying the contents of that patent in the real world? Because it seems to me that it would only work in very narrowly scoped environments or for very specific types of users, and wouldn't work as well for the general public in billion-user-scale systems.

Of course we could train the learning models to avoid certain types of harmful-but-legal content. But the problem is: other people still want to see that content and actively seek it out.

Like your diet example from before: Sure, the algorithm could be trained to disincentivize diet advice, but then you're pissing off a whole group of people looking to lose weight to try to protect anorexics.
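As a rough sketch of what that "disincentivize" knob looks like (hypothetical code and numbers, not YouTube's actual system): a per-category penalty applied at ranking time demotes the topic for every user, the at-risk and the merely interested alike:

```python
# Hypothetical sketch of category down-weighting at ranking time.
# The blunt-instrument problem: the penalty hits every user, including
# the people who genuinely want diet advice.

CATEGORY_PENALTY = {"diet": 0.5}  # score multiplier for penalized topics

def adjusted_score(base_score: float, category: str) -> float:
    # Scores for penalized categories are scaled down for everyone.
    return base_score * CATEGORY_PENALTY.get(category, 1.0)

print(adjusted_score(0.8, "diet"))     # 0.4 -- demoted for all users
print(adjusted_score(0.6, "cooking"))  # 0.6 -- unaffected
```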

You can't require the entire internet to have kid gloves on because some people could be harmed by seeing some content.

@greg as for the Myanmar thing, the below paragraph is from the Amnesty International report you cited.

In addition, they don't seem to have any real concrete evidence or recommendations about the algorithms themselves. It basically boils down to "Facebook should do more human rights due diligence for its algorithms", which is like a teacher responding "think harder" when a kid asks why they got a bad grade.

They couldn't make specific recommendations because their entire paper is built on assumptions that may or may not be true. They *feel* Facebook is reckless with its algorithms, so it must be. :eyeroll:

The bottom line is that these companies are giving people what they want. That so many people seem to want hatred isn't an algorithmic problem; it's a human nature problem.

@LouisIngenthron we're just talking past each other at this point. My direct research into this problem, Amnesty International, U.N. investigators, and many others have all come to the same conclusion: social media algorithms designed to increase advertising engagement have the perverse outcome of amplifying hatred. I've literally shared a video I produced showing the extremism-amplifying mechanism.
I feel like you're engaging in motivated reasoning to defend these companies.

@greg No, I absolutely acknowledge your point as valid.

I just primarily disagree with where to place the blame.

Those same algorithms that can cause harm also help marginalized communities find and support each other. They're not inherently good *or* bad because all they do is just repeat and distribute the content of others.

The content itself is what matters.

And it drives me up a wall that so many people are more angry at social media for *carrying* the messages than they are at the authors for *writing* those messages.

To use your own words: "I feel like you're engaging in motivated reasoning to [attack] these companies."

@LouisIngenthron I completely agree that algorithms are not inherently bad. In fact, the project I was working on was going to use algorithms similar to YouTube's, just with different, user-selectable success metrics.
And I agree, the algorithms don't have malicious intent; they just have narrow, short-term success metrics. But the social media companies seemingly have no interest in, or meaningful way of, measuring perverse long-term outcomes. They need regulation, or to be held accountable.
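Roughly what I had in mind, as a sketch (hypothetical code, not my actual project): the same ranking machinery, but the success metric is pluggable and chosen by the user instead of hard-coded to engagement:

```python
# Hypothetical sketch: one ranking loop, but with a pluggable,
# user-selected success metric instead of a fixed engagement objective.
from typing import Callable

Item = dict  # e.g. {"title": ..., "watch_minutes": ..., "quality": ...}
Metric = Callable[[Item], float]

METRICS: dict[str, Metric] = {
    # Today's default: maximize short-term engagement.
    "engagement": lambda item: item["watch_minutes"],
    # A user-chosen alternative: prefer independently rated quality.
    "quality": lambda item: item["quality"],
}

def rank(candidates: list[Item], metric_name: str) -> list[Item]:
    # Same machinery either way; only the success metric changes.
    metric = METRICS[metric_name]
    return sorted(candidates, key=metric, reverse=True)

items = [
    {"title": "rage thread", "watch_minutes": 12.0, "quality": 0.2},
    {"title": "deep explainer", "watch_minutes": 4.0, "quality": 0.9},
]
print([i["title"] for i in rank(items, "engagement")])  # rage thread first
print([i["title"] for i in rank(items, "quality")])     # deep explainer first
```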

@greg See, I disagree with that. They can be "regulated" with simple public pressure. Look at how Facebook has massively beefed up its non-English-speaking moderators since the Myanmar thing.

This is all exceedingly new technology. It's not reasonable to punish someone for failing to predict every possible tangent the technology enables. Moreover, the law moves too slowly to keep up, and by setting things in stone, could stifle the very innovation that would produce further improvements limiting harm.

If social media causes harm (maliciously or ignorantly), they're already held responsible by the general public's outcry. They then have the option to respond to that criticism and improve, or lose customers and profits.

Which do you think they'll choose?

@LouisIngenthron I would argue that Facebook beefing up its non-English-speaking moderation was the wrong level of abstraction and didn't actually fix the root cause. Not to mention that moderators were among the first positions to be laid off.
I don't think Facebook should be punished for not predicting the negative consequences of their tech. I think they should be punished for not addressing the issues of their tech after being called out.
The public doesn't understand this tech.

@greg I don't know about that. The proliferation of hashtags into everyday conversation is a pretty good indicator that the public does have some idea of how this works. Obviously, they don't know the intricate details, but I think most social media users understand that the "Recommended" or "Trending" sections feature content that's a mix of what the site owners want you to see and what other users are most talking about (and ads).

But let's get back to the point. You started this thread by writing "content algorithms [should be] considered content themselves".

So, let's try a hypothetical. Let's say a white supremacist uploads some awful, and legally defamatory, content to YouTube in two parts. Defamation is difficult to prove, and may or may not violate YouTube's rules, but for the sake of the hypothetical, let's say it does break YouTube's rules, but it's still pending moderation (they haven't gotten to review it yet).

A viewer then watches the entire first part and, naturally, YouTube's algorithm recommends the subsequent episode, as anyone would reasonably expect them to.

Under your desired system, would this not make YouTube just as liable to be sued for the content as the creator of the content?

Another example: Wikipedia has a list of recommended articles on its homepage. If some user goes to those articles and edits one to contain libel, should Wikipedia the company then be liable for that because they "recommended" it?

A third example: Many email services now use spam filters that learn from user preferences, making them customized to the user. The fact that they sort incoming information automatically to recommend "good" mail to the inbox while hiding bad mail in the spam box... would that make the mail provider liable for the content of all messages delivered?

@LouisIngenthron in your YouTube example, YouTube created content (a recommendation) that links to content (the questionable video), and therefore YouTube is responsible for its part in promoting that video. If the video creator had added a link to the subsequent video, then YouTube would not be responsible.
Wikipedia is all user-generated content, even the home page, so that's not relevant.
Email spam filtering removes suspected spam emails; the default is delivery. So that's not relevant.

@greg And you don't see any problem with YouTube being liable for third-party defamation because they included a "next episode" button?

Spam filters may remove the most egregious messages, but for the most part, they don't; they simply categorize them into a folder. The only difference between the folders is which one opens by default when you open the app... because those are the ones you're recommended to actually pay attention to. Spam filters are absolutely recommendation algorithms.
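To make that concrete, here's a toy sketch (made-up words and weights, not any real provider's filter): the filter scores third-party messages and only chooses which folder is the default view, which is structurally the same shape as any recommender:

```python
# Toy sketch of a spam filter (made-up weights, not a real provider's
# system). It scores third-party content and picks the default folder;
# nothing is deleted. Score content, then decide what the user sees
# first -- the same shape as a recommendation algorithm.

SPAM_WEIGHTS = {"free": 1.5, "winner": 2.0, "prize": 1.0}

def spam_score(message: str) -> float:
    # Sum the weights of known spammy words in the message.
    return sum(SPAM_WEIGHTS.get(w, 0.0) for w in message.lower().split())

def route(message: str, threshold: float = 2.0) -> str:
    # The filter only chooses which folder opens by default.
    return "spam" if spam_score(message) >= threshold else "inbox"

print(route("You are a winner claim your free prize"))  # spam
print(route("Your invoice for March is attached"))      # inbox
```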

How about search bars? Can platforms add functionality to make them more useful by promoting results that are more viewed and liked to the top? In particular, that's useful for preventing people from following fake accounts of high-profile folks on social media. But would that make them liable for the third-party content of the accounts they're "recommending" because a user searched for them?

If a small forum has a feed at the top recommending the most active threads, do they become responsible for the third-party content of those threads?

If an online game recommends friends to play with, do they become responsible for what those users say?

Is Yelp going to be liable for basically everything ever written on their website by third parties by the very nature of their business?

@LouisIngenthron ugh, my responses are bound by the 500-character limit on my instance. I need to fix that. So I apologize if some of my responses are terse.
Yes, I think YouTube should be held liable if their content (the recommendation) promotes defamatory content. But more importantly, if YouTube's content (the recommendations) leads to extremist indoctrination, YouTube is responsible.
I would argue that a spam filter isn't promoting the content of the email, because the default is delivery.
