What am I missing? The US Supreme Court is deciding whether content algorithms are considered content themselves. I completely support content algorithms being considered content; otherwise, no one is held responsible for the algorithm's decisions. These algorithms have contributed to genocides as far back as 2016, yet literally no one was held accountable and no meaningful changes occurred.
#Section230 #SCOTUS #SupremeCourt #USSupremeCourt #YouTube #Google #Meta #Facebook #TikTok #Instagram
@greg Please read this since you basically don't understand and you're wrong here: https://www.techdirt.com/articles/20200531/23325444617/hello-youve-been-referred-here-because-youre-wrong-about-section-230-communications-decency-act.shtml
@joeo10 None of those "if you said" statements in the article are relevant to my argument. I am pro Section 230; I just believe that algorithms are content. Recommendation engines are optimized for short-term advertising revenue, and they have horribly perverse outcomes. Google and others created these tools with no regard for their long-term impacts. You can see what is at stake if manipulative algorithms are answerable to no one but short-term shareholders.
@greg @joeo10 Enjoy the hellscape you seem so keen on creating.
"Content algorithms" drive search bars, spam filters, internet archives and more, while protecting essential sites like Wikipedia and news from being overrun by bad actors.
The internet will be a far shittier place if this goes through.
If you want something a bit more scholarly to read on the subject, check out this Amicus Brief that lays out the arguments well (and look who signed it):
https://www.supremecourt.gov/DocketPDF/21/21-1333/252764/20230119182956110_230103a%20AC%20Brief%20for%20efiling.pdf
@LouisIngenthron @joeo10 I'm not suggesting getting rid of content algorithms, I simply want them held accountable for their outcomes. And currently there is no incentive for the creators of content algorithms to reduce the harm they cause.
Giving users control of the algorithms they use is one possible way to push the content burden back onto users. If the user controls the algorithm, then the platforms are protected under Section 230. There are solutions; what we currently have is unworkable.
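To make that concrete, here's a toy sketch of what user-controlled ranking could look like. Everything here is hypothetical (the field names, the weights, the `rank_feed` function); it's not any platform's real API, just the shape of the idea:

```python
from dataclasses import dataclass

# Hypothetical sketch: the *user* supplies the ranking weights, so the
# editorial judgment lives on the user's side, not the platform's.

@dataclass
class Post:
    text: str
    recency: float       # 0.0-1.0, newer posts score higher
    engagement: float    # 0.0-1.0, normalized likes/shares
    from_followed: bool  # authored by an account the user follows

def rank_feed(posts, weights):
    """Order a feed using weights the user chose, not the platform."""
    def score(p):
        return (weights["recency"] * p.recency
                + weights["engagement"] * p.engagement
                + weights["followed"] * float(p.from_followed))
    return sorted(posts, key=score, reverse=True)

# e.g. a user who wants a mostly chronological feed that favors accounts they follow:
my_weights = {"recency": 1.0, "engagement": 0.0, "followed": 0.5}
```

Under a scheme like that, the platform is back to hosting and sorting on the user's instructions, which is squarely what Section 230 protects.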
@greg @joeo10 Making companies liable for the content of others *is* suggesting getting rid of content algorithms, whether you realize it or not. Google's not going to offer you an account if they can get sued for whatever you post/upload.
Most users can't even figure out how to sign up for Mastodon, and you want them writing content algorithms? Those are written by some of the smartest people in the world. And if a company dumbs it down enough for the average user, guess what we get? Exactly what we have now, where people customize their content by choosing whom to follow and what to like.
@LouisIngenthron @joeo10 I'm not suggesting making companies liable for the content of others; where did you get that idea? I'm arguing that recommendation engines *are* content produced by the platform. Therefore, tech companies should be liable for those recommendations. They would still have protection for user-generated content. But if a company promotes content that encourages a genocide, then the platform should be held responsible. I don't understand why that's a controversial statement.
@greg @joeo10 "Those recommendations" that you think they should be liable for are comprised of the content of third parties.
And, in many cases, those recommendations are not driven by decisions made by people in the company, but rather by preferences expressed by the user.
So, no, an intermediary should not be legally liable for the content of one third party just because their algorithms show it to another third party who wants to see it.
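To be concrete about what "driven by preferences expressed by the user" means, here's a toy version of that feedback loop. It's purely illustrative (the tags, the `recommend` function, and the data shapes are all made up); real recommenders are vastly more complex:

```python
from collections import Counter

# Toy illustration: the system encodes no opinion about any topic. It just
# counts the tags on what this user already watched and surfaces more of
# the same third-party content. (Illustrative only, not any real platform.)

def recommend(watch_history, catalog, k=5):
    """Return the k catalog items whose tags best match the user's history."""
    tag_counts = Counter(tag for item in watch_history for tag in item["tags"])
    def affinity(item):
        return sum(tag_counts[tag] for tag in item["tags"])
    return sorted(catalog, key=affinity, reverse=True)[:k]
```

There's no line in there where the company decides a given video deserves promotion; the "decision" is an echo of the user's own behavior.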
@LouisIngenthron @joeo10 YouTube has recommended diets to people suffering from anorexia. The diets themselves aren't dangerous; it's the targeted recommendations that are. Taken to the extreme, this is literally responsible for terrorist attacks and genocides.
I don't understand why people are defending the horrible practices of these tech monopolies.
Recommendations *are* content. Hold tech companies responsible for their actions otherwise they have no incentives to change.
@greg @joeo10 Gee, could that possibly be because people suffering from anorexia frequently search for diet-related content? How is it YouTube's fault that they're just giving people more of what they've asked for?
It's not big tech's responsibility to manage anyone's addiction for them.
And the reason people object to your rhetoric is that you're ignoring the people who are actually creating the hurtful content and instead focusing on killing the messenger. Your views are misguided and actively harmful to marginalized communities, as was explained in the amicus brief I linked above.
@LouisIngenthron @joeo10 Did you ever consider that recommending something someone has searched for previously isn't always a good idea?
I don't understand how you can morally defend the actions of these tech monopolies.
"Facebook owner Meta’s dangerous algorithms and reckless pursuit of profit substantially contributed to the atrocities perpetrated by the Myanmar military against the Rohingya people in 2017"
https://www.amnesty.org/en/latest/news/2022/09/myanmar-facebooks-systems-promoted-violence-against-rohingya-meta-owes-reparations-new-report/
@greg @joeo10 How are they supposed to know whether presenting a user with a certain bit of content is a "good idea"? Do you want them to collect everyone's medical records to make value judgments about which content is or isn't good for them?
Because that sounds dystopian to me.
It really sounds like you simply don't understand the scale of the problem or how these things work at all. The recommendations the algorithm provides do not inherently match the whims or goals of the company that published it. ChatGPT should be about as obvious an example of that as any.
Also, it wasn't Facebook's "pursuit of profit" that led to the Myanmar thing. It was a lack of non-English-speaking moderators. Facebook ran a study and found they made *more* money when they turned off their algorithm (because people scrolled past more content they didn't care about, which meant they were shown more ads in between).
So, even the "facts" you're starting from are lacking. Maybe consider that the conclusions aren't nearly as sound either.
@greg And how much success have you had applying the contents of that patent in the real world? Because it seems to me that it would only work in very narrowly scoped environments or for very specific types of users, and it wouldn't scale to billion-user systems serving the general public.
Of course we could train the learning models to avoid certain types of harmful-but-legal content. But the problem is: other people still want to see that content and actively seek it out.
Like your diet example from before: sure, the algorithm could be trained to disincentivize diet advice, but then you're pissing off a whole group of people looking to lose weight, just to try to protect people with anorexia.
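To show why that's such a blunt instrument, here's roughly what "disincentivize diet advice" amounts to. This is a hypothetical sketch (the tag names, penalty value, and `adjusted_score` function are invented), far cruder than any real moderation pipeline:

```python
# Hypothetical sketch of a flat category penalty. Note what it can't do:
# distinguish a user recovering from anorexia from one training for a marathon.

DOWNWEIGHTED_TAGS = {"diet", "weight-loss"}
PENALTY = 0.5  # multiplier applied to every item in the category

def adjusted_score(item):
    """Down-rank an item if it carries any down-weighted tag."""
    if DOWNWEIGHTED_TAGS & set(item["tags"]):
        return item["score"] * PENALTY  # hits every dieter, not just at-risk users
    return item["score"]
```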
You can't require the entire internet to wear kid gloves because some people could be harmed by seeing some content.