@joeyh @evangreer Indeed. We have no algorithm here; such a ruling would be bad for Google, Facebook, and Twitter, but very good for the fediverse.

Without algorithms driving engagement numbers, Trump would never have become President, and I think a lot of Internet folks don't want to acknowledge just how much death can be attributed to the immunity Section 230 provides.

@ocdtrekkie @joeyh@octodon.social @evangreer If you think this site doesn't have any algorithms, you really don't understand how any of this works.

@LouisIngenthron @joeyh @evangreer I do understand how any of this works. =)

There is no recommendation algorithm here. There are algorithms only in the sense that "any math problem is technically an algorithm", not in a way that could legally be claimed as "recommending content". Even the "trending" mechanism is publicly inspectable, and it isn't profit-driven.
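(A minimal sketch of the distinction being drawn, assuming an invented Post record and a hypothetical predicted_watch_time field — this is illustrative, not Mastodon's or YouTube's actual code:)

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    posted_at: float              # Unix timestamp
    predicted_watch_time: float   # hypothetical engagement prediction

def chronological_feed(posts: list[Post]) -> list[Post]:
    # A fediverse-style home timeline: newest first, nothing else.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_ranked_feed(posts: list[Post]) -> list[Post]:
    # An ad-driven recommender, caricatured: surface whatever is
    # predicted to keep the user watching, regardless of recency.
    return sorted(posts, key=lambda p: p.predicted_watch_time, reverse=True)
```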

@LouisIngenthron @joeyh @evangreer That last bit is what, unfortunately, Section 230 defenders completely miss: Section 230 *is not used to do good-faith moderation*. It's used to do *profit-motivated* moderation.

Tech companies moderate content in ways that make money, not protect people. And that's why 230 *has to go*. Because it turns out, radicalizing an entire half of the US is very, very profitable.

@LouisIngenthron @joeyh @evangreer Like, even if all of the organizations Google has funneled money to are right, and the Internet can't survive without Section 230: then, to be blunt, the Internet should die.

Too many people have been murdered because, you know, you like YouTube a lot, and YouTube might change if it's legally liable for *anything*.

@ocdtrekkie @joeyh@octodon.social @evangreer I remember when people understood that shooting the messenger was a bad idea.

If you think YouTube is the necessary catalyst to radicalization, you have a lot of pre-YouTube history to explain.

The problem isn't tech: it's human nature. And you're not going to fix that by restricting speech on the internet. You're going to make it worse. You're going to make it less visible, festering, hidden.

Social media, like any communication medium, brings that darkness out into the light where it can be fought with counter-speech.

The answer is education, not muzzles.

@LouisIngenthron @joeyh @evangreer Removing Section 230 doesn't restrict speech online. Weird blanket immunity for tech companies is a solely United States thing; nobody else does it, and free speech, strangely, is not exclusive to American Internet servers.

Removing Section 230 will require companies to be responsible for the actions they willfully take when operating their service. Every apocalyptic line of bull you've read beyond that is false.

@ocdtrekkie @joeyh@octodon.social @evangreer They already are responsible for the actions they willfully take when operating their service. That just doesn't make them responsible for others' content merely because of how they choose to order it.

@LouisIngenthron @joeyh @evangreer Incorrect. The entire case at hand is about whether or not Google and Twitter can be held responsible for *their actions in handling the content* as opposed to the content itself. This is about the actions they take as service operators.

@ocdtrekkie @joeyh@octodon.social @evangreer No, it's trying to hold them responsible for others' content because of their actions.

I'm all for them being held responsible for their own speech, but choosing the order in which third-party speech is presented shouldn't shift the liability.

@LouisIngenthron @joeyh @evangreer We're not talking about a bubble sort here. We're talking about a system that goes "hey, this person seems to be susceptible to fringe political positions, let me push terrorism recruitment on them!"

@ocdtrekkie @joeyh@octodon.social @evangreer Wait, you think they're intentionally trying to radicalize people?

@LouisIngenthron @joeyh @evangreer They operate a psychopathic mechanism solely designed to keep people watching, regardless of the impact on the individual or society.

Section 230 means they don't have to care about the impact, all they need to care about is maximizing advertising profit, and that means increasing eyeball time at any cost.
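(A caricature of that objective in code — pick_next_video and the predicted-minutes table are made up for illustration, not anyone's real ranker:)

```python
def pick_next_video(candidates: list[str], predicted_minutes: dict[str, float]) -> str:
    # The only signal this optimizer sees is expected watch time,
    # because watch time is what sells ads. Nothing in the score
    # prices in harm to the viewer or to society.
    return max(candidates, key=lambda v: predicted_minutes[v])

queue = ["cat video", "news clip", "extremist recruitment pitch"]
minutes = {"cat video": 3.0, "news clip": 5.0, "extremist recruitment pitch": 22.0}
print(pick_next_video(queue, minutes))  # the 22-minute rabbit hole wins
```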

Every other company has to handle legal liability for their actions.

@ocdtrekkie @joeyh@octodon.social @evangreer Their "actions" are simply giving people what they ask for.

Should we also make McDonald's liable for diabetes?

@LouisIngenthron @joeyh @evangreer They didn't ask to be recruited into ISIS. YouTube's algorithm discerned it could increase watch time by presenting ISIS videos to them. We're talking about videos pushed to users' front pages, not search results.

@ocdtrekkie @joeyh@octodon.social @evangreer Really? Because I've watched quite a few YouTube videos and I've never seen an ISIS recruitment video. Have you?

Seems to me that someone has to be searching or watching something pretty ISIS-adjacent for the algorithm to even offer that.

@ocdtrekkie @joeyh@octodon.social @evangreer Also, even if you were shown an ISIS recruitment video, would that make you want to join ISIS?

Or would you already have to have some sympathy for their cause before even considering that?
