@joeyh @evangreer Indeed. We have no algorithm here; such a ruling would be bad for Google, Facebook, and Twitter, but very good for the fediverse.

Without algorithms driving engagement numbers, Trump would never have become President, and I think a lot of Internet folks don't want to acknowledge just how much death can be attributed to the immunity Section 230 provides.

@ocdtrekkie @joeyh@octodon.social @evangreer If you think this site doesn't have any algorithms, you really don't understand how any of this works.

@LouisIngenthron @joeyh @evangreer I do understand how any of this works. =)

There is no recommendation algorithm here. There are algorithms in the sense that "any math problem is technically an algorithm", but not in a way that could legally be claimed to be "recommending content". Even in the case of "trending" mechanisms, the mechanism is publicly inspectable and not profit-driven.
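To make the contrast concrete, a minimal sketch (hypothetical records and field names, not actual Mastodon or YouTube code):

```python
from datetime import datetime, timezone

# Hypothetical post records; field names and scores are illustrative only.
posts = [
    {"id": 1, "created_at": datetime(2023, 2, 21, 9, 0, tzinfo=timezone.utc),
     "predicted_watch_time": 12.0},
    {"id": 2, "created_at": datetime(2023, 2, 21, 10, 0, tzinfo=timezone.utc),
     "predicted_watch_time": 95.0},
]

# Mastodon-style home timeline: strictly reverse-chronological.
# No per-user engagement scoring is involved.
chronological = sorted(posts, key=lambda p: p["created_at"], reverse=True)

# Commercial-feed-style ranking: order by whatever the platform predicts
# will keep you watching, regardless of recency.
engagement_ranked = sorted(posts, key=lambda p: p["predicted_watch_time"],
                           reverse=True)
```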

@LouisIngenthron @joeyh @evangreer That last bit is what, unfortunately, Section 230 defenders completely miss: Section 230 *is not used to do good-faith moderation*. It's used to do *profit-motivated* moderation.

Tech companies moderate content in ways that make money, not protect people. And that's why 230 *has to go*. Because it turns out, radicalizing an entire half of the US is very, very profitable.

@LouisIngenthron @joeyh @evangreer Like, even if all of the organizations Google has funneled money to are right, and the Internet can't survive without Section 230: then, to be blunt, the Internet should die.

Too many people have been murdered because, you know, you like YouTube a lot, and YouTube might change if it's legally liable for *anything*.

@ocdtrekkie @joeyh@octodon.social @evangreer I remember when people understood that shooting the messenger was a bad idea.

If you think YouTube is the necessary catalyst to radicalization, you have a lot of pre-YouTube history to explain.

The problem isn't tech: it's human nature. And you're not going to fix that by restricting speech on the internet. You're going to make it worse. You're going to make it less visible, festering, hidden.

Social media, like any communication medium, brings that darkness out into the light where it can be fought with counter-speech.

The answer is education, not muzzles.

@LouisIngenthron @joeyh @evangreer Removing Section 230 doesn't restrict speech online. Weird blanket immunity for tech companies is a solely United States thing; nobody else does it, and free speech is, strangely, not solely for American Internet servers.

Removing Section 230 will require companies to be responsible for the actions they willfully take when operating their services. Every apocalyptic line of bull you've read beyond that is false.

@ocdtrekkie @joeyh@octodon.social @evangreer They already are responsible for the actions they willfully take when operating their service. That just doesn't make them responsible for others' content merely because of how they choose to order it.

@LouisIngenthron @joeyh @evangreer Incorrect. The entire case at hand is about whether or not Google and Twitter can be held responsible for *their actions in handling the content* as opposed to the content itself. This is about the actions they take as service operators.

@ocdtrekkie @joeyh@octodon.social @evangreer No, it's trying to hold them responsible for others' content because of their actions.

I'm all for them being held responsible for their own speech, but choosing the order third-party speech is presented in shouldn't shift the liability.

@LouisIngenthron @joeyh @evangreer We're not talking about a bubble sort here. We're talking about a system that goes "hey, this person seems to be susceptible to fringe political positions, let me push terrorism recruitment on them!"

@ocdtrekkie @joeyh@octodon.social @evangreer Wait, you think they're intentionally trying to radicalize people?

@LouisIngenthron @joeyh @evangreer They operate a psychopathic mechanism solely designed to keep people watching, regardless of the impact on the individual or society.

Section 230 means they don't have to care about the impact; all they need to care about is maximizing advertising profit, and that means increasing eyeball time at any cost.

Every other company has to handle legal liability for their actions.
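If you sketched that mechanism as code, it would look something like this (a toy illustration, vastly simplified; real ranking systems are enormous, but the shape of the objective is the point):

```python
def next_video(candidates, predict_watch_seconds):
    # Greedy engagement maximizer: pick whatever keeps the viewer
    # watching longest. Note what's absent from the objective: there is
    # no term for harm to the viewer or to anyone else.
    return max(candidates, key=predict_watch_seconds)

# e.g. next_video(["cooking", "conspiracy"],
#                 lambda v: {"cooking": 40, "conspiracy": 300}[v])
# -> "conspiracy", because it's predicted to hold attention longer
```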

@ocdtrekkie @joeyh@octodon.social @evangreer Their "actions" are simply giving people what they ask for.

Should we also make McDonald's liable for diabetes?

@LouisIngenthron @joeyh @evangreer They didn't ask to be recruited into ISIS. YouTube's algorithm discerned it could increase watch time by presenting ISIS videos to them. This is like "videos pushed to users' front page" stuff, not search results.

@ocdtrekkie @joeyh@octodon.social @evangreer Really? Because I've watched quite a few YouTube videos and I've never seen an ISIS recruitment video. Have you?

Seems to me that someone has to be searching or watching something pretty ISIS-adjacent for the algorithm to even offer that.

@LouisIngenthron @joeyh @evangreer Radicalization is a process of sliding the window over, bit by bit. Maybe they start with information about the COVID vaccine, and spend a bit too much time with a crazy theory. Next they're getting videos about Bill Gates and what "the Jews" are up to. And then it keeps going downhill.

But it sure keeps those view counts going up!

@ocdtrekkie @joeyh@octodon.social @evangreer You're proving my point. These people are seeking out this information.

@LouisIngenthron @joeyh @evangreer They weren't seeking to become a terrorist when they were curious if the vaccine was safe.

YouTube goes for maximizing engagement, handing out little carrots that lead toward increasingly outrageous ideas.

And it goes well beyond just the foreign terrorism considered in this case: Section 230 is squarely to blame for the mass shooting problem and the election of Donald Trump. Our democracy depends on repealing this bad law.

@LouisIngenthron @joeyh @evangreer Let me take a different tack: if I check your profile, you've boosted several people's comments about Section 230. Are you aware of how many of them are Google-funded?

@ocdtrekkie @joeyh@octodon.social @evangreer I'm aware of the allegations you're preparing to spew, yes.

@ocdtrekkie @joeyh@octodon.social @evangreer Really? See, I thought the assholes pulling the triggers were responsible for mass shootings. Silly me.

It's weird how humans have no agency in your world, but corporations do.

@LouisIngenthron @joeyh @evangreer It's important to understand both individuals and the systems and influences that push them around.

For instance, if we only consider individual agency, why do we concern ourselves with who funnels money into political lobbying groups at all? People are still just... going to vote what they believe, right?

@ocdtrekkie @joeyh@octodon.social @evangreer People are, yes.

But politicians may not because politicians wield power and power corrupts.

@LouisIngenthron @joeyh @evangreer So there's no point in political advertising at all? All the billions spent on TV commercials are for nothing?

@ocdtrekkie Of course influence is a factor. But it's not mind control.

Just because McDonald's advertises to me doesn't mean it's not my decision whether or not to eat there. It doesn't make them liable for my choices.

@LouisIngenthron And nobody's trying to charge Google with murder here. They're trying to charge Google with aiding and abetting: with influencing and contributing to a terrorist act. And there's no doubt that YouTube's profit-maximizing algorithm helped recruit a lot of terrorists.

@LouisIngenthron Let me pose a question that's... a more direct case of 230 most definitely killing people. Are you aware of the study Facebook did on how its feed algorithm impacts the emotions of its users?

@ocdtrekkie Yes. Are you aware of the study they did that shows they make more money when they turn off their algorithm and go to chronological?

@LouisIngenthron Didn't see that, please share!

And my point: when Facebook ran a study testing whether pushing sadder posts into a user's feed made them sadder, a simple tweak of their algorithm, it almost certainly killed people. Facebook, for the purposes of a "study", intentionally chose to nudge some people towards suicide. Is that something we should just... do nothing about?

@ocdtrekkie As for your other question, yes. You can't prevent harm without first performing studies to determine what causes it. It's a good thing that they're doing such studies.

@LouisIngenthron Research studies generally have careful controls and go through ethical review to ensure they don't put people in danger.

Facebook has no way to even identify whether or not it drove anyone to kill themselves, nor did any of the study "participants" choose to participate. That's not ethical research.

@ocdtrekkie Facebook can't "drive" anyone to suicide. Suicide is a deeply personal decision based on multiple factors.

All Facebook can do is expose people to triggers, but it's still the choice of the user whether they want to expose themselves to Facebook.

It's not any tech company's responsibility to sanitize the internet to protect every individual.

People are responsible for their own actions.

@LouisIngenthron Unless you're Google, Facebook, or Twitter. Then you are not responsible for your own actions, because you're immune from legal action. =D

@ocdtrekkie No, they're entirely responsible for all content they create themselves.

The rules work the same for everyone. They're not responsible for your speech and you're not responsible for theirs.

That's how it should be.

@LouisIngenthron Choosing to put specific content in front of someone is an *action*. Again, this is not "it's next to it in shelf order". This is "Google has decided this will make them more ad money to show you."

@ocdtrekkie You left out the last bit.

"Google has decided this will make them more ad money to show you because they think you'll like it because it resembles other things you've liked in the past."

@LouisIngenthron Well, not the last bit: "but slides you slightly further outside the Overton window each time, as it increases your watch time, and your desire to share it with others to convince them of your increasingly radical ideas".
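Sketched as a toy model (made-up interest vectors and an engagement knob standing in for whatever signals the real ranking system uses; an illustration of the claim, not anyone's actual code):

```python
import math

def cosine(a, b):
    # Similarity between two interest vectors (hypothetical features).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def next_recommendation(history_vec, candidates, drift=0.1):
    # Pick the candidate most like what the user already watches, plus a
    # small bonus for predicted engagement. If more outrageous content
    # reliably engages more, each pick lands slightly further out than
    # the last -- the "window sliding" described above.
    return max(candidates,
               key=lambda c: cosine(history_vec, c["vec"])
                             + drift * c["predicted_engagement"])
```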

@LouisIngenthron That's the core problem here: Without liability, tech companies have no downside to doing psychopathic things. They take on no real risk in implementing solutions which... just happen to make terrorists, or raise suicide rates, or make scams easier.

As long as we can't hold them responsible for the harmful design they use around UGC, they're going to keep making more harmful systems.

@ocdtrekkie Again, you've got the agency in that sentence backwards. Nobody forces anyone to watch. Nobody forces anyone to click the recommendations in the list.

People make those decisions for themselves.

You write about Facebook like crime reporters write about police sidearms.

"The victim was injured when the officer's weapon discharged."
"The victim was injured when the assailant was fully radicalized by Facebook."

@ocdtrekkie There's lots of doubt about that. The videos posted to YouTube may have recruited terrorists, but you're still trying to shoot the messenger for a crime someone else committed.

@ocdtrekkie Only if they knew what they were doing was a crime when they did it. If their buddies lied to them and they were unaware the bank was robbed, we don't charge them.

@LouisIngenthron Do you think Google is unaware its algorithms radicalize people into fringe theories?

@ocdtrekkie No, they aren't "unaware", but they're also not active participants.

If someone asks a librarian for books about Hitler, and the librarian gives them a list, is the librarian responsible for radicalizing a new fascist?

@LouisIngenthron Again, this is active recommendation, not search results. If a librarian said "Hey, you read a lot of books about conservative politics, would you like to check out some of this great Hitler stuff while you're at it?", I think they'd be a bit more responsible.

@ocdtrekkie Except it's really more like the Dewey Decimal System.

Like reading one book on Hitler? Just go to that shelf and you'll find a dozen more, plus other books on tangential subjects.

That's also effectively a recommendation system.
