@joeyh @evangreer Indeed. We have no algorithm here; such a ruling would be bad for Google, Facebook, and Twitter, but very good for the fediverse.
Without algorithms driving engagement numbers, Trump would never have become President, and I think a lot of Internet folks don't want to acknowledge just how much death can be attributed to the immunity Section 230 provides.
@ocdtrekkie @joeyh@octodon.social @evangreer If you think this site doesn't have any algorithms, you really don't understand how any of this works.
@LouisIngenthron @joeyh @evangreer I do understand how any of this works. =)
There is no recommendation algorithm here. There are algorithms in the way "any math problem is technically an algorithm", but not in a way that would be legally claimable as "recommending content". Even in the case of "trending" mechanisms, the mechanism is publicly inspectable, and not profit-driven.
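To make the distinction concrete, here's a toy sketch in Python of what I mean; this is not any platform's actual code, and names like predicted_watch_time are made up for illustration:

```python
# Toy illustration only; not any platform's real code.
from collections import Counter

# A "trending" mechanism of the inspectable kind: count which tags appear
# most often in recent public posts. Same result for every user.
def trending_tags(recent_posts, top_n=5):
    counts = Counter(tag for post in recent_posts for tag in post["tags"])
    return [tag for tag, _ in counts.most_common(top_n)]

# A chronological timeline: newest posts from the accounts you follow.
def home_timeline(posts, following):
    mine = [p for p in posts if p["author"] in following]
    return sorted(mine, key=lambda p: p["posted_at"], reverse=True)

# An engagement-driven recommender, by contrast, ranks *any* post by how
# long a learned model predicts this particular user will keep watching.
# predicted_watch_time is a hypothetical stand-in for that model.
def recommended_feed(posts, user_profile, predicted_watch_time):
    return sorted(posts,
                  key=lambda p: predicted_watch_time(user_profile, p),
                  reverse=True)
```

The first two are the "any math problem is technically an algorithm" kind of ordering; only the last one picks content for you based on a per-user engagement prediction, and that's the piece at issue.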
@LouisIngenthron @joeyh @evangreer That last bit is what, unfortunately, Section 230 defenders completely miss: Section 230 *is not used to do good-faith moderation*. It's used to do *profit-motivated* moderation.
Tech companies moderate content in ways that make them money, not in ways that protect people. And that's why 230 *has to go*. Because it turns out, radicalizing an entire half of the US is very, very profitable.
@LouisIngenthron @joeyh @evangreer Like, even if all of the organizations Google has funneled money to are right, and the Internet can't survive without Section 230: then, to be blunt, the Internet should die.
Too many people have been murdered because, you know, you like YouTube a lot, and YouTube might change if it's legally liable for *anything*.
@ocdtrekkie @joeyh@octodon.social @evangreer I remember when people understood that shooting the messenger was a bad idea.
If you think YouTube is the necessary catalyst to radicalization, you have a lot of pre-YouTube history to explain.
The problem isn't tech: it's human nature. And you're not going to fix that by restricting speech on the internet. You're going to make it worse. You're going to make it less visible, festering, hidden.
Social media, like any communication medium, brings that darkness out into the light where it can be fought with counter-speech.
The answer is education, not muzzles.
@LouisIngenthron @joeyh @evangreer Removing Section 230 doesn't restrict speech online. Weird blanket immunity for tech companies is a solely-United-States thing; nobody else does it, and free speech is, strangely, not exclusive to American Internet servers.
Removing Section 230 will require that companies be responsible for the actions they willfully take when operating their service. Every apocalyptic line of bull you've read beyond that is false.
@ocdtrekkie @joeyh@octodon.social @evangreer They already are responsible for the actions they willfully take when operating their service. That just doesn't make them responsible for others' content merely because of how they choose to order it.
@LouisIngenthron @joeyh @evangreer Incorrect. The entire case at hand is about whether or not Google and Twitter can be held responsible for *their actions in handling the content* as opposed to the content itself. This is about the actions they take as service operators.
@ocdtrekkie @joeyh@octodon.social @evangreer No, it's trying to hold them responsible for others' content because of their actions.
I'm all for them being held responsible for their own speech, but choosing the order in which third-party speech is presented shouldn't shift the liability.
@LouisIngenthron @joeyh @evangreer We're not talking about a bubble sort here. We're talking about a system that goes "hey, this person seems to be susceptible to fringe political positions, let me push terrorism recruitment on them!"
@ocdtrekkie @joeyh@octodon.social @evangreer Wait, you think they're intentionally trying to radicalize people?
@LouisIngenthron @joeyh @evangreer They operate a psychopathic mechanism solely designed to keep people watching, regardless of the impact on the individual or society.
Section 230 means they don't have to care about the impact, all they need to care about is maximizing advertising profit, and that means increasing eyeball time at any cost.
Every other company has to handle legal liability for their actions.
@ocdtrekkie @joeyh@octodon.social @evangreer Their "actions" are simply giving people what they ask for.
Should we also make McDonald's liable for diabetes?
@LouisIngenthron @joeyh @evangreer They didn't ask to be recruited into ISIS. YouTube's algorithm discerned it could increase watch time by presenting ISIS videos to them. This is like "videos pushed to users' front page" stuff, not search results.
@ocdtrekkie @joeyh@octodon.social @evangreer Really? Because I've watched quite a few YouTube videos and I've never seen an ISIS recruitment video. Have you?
Seems to me that someone has to be searching or watching something pretty ISIS-adjacent for the algorithm to even offer that.
@LouisIngenthron @joeyh @evangreer Radicalization is a process of gradually sliding the window over. Maybe they start with information about the COVID vaccine and spend a bit too much time with a crazy theory. Next they're getting videos about Bill Gates and what "the Jews" are up to. And then it keeps going downhill.
But it sure keeps those view counts going up!
@ocdtrekkie @joeyh@octodon.social @evangreer You're proving my point. These people are seeking out this information.
@LouisIngenthron @joeyh @evangreer They weren't seeking to become a terrorist when they were curious if the vaccine was safe.
YouTube goes for maximum engagement, handing out little carrots toward increasingly outrageous ideas.
And it goes well beyond just the foreign terrorism considered in this case: Section 230 is squarely to blame for the mass shooting problem and the election of Donald Trump. Our democracy depends on repealing this bad law.
@ocdtrekkie @joeyh@octodon.social @evangreer Really? See, I thought the assholes pulling the triggers were responsible for mass shootings. Silly me.
It's weird how humans have no agency in your world, but corporations do.
@LouisIngenthron @joeyh @evangreer It's important to understand both individuals and the systems and influences that push them around.
For instance, if we only consider individual agency, why do we concern ourselves with who funnels money into political lobbying groups at all? People are still just... going to vote for what they believe, right?
@ocdtrekkie @joeyh@octodon.social @evangreer People are, yes.
But politicians may not, because politicians wield power, and power corrupts.
@LouisIngenthron @joeyh @evangreer So there's no point in political advertising at all? All the billions spent on TV commercials are for nothing?
@ocdtrekkie Of course influence is a factor. But it's not mind control.
Just because McDonald's advertises to me doesn't mean it's not my decision whether or not to eat there. It doesn't make them liable for my choices.
@LouisIngenthron And nobody's trying to charge Google with murder here. They're trying to charge Google with aiding and abetting: with influencing and contributing to a terrorist act. And there's no doubt that YouTube's profit-maximizing algorithm helped recruit a lot of terrorists.
@LouisIngenthron Let me pose a question that's... a more direct case of 230 most definitely killing people. Are you aware of the study Facebook did on how its feed algorithm impacts the emotions of its users?
@ocdtrekkie Yes. Are you aware of the study they did that shows they make more money when they turn off their algorithm and go to chronological?
@LouisIngenthron Didn't see that, please share!
And my point: when Facebook ran a study testing whether pushing sadder posts into users' feeds made them sadder, a simple tweak of its algorithm, it almost certainly killed people. Facebook, for the purposes of a "study", intentionally chose to nudge some people toward suicide. Is that something we should just... do nothing about?
@ocdtrekkie As for your other question, yes. You can't prevent harm without first performing studies to determine what causes it. It's a good thing that they're doing such studies.
@LouisIngenthron Research studies generally have careful controls and go through ethical review to ensure they don't put people in danger.
Facebook has no way to even identify whether or not it drove anyone to kill themselves, nor did any of the study "participants" choose to participate. That's not ethical research.
@ocdtrekkie Facebook can't "drive" anyone to suicide. Suicide is a deeply personal decision based on multiple factors.
All Facebook can do is expose people to triggers, but it's still the choice of the user whether they want to expose themselves to Facebook.
It's not any tech company's responsibility to sanitize the internet to protect every individual.
People are responsible for their own actions.
@LouisIngenthron Unless you're Google, Facebook, or Twitter. Then you are not responsible for your own actions, because you're immune from legal action. =D
@ocdtrekkie No, they're entirely responsible for all content they create themselves.
The rules work the same for everyone. They're not responsible for your speech and you're not responsible for theirs.
That's how it should be.
@LouisIngenthron Choosing to put specific content in front of someone is an *action*. Again, this is not "it's next to it in shelf order". This is "Google has decided this will make them more ad money to show you."
@ocdtrekkie You left out the last bit.
"Google has decided this will make them more ad money to show you because they think you'll like it because it resembles other things you've liked in the past."
@LouisIngenthron Well, that's not the last bit: "but slides you slightly further outside the Overton window each time, as it increases your watch time, and your desire to share it with others to convince them of your increasingly radical ideas".
@ocdtrekkie That's complete bullshit. They already implement hundreds of systems to protect the safety and security of their users, not because any law demands it, but because it's good for the user experience which (get this!) drives profits.
It's up to users to vote with their wallets about which features they care most about.
By leaving Twitter for Mastodon, users such as me (and presumably you) are active participants in the marketplace of ideas that is currently punishing the hell out of Twitter's profitability for its poor decisions.
@LouisIngenthron For example, Google loves to claim it's a "green company", and even that it's "carbon neutral" or "carbon negative".
But when it decides to drop support for things after a couple of years, it often exacerbates the e-waste problem, turning billions of Android phones and Chromebooks into useless garbage.
But because Google didn't actually manufacture the hardware, it doesn't take the blame for the environmental impact.
We have to understand how decisions have indirect results.
@ocdtrekkie Boy, you glossed over that "didn't manufacture the hardware" problem really easily, didn't you?
There's no reason that old hardware can't run new software.
Google isn't the one dropping support: It's the manufacturers. They could keep pushing firmware updates for the phones they manufacture until the end of time, but because people want to buy shiny new phones every couple years, it isn't worth it to them, so they assign an end-of-life.
Once again, this is a problem driven by demand as much as, if not more than, supply.
@LouisIngenthron Ah, you're one of those people. LOL. "It's the OEMs' fault" is one of the more comical bad takes in tech, but since you're already buying their other bad positions, I suppose it makes sense.
@ocdtrekkie If the OEM chooses to stop supporting old hardware, then yes, they're the ones responsible for hardware discard cycles.
Unless Google is preventing them from updating their old devices to new versions of the software, it's not Google's fault. And even then, there are some extenuating circumstances where it makes sense.
Is Microsoft responsible for hardware that ran Windows 3.1 not being able to run Windows 10?
@LouisIngenthron Now you're just being deliberately ridiculous. And it doesn't sound like you understand much about how either Android or Chromebook platforms are controlled or updated.
@ocdtrekkie No, I'm more of a desktop person. I don't follow the mobile world as closely.
@ocdtrekkie I'm well aware of what externalities are.
I guess my bigger concern is this: You're trying to cut off the supply of dangerous content, like videos intended to radicalize people, even if it does a ton of collateral damage. And you're not considering the demand side of the equation at all.
In other words, your strategy is indistinguishable from the War on Drugs.
@LouisIngenthron No, I'm not trying to remove speech. If someone wants to actively go looking for such-and-such content, I'm largely fine with them finding it. I have significant issues with tech companies deliberately promoting extremist content because doing so is more profitable than presenting less extreme content.
@ocdtrekkie Do you also have significant issues with fast food companies deliberately promoting unhealthy meals because doing so is more profitable than presenting healthier meals?
Do you have significant issues with video game companies deliberately promoting games because doing so is more profitable than presenting educational software?
Do you have significant issues with film studios deliberately promoting exciting fiction full of violence because doing so is more profitable than presenting factual documentaries?
These issues exist across the spectrum of our commercial world. So long as we consumers are given the choice to reject a company if we disagree with their actions, it's fine.
The way these algorithms work has been covered so extensively that it's common knowledge, even among people who don't use social media. If people continue to choose to use them, that's an implicit endorsement of the status quo. And the status quo gives them what they want too, because it works both ways.
If Facebook or Google had little meters in the back labelled "Fascism" and there was some worker back there dialing them up while cackling maniacally, I'd be on your side.
But all these algorithms do is recommend the well-beaten path that others have chosen to travel before. If human nature leads us to beat paths to bad places, that's not Facebook's or Google's fault, and it shouldn't be their liability.
@LouisIngenthron I wish I had a longer character limit. But yes, I think a lot of those areas need better regulations!
However, all of those companies can be held legally liable for the damage they cause. Tech companies cannot, as Section 230 provides them an unprecedentedly wide get-out-of-jail-free card.
Have you seen the lawsuits about the opioid crisis? This is just as bad, but Google can't be taken to court over it.
@ocdtrekkie No, they can't be held liable. Fast food isn't liable for diabetes. Video games aren't liable for the health effects of a sedentary lifestyle. Movie studios aren't liable for people who decide to copy something stupid they saw in a movie and end up dead or in jail.
The opioid thing was for misleading marketing, aka outright fraud. That's different.
Those companies aren't liable for those things because human beings are responsible for our own actions. We're responsible for deciding not to cram as much McDonald's into our mouths as we can until we die.
We can choose whether we just want to watch a video of some dogs or a bullet going through a watermelon in slow-mo or if we want to go down the rabbit hole of radicalization.
I can tell you right now I wouldn't be radicalized no matter how many times some company recommended it to me. If someone is getting those videos recommended to them, it's because they probably want to watch them. If they're clicking on the recommendations, it's because they definitely want to watch them.
It's crazy to me that the three parties here are: (1) the user, actively seeking out harmful content, (2) the uploader, actively creating and sharing harmful content, and (3) the provider, slightly reducing the friction in connecting 1 and 2 to make a profit.
And 3 is where you're most concerned. 🤦‍♂️
@LouisIngenthron I am really really glad most of our government officials aren't as excited about corporate-run dystopian societies as you seem to be, lol.
@ocdtrekkie And I'm glad that the federal judiciary isn't as excited about chilling speech as you seem to be.
@LouisIngenthron Sure, *some* safety systems help profits. Not all of that is good for society. For example, mistreating and banning sex workers appears to be good for business, so they build systems to do that, but promoting white supremacy and terrorism is also pretty good for business, so they don't build systems that protect users there at all.
It's really important that you learn about externalities: the indirect costs of business decisions.