> The Post said it spoke to a Twitter employee under the condition of anonymity, who explained that toxic accounts are supposed to get flagged as ineligible for advertising. It seems that hasn’t been happening, probably because Musk fired the employees tasked with it.

The important bit here is that social media monopolists *know* what "toxic" means, and can deal with toxic accounts — as long as it's related to advertising revenues.

But for general safety of people on their platform? Nope.

@rysiek

You can accept a much higher false-positive rate for refusing to run ads on an account than for refusing service to that account, though. Or what intervention were you envisioning they could apply?
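
(A back-of-the-envelope sketch of that trade-off, with entirely assumed numbers rather than anything from Twitter: when genuinely toxic accounts are rare, even a reasonable classifier's flags are mostly false alarms, which is cheap if the only consequence is pulled ads but expensive if the consequence is losing the account.)

```python
# Illustrative only: base rate, TPR and FPR are assumptions, not Twitter data.
base_rate = 0.01  # say 1% of accounts are genuinely "toxic"
tpr = 0.90        # the detector catches 90% of toxic accounts...
fpr = 0.10        # ...but also flags 10% of benign ones

flagged_toxic = base_rate * tpr          # 0.009
flagged_benign = (1 - base_rate) * fpr   # 0.099
precision = flagged_toxic / (flagged_toxic + flagged_benign)

print(f"share of flagged accounts that are actually toxic: {precision:.1%}")
# ~8.3% -- tolerable if the flag only pulls ads, hard to defend as grounds
# for suspending the other ~91.7%.
```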

@robryk I am not responsible for solving problems huge centralized social media platforms created by being huge, centralized, and platforming toxic people. 🤷‍♀️

But it's important to recognize that when they *want* to do something about them, they apparently can.

I'm sure if they *wanted* to find ways to deal with their toxicity as directed at other people on these platforms, they would too. And that's my point. Letting toxic accounts be toxic to others is a *business decision* there.

@rysiek

Putting aside whether I agree with the conclusion, I wanted to point out that the evidence you provided for this claim is not evidence for this claim: being able to have a high-false-positive-rate detector of toxicity (for some definition thereof, which matters less here precisely because of the high FPR) isn't much (any?) evidence for being able to have a lower-FPR detector of toxicity.

(Not putting it aside, I think that it's immaterial whether they _can_ do it: they chose to put themselves in a position where they need to do it, so inability is not a good excuse for anything.)

@robryk the false-positive rate matters less here.

What matters is that (a) they understand what "toxic" means; (b) they have a way of deciding that an account is "toxic"; and (c) they choose to use that only to protect the advertisers, not other people on the platform.

> they chose to put themselves in a position where they need to do it, so inability is not a good excuse for anything

💯


@rysiek

I don't think they need to understand what toxic is for the purposes they have. Given (c), they are working in a nonadversarial situation, because those accounts have no incentive to try to confuse this process. Thus, they can be using some very poor proxy for "toxic" as their definition of the thing they want to detect.

IOW anti-abuse measures are much harder than abuse-detection measures; even defining the problem is harder.

@robryk all of this is true, and you're making some good points. However, I stand by my statement: these companies flag certain accounts as "toxic" for certain purposes. This is already implemented and working for the most important part of their business: advertising.

They don't get to pretend to not know what "toxic" means when people ask them to do something about the toxic accounts on their platforms.

@rysiek

ISTM that the crux of your argument is that "toxic" has the same meaning in both cases. Is it? If so, why do you think it does? (E.g. I would expect that for advertising purposes Twitter does not care about toxicity via DMs.)

@robryk my argument is that where there is a will, there is a way. I've heard Twitter drones talk about how "difficult" it is for them to define and deal with toxicity on their platform dozens of times. But when ad revenue is on the line — bam, they have a solution!

Of course "toxic" has somewhat different meanings in both cases, but the sets of ad-toxic and people-toxic accounts will have a huge overlap. They could start with that. Silly idea: flag such accounts publicly. "Too toxic for ads" 🤷‍♀️

@rysiek

I think that at first approximation these two problems are only distantly related. You can argue that Twitter is solving a problem in this area and is not solving a problem in that area, but I think that this is a vastly weaker[1] argument than pointing out that the latter problem is a consequence of their choices.

The public flagging is interesting: I have no prediction as to whether it would make people try to avoid or to seek out being so marked.

[1] eristically maybe not so, sadly

@robryk for sure, but it would at least be a clear signal to other people on the platform.

Add an opt-in setting, "auto-mute/block all ad-toxic accounts", and it becomes interesting. Now ad-toxicity is also tied to reach, and to the ability to troll/harass people.
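
(A minimal sketch of what such an opt-in filter could look like on the client side; every name and field here is invented for illustration, and no real Twitter or Mastodon API is implied.)

```python
# Hypothetical sketch: assumes the "ineligible for ads" flag were exposed
# per account, which no platform currently does.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    ad_ineligible: bool  # the imagined public "too toxic for ads" flag

@dataclass
class Preferences:
    auto_mute_ad_toxic: bool = False  # the proposed opt-in setting

def should_show(author: Account, prefs: Preferences) -> bool:
    """Hide posts from ad-ineligible accounts when the user has opted in."""
    return not (prefs.auto_mute_ad_toxic and author.ad_ineligible)

troll = Account("@example", ad_ineligible=True)
print(should_show(troll, Preferences(auto_mute_ad_toxic=True)))   # False: muted
print(should_show(troll, Preferences(auto_mute_ad_toxic=False)))  # True: shown
```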

@rysiek

Doing so introduces a feedback loop. That incentivizes people to try to game your system (not only to avoid being marked as bad themselves, but also to try to get random bystanders so marked).

If a majority of such behaviour is premeditated and organized, I expect that this will ~destroy the signal. If most of it is single trolls acting on the spur of the moment, I agree that this would probably be helpful.
