I've been participating in the fediverse for about 8.5 years now, and have run infosec.exchange as well as a growing number of other fediverse services for about 7.5 of those years. While I am generally not the target of harassment, as an instance administrator and moderator, I've had to deal with a very, very large amount of it. Most commonly that harassment is racism, but to be honest we get the full spectrum of bigotry here in different proportions at different times. I am writing this because I'm tired of watching the cycle repeat itself, I'm tired of watching good people get harassed, and I'm tired of the same trove of responses that inevitably follows. If you're just in it to be mad, I recommend chalking this up to "just another white guy's opinion" and moving on to your next read.

The situation nearly always plays out like this:

A black person posts something that gets attention. The post and/or the person's account clearly identifies them as being black.

A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

A small army of "helpful" fedi-experts jumps in with replies to point out how Mastodon provides all the tools one needs to block bad actors.

Now, more exasperated, the victim exclaims that it's not their job to keep racists in check - in fact, not having to do so is (usually) cited as a central reason for joining the fediverse in the first place!

About this time, the sea lions show up in replies to the victim, accusing them of embracing the victim role, trying to cause racial drama, and so on. After all, these sea lions are just asking questions, since they don't see any of what the victim is complaining about anywhere on the fediverse.

Lots of well-meaning white folk usually turn up about this time to shout down the sea lions and encourage people to believe the victim.

Then time passes... People forget... A few months later, the entire cycle repeats with a new victim.

Let me say that the fediverse has both a bigotry problem that tracks with what exists in society at large and a troll problem. The trolls will manifest as racist, anti-trans, anti-gay, anti-women, anti-furry, or whatever else suits their fancy when the opportunity presents itself. The trolls coordinate, cooperate, and feed off each other.

What has emerged on the fediverse, in my view, is a concentration of trolls on a certain subset of instances. Most instances do not tolerate trolls, and with some notable exceptions, trolls don't even bother joining "normal" instances any longer. There is no central authority that can prevent trolls from spinning up fediverse software on their own servers, using their own domain names, and doing their thing on the fringes. On centralized social media, people can be ejected, suspended, or banned, and unless they keep trying to make new accounts, that is the end of it.

The tools for preventing harassment on the fediverse are quite limited, and the specifics vary by software. For example, some software, like Pleroma/Akkoma, lets administrators filter out certain words, while Mastodon, which is what the vast majority of the fediverse uses, allows both instance administrators and users to block accounts and block entire domains, along with some things in the middle like "muting" and "limiting". These are blunt instruments.
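To make concrete why these are blunt instruments, here is a rough mental model of the checks described above. This is an illustrative sketch, not Mastodon's or Pleroma's actual implementation; the domain names and word list are made up.

```python
# Illustrative sketch of fediverse moderation primitives:
# domain blocks, account blocks, and keyword filters.
# Not any real platform's implementation.

BLOCKED_DOMAINS = {"troll.example"}            # instance-level domain block
BLOCKED_ACCOUNTS = {"@troll@other.example"}    # individual account block
FILTERED_WORDS = {"slur1", "slur2"}            # Pleroma/Akkoma-style word filter


def is_visible(author: str, text: str) -> bool:
    """Return False if a post is dropped by one of these blunt filters."""
    domain = author.rsplit("@", 1)[-1]
    if domain in BLOCKED_DOMAINS:
        return False        # everything from this instance disappears
    if author in BLOCKED_ACCOUNTS:
        return False        # this one account disappears
    if any(word in FILTERED_WORDS for word in text.lower().split()):
        return False        # keyword filter hit
    return True
```

Note the all-or-nothing shape: a domain block drops every account on an instance, good and bad alike, and there is nothing finer-grained in between.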

To some extent, the concentration of trolls works in instance administrators' favor: we can block a few dozen or a few hundred domains and solve 98% of the problem. Some solutions have been implemented, such as shared block lists of "problematic" instances, but those lists often become polluted with the politics of their maintainers, or at least that is the perception among some administrators. Other administrators take the view that people should be free to connect with whomever they like on the fediverse, and delegate the responsibility for deciding whom to block to the user.
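In practice, an admin consuming a shared block list usually wants to layer local judgment on top of it. A hypothetical sketch of that merge (the list format and domain names are invented for illustration):

```python
# Hypothetical: combining a shared blocklist with local overrides.
# An admin may disagree with the list maintainer on some entries.

shared_blocklist = ["troll.example", "spam.example", "disputed.example"]

local_allow = {"disputed.example"}   # admin overrides the maintainer here
local_block = {"extra.example"}      # admin's own additions

# Effective policy: shared list, minus local exceptions, plus local additions.
effective_blocks = (set(shared_blocklist) - local_allow) | local_block
```

The politics live in `shared_blocklist`; the `local_allow` set is where "polluted with the politics of the maintainers" gets litigated, one domain at a time.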

For these and many other reasons, we find ourselves with a very unevenly federated network of instances.

With this in mind, if we take a big step back and look at the cycle of harassment I described above, it looks like this:

A black person joins an instance that does not block many (or any) of the troll instances.

That black person makes a post that gets some traction.

Trolls on some of the problematic instances see the post, since they are not blocked by the victim's instance, and begin sending extremely offensive and harassing replies. A horrific torrent of vile racist responses ensues.

The victim expresses frustration with the amount of harassment they receive on Mastodon/the Fediverse, often pointing out that they never had such a problem on the big, toxic commercial social media platforms. There is usually a demand for Mastodon to "fix the racism problem".

Cue the sea lions. The sea lions are almost never on the same instance as the victim. And they are almost always on an instance that blocks those troll instances I mentioned earlier. As a result, the sea lions do not see the harassment. All they see is what they perceive to be someone trying to stir up trouble.

...and so on.

A major factor in your experience on the fediverse has to do with the instance you sign up to. Despite what the folks on /r/mastodon will tell you, you won't get the same experience on every instance. Some instances are much better at keeping the garden weeded than others. If a person signs up to an instance that is not proactive about blocking trolls, they will almost certainly be exposed to the wrath of trolls. Is that the Mastodon developers' fault for not figuring out a way to more effectively block trolls through their software? Is it the instance administrator's fault for not blocking troll instances/troll accounts? Is it the victim's fault for joining an instance that doesn't block troll instances/troll accounts?

I think the ambiguity here is why we continue to see the problem repeat itself over and over - there is no obvious owner nor solution to the problem. At every step, things are working as designed. The Mastodon software allows people to participate in a federated network and gives both administrators and users tools to control and moderate who they interact with. Administrators are empowered to run their instances as they see fit, with rules of their choosing. Users can join any instance they choose. We collectively shake our fists at the sky, tacitly blame the victim, and go about our days again.

It's quite maddening to watch it happen. The fediverse prides itself as a much more civilized social media experience, providing all manner of control to the user and instance administrators, yet here we are once again wrapping up the "shaking our fist at the sky and tacitly blaming the victim" stage in this most recent episode, having learned nothing and solved nothing.

@jerry
I agree with everything you observe; the cycle is both predictable and all too frequent.

What concerns me the most, and I will pick on Mastodon here as the predominant platform, is that the devs do not sufficiently consider safety a priority, nor, seemingly, a factor in their design decisions. It feels like it would take a fork to properly implement safety mechanisms to counter the apparent race to "help build engagement".

@doug @jerry I'm going to stand up for the devs here and say that they absolutely do factor in these things, just not always in the ways that are most apparent. There are a number of features that don't get added (at least as quickly as folks demand) specifically because of their impact on user privacy, safety, security, etc. (Quote toots, for example.)

There's a triad of engagement, safety, and accessibility that has to be factored into everything, and then there's the question of how those features are maintained going forward.

@vmstan @doug Additionally, I am not sure what additional safety mechanisms are missing, to be honest. Perhaps making block lists more frictionless? Allowing admins to block certain words? (Which, by the way, would cause its own backlash for filtering out legitimate uses of some words)...
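To illustrate the word-filter backlash problem: a naive substring filter flags innocent words that happen to contain a filtered term, while a stricter word-boundary match avoids those false positives but misses obfuscations. A hypothetical sketch, using "ass" as a stand-in for a filtered word:

```python
import re

# Placeholder filter list for illustration only.
FILTERED = ["ass"]


def naive_hit(text: str) -> bool:
    # Substring match: also flags innocent words like "classic" or "passage".
    return any(word in text.lower() for word in FILTERED)


def word_boundary_hit(text: str) -> bool:
    # Word-boundary match avoids those false positives, but then
    # misses deliberate spellings like "a.s.s" or "a s s".
    return any(re.search(rf"\b{re.escape(word)}\b", text.lower())
               for word in FILTERED)
```

Neither approach is clearly right, which is why any admin-level word filter invites a round of arguments no matter how it is built.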

@jerry I have no idea about the programming. But it would already help if privately blocking people were a real block and not only a "silencing". I can often still read everything in profiles I blocked long ago (only sometimes is the profile "not available"). So trolls often take screenshots to harass the people who have blocked them.

The other feature I liked on X: closing a post to comments entirely. A lot of people miss it.


@NatureMC @jerry @vmstan @doug just commented on the last one. Seems to be a big topic, but we diverge on the usefulness of silencing
qoto.org/@mapto/11289062161199

@mapto @jerry @vmstan @doug Without having time to read so many discussions, I'll give you an example: imagine you are a black queer woman and want to post about politics relevant to you. You post only as a service for your community and don't want a discussion. Automatically, you get all the mansplainers, trolls, etc. commenting that you can imagine. For your mental health, you don't want that: not just invisible to you, but no comments at all. But the message is important for your community.

@mapto @jerry @vmstan @doug On Birdcrap you could decide: ok, that's a service post, no comments.
Often it's so bad on Mastodon (being a woman is enough) that I no longer talk about important topics. Instead of making my own posts, I boost bot messages so the comments land there.
This problem silences a lot of people. We should listen to those affected, not just those who never experience it.

@NatureMC @jerry @vmstan @doug thanks for the illustrative example.

The big problem is that the technical discussion keeps running into further and further complications; see the latest update and invitation to contribute in the discussion here: socialhub.activitypub.rocks/t/

This is why I thought a partial solution is better than nothing.

I must say, @trwnh is doing a great job trying to consolidate the discussion and "translate" across different communities.

@mapto As I said, I can only talk about the user side. I am a bloody layperson when it comes to technical "innards".
Unfortunately, people often leave Mastodon quickly because they lack certain options or their concerns are not taken seriously by the creators. Yet this is precisely where the opportunity lies not to be a predominantly white male network.
No question: A lot of admins/engaged people do a wonderful job. But the key developers make the decisions.

