To reiterate:
It is, without a doubt, true that the #fediverse is unsafe for a lot of marginalized groups in a variety of ways.
It is unsafe even for groups that are overrepresented here compared to society at large, such as queer folks.
It is also true that the fediverse needs a multipronged, _multilayered_ approach to these issues.
#Blocklists are necessary and vital, but I'm not convinced that either _consensus_ blocklists or _rapidly updating_ blocklists are the answer, or even have a place.
1/
Still. If blocklists learn from history, have a good postmortem process, and maintain open documentation about how they operate, I don't generally mind the attempt, even when I may have criticisms.
A good example of "how to blocklist" in this space, IMO, is @Seirdy: https://seirdy.one/posts/2023/05/02/fediverse-blocklists/
* Clear, documented process with documentation on a lot of sources
* Starts from sources that are individually examined, with decisions made about each and unshared ones eliminated
* Public postmortems
* Canaries
* Clear claims
2/
Do I still have criticisms? Sure. I wouldn't use the top-level list, and I have disagreements around interpretation or reasoning, but that's fine. I know that because those assumptions are documented, and there are often options to remove them depending on the exact assumption being made. I'd also prefer TTLs on decisions to force revisiting them.
So let me be clear: I have disagreements, but ultimately it's a good set of blocklists and a good example of how to run a blocklist set like that.
3/
We also have a feature that's basically unique to the #fediverse with #blocklists that are run here:
The permanent severing of connections without notice for _groups of mostly uninvolved people_.
This is simply a bad practice and it _demands_ more attention—and criticism—of blocklist patterns than we would have otherwise. Even ones that have historically been safe or might be safe in the future.
If blocklists didn't sever connections then automatic updating would not be nearly as bad.
4/
Given this behavior, _any_ attempt to implement a rapidly updating blocklist that you automatically import (be it run by #IFTAS or through #FSEP or something else) is _very likely to be abused_, and it is _going_ to cause problems even absent abuse cases.
People keep acting like this is up for debate?
We have literally decades of precedent here. It's going to happen.
My hardline: the _only_ ethical and responsible approach here is to _fix that first_ and tread carefully until then.
5/
I have other criticisms and thoughts here on How To Blocklist More Safely™ that I've talked about at length, especially if you intend to automate things. But until we fix _that_ aspect it's mostly debating chair arrangements on the Titanic, or sometimes ways to improve safety if you are willing to make other compromises (like updating once a month).
But it needs some sort of compromise: checking for severed connections first, updating no more than once a week, a clear retraction policy, etc.
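As a rough sketch of what "checking for severed connections first" could mean in practice: split an incoming blocklist into blocks that touch no existing relationships (safe to auto-apply) and blocks that would sever follows (held for human review). All names, domains, and data structures here are made up for illustration; a real version would query the instance's follow graph.

```python
# Hypothetical sketch: partition a candidate blocklist so that
# blocks which would sever existing connections are never
# auto-applied, only queued for moderator review.

def partition_blocklist(candidates, local_follows):
    """Split candidate domains into (auto_apply, needs_review).

    candidates:    domains an imported blocklist wants to block
    local_follows: handles local users follow, e.g. "alice@host.example"
    """
    auto_apply, needs_review = [], []
    for domain in candidates:
        # Severing check: does any local user follow an account there?
        if any(handle.endswith("@" + domain) for handle in local_follows):
            needs_review.append(domain)
        else:
            auto_apply.append(domain)
    return auto_apply, needs_review

# Example data (entirely fictional):
candidates = ["spam.example", "harassment.example", "borderline.example"]
local_follows = ["alice@borderline.example"]
safe, review = partition_blocklist(candidates, local_follows)
print(safe)    # ['spam.example', 'harassment.example']
print(review)  # ['borderline.example']
```

The point of the split is that the destructive case always gets a human in the loop, while uncontroversial blocks can still flow through quickly.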
6/
I and others have also talked at length about _other safety tools_.
This is an area that deserves serious research and investment. Several people are thinking and working in this space, and I am not one of them except in the most peripheral of senses, but it merits significant work.
I've posted a few things about it, but others have said a lot more, and a lot more eloquently.
So it isn't all-or-nothing either. There are other tools here that we can gainfully employ.
7/
Some examples include, but are not limited to:
* Trusted servers
* Moderator channels and keys that allow you to share, e.g., reasons for a block
* Consensus labeling
* Self-reported labeling
* Notices
* Using bearcaps in a variety of ways.
* "Pet names" and other "introduction" or "trust" based consensus systems.
* API proxies, especially with tools like rego.
The list goes on and on (and on and on…AND SHE WALKED IN LOOKING LIKE DYNA-MITE… wait why isn't there a gif for that)
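To make one item on that list concrete, here's a minimal sketch of consensus labeling under a quorum rule: a label applies only when enough independent trusted moderators agree on it. The quorum value, report format, and moderator names are all hypothetical illustrations, not any particular implementation.

```python
# Hypothetical sketch of consensus labeling: a label takes effect
# only when at least `quorum` trusted moderators independently
# reported it for the same content.
from collections import Counter

def consensus_label(reports, quorum=3):
    """reports: {moderator_id: [labels they applied]}."""
    counts = Counter()
    for moderator, labels in reports.items():
        counts.update(set(labels))  # one vote per moderator per label
    return {label for label, votes in counts.items() if votes >= quorum}

# Example: "spam" reaches quorum (3 votes), "harassment" does not (2).
reports = {
    "mod-a": ["spam"],
    "mod-b": ["spam", "harassment"],
    "mod-c": ["spam"],
    "mod-d": ["harassment"],
}
print(consensus_label(reports))  # {'spam'}
```

Nothing here severs connections: a label that fails quorum simply doesn't apply, which makes the failure mode much gentler than a consensus *block*.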
8/
@hrefna honestly, if a user is that concerned about seeing content they don't want to see, the solution is probably more along the lines of opting in than blocking out: only allowing content from preapproved speakers rather than reacting to content after the fact.
There have been longstanding ideas with names like WoT (Web of Trust), where folks are judged based on their in-person connections to you, and one solution is to only display content that's trusted by people the user trusts.
And this sort of thing can be tailored to the individual user, to empower them to have the experience they prefer.
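A minimal sketch of that idea, assuming a simple per-user trust graph and a hop limit (both hypothetical): content is shown only from accounts within a few trust links of the user, and the hop limit is exactly the kind of per-user knob that lets each person tune their own experience.

```python
# Hypothetical sketch of Web-of-Trust filtering: show content only
# from accounts reachable within `max_hops` trust links of the user.
from collections import deque

def trusted_set(user, trust_edges, max_hops=2):
    """Breadth-first walk of the trust graph out to max_hops."""
    seen = {user}
    frontier = deque([(user, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue  # don't extend trust past the hop limit
        for neighbor in trust_edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return seen

# Example graph (entirely fictional): mallory is 3 hops out,
# so with the default limit of 2 her content is not shown.
trust = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "carol": ["mallory"],
}
visible = trusted_set("me", trust)
print(sorted(visible - {"me"}))  # ['alice', 'bob', 'carol']
```

Raising or lowering `max_hops` per user is one way the same mechanism can be tailored to each person's tolerance, per the point above.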