@x_cli how would a consensus protocol help with moderation? 🧐 What was your idea?
@netbroom Centralized denylists (a.k.a. RBLs) are subject to arbitrary decisions; we have seen this several times in the past with email. By using a consensus protocol that is resistant to Byzantine failures, we could reduce the risk of arbitrary decisions. The problem is that most Byzantine protocols are distributed, yet we still have to trust a specific set of "generals" chosen by a central authority. Federated Byzantine Agreement might be interesting to explore: it was introduced with the Stellar Consensus Protocol, but I believe it could be used for other purposes.
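To give a feel for the difference: in FBA there is no globally fixed validator set — each node declares its own "quorum slices" (sets of peers whose unanimous agreement it accepts), and quorums emerge from the overlap of those choices. A minimal sketch of that core idea (the instance names and slice choices are hypothetical, and this is nowhere near Stellar's actual protocol):

```python
# Hypothetical sketch of the core FBA idea: every node picks its own
# "quorum slices" (sets of peers it trusts) instead of a central
# authority fixing one global set of "generals".

# Each node maps to a list of quorum slices; a slice is a set of nodes
# whose unanimous agreement convinces that node.
slices = {
    "a.example": [{"a.example", "b.example", "c.example"}],
    "b.example": [{"a.example", "b.example", "c.example"},
                  {"b.example", "c.example"}],
    "c.example": [{"b.example", "c.example"}],
}

def is_quorum(nodes: set[str]) -> bool:
    """A set of nodes is a quorum if it contains at least one full
    slice of every one of its members."""
    return all(
        any(slice_ <= nodes for slice_ in slices[n])
        for n in nodes
    )

print(is_quorum({"b.example", "c.example"}))   # True: contains a slice of b and of c
print(is_quorum({"a.example", "b.example"}))   # False: no slice of a fits inside it
```

The point is only that trust topology is chosen node-by-node rather than handed down — which is exactly what makes it tempting for federated moderation.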
@x_cli But the main part of the blocklist problem is social – what evidence is needed for a server to decide to block? BFT only solves a minor technical subproblem (eventual consistency) of the whole thing, and even that doesn't strictly need solving in practice – if the evidence (in whatever form) is published openly, it's usually enough for individual servers to try to acquire it and make a decision locally based on whatever evidence they manage to access. The "making a decision" part is hard; the evidence aggregation less so.
So if I'm right (I might have misunderstood your proposal) I am sad to inform you that you committed the Cardinal Sin of Cryptobros – proposing a technical solution for a relatively simple part of a mostly social problem. ;<
// Also, this is probably not even Sybil-resistant, but I don't remember Stellar well enough to be absolutely sure.
@netbroom
@timorl
I perfectly understand why you might think I missed the point by proposing Federated Byzantine Agreement for a social problem: deciding what to do based on a set of evidence. Actually, it is possible to use that protocol for this social problem by treating as "traitors" the moderators who disagree with the consensus (i.e. those who consider that an instance/account should, or should not, be blocked).
As an example, a Byzantine protocol is used by the authors of Tournesol, an app developed to collect open data about video recommendations in order to train an ML model.
Main website:
https://tournesol.app
Whitepaper: https://arxiv.org/abs/2107.07334
Please forgive me if I misunderstood your message. It has been a very long day and thinking is quite hard at the moment 😅
@x_cli Huh, I know some of the people involved in that project, cool.
Anyway, this way you are (implicitly) applying a naive majoritarian/democratic solution to the social part of the problem. That's not necessarily wrong (although I suspect it wouldn't be good enough for many minorities), but the BFT itself doesn't bring much to the table – you could just as well have people personally publish signed votes and have instances collect and process them locally. It's not a huge problem if some instances disagree on a ban anyway, and even that should happen rarely, since votes are rarely close to equilibrium.
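The "signed votes, tallied locally" alternative is simple enough to sketch. Below, HMAC with a per-voter secret stands in for a real public-key signature scheme (Ed25519 or similar) purely to keep the example dependency-free, and all voter names and targets are made up:

```python
import hashlib
import hmac
from collections import Counter

# Hypothetical voter keys; in a real system these would be public keys
# and votes would carry Ed25519-style signatures instead of HMAC tags.
voter_keys = {"alice": b"k1", "bob": b"k2", "carol": b"k3"}

def sign_vote(voter: str, target: str, verdict: str) -> dict:
    """A voter publishes a vote on a target instance, tagged so that
    anyone holding the key material can verify it wasn't tampered with."""
    msg = f"{voter}:{target}:{verdict}".encode()
    tag = hmac.new(voter_keys[voter], msg, hashlib.sha256).hexdigest()
    return {"voter": voter, "target": target, "verdict": verdict, "sig": tag}

def tally(target: str, votes: list[dict]) -> str:
    """Each instance verifies signatures and counts verdicts locally;
    no consensus protocol is needed for this."""
    counts = Counter()
    for v in votes:
        msg = f"{v['voter']}:{v['target']}:{v['verdict']}".encode()
        expected = hmac.new(voter_keys[v["voter"]], msg,
                            hashlib.sha256).hexdigest()
        if v["target"] == target and hmac.compare_digest(v["sig"], expected):
            counts[v["verdict"]] += 1
    return "block" if counts["block"] > counts["allow"] else "allow"

votes = [sign_vote("alice", "spam.example", "block"),
         sign_vote("bob", "spam.example", "block"),
         sign_vote("carol", "spam.example", "allow")]
print(tally("spam.example", votes))  # prints "block"
```

Different instances running `tally` over whatever subset of votes they managed to fetch can of course reach different verdicts — which, as noted above, is rarely a problem in practice.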
And there are still Sybil attacks – e.g. Tournesol seems to be mostly banking on people not abusing it, but it's not strictly speaking secure against them; they seem to only require an email? (I haven't read the paper – perhaps they have some nifty abuse detection, but I strongly suspect it's centralized if they do.) If something (whether Tournesol or this hypothetical moderation system) became popular enough, there would be people motivated enough to abuse any such vulnerability. :/