Do you believe there is one right way of doing moderation? Or that someone can "zero in" on that one right way? That's the impression I get from reading some of these "Trust & Safety" takes. It's also fundamentally wrong, and it encourages the wrong expectations from stakeholders.
I've seen cases where multiple Big Tech companies converge on the same bad policy decision (typically censorious in some weirdly specific way), and I think that's indicative of this problem.