I'm thinking about the comparison between anarchist mutual aid societies creatively getting things done without asking for permission and techbro disruptors moving fast and breaking things.
How do we make rules that prevent deliberate or accidental harm without miring everyone in bureaucracy? Even without resource allocation issues and differing ideas about what constitutes harm, it's challenging!
@peterdrake It's a difficult balance to strike.
I can relate to this from IT change management: too little of it, or too little engagement, and you spend more time recovering from incidents than doing anything else. Too much, and you're too bogged down to get anything done.
I came to see the value of those processes as mostly protecting against the downside, by forcing questions like these (sketched as a checklist below):
Did you actually think through what you're doing?
No, really: did you think about what other effects it will/might have, beyond the intended one(s)?
For each of the things that might go wrong, what's your plan for preventing/mitigating/recovering?
How will you monitor for each of those things?
Who needs to know this is going to happen?
Who needs to be standing by with their equivalent of a fire extinguisher?
How will you know it had the desired effect?
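Purely as illustration, here's a minimal sketch of that checklist encoded as a pre-change gate. Everything in it (the ChangeRequest class, its field names, the gaps() helper) is hypothetical, not any real change-management tool:

```python
# Hypothetical sketch: encode the pre-change questions as data that a
# change request must fill in before anyone approves it.
from dataclasses import dataclass, field


@dataclass
class ChangeRequest:
    description: str
    intended_effects: list[str]
    possible_side_effects: list[str]  # "what else might this do?"
    mitigations: dict[str, str] = field(default_factory=dict)  # risk -> prevent/mitigate/recover plan
    monitoring: dict[str, str] = field(default_factory=dict)   # risk -> how we'll watch for it
    notify: list[str] = field(default_factory=list)            # who needs to know this is happening
    standby: list[str] = field(default_factory=list)           # fire-extinguisher holders
    success_criteria: list[str] = field(default_factory=list)  # how we'll know it worked

    def gaps(self) -> list[str]:
        """Return the checklist questions this request hasn't answered yet."""
        problems = []
        if not self.possible_side_effects:
            problems.append("No side effects listed -- was it really thought through?")
        for risk in self.possible_side_effects:
            if risk not in self.mitigations:
                problems.append(f"No prevent/mitigate/recover plan for: {risk}")
            if risk not in self.monitoring:
                problems.append(f"No way to monitor for: {risk}")
        if not self.notify:
            problems.append("Nobody has been told this is happening.")
        if not self.standby:
            problems.append("Nobody is standing by with a fire extinguisher.")
        if not self.success_criteria:
            problems.append("No way to know it had the desired effect.")
        return problems
```

A half-finished request then gets flagged before it ships rather than after it breaks something:

```python
cr = ChangeRequest(
    description="Upgrade the primary database",
    intended_effects=["faster queries"],
    possible_side_effects=["replication lag during migration"],
)
print(cr.gaps())  # flags missing mitigation, monitoring, notifications, standby, success criteria
```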
The greatest resistance came from senior engineers who "know what they're doing."
The incidents with the greatest impact also tended to come from those same guys.