The #SBF (#FTX) conundrum might be solvable by throwing these propositions into the mix:
1. He's “earning to give”
1. He's a utilitarian
1. He thinks that “the end justifies the means”
1. He's explicitly risk-neutral
He simply computed the probability of getting away with financial engineering and deception times the potential increase in well-being (by tossing billions at #EffectiveAltruism causes), and that seemed to him higher than the odds of being caught times {investors' and customers' funds lost, plus the huge reputational damage that would inflict on the #EA cause}.
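In rough expected-value terms (my own notation, a sketch of the alleged reasoning rather than anything he actually wrote down):

```latex
% Hypothetical expected-value comparison (illustrative symbols, not SBF's):
%   p = probability of getting away with it
%   G = gain in aggregate well-being from tossing billions at #EA causes
%   L = customers' and investors' funds lost, plus reputational damage to #EA
% A risk-neutral expected-utility maximiser takes the bet iff
\[
  p \cdot G \;>\; (1 - p) \cdot L
\]
```

Risk neutrality does the heavy lifting here: only the expected values matter, so a large enough G can compensate for a tiny p.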
So he pressed the red button and bet the world. And he lost.
It's not trivial to find the flaw in his reasoning, though.
@tripu I'm not sure I understand. Is this an "ends justify the means" kind of thought process or is there something more here?
I think believing that “the end justifies the means” is a necessary ingredient in all this, yes.
But as I said, you also need a specific end to pursue (in #SBF's case, allegedly, “maximising aggregate/average utility”), and neutrality towards risk (so that you're OK with betting on a tiny chance of a hugely positive impact).
I don't think we'd have the #FTX situation if any of those three components were missing from the equation.
@tripu apologies but I'm still not sure I understand what you were originally saying. It sounded like you were saying behavior can be justified so long as the ends justify the means. "earning to give", "utilitarian", and "risk-neutral" are all just constraints for what would constitute acceptable ends in this hypothetical.
The problem with this line of reasoning is that you can justify any means simply by imagining a sufficiently positive end. Offering a veneer of legitimacy to as horrific an act as you want.
No apologies needed! 🙂
> _It sounded like you were saying behavior can be justified so long as the ends justify the means._
That's a tautology, isn't it? If you believe the ends justify the means, any behaviour can be justified as long as that's the necessary means for a sufficiently good outcome.
That's what (I think) a consequentialist thinks.
> _The problem with this line of reasoning is that you can justify any means simply by imagining a sufficiently positive end. Offering a veneer of legitimacy to as horrific an act as you want._
Not really. It's not enough to “imagine” a great outcome; that outcome should be feasible, and realistically follow from the behaviour we are trying to justify — and only the least destructive behaviour would be justified.
@tripu I strongly agree: "There's no way of escaping the fallibility of a moral system, no matter which one you pick."
Would this not imply that moral systems are themselves ill-suited to the real world? That their justifications and hedging are necessarily too broad to apply to any given unique and/or complex situation?
It seems a defense of "I did it because this moral system said it was a good idea" would always be flimsy, at best.
@shadowsonawall
“There’s no way of escaping the fallibility of a moral system, no matter which one you pick”
≠
“All moral systems are equally fallible”
The impossibility of achieving perfection is no excuse not to try to get as close to it as we can.
Besides, it's just impossible not to have a “moral system”. Who doesn't have one? You have to be a rock or an automaton not to.
So, let's keep refining them.