The #SBF (#FTX) conundrum might be solvable by throwing these propositions into the mix:
1. He's “earning to give”
2. He's a utilitarian
3. He thinks that “the end justifies the means”
4. He's explicitly risk-neutral
He simply computed the probability of getting away with financial engineering and deception, times the potential increase in well-being (from tossing billions at #EffectiveAltruism causes), and that seemed to him higher than the odds of being caught, times {investors' and customers' funds lost, plus the huge reputational damage that would inflict on the #EA cause}. (Sketched below.)
So he pressed the red button and bet the world. And he lost.
It's not trivial to find the flaw in his reasoning, though.
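For concreteness, here is one way to write that bet down. This is a minimal sketch of the decision rule I'm attributing to him; the symbols are placeholders of mine, not his actual numbers.

```latex
% A sketch of the expected-value comparison described above.
% All symbols are illustrative placeholders:
%   p = probability of getting away with the deception
%   G = gain in aggregate well-being from donating the proceeds
%   L = investors' and customers' funds lost if caught
%   R = reputational damage to the #EA cause if caught
% A risk-neutral expected-value maximiser takes the bet whenever
\[
  p \cdot G \;>\; (1 - p)\,(L + R)
\]
```

On that reading, the question is whether he mis-estimated the terms, or whether the rule itself is the flaw.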
@tripu This thread about it is interesting: https://twitter.com/edu_riera_/status/1591475588136853505?s=46&t=NW4tUlW9OShEun7om6vPBg
@tripu I think the flaw in his reasoning is simply second-order effects. Being deceptive and risking people's investments can make altruists seem reckless and untrustworthy, and I don't think this bodes well for future (public) effective altruists.
@tripu Also, it does seem reckless, because he couldn't really know who had invested in those funds; it's not wise to appoint oneself judge like that, I think.
@tripu Finally, I think this misses the point of EA, which is inviting everyone to cooperate and donate to those most in need -- a much more robust, long-lasting and reliable way to improve the world -- rather than stealing from a minority, even a deserving one, I believe. Thanks for sharing.
Thank you! I think I mostly agree with you.
#SBF's behaviour is definitely not condoned by #EffectiveAltruism per se: not all EAs are consequentialists, utilitarians or risk-neutral — and even those who are would probably have made different decisions, influenced also by common-sense morality and/or by moral uncertainty.
What I find fascinating is that his decision algorithm (or the one I presume he had) seems robust to me, and (perhaps) he failed “only” in weighing the terms of the equation appropriately, or in coming up with good estimates. Or perhaps his math was indeed perfect, and we just happen to live in that one universe where luck played against him!
I'm sure he threw “possible reputational damage to EA” into the equation. Given all we know about him, how could he not?
@tripu I don't think it's robust, because he (#SBF) couldn't possibly solve the world's problems single-handedly. This wasn't an all-or-nothing bet for the future of mankind. But the idea of #EffectiveAltruism itself could be of fundamental importance to humanity's future, so I don't think this strategy is warranted, except in clearer cases where a good future for humanity was decisively at stake.
@tripu I'm not sure I understand. Is this an "ends justify the means" kind of thought process or is there something more here?
I think believing that “the end justifies the means” is a necessary ingredient in all this, yes.
But as I said, you also need a specific end to pursue (in #SBF's case, allegedly, “maximising aggregate/average utility”), and neutrality towards risk (so that you're OK with betting on extremely unlikely odds of hugely positive impact; see the toy example below).
I don't think we'd have the #FTX situation if any of those three components were missing from the equation.
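To make the risk-neutrality ingredient concrete, here is a toy comparison (the numbers are mine, purely illustrative). A risk-neutral agent ranks options by expected value alone:

```latex
% Toy comparison under risk neutrality (illustrative numbers, not SBF's).
% Option A: a guaranteed $500M of good.
% Option B: a 0.1% chance of $1T of good, 99.9% chance of nothing.
\[
  \underbrace{0.001 \times \$1{,}000{,}000\,\mathrm{M}}_{\mathrm{EV}(B)\,=\,\$1{,}000\,\mathrm{M}}
  \;>\;
  \underbrace{1 \times \$500\,\mathrm{M}}_{\mathrm{EV}(A)\,=\,\$500\,\mathrm{M}}
\]
% A risk-neutral agent picks B, even though it fails 99.9% of the time;
% a risk-averse one would not.
```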
@tripu apologies but I'm still not sure I understand what you were originally saying. It sounded like you were saying behavior can be justified so long as the ends justify the means. "earning to give", "utilitarian", and "risk-neutral" are all just constraints for what would constitute acceptable ends in this hypothetical.
The problem with this line of reasoning is that you can justify any means simply by imagining a sufficiently positive end. Offering a veneer of legitimacy to as horrific an act as you want.
No apologies needed! 🙂
> _It sounded like you were saying behavior can be justified so long as the ends justify the means._
That's a tautology, isn't it? If you believe the ends justify the means, any behaviour can be justified as long as that's the necessary means for a sufficiently good outcome.
That's what (I think) a consequentialist thinks.
> _The problem with this line of reasoning is that you can justify any means simply by imagining a sufficiently positive end. Offering a veneer of legitimacy to as horrific an act as you want._
Not really. It's not enough to “imagine” a great outcome; that outcome should be feasible, and realistically follow from the behaviour we are trying to justify — and only the least destructive behaviour would be justified.
If I want to stop a pickpocket, I could shoot them dead. That's hardly justifiable from a consequentialist PoV, because arguably I could just as well knock them down, grab them by the arm, shout to get them blocked by other passers-by, call the police, snap a photo of them, etc. Most of us would argue that those other behaviours produce a better state of the world than instantly killing a person who was stealing a handbag.
Also, the issue you mention is not unique to consequentialism.
> _You can justify any means simply by imagining a sufficiently positive end._
The same happens if you're a deontologist (is your set of rules perfect? can't some of those rules be used to justify monstrous actions?), if your ethics is religion-based (can't the love for God and following His command justify abhorrent decisions?), etc.
There's no way of escaping the fallibility of a moral system, no matter which one you pick.
@tripu I strongly agree: "There's no way of escaping the fallibility of a moral system, no matter which one you pick."
Would this not imply that moral systems are themselves ill-suited to the real world, their justifications and hedging necessarily too broad to apply to any given unique and/or complex situation?
It seems a defense of "I did it because this moral system said it was a good idea" would always be flimsy, at best.
“There’s no way of escaping the fallibility of a moral system, no matter which one you pick”
≠
“All moral systems are equally fallible”
The impossibility of achieving perfection is no excuse not to try to get as close to it as we can.
Besides, it's just impossible not to have a “moral system”. Who doesn't have one? You have to be a rock or an automaton not to.
So, let's keep refining them.
[Exhibit A](https://conversationswithtyler.com/episodes/sam-bankman-fried/)