
The () conundrum might be solvable by throwing these propositions into the mix:

  1. He’s “earning to give”
  2. He’s a utilitarian
  3. He thinks that “the end justifies the means”
  4. He’s explicitly risk-neutral

He simply computed the probability of getting away with financial engineering and deception, times the potential increase in well-being (from tossing billions at causes), and that seemed to him higher than the odds of being caught, times {investors’ and customers’ funds lost, plus the huge reputational damage that would inflict on the cause}.
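That decision rule can be sketched in a few lines. To be clear, every number below is an invented placeholder, not anyone’s actual estimate; the point is only the shape of the expected-value comparison a risk-neutral agent would make:

```python
# All figures are hypothetical placeholders, for illustration only.
p_away = 0.9              # assumed chance the deception is never discovered
gain = 50e9               # assumed well-being gain from donating billions
p_caught = 1 - p_away     # chance of being caught
loss = 30e9               # assumed funds lost plus reputational damage

# A risk-neutral agent presses the red button iff this is positive.
expected_value = p_away * gain - p_caught * loss
print(expected_value > 0)  # prints True with these placeholder numbers
```

With these made-up inputs the bet looks positive; the whole dispute is about whether any such estimates could ever be trusted.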

So he pressed the red button and bet the world. And he lost.

It’s not trivial to find the flaw in his reasoning, though.

@tripu Is this meant as “everything disappears”, i.e. all existing earths, with no way to select one of many (from previous wins) as the one you gamble with?
If that is the case, the math is pretty badly against it, as the chance to destroy the whole universe (assumed as negative) approaches 1 soon. Not something one would have to think about much to understand.

@admitsWrongIfProven

I think the simplest version of that thought experiment is one where it’s a one-off bet (double-or-nothing this universe, once). I guess the math supports saying yes to that, if the odds of winning are > .5, right?

My intuition for the reason to say no to the bet is that there’s a qualitative difference between existing and not existing.

If I were offered a .51 chance of transforming this universe into one where The Beatles had recorded 20 more songs, vs one where we had never known 20 songs by them, I would say yes. That works for me regardless of the no. of songs. But when it gets to “twice as many songs” vs “no songs at all” (ie, The Beatles never existed for all practical purposes), something changes, and I would reject the bet.

…I think 🙂

@tripu Well, with a one-off bet, it would simply be the question “Do you want to win something more than you want to not lose everything?”

The rest of this you have taken in another direction that i also agree with. “Win a little or lose a little, that’s ok” is something i would also say.

But just the idea that there are consecutive all-or-nothing bets that include betting the base that was there before just does not make sense mathematically. No matter how much one values anything, if it is ok to lose it, you have no expected value if step n takes everything you ever won.

I’m not good at formal math, but i would guess you had to lower the expected value of step n-1 if you do a step n by the probability to lose everything in step n.

Like in step 1, you expect 0.51 * 2 = 1.02 value. But if you actually do two steps, your expectation of getting zero is now 0.49 + (0.49^2). Am i doing this right? It should slowly approach 1 for expecting to get nothing at all.
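For what it’s worth, the compounding can be checked directly. After n double-or-nothing rounds won with probability p each, the chance of having been wiped out at some point is 1 − pⁿ; for two rounds that is 0.49 + 0.51 × 0.49 ≈ 0.74 (the second term is weighted by having survived round one), rather than 0.49 + 0.49². A quick sketch:

```python
p = 0.51  # chance of winning each double-or-nothing round

def ruin_probability(n: int) -> float:
    """Chance of having lost everything at least once within n rounds."""
    return 1 - p ** n

def naive_expected_value(n: int) -> float:
    """Expected bankroll after n rounds, starting from 1 unit."""
    return (2 * p) ** n

# Two rounds: 1 - 0.51**2 == 0.49 + 0.51 * 0.49 (not 0.49 + 0.49**2).
print(ruin_probability(2))        # ~0.7399
print(ruin_probability(100))      # ruin is all but certain...
print(naive_expected_value(100))  # ...yet the naive expected value keeps growing
```

So the intuition above is right: the ruin probability approaches 1, even while the textbook expected value grows without bound.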

Still onboard with the “scam scammers to donate to a good cause” thing, though.

@victoriano

I see. Seems that guy and I reached exactly the same conclusions about the fiasco! 👍

@tripu I think the flaw in his reasoning is simply second order effects. Being deceptive and risking people's investments can make altruists seem reckless and untrustworthy, and I don't think this bodes well for future (public) effective altruists.

@tripu Also, it does seem reckless, because he couldn’t really know who had invested in those funds; it’s not wise to appoint yourself judge like that, I think.

@tripu Finally, I think this misses the point of EA, which is inviting everyone to cooperate and donate to those most in need, not stealing from a minority -- even if it were a deserving one -- which is a much more robust, long-lasting and reliable solution to improving the world, I believe. Thanks for sharing.

@gnramires

Thank you! I think I mostly agree with you.

Definitely ’s behaviour is not condoned by per se: not all EAs are consequentialists, utilitarians or risk-neutral — and even those who are would probably have made different decisions, influenced also by common-sense morality and/or by moral uncertainty.

What I find fascinating is that his decision algorithm (or the one I presume he had) seems robust to me, and (perhaps) he failed “only” in weighing the terms of the equation appropriately, or in coming up with good estimates. Or perhaps his math was indeed perfect, and we just happen to live in that one universe where luck played against him!

I’m sure he threw “possible reputational damage to EA” into the equation. Given all we know about him, how could he not?

@tripu I don't think it's robust because he (#SBF) couldn't possibly solve the world's problems single-handedly. This wasn't an all or nothing for the future of mankind. But the idea of #EffectiveAltruism itself could be of fundamental importance to humanity's future, so I don't think this strategy is warranted, except in more clear cases where a good future for humanity were decisively at stake.

@tripu By the way, thank you as well. I'm feeling very welcome in #mastodon. It's very personal and different from reddit which I'm most used to. I still like reddit, but building personal, close and hopefully lasting connections seems very nice. 😀

@tripu I’m not sure I understand. Is this an “ends justify the means” kind of thought process or is there something more here?

@shadowsonawall

I think believing that “the end justifies the means” is a necessary ingredient in all this, yes.

But as I said, you also need a specific end to pursue (in ’s case, allegedly, “maximising aggregate/average utility”), and neutrality towards risk (so that you’re OK with extremely unlikely odds of hugely positive impact).

I don’t think we’d have the situation without any of those three components in the equation.

@tripu apologies but I’m still not sure I understand what you were originally saying. It sounded like you were saying behavior can be justified so long as the ends justify the means. “earning to give”, “utilitarian”, and “risk-neutral” are all just constraints for what would constitute acceptable ends in this hypothetical.

the problem with this line of reasoning is that you can justify any means simply by imagining a sufficiently positive end. Offering a veneer of legitimacy to as horrific an act as you want.

@shadowsonawall

No apologies needed! 🙂

It sounded like you were saying behavior can be justified so long as the ends justify the means.

That’s a tautology, isn’t it? If you believe the ends justify the means, any behaviour can be justified as long as that’s the necessary means for a sufficiently good outcome.

That’s what (I think) a consequentialist thinks.

The problem with this line of reasoning is that you can justify any means simply by imagining a sufficiently positive end. Offering a veneer of legitimacy to as horrific an act as you want.

Not really. It’s not enough to “imagine” a great outcome; that outcome should be feasible, and realistically follow from the behaviour we are trying to justify — and only the least destructive behaviour would be justified.

@shadowsonawall

If I want to stop a pickpocket, I could shoot them dead. That’s hardly justifiable from a consequentialist PoV, because arguably I could just as well knock them down, grab them by the arm, shout to get them blocked by other passers-by, call the police, snap a photo of them, etc. Most of us would argue that those other behaviours produce a better state of the world than instantly killing a person who was stealing a handbag.

Also, the issue you mention is not unique to consequentialism.

You can justify any means simply by imagining a sufficiently positive end.

The same happens if you’re a deontologist (is your set of rules perfect? can’t some of those rules be used to justify monstrous actions?), if your ethics is religion-based (can’t the love for God and following His command justify abhorrent decisions?), etc.

There’s no way of escaping the fallibility of a moral system, no matter which one you pick.

@tripu I strongly agree “There’s no way of escaping the fallibility of a moral system, no matter which one you pick.”

would this not imply that moral systems are themselves ill-suited to the real world? Their justifications and hedging necessarily too broad to be applicable to any given unique and/or complex situation?

it seems a defense of “I did it because this moral system said it was a good idea” would always be flimsy, at best.

@shadowsonawall

“There’s no way of escaping the fallibility of a moral system, no matter which one you pick”

≠

“All moral systems are equally fallible”

The impossibility of achieving perfection is no excuse not to try to get as close as we can.

Besides, it’s just impossible not to have a “moral system”. Who doesn’t have one? You have to be a rock or an automaton not to.

So, let’s keep refining them.

@tripu @shadowsonawall
Sorry i’m late to the party.

I would say “the end justifies the means” requires a very broad view of problems for utilitarianism to work right.

If you concentrate on one interaction, going out into the street and robbing someone might be seen as positive, if you do enough good with the money.
Viewed broadly, by allowing this you would make the action seem normal and acceptable, spreading fear. So there you have a great negative utility. Undermining the fabric of social interactions can hardly be eclipsed by building some orphanages or feeding the poor.

In the specific example, we should see what exactly the consequences are. I do not know every detail, but as far as i can see the consequences would be:

  • People investing in crypto lose their money (Would happen anyway)
  • People stop trusting crypto (Which is a good thing)

Correct?

@admitsWrongIfProven

Yes, but consequences of the fiasco are even broader: people stop trusting , and politicians seize the opportunity to meddle with blockchains for their own gain.

I think “people stop trusting crypto” is a bad thing. But that’s debatable.

You are right in that ripples of bad actions are also consequences and thus should be considered by consequentialists. But then, someone secretly committing a crime or deception for the greater good, and never being caught, is a good thing? That’s the riddle for hard-core consequentialists.

/cc @shadowsonawall

@tripu @shadowsonawall Oh yeah, the “but what if nobody catches me” trick. I think this ignores probabilities. If you are never caught, nice for you, but if it becomes a rule then someone will be caught and the damage will be done.
So it is not part of a utilitarian view but a personal justification that is extremely egoistic.

@tripu @shadowsonawall Oh, i kind of skipped the consequence part. I guess you are right, if people associate this with EA, then there is a notable negative effect. I would add it to the list if toots were editable ;-)

On “people trusting crypto”, i would argue that cryptocurrencies can soak up any amount of processing hardware and energy (given enough time and trust in them), making a switch to renewable energies impossible. Therefore, they must end for us to survive.

@admitsWrongIfProven

With the shift to , energy consumption is no longer an issue in . is almost there, and future blockchains won’t waste energy like does.

/cc @shadowsonawall

@tripu @shadowsonawall Ok, POS would solve the energy problem, i guess i just don’t see the point of a POS crypto. Is there anything to it other than “could be used as an alternative to paying with electronic bank transfers”?

Because POW enabled people to race to a (pointless, but existing) valuation while i do not understand how that component could exist in POS.

@admitsWrongIfProven

Not sure I get your question.

If you’re asking what advantages provides, I’d recommend from a number of good sources.

One example:

commonsense.news/p/is-bitcoin-

@tripu Hmm, i still don’t see it.

What i meant was that with POW, many could possibly gain participation against the interest of people in power, so there was that. It was flawed in its energy consumption, not in its fairness.
With POS, even if it is not the state, there is still a small group of wealthy people controlling everything.

The linked article argues about crypto in general and does not provide any insight into how a POS system could hinder concentrations in power.

@admitsWrongIfProven

With PoW, past a certain point in the growth of the network, it’s prohibitive for most bad actors to spend enough money on hardware and energy to become > 50% of the network and thus gain control.

With PoS, it’s exactly the same, just replace “spend enough money on hardware and energy” with “buy enough tokens”.

In any case, if someone/something has the means and the determination to grow past half of the whole network, they will control the consensus and therefore the money.

Both PoW and PoS are vulnerable to concentration of power — but both are less vulnerable than centralised systems, eg central banks or credit card companies.
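That symmetry can be illustrated with a toy model of PoS leader election (a deliberate simplification: real protocols add committees, slashing, verifiable randomness, etc.). Proposers are drawn in proportion to stake, so a majority stakeholder proposes a majority of blocks; the names and stake split below are made up:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Hypothetical stake distribution (arbitrary units).
stakes = {"attacker": 60, "honest_a": 25, "honest_b": 15}

# Toy leader election: each block's proposer is drawn with
# probability proportional to stake.
holders = list(stakes)
weights = [stakes[h] for h in holders]
blocks = random.choices(holders, weights=weights, k=10_000)

attacker_share = blocks.count("attacker") / len(blocks)
print(attacker_share)  # close to 0.60: majority stake, majority of blocks
```

The PoW analogue is identical with hash power in place of stake, which is the point: in both systems, control tracks whatever resource the consensus weighs.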

@tripu Thank you for your answer. I may not be entirely convinced, but i do see some sense in this.
In any case, i appreciate the effort you put into writing up this concise point.

@tripu Is there any strong hint he actually wanted to be altruistic? I only heard the bankruptcy news, nothing about motives yet.

@admitsWrongIfProven

He was a utilitarian from the cradle, an EA before he was a billionaire, and used to donate a lot of his income. Apparently Alameda Research donated 50% of their profits in the early days, too. After he became so wealthy he donated a lot of money to EA-approved organisations.

That at least seems true. It would be impossible to bribe or deceive so many people and organisations who, as far as we know, did receive actual money from him already.

@tripu So he was scamming scammers to donate to a well-curated list of charitable organizations? That, i approve of whole-heartedly!

Qoto Mastodon
