The () conundrum might be solvable by throwing these propositions into the mix:

1. He's “earning to give”
1. He's a utilitarian
1. He thinks that “the end justifies the means”
1. He's explicitly risk-neutral

He simply computed the probability of getting away with financial engineering and deception, times the potential increase in well-being (from tossing billions at causes), and that seemed to him higher than the odds of being caught, times the cost (investors' and customers' funds lost, plus the huge reputational damage that would inflict on the cause).
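
To make that calculation concrete, here's a minimal sketch of the expected-value comparison; every probability and dollar figure below is invented purely for illustration and is not anyone's actual estimate:

```python
# A minimal sketch of the risk-neutral expected-value bet described above.
# All numbers are hypothetical; none come from the actual case.

p_get_away = 0.9        # subjective probability the deception is never uncovered
gain_if_away = 50e9     # well-being bought by tossing billions at causes ($)
loss_if_caught = 30e9   # investors' and customers' funds lost, plus the
                        # reputational damage to the cause ($)

# Risk-neutral: only the expectation matters, not the spread of outcomes.
expected_value = p_get_away * gain_if_away - (1 - p_get_away) * loss_if_caught

print(f"Expected value of the bet: ${expected_value:,.0f}")
# With these made-up numbers the expectation is hugely positive,
# so a purely risk-neutral utilitarian presses the button.
```

Risk-neutrality is what lets that single expectation number settle the decision; a risk-averse agent would weigh the tail where everything is lost much more heavily.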

So he pressed the red button and bet the world. And he lost.

It's not trivial to find the flaw in his reasoning, though.

@tripu I'm not sure I understand. Is this an "ends justify the means" kind of thought process or is there something more here?

@shadowsonawall

I think believing that “the end justifies the means” is a necessary ingredient in all this, yes.

But as I said, you also need a specific end to pursue (in 's case, allegedly, “maximising aggregate/average utility”), and neutrality towards risk (so that you're OK with extremely unlikely odds of hugely positive impact).

I don't think we'd have this situation if any of those three components were missing from the equation.

@tripu apologies but I'm still not sure I understand what you were originally saying. It sounded like you were saying behavior can be justified so long as the ends justify the means. "earning to give", "utilitarian", and "risk-neutral" are all just constraints for what would constitute acceptable ends in this hypothetical.

the problem with this line of reasoning is that you can justify any means simply by imagining a sufficiently positive end, offering a veneer of legitimacy to as horrific an act as you want.

@shadowsonawall

No apologies needed! 🙂

> _It sounded like you were saying behavior can be justified so long as the ends justify the means._

That's a tautology, isn't it? If you believe the ends justify the means, any behaviour can be justified as long as that's the necessary means for a sufficiently good outcome.

That's what (I think) a consequentialist thinks.

> _The problem with this line of reasoning is that you can justify any means simply by imagining a sufficiently positive end. Offering a veneer of legitimacy to as horrific an act as you want._

Not really. It's not enough to “imagine” a great outcome; that outcome should be feasible, and realistically follow from the behaviour we are trying to justify — and only the least destructive behaviour would be justified.


@shadowsonawall

If I want to stop a pickpocket, I could shoot them dead. That's hardly justifiable from a consequentialist PoV, because arguably I could just as well knock them down, grab them by the arm, shout to get them blocked by other passers-by, call the police, snap a photo of them, etc. Most of us would argue that those other behaviours produce a better state of the world than instantly killing a person who was stealing a handbag.

Also, the issue you mention is not unique to consequentialism.

> _You can justify any means simply by imagining a sufficiently positive end._

The same happens if you're a deontologist (is your set of rules perfect? can't some of those rules be used to justify monstrous actions?), if your ethics is religion-based (can't the love for God and following His command justify abhorrent decisions?), etc.

There's no way of escaping the fallibility of a moral system, no matter which one you pick.

@tripu I strongly agree: "There's no way of escaping the fallibility of a moral system, no matter which one you pick."

would this not imply that moral systems are themselves ill-suited to the real world? Their justifications and hedging necessarily too broad to be applicable to any given unique and/or complex situation?

it seems a defense of "I did it because this moral system said it was a good idea" would always be flimsy, at best.

@shadowsonawall

“There’s no way of escaping the fallibility of a moral system, no matter which one you pick”

≠

“All moral systems are equally fallible”

The impossibility of achieving perfection is no excuse not to try to get as close as we can.

Besides, it's just impossible not to have a “moral system”. Who doesn't have one? You have to be a rock or an automaton not to.

So, let's keep refining them.
