Definition 1. A world is any pair (P, w), where P is a finite set representing all people who have ever lived or will ever live, and w is a function from P to ℝ giving each person's total well-being.
Definition 2. The utility of a world (P, w) is defined as
\[ \sum_{h \in P} w(h). \]
Theorem. For any world (P, w) and any positive real number t, there exists a world (Q, y) such that the range of y contains no number greater than t and the utility of (Q, y) is greater than the utility of (P, w).
Proof. Let n be large enough that n · t/2 exceeds the utility of (P, w), take Q to be any set of n people, and set y(h) = t/2 for every h in Q. ∎
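The construction in the proof can be sketched numerically. This is just an illustration of the argument; all names and numbers below are my own, not part of the thread:

```python
import math

def utility(world):
    """Total utility of a world: the sum of each person's well-being."""
    return sum(world)

def dominating_world(world, t):
    """Construct the theorem's (Q, y): a world whose well-being values
    stay below the cap t, but whose total utility exceeds that of `world`,
    simply by containing enough people."""
    per_person = t / 2                                     # strictly below t
    n = math.ceil(max(utility(world), 0.0) / per_person) + 1  # enough people to win
    return [per_person] * n

original = [9.0, 7.5, 8.2]   # a small population of very happy people
cap = 0.01                   # a very low well-being ceiling
bigger = dominating_world(original, cap)

assert max(bigger) < cap                       # nobody is even slightly happy...
assert utility(bigger) > utility(original)     # ...yet total utility is higher
```

This is exactly the Repugnant Conclusion in miniature: a vast population of barely-positive lives out-scores a small population of excellent ones.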
So, what do you not understand?
@p This is cool and mathy, but you can't birth a few trillion people on the same planet, because physics (and economics).
@dpwiz Lol, same planet? Do you even think about the far (and possibly not-so-far) future?
@p It becomes increasingly difficult to apply the notion of a decision made "at the same time" across such distances, due to relativity.
If the Repugnant Conclusion requires that we consider infinities, then I just don't care, and T.U. is flawless for all practical purposes.
@dpwiz The Repugnant Conclusion does not need infinite sets of people. However, for the theorem to be applicable as I stated it, finite sets of all sizes must be allowed.
IMO total utilitarianism does have a problem with infinite sets of moral patients.
>all practical purposes
I don't think I know many purposes practical enough for it to be applied. And as the purposes become less practical (like thinking about how the future of humanity should be shaped), its problems become more important.
Just to be clear, total utilitarianism is my favorite utility function (even though it is not well defined because of the problem with infinite sets). However, I prefer to count the well-being of person-moments rather than of persons.
Can you explain what you're saying about special relativity? I don't know special relativity.
@p > However for the theorem to be applicable as I stated it, finite sets of all sizes must be allowed.
That's it, the problem is in the theorem, not reality.
@dpwiz
>That's it, the problem is in the theorem, not reality.
I wonder if you wrote that to make me angry on purpose.
We can send von Neumann probes everywhere and tile the universe (not literally the whole universe) with moral patients. Perhaps there will be a decision: send them now, with the resulting patients having a very slightly positive average well-being, or wait a year and increase their average well-being a lot. Perhaps, because of how exponentially tiling the universe works, launching now would let us create many more moral patients before the universe dies. Do all such scenarios seem incredibly improbable to you? They seem plausible enough to me.
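A toy arithmetic version of that trade-off, with every number invented purely for illustration (doubling rate, horizon, and well-being figures are all assumptions, not claims about actual cosmology):

```python
# Toy model of the "launch von Neumann probes now vs. wait a year" decision.
# All figures are invented; the point is only that exponential expansion can
# make total utility favor launching at a tiny average well-being.

DOUBLINGS_PER_YEAR = 12   # assumed: the colonized frontier doubles monthly
HORIZON_YEARS = 20        # assumed: expansion stops after 20 years

def total_utility(wait_years, avg_wellbeing):
    """Moral patients reached by the horizon, times their average well-being."""
    doublings = DOUBLINGS_PER_YEAR * (HORIZON_YEARS - wait_years)
    return (2 ** doublings) * avg_wellbeing

launch_now  = total_utility(wait_years=0, avg_wellbeing=0.01)  # barely positive lives
wait_a_year = total_utility(wait_years=1, avg_wellbeing=1.0)   # 100x happier lives

# One year of lost doublings (2**12 = 4096x fewer patients) outweighs the
# 100x gain in average well-being, so total utilitarianism says: launch now.
assert launch_now > wait_a_year
```

Flip the assumptions (slower doubling, or a much larger well-being gain from waiting) and the inequality reverses, which is why the decision is not obviously hypothetical.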
@p 🦉
Why would you actually need ever more moral patients rather than the current number of them?
Especially considering that population traps are common knowledge.
@dpwiz I guess you're saying here that in practice agents will only extremely rarely face decisions where some pair of options looks like the Repugnant Conclusion. I don't really get why you think so. If humanity (or whatever humanity gets transformed into; excluding the variant where a totally alien mind does whatever it wants with us) reaches an extremely high level of technology (I'd give that about a 20% probability), then we might (I hope) start thinking about how to tile the universe in an optimal way. And then such decisions might happen.