Definition 1. A world is any pair (P, w), where P is a finite set representing all people who have ever lived or will ever live, and w is a function from P to ℝ assigning to each person their total well-being.
Definition 2. The utility of a world (P, w) is defined as
\[ \sum_{h \in P} w(h). \]
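For instance (a made-up two-person world, just to illustrate the definition): if P = {a, b} with w(a) = 2 and w(b) = 3, then the utility of (P, w) is
\[ w(a) + w(b) = 2 + 3 = 5. \]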
Theorem. For any world (P, w) and any positive real number t, there exists a world (Q, y) such that the range of the function y contains no number greater than t and the utility of (Q, y) is greater than the utility of (P, w).
Proof is obvious.
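Spelling out one possible construction, in case it helps: let U be the utility of (P, w) and pick a positive integer n with n > 2U/t. Let Q be any set of n people and set y(q) = t/2 for every q in Q. Then the range of y is {t/2}, which contains no number greater than t, while the utility of (Q, y) is
\[ \sum_{q \in Q} y(q) = n \cdot \frac{t}{2} > U. \]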
So, what do you not understand?
@p this is cool and mathy, but you can't birth a few trillion people on the same planet, because physics (and economics)
@dpwiz Lol, same planet? Do you even far (and possibly not so far) future?
@p It becomes increasingly difficult to apply the notion of "at the same time" due to relativity.
If the Repugnant Conclusion requires us to consider infinities, then I just don't care, and T.U. is flawless for all practical purposes.
@dpwiz The Repugnant Conclusion does not need infinite sets of people. However, for the theorem to be applicable as I stated it, finite sets of all sizes must be allowed.
IMO total utilitarianism does have a problem with infinite sets of moral patients.
>all practical purposes
I don't think I know many sufficiently practical purposes where it can be applied. And as purposes become less practical (like thinking about how the future of humanity should be shaped), its problems become more important.
Just to be clear, total utilitarianism is my favorite utility function (even though it is not well defined because of the problem with infinite sets). However, I prefer to count the well-being of person-moments rather than of persons.
Can you explain what it is you're saying about special relativity? I don't know special relativity theory.
@p > However, for the theorem to be applicable as I stated it, finite sets of all sizes must be allowed.
That's it, the problem is in the theorem, not reality.
@dpwiz
>That's it, the problem is in the theorem, not reality.
I wonder if you wrote that to make me angry on purpose.
We can send Von Neumann probes everywhere and tile the universe (not literally the whole universe) with moral patients. Perhaps there will be a decision to either send them now, with the resulting moral patients having very slightly positive average well-being, or wait a year and increase their average well-being a lot. Perhaps, due to how exponential tiling of the universe works, if we launch now we will be able to create many more moral patients before the universe dies. Do all such scenarios seem incredibly improbable to you? They seem plausible enough to me.
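To make the tradeoff concrete (all numbers are made up purely for illustration): suppose launching now eventually creates 10^40 moral patients at average well-being 0.01, while waiting a year yields 10^39 patients at average well-being 1. Total utilitarianism then compares
\[ 10^{40} \cdot 0.01 = 10^{38} \quad \text{versus} \quad 10^{39} \cdot 1 = 10^{39}, \]
so with these particular numbers waiting wins, while a sufficiently larger head start for the early launch flips the comparison.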
@p 🦉
What would you really need infinitely more moral patients for, instead of the current number of them?
Especially considering that population traps are common knowledge.
@dpwiz Right now the utility function I like most is as follows:
Consider time to be discrete, so that the possible moments of time are 0, 1, 2, ... The utility of a world is the sum, over each moment of time and over each moral patient existing at that moment, of the well-being of that moral-patient-moment.
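In symbols (just restating the definition above; here P_t denotes the set of moral patients existing at moment t, and w_t(p) the well-being of patient p at that moment - notation introduced only for this restatement):
\[ \sum_{t = 0}^{\infty} \sum_{p \in P_t} w_t(p). \]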
By population traps do you mean the possibility that overpopulation of the Earth will happen and cause bad things? I rarely consider such things when thinking about the far future, because it seems to me that other huge changes will happen before that - maybe we will all die, maybe a superintelligence will kill us all and continue doing its thing, maybe that doesn't happen and we start colonizing nearby planets, maybe humans will stop living in organic bodies and will rarely produce new humans, etc. So overpopulation problems seem more like a problem of the near future to me.
@dpwiz Actually I am not sure how ethics should handle thought experiments with identical lives, or with simulations running (or not running) moral patients inside them.
@p "Identity is not that matters", I get that.