Definition 1. A world is any pair (P, w), where P is a finite set representing all people who have ever lived or will ever live, and w is a function from P to ℝ giving the total well-being of each person in the world.
Definition 2. The utility of a world (P, w) is defined as
\[ \sum_{h \in P} w(h). \]
Theorem. For any world (P, w) and any positive real number t, there exists a world (Q, y) such that the range of y contains no number greater than t and the utility of (Q, y) is greater than the utility of (P, w).
Proof is obvious.
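For concreteness, one way to spell it out: let U be the utility of (P, w), pick a natural number n with n · t/2 > U, let Q be any set of n people, and set
\[ y(h) = \tfrac{t}{2} \quad \text{for all } h \in Q. \]
Then the range of y is {t/2}, which contains nothing greater than t, and the utility of (Q, y) is n · t/2 > U.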
So, what do you not understand?
@p this is cool and mathy, but you can't birth a few trillion people on the same planet because physics (and economics)
@dpwiz Lol, same planet? Do you even far (and possibly not so far) future?
@p It is increasingly difficult to apply a decision "at the same time" due to relativity.
If the Repugnant Conclusion requires that we consider infinities, then I just don't care, and T.U. is flawless for all practical purposes.
@dpwiz The Repugnant Conclusion does not need infinite sets of people. However, for the theorem to be applicable as I stated it, finite sets of all sizes must be allowed.
IMO total utilitarianism does have a problem with infinite sets of moral patients.
>all practical purposes
I don't think I know many purposes practical enough for it to be applied to. And as purposes become less practical (like thinking about how the future of humanity should be shaped), its problems become more important.
Just to be clear, total utilitarianism is my favorite utility function (even though it is not well defined because of the problem with infinite sets). However, I prefer to count the well-being of person-moments rather than of persons.
Can you explain what you're saying about special relativity? I don't know the theory.
@dpwiz
>That's it, the problem is in the theorem, not reality.
I wonder if you wrote that to make me angry on purpose.
We can send von Neumann probes everywhere and tile the universe (not literally the whole universe) with moral patients. Perhaps there will be a decision between sending them now, with very slightly positive average well-being, or waiting a year and increasing their average well-being a lot. Perhaps, due to how exponential tiling of the universe works, if we launch now we will be able to create many more moral patients before the universe dies. Do all such scenarios seem incredibly improbable to you? They seem plausible enough to me.
@p You have utility U spread over P patients, resulting in U/P per patient. If you need P' = P + n patients for some unrelated reason, then the RC is also unrelated, because your decision is *not* "let's add n agents and re-distribute utility just because we can, while the total stays the same".
@dpwiz I guess you're saying here that in practice agents will extremely rarely face decisions where some pair of options looks like the Repugnant Conclusion. I don't really get why you think so. If humanity (or whatever humanity gets transformed into; excluding the variant where a totally alien mind does whatever it wants with us) reaches an extremely high level of technology (I think there's about a 20% probability of that), then we might (I hope) start thinking about how to tile the universe in an optimal way. And then such decisions might come up.
@dpwiz Right now the utility function I like most is as follows:
Consider time to be discrete, so that the possible moments of time are, for example, 0, 1, 2, ... The utility of a world is the sum, over each moment of time and each moral patient existing at that moment, of the well-being of that moral-patient-moment.
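Written out, with P_t denoting the set of moral patients existing at moment t and w_t(p) the well-being of patient p at that moment (notation I'm introducing just for this restatement), the utility of a world is
\[ \sum_{t=0}^{\infty} \sum_{p \in P_t} w_t(p). \]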
By population traps do you mean the possibility that overpopulation of the Earth will happen and cause bad things? I rarely consider such things when thinking about the far future because it seems to me other huge changes will happen before then - maybe we will all die, maybe a superintelligence will kill us all and continue doing its thing, maybe that doesn't happen and we start colonizing nearby planets, maybe humans will stop living in organic bodies and will rarely produce new humans, etc. So overpopulation seems more like a problem of the near future to me.
@p by mentioning time and moments you're playing with fire. That way lie decisions like "let's all hop into an event horizon and experience the bliss forever" and other funny things granted to you by relativity - time dilation, light-speed limits on information, etc.
@dpwiz and when I think about persons and the well-being of their whole lives, problems arise with less exotic (in my opinion) things - like cloning minds, running the same simulation twice, etc.
@p "Identity is not that matters", I get that.
@dpwiz Actually I am not sure how ethics should handle thought experiments with identical lives and with simulations running (or not running) moral patients inside them.
@p also, physics ruins matrioshka brains and even planetary hive-minds.
@dpwiz I don't know much about special relativity or general relativity, so I usually model the universe as Euclidean space, either with an extra dimension for continuous time (then we need to sum well-being-moments with integrals) or with discrete time.
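In the continuous-time variant that would be something like
\[ \int_{0}^{\infty} \sum_{p \in P_t} w_t(p)\, dt, \]
with P_t and w_t(p) as before.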
@p sorry, but you have to update on that before considering infinity.
@p e.g. consider a mind having two nodes on opposite sides of the Earth. The RTT is ~150 ms. That is comparable to the integration delay in human brains. It might feel normal if two brains are linked together (actually, have you tried playing games with 150 ms lag?). Add some Byzantine fault tolerance and the CAP theorem. Coherent thought at such distances would be unbearably slow.
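Rough numbers, assuming the signal travels at the speed of light in vacuum along the surface between antipodal points:
\[ \frac{20{,}000\ \text{km}}{300{,}000\ \text{km/s}} \approx 67\ \text{ms one way}, \qquad \text{RTT} \approx 133\ \text{ms}; \]
through fiber at roughly 2/3 of c it comes out closer to 200 ms.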
@p unless there is a possibility of faster-than-light tech, and then we are beyond fucked on the cosmic scale and the "repugnant conclusion" is one of the lesser of our problems.
@dpwiz That's a possibility, but not 100%, so the expected value of your utility function still doesn't exist.
@p 🦉
Why exactly do you need infinitely more moral patients instead of the current number of them?
Especially considering that population traps are common knowledge.