rant, longtermism, we need more philosophy 

Augh, I just read a huge list of critiques of longtermism, reading a couple of the more interesting-sounding ones in detail and skimming a couple more, and **not one** mentions the core issue with treating future people exactly the same as currently existing ones. (Vaden Masrani, in vmasrani.github.io/blog/2020/a, comes the closest, although he also accidentally invalidates all epistemology with one of his arguments. Anyway, credit to him for coming the closest.) I don't think this assumption is correct, for two kinds of reasons: anthropics, and the fact that people are beings existing in time, so it shouldn't be surprising that our values are not time-invariant. Why no one(?) is properly criticizing this part is beyond me – am I really the only one who sees these specific problems? That seems extremely unlikely.

Oh, and to be clear: the criticisms of the _effects_ of longtermism are on point – the dangers of the ideology should be clear even to its proponents. The criticisms of the practicalities are pretty good (I would put more stress on the fact that a big part of the problem is that reasoning about sufficiently small probabilities almost surely runs into the limits of resource-bounded reasoning, in which case it's well known that Bayesianism ceases to be optimal – but in general the points are good). The criticisms of utilitarianism mostly suck (although mostly inasmuch as they conflate utilitarianism in general with the total-utility variant, and it's hard to blame them for that, since that variant is important as a basis for longtermism). It's just the complete absence of criticism of the core idea described above that worries me, and likely makes proponents of longtermism feel secure in these assumptions, which they really shouldn't.

For reference, the list I'm referring to: longtermism-hub.com/critiques .


@timorl I think there’s something very attractive to generalising systems, so time-invariance feels intuitively fair to some type of person, just as, for example, generalising of the “rational veil of ignorance” kind tries to make things position- or identity-invariant, or justice is supposed to apply independently of who does what. If you come to it with the timeless physics view that many long-termists have, this becomes even more pronounced.


@modulux Yeah, I understand why this looks like the default to them, I even think that most of the time assuming things are time-invariant at first is the reasonable approach (e.g. I strongly suspect that the correct decision theory will end up being time-invariant), I'm just mystified why no one examined this assumption with regards to morality among so many critiques.


@timorl @modulux

Note that there are two points in time involved in this problem, so there are many varied forms of a time-invariance assumption.

@timorl Maybe you've already seen this, or it's too much of a layman treatment for you, but I liked samharris.org/podcasts/making- which touched on this quite a bit.

@cappallo Unfortunately I am quite averse to podcasts as a format, and this one additionally seems to start with a fair amount of unrelated material – do you know of any transcripts? Even if not, thanks anyway; I might end up listening to it at some point.

@timorl maybe I'm missing a piece here, but the reason you don't treat future people the same as currently living people is:
1) they don't exist
2) their potential existence does not give them a probabilistic moral standing
3) even if 2 is wrong, they have no way to voice their needs or desires

Future people shouldn't be treated the same as living people because they do not have moral standing as full people. They're more like a species of tree we don't want to go extinct: I can comprehend its general needs (air, water, sunlight), but there is no way to understand its specific needs.

@nomi Yes! I have minor quibbles with some of the things you wrote, but this is the general line of reasoning I would like someone to follow, only in more formal philosophical language (unfortunately, none of these assertions is sufficiently well supported a priori from a formal point of view). Why no one has is exactly what annoys me.

@timorl @nomi What's the most important distinction between people who don't exist yet and people who are sleeping?

@robryk @timorl the person that is sleeping is alive, has wants and needs, is connected to the web of human relationships.

I think the better question is: what is the difference between "a person who doesn't exist yet" and any other fictional character?

@nomi @timorl

Well, but where does the definition of "alive" come from? Or, if we're using the biological definition of alive, why does being alive matter?

Re human relationships: what about a sleeping hermit?

@robryk @nomi @timorl

We have far better guarantees that a person who is currently sleeping will be a conscious person in the (very) near future than a person who hasn't even come into existence yet. Similarly, fictional characters are much less sure to exist in the future than "real" people who simply haven't been born yet.

@krzyz @nomi @timorl

> We have far better guarantees that a person who is currently sleeping will be a conscious person in the (very) near future than a person who hasn't even come into existence yet.

That strongly depends on the timescale.

Re fictional characters: I have a problem defining existence for them.

@krzyz @nomi @timorl Ah, ok, if you mean "will be conscious at some point within the near future" then it doesn't. Fair.

@robryk I think that's what I meant? I don't necessarily see an alternative interpretation at this moment.

@nomi @timorl

@krzyz @nomi @timorl

The alternative interpretation I thought of originally was "at the precise point in time of now + short interval".

@robryk @krzyz @timorl a person doesn't lose moral standing if they sleep or are unconscious. That'd mean I could drug you and do whatever I wanted.

I think, robryk, you may be using a placeholder for future people. Like holding a spot. The idea of the person exists (i.e. my future kids), but those actual kids in no way exist. Additionally, I might get hit by a car and never have them.

@nomi @krzyz @timorl

> a person doesn't lose moral standing if they sleep or are unconscious. That'd mean I could drug you and do whatever I wanted.

So clearly if some assumptions implied that they do lose moral standing, they would not be ones we want to hold.

> I think, robryk, you may be using a placeholder for future people. Like holding a spot. The idea of the person exists (i.e. my future kids), but those actual kids in no way exist. Additionally, I might get hit by a car and never have them.

I don't understand what you're trying to tell me. I don't try to map "corresponding" people between potential worlds.

@robryk @krzyz @timorl

>> a person doesn't lose moral standing if they sleep or are unconscious. That'd mean I could drug you and do whatever I wanted.

> So clearly if some assumptions implied that they do lose moral standing, they would not be ones we want to hold.

This whole thing started because you asked for a distinction between a sleeping and a non-existent person and nit-picked a definition of alive. I haven't seen a single reason why *anything* that doesn't exist should be given moral weight, let alone certain classes of non-existent things (potential future people). Barring that, I'm not clear how this line of argument does anything except equivocate on rape.

@nomi @krzyz @timorl

A reason: you from tomorrow != you from now. The desires of the two are different. Some people intuitively think that the future desires of currently living people are morally important.

How does anything I said equivocate on rape?

Qoto Mastodon