escaping the moral quandary: some ideas which didn’t work

There is only one way to cause somebody’s death: to get somebody else pregnant.

Murderers do not cause deaths. Neither does disease, famine, ageing or accidents.

There has not been a single death caused by failure to sign up for cryonics.

The basic idea here is that as soon as you’re conceived, you’re doomed to die at some point.

I guess there might be some theories as to how the cosmos works that would allow you to literally live forever, but they’re pretty far-fetched. Also, if you’re a whole brain emulation, or intend to become one, then birth and death become more complex due to the possibility of creating exact copies of yourself. I’ll leave these complications aside for now and just deal with the basic population ethics.

Is death bad? Is birth, or perhaps more accurately conception, bad? These two questions are sort of the same, because one event always implies the other. You might like one and dislike the other, but morally they’re part of the same package in the same way that a magnet has two poles.

Of course all the things I originally mentioned might move that date of death forward or backward quite a lot, but I’ll get on to that in a sec.

You can take the view that death itself doesn’t matter. In this view, what matters is what you experience while you’re alive – and whether it’s better or worse than some kind of neutral baseline. Death might still be problematic, as it might cause emotional or practical difficulties for the people close to you. Most consequentialist moral theories wouldn’t advocate legalizing murder, because then everyone would be murdering each other whenever it’s convenient and society would descend into chaos. But you don’t have to view death, inherently, as bad in order to end up with a vaguely sane moral system.

What happens if you ignore death and just focus on quality of life?

This is going to depend on how you aggregate everybody’s well-being. There are two obvious candidates: average and total.

Average has problems. There might be alien worlds, or Everett branches, where morally relevant entities are having experiences but not interacting with us in any way. It doesn’t seem like their existence, or whether or not they’re having a good time, ought to affect our moral decisions here on our local copy of Earth. But with average utility, it does.

Basically:

  • If there are no aliens, then you get the regular sort of average utility that you were expecting.
  • Similarly, if you manage to invent some kind of philosophical razor that allows you to ignore aliens somehow, then you get your original average utility back.
  • Otherwise, if the aliens are experiencing lives of unbearable torment, then lives of unbearable torment here are actually a good thing as long as they’re slightly less unbearable, because hey, at least they bring the average up (there’s a numeric sketch of this after the list).
  • On the other hand, if the aliens are experiencing a state of pure bliss, then we should basically all just give up now unless we have a realistic plan to ascend to their level of pleasure.
  • In the more realistic case where we have no idea about either of those, I don’t know what we would do.
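To put toy numbers on that torment case (all figures invented for illustration): a causally disconnected population of tormented aliens makes adding slightly-less-tormented lives here look like an improvement on average, even while the total goes down.

```python
# Toy numbers, all invented: a causally disconnected population of
# aliens in torment, plus the option of adding awful (but slightly
# less awful) lives here on Earth.

def average(utils):
    return sum(utils) / len(utils)

aliens = [-100] * 1000          # distant lives of unbearable torment
earth_addition = [-90] * 100    # local lives, slightly less unbearable

before = aliens
after = aliens + earth_addition

print(average(before))           # -100.0
print(round(average(after), 2))  # -99.09 -- average calls this an improvement
print(sum(before))               # -100000
print(sum(after))                # -109000 -- total calls it worse
```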

In practice this sort of thing doesn’t matter so much: you can still carry on trying to be moral and make reasonable decisions. But if you’re interested in more epic stuff like existential risk, this kind of reasoning might actually start to impact your decision-making.

So if not average, then what about total?

Here we come instead to the repugnant conclusion, which may or may not be repugnant and may or may not be a conclusion, but hey that’s what it got called.

It starts off by imagining taking an OK world and adding a whole bunch of people who would almost, but not quite, rather be dead. It’s assumed that this is a positive move because these people have positive utility associated with them. (Negative would imply instead that they’d genuinely rather be dead).

Then you imagine smearing the happiness around until everyone’s equally happy, which also shouldn’t affect the overall utility. As long as you added enough people in the first stage (or you can go through several rounds of this) you end up with a very large population of almost terminally miserable people, plus the sound conviction that you’ve actually made the world better than it was to start with.
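Here are the two moves with toy numbers (all invented), just to make the arithmetic explicit: step one strictly increases the total, step two preserves it, and yet the endpoint is a sea of barely-positive lives.

```python
# The repugnant conclusion's two moves in toy numbers (all invented).
# Utility 0 marks the "would rather be dead" threshold.

okay_world = [10] * 1_000                      # the starting OK world

# Step 1: add lots of people whose lives are barely worth living.
# The total goes up, so total utilitarianism counts this as progress.
bigger_world = okay_world + [1] * 100_000

# Step 2: smear the happiness around equally; the total is unchanged.
level = sum(bigger_world) / len(bigger_world)
repugnant_world = [level] * len(bigger_world)

print(sum(okay_world))               # 10000
print(sum(bigger_world))             # 110000
print(round(repugnant_world[0], 3))  # 1.089 -- everyone barely above zero
print(round(sum(repugnant_world)))   # 110000 -- "better" than the start
```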

Some people don’t like that.

There’s an argument that if you put a friendly AI in charge and told it to maximize total utility, the repugnant conclusion isn’t necessarily what would happen. The repugnant state isn’t the optimum – you can always improve on it by adding more people or by making them happier – and in principle there isn’t any maximum here at all. In the friendly AI setup, the state of the universe you’d actually end up with is the best one achievable subject to resource constraints. Lots of miserable people aren’t necessarily more resource-efficient than smaller numbers of happier ones, so we might be ok.
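Here’s a toy version of that argument (the cost function and all numbers are invented): if each person’s happiness costs resources at an increasing rate, the total-utility optimum under a fixed budget sits at an intermediate happiness level, not at the barely-worth-living floor.

```python
# Toy resource-constrained optimum (functional form and numbers all
# invented): a person at happiness level u costs a + b*u**2 resources,
# the budget is R, and the AI maximizes headcount times happiness.

a, b, R = 1.0, 0.25, 1000.0

def total_utility(u):
    n = R / (a + b * u**2)      # people the budget can support at level u
    return n * u

best_value, best_u = max((total_utility(u / 100), u / 100)
                         for u in range(1, 1001))
print(best_u, best_value)       # 2.0 1000.0 -- not at the u -> 0 floor
```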

This argument is really just sidestepping the problem, though. The repugnant conclusion doesn’t say that this wide-but-thin layer of misery is the “best” and then horrify us with how terrible it sounds. It merely says that it’s “better” than the original – and it doesn’t so much horrify as confuse us, because it doesn’t really seem like it ought to count as better than the original at all.

In other words, it exposes a preference cycle: judging step by step, the repugnant world comes out better than the original, while judging the two directly, it comes out worse. Different kinds of intuition rank the same pair of worlds in opposite directions, no consistent ordering can honour both judgements, and in general preference cycles are a problem.

There’s a possible way out of this, but it relies on death being bad.

If death is considered really, really bad, independently of any suffering it causes the individual or the people close to them, then something interesting happens. A “life that’s just barely worth living” is actually a pretty decent one: you have to gather up lots and lots of utility while you’re alive to make up for the terrible fact that you were born and hence are going to die.
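Written down as a formula (the penalty D and all numbers are invented), the move is something like: a life’s value is its accumulated experience minus a large fixed death penalty, so “barely worth living” now means racking up nearly D of good experience.

```python
# The "death is really, really bad" move as a toy formula:
#   value(life) = total lived experience - D
# where D is a large fixed penalty for the death that conception
# guarantees. D and all numbers are invented for illustration.

D = 10_000.0

def life_value(daily_quality, days):
    return daily_quality * days - D

# "Barely worth living" means value ~ 0: the lived experience almost
# exactly offsets D, so the day-to-day quality is actually decent.
print(life_value(0.5, 20_000))  # 0.0 -- a ~55-year life at quality 0.5/day
```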

If we imagine the repugnant conclusion in this framework, then we still have a whole lot of people but day to day their lives are pretty pleasant. Yes, there’s a lot of dying going on and we’ve morally decided that this is supposed to be a bad thing, but your actual experience of living in this world – and everybody else’s too – will be an ok one. And on a philosophical level we’ve already sort of got used to there being a whole lot of dying so maybe that aspect doesn’t seem so bad after all.

This would be very convenient but is unfortunately already starting to sound like nonsense.

For a start, suppose radical life extension becomes a possibility. Morally, we’d be obligated to make use of it: we’ve decided the constant churn of people being born and dying is morally reprehensible, and with each life significantly longer we could maintain the same population with a lot less dying going on.

But then we discover we can lower the average quality of life while still making each life “just barely worth living”. There’s still only one death event per life, and you only need to accumulate the same amount of positive experience per lifetime to make up for it – it’s just that now it’s spread a lot more thinly. The conclusion is similar to the original repugnant one: a huge population of only-just-not-wanting-to-commit-suicide-miserable people is morally better than the status quo – except that this time these people are also living for thousands of years.
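In the same toy model as above, the problem is visible immediately: the per-lifetime target stays at D, so stretching the lifespan lets the required daily quality fall toward zero.

```python
# Same toy model: "barely worth living" needs daily_quality * days ~ D,
# so a longer life can get by on proportionally thinner days.
# (Numbers still invented.)

D = 10_000.0

for years in (50, 500, 5_000):
    days = years * 365
    print(years, "years -> daily quality needed:", round(D / days, 4))
# 50 years -> 0.5479
# 500 years -> 0.0548
# 5000 years -> 0.0055
```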

We could introduce another term punishing very long lives, on bioconservative grounds or some other consideration, but this starts to feel like introducing hacks to work around the real problem.

I think we need tools to address preference cycles head-on. There might even be some. For some reason, moral preference cycles come to mind more easily for me right now than personal ones: the repugnant conclusion, perhaps negative utilitarianism, and the question of how far I’m willing to make personal sacrifices to help others.

Preferences can’t be carved from some single structure, or else such cycles could easily be resolved by binary chopping until you realised exactly what it was you were confused by. I guess there are different preference-like intuitions that get activated at different times? Let’s try to find out, as in the sketch below.
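As a sketch of that last point (the scenario names are invented): if every judgement came from one underlying utility score, sorting worlds by that score could never produce a cycle – but stated pairwise intuitions carry no such guarantee, and a cycle among them can be found mechanically.

```python
# If preferences all came from one utility score, sorting by that score
# would make cycles impossible. Stated pairwise intuitions have no such
# guarantee; this sketch (scenario names invented) hunts for a cycle in
# them by depth-first search.

def find_cycle(prefers):
    """prefers[a] = set of worlds judged no better than world a."""
    visiting, done = set(), set()

    def dfs(node, path):
        visiting.add(node)
        for nxt in prefers.get(node, ()):
            if nxt in visiting:                 # closed a loop
                return path + [node, nxt]
            if nxt not in done:
                cycle = dfs(nxt, path + [node])
                if cycle:
                    return cycle
        visiting.discard(node)
        done.add(node)
        return None

    for node in list(prefers):
        if node not in done:
            cycle = dfs(node, [])
            if cycle:
                return cycle
    return None

intuitions = {
    "ok world": {"repugnant world"},      # direct comparison: ok wins
    "bigger world": {"ok world"},         # adding barely-happy people "helps"
    "repugnant world": {"bigger world"},  # smearing preserves the total
}
print(find_cycle(intuitions))
# ['ok world', 'repugnant world', 'bigger world', 'ok world']
```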
