Our most recent nightmare scenario triggered some great discussion. (So did the earlier nightmare scenarios, which I’ll review in a day or two.) The conundrum was this:
In front of you are two childless married couples. For some reason, it’s imperative that you kill two of the four people. Your choices are: A. Kill one randomly chosen member from each couple. B. Kill both members of one randomly chosen couple. All four people agree that if they die, they want to be well remembered. Therefore all four ask you, please, to choose A so that anyone who dies will be remembered by a loving spouse. If you care about the four people in front of you, what should you do?
Commenters, interestingly, split pretty much 50-50 (though it’s hard to get an exact count because several equivocated). Many bought the argument that unanimous preferences should be respected (leading to A). About equally many bought the argument that the preferences of the dead don’t count, and it’s better to leave two happily married survivors than two grieving widows/widowers (leading to B).
A few highlights:
- Our commenter Martin raised the interesting possibility that we have an externality problem here: Each individual is thinking of his/her own happiness and not necessarily that of the other potential survivors. But I’m not sure this is right (though I’m also not sure it’s wrong). If four of us all have the same preferences, then accounting for my own preferences automatically accounts for everyone else’s. On the other hand, post-execution, we’re not all going to have the same preferences anymore (two of us will have no preferences at all), which means that Martin’s point might stand. On the first hand, we’ve rescued Martin’s point only at the expense of discounting the preferences of the dead, the appropriateness of which was pretty much exactly what we were arguing about in the first place. So although Martin has recast the problem in an interesting way, I suspect he hasn’t shifted the fundamental locus of disagreement. But back on one or the other of those hands, it’s late and I’m tired, and I’m not sure I’ve thought this through completely.
- A bit further along, Phil King also invoked externalities, which led him to come down firmly on the side of B.
- Chas Phillips asked us to consider the interests of the (existing or potential) offspring of these couples, which brings us back to the issues that the earlier nightmare scenarios were supposed to address. I was trying to focus on something a little different here, so I’m going to amend the problem by adding the assumption that both couples are childless and infertile. (Chas, of course, didn’t know I was going to do this, so regarding the problem as stated, his points are well taken).
- Neil made the rather brilliant suggestion that we kill one couple, but only after assuring each of them (separately) that his/her spouse will survive. This strikes me as clearly better than either of the two alternatives I offered, and it’s as good a counterexample to Kantian ethics as I can imagine. If ever there were a justifiable lie, this is it.
- Bennett Haselton made the excellent observation that the problem changes depending on whether it’s a one-time occasion or likely to be repeated. In the latter case, our choice has repercussions for the happiness of future (living) people who have the bad fortune to find themselves in similar situations.
- Scott H., channeling Thomas Jefferson, observed that life is for the living, which argues for Option B.
- No doubt inspired by that sentiment, Sam Wilson (and John Faben, expanding on Sam’s point) had, I thought, an exceptionally good suggestion: Kill one person, then ask the others to vote again. There’s no doubt now that it will be two-to-one in favor of killing that person’s spouse; moreover, there’s a sense in which the two votes (coming from people with the potential to continue a happy marriage) are in some sense “stronger” than the one on the other side. If you supported option A on the grounds that we should respect the expressed preferences of the potential victims, why would you not, in accordance with those preferences, now switch to B? And if you would, why would you not, knowing that you’re going to switch to B anyway, choose B in the first place?
The A-people might respond that Sam and John have ignored the preferences of the first victim. Others might counter-respond that once s/he is dead, the preferences of the first victim don’t count. Which brings us, I suppose, back to where we started.
To me, it seems crystal clear that we should ignore the preferences of the dead. (This is not entirely separate from the issue of whether we should ignore the preferences of the not-yet-born, which is the issue I really want to get at eventually, but am deferring for now.) It also seems crystal clear that many of the living have preferences about what happens after they’re dead, and that we can (and should) respect those preferences by credibly assuring them that we will abide by their wishes. Which means, I think, that Bennett Haselton had this right — if we’re going to be repeating this little experiment, we might want to enhance our credibility by choosing Option A. But if this is a one-off, then Neil has the most humane solution: Tell everyone you’ve chosen A; then implement B.
To tie this into something of real world import: Those, like me, who lean toward some form of B, ought, I think, to be a little more tolerant of estate taxation than the A-people are. As faithful readers know, I oppose estate taxation for reasons that invoke only the prosperity of the living — but that conclusion is bolstered even further if you’re a strong believer in respecting the preferences of the dead. I don’t have that bolster, so I suppose that should make my opposition a little weaker.
(Though come to think of it — sorry for the rambling; as I said it’s late at night — I guess there are just as many people who died hoping we’d tax other dead people’s estates as there are people who died hoping their own estates would remain intact. So maybe “respecting the preferences of the dead” cuts both ways on this issue. Hrm.)
“If four of us all have the same preferences, then accounting for my own preferences automatically accounts for everyone else’s.”
This is almost surely not true in any useful sense. I have a preference for a bank account with a billion dollars in it. If everyone else has that preference too, accounting for mine doesn’t help anyone else. In fact it’s not possible to account for all of these simultaneously, because I’m saying “I want a bigger piece of the pie than my fair share.”
“To me, it seems crystal clear that we should ignore the preferences of the dead”
However, should we respect the wishes of the living who wish that the wishes of the dead (as expressed while they were still living) should be respected?
Steve,
“If four of us all have the same preferences, then accounting for my own preferences automatically accounts for everyone else’s”
Yes, I think you’re right about this.
Assume a thief who can steal from person A or person B and give it to the other. Person A values gains at g_a and losses at l_a; similarly, person B at g_b and l_b. From whom the thief steals is decided by a coin flip. The amount stolen is $100.
A’s cost-benefit (ex ante):
g_a * 100 * 0.5 – l_a * 100 * 0.5 > 0: ask the thief to steal.
g_a * 100 * 0.5 – l_a * 100 * 0.5 < 0: ask the thief not to steal.
If g_a = g_b and l_a = l_b, then what A asks is what B would want as well. B does not need to contract with A to convince him to do it or not to do it to maximize their joint profit.
Given though that all four of them agreed on the proposition that they preferred to be remembered well, you don't even need identical preferences.
On second thought I’d therefore go for A now. I like Neil’s and Bennett’s points though, and given those, I agree.
Steve,
I think my post got lost, but I wanted to say that I agree with you and that you don’t need identical preferences given that they all agreed.
Assume a thief who can steal or not steal $100. Let g_a and g_b denote the value A and B attach to a gain. Similarly, let l_a and l_b denote the value A and B attach to a loss. The thief steals based on a coin flip (from the perspective of A and B anyway).
When:
0.5*100*g_a – 0.5*100*l_a > 0, then A wants the thief to steal;
0.5*100*g_a – 0.5*100*l_a < 0, then A wants the thief not to steal.
A and B ask for the same thing whenever l_a > g_a & l_b > g_b, or g_b > l_b & g_a > l_a; identical preferences are not needed, only that both inequalities point the same way.
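For concreteness, here is a quick Python sketch of the coin-flip calculation (the function name, the particular weights, and the framing are my own illustrative assumptions):

    # Sketch of the thief model: with probability p you gain the amount
    # (valued at your gain weight), otherwise you lose it (valued at your
    # loss weight). Ask for theft when the expected value is positive.
    def wants_theft(gain_weight, loss_weight, amount=100.0, p=0.5):
        return p * amount * gain_weight - (1 - p) * amount * loss_weight > 0

    # Identical weights: whatever A asks for, B asks for too.
    g_a = g_b = 1.0
    l_a = l_b = 1.5
    print(wants_theft(g_a, l_a), wants_theft(g_b, l_b))  # False False

    # Non-identical weights can still agree: it is enough that both
    # inequalities point the same way (here both are loss-averse).
    print(wants_theft(1.0, 2.0), wants_theft(0.8, 0.9))  # False False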
We have to make some assumptions in order to isolate the factors we want to consider. We must assume no other people could be affected by the outcome. This makes it a one-time, secret event. Otherwise we are fudging the issue. We also assume that each couple does not care at all about the other couple. We assume that there is no after-life. We also assume that they have correctly weighted their preferences regarding predictable outcomes – e.g. grieving. I think the situation is intended to be that their priorities are:
1) To survive.
2) If they die, to have their spouse survive them.
3) If they survive, to have their spouse survive also.
Although (3) is not stated, we assume it to be true.
We know (3) is valued less than (2), because everybody prefers one of each couple to be killed. For the same reason we know that the negative utility of grieving does not outweigh the positive utility of having a surviving spouse, since otherwise they would not ask for option A.
Let’s give these a utility value of 3, 2 and 1 respectively.
There are only two outcomes possible – (A) one of each couple dies or (B) one couple dies.
To maximise their expected utility, each person prefers option A. Each has an expected utility of (1/2 × 3) + (1/2 × 2) = 2.5.
For option B we have (1/2 × 3) + (1/2 × 1) = 2. This is why they ask for A.
After the event, we can add up actual utility. For A we have 2 × 3 = 6. For B we have (2 × 3) + (2 × 1) = 8.
The reason for the discrepancy is that 4 units of expected utility disappear with the death of two people.
Given these assumptions, we know that utility will be greater with option B.
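To make the arithmetic concrete, here is a short Python sketch using the 3/2/1 weights above (the code and its names are just my illustration of this calculation):

    # Weights per the comment: survive = 3; die but spouse survives = 2;
    # surviving with a surviving spouse adds 1. The dead get zero
    # realized utility.
    SURVIVE, DIE_SPOUSE_LIVES, BOTH_LIVE_BONUS = 3, 2, 1

    def expected_utility(option):
        """Ex-ante expected utility for one of the four people."""
        if option == "A":  # one member of each couple dies
            # 50%: survive with a dead spouse; 50%: die, spouse survives.
            return 0.5 * SURVIVE + 0.5 * DIE_SPOUSE_LIVES
        else:              # option B: one whole couple dies
            # 50%: both survive (3 + 1); 50%: both die (0 after death).
            return 0.5 * (SURVIVE + BOTH_LIVE_BONUS)

    def realized_total(option):
        """Total utility added up after the event, counting only the living."""
        if option == "A":  # two survivors, each with a dead spouse
            return 2 * SURVIVE
        else:              # two survivors, each with a living spouse
            return 2 * (SURVIVE + BOTH_LIVE_BONUS)

    print(expected_utility("A"), expected_utility("B"))  # 2.5 2.0
    print(realized_total("A"), realized_total("B"))      # 6 8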
What reason is there to choose A? Given our assumptions, it must involve a system of morality that is not based on utility. Any reason is an implicit rejection of an assumption.
What basis of morality might this be? It could be based on “freedom” or “liberty” to make choices. If we believe that this is the over-riding basis for our system of morality, then surely we must obey the instructions given whilst the people are alive. We have no right to second-guess their choices. Or we may believe it is just wrong to interfere in choices of this nature, so must have a random selection.
I’m still wondering why I care about the wishes of the people I am about to murder.
Guilt should be factored in. If everyone expressed a preference for A, and I as the killer choose B, surely I’ll live with some level of guilt for not having respected the last choices of these four people.
Neil’s example implicitly assumes that the preferences of the dead are worth respecting. In Neil’s case, the killer goes out of his way to assure the person who is about to be murdered that their spouse is going to live. Post-death, this person cannot feel emotionally distressed for the safety of their spouse, or their own life. Why should the killer go out of his way to explain anything to the person he’s going to murder? Why not just get it over with as quickly as possible? Why prolong the emotional distress of the person being murdered for even one more minute to offer an explanation?
From a strictly utilitarian perspective, it makes no sense. You prolong the emotional distress of the victim for one more minute, and waste one minute of the killer’s time, to achieve an objective that ultimately doesn’t matter since dead people have no utility.
I really like Harold’s post. I tried different weights and also different priorities, but got similar results. For example, if I love my spouse very, very much, my priorities might be:
1) My spouse survives.
2) If my spouse dies, I survive.
3) We both survive.
Before the murders, total expected utility (across all four people) is:
A = 10 (2.5 * 4)
B = 8 (2 * 4)
After, total utility is:
A = 4 (2 * 2)
B = 8 (4 * 2)
Which makes Harold’s point even more strongly.
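Here is the same accounting with these flipped priorities, as a quick Python sketch (the 3/2/1 weights are my assumption, chosen to match the totals above):

    # iwan's priorities: spouse survives = 3; if spouse dies, I survive = 2;
    # both surviving adds 1. Dead people get zero realized utility.
    SPOUSE_LIVES, I_SURVIVE_SPOUSE_DIES, BOTH_BONUS = 3, 2, 1

    # Option A, per person: 50% die while the spouse lives (3),
    # 50% survive a dead spouse (2); times 4 people.
    ex_ante_A = 4 * (0.5 * SPOUSE_LIVES + 0.5 * I_SURVIVE_SPOUSE_DIES)

    # Option B, per person: 50% both live (3 + 1), 50% both die (0);
    # times 4 people.
    ex_ante_B = 4 * (0.5 * (SPOUSE_LIVES + BOTH_BONUS))

    # Realized totals, counting only the living:
    realized_A = 2 * I_SURVIVE_SPOUSE_DIES        # two survivors, spouses dead
    realized_B = 2 * (SPOUSE_LIVES + BOTH_BONUS)  # two survivors, spouses alive

    print(ex_ante_A, ex_ante_B, realized_A, realized_B)  # 10.0 8.0 4 8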
What about a 3rd scenario, immediately after the killer chooses B, one of the surviving spouses is struck by lightning and drops dead. Now we have only 1 survivor and 3 dead. If the killer chose A: we might still have 2 survivors because the lightning will strike a dead person, or the lightning strikes a survivor and kills him leaving us with 3 dead and 1 survivor. Now what’s the better choice? (I have a follow up)
I don’t want to speak for Will A, but I am disturbed at the cavalier way A and B are taken in vain here.
Mike H – “This is almost surely not true in any useful sense. I have a preference for a bank account with a billion dollars in it. If everyone else has that preference too, accounting for mine doesn’t help anyone else. In fact it’s not possible to account for all of these simultaneously, because I’m saying ‘I want a bigger piece of the pie than my fair share'”
In your example it’s impossible to accommodate everyone at once, but I don’t see why not in the original problem. Nobody’s asking for a free lunch.
Martin-1 – Beat me to it I see.
So here’s a deontological approach to scenario 2:
1. I would kill both men, whatever the preferences. That’s because it’s a man’s duty to take the hit in a crazy situation.
2. It’s absolutely wrong to lie to the person about to die and tell them their spouse will survive. You are denying them the opportunity to set their affairs in order, choose their last words, etc. It may be traumatic, but they have a duty to perform, and it is wrong to deny them the opportunity to do their duty – even if that is what they would prefer.
And scenario 1:
Obviously it is better to kill half of humanity.
Then the other half can grow up to take revenge on the alien.
Let’s frame this in a way that I think is clearer.
Scenario: two couples in a space capsule. It gets into trouble. A rescue craft can reach it, but not in time. By the time it makes rendezvous, all of the people on the capsule will be dead due to the oxygen running out. However, if they act promptly and kill off half the crew, there will be enough air left for the two survivors.
The supplies in the capsule can deal with this. There are a number of individually-numbered pills. Some offer a quick, painless death. Others are placebo pills. Mission control knows which is which.
You are in mission control. Your astronauts have chosen option A: one of each couple will die. You know that the odd-numbered pills are the placebos. Would you, knowing the choice the astronauts made, say:
A. “One couple should take pills 1 and 2 and the other couple should take pills 3 and 4.”
or
B. “One couple should take pills 1 and 3 and the other couple should take pills 2 and 4.”
If you do A, it’s assisted suicide, but if you do nothing, they all die.
If you do B, it’s assisted suicide plus murder.
Questions: If you lie to them and cause B, would you expect to not be prosecuted for what you did? Is it okay to ignore the unanimous decision of those affected? Did your answer change from what you decided in Steve’s original scenario?
Ron:
“Questions: If you lie to them and cause B, would you expect to not be prosecuted for what you did?”
I might expect to be prosecuted. I don’t see that this bears on the moral question.
“Is it okay to ignore the unanimous decision of those affected?”
Yes, I think so. And let me try to articulate why, a little better than I did in the post.
Deferring to the preferences of those affected is almost always a good thing, but that deference is a means, not an end. It’s a means toward making people happy, and in most cases I embrace it because I think it’s a very effective means. But in this case, I know for sure that it’s ineffective; I know for sure that everyone who’s capable of feeling happiness will be happier if I don’t defer to their preferences.
“Did your answer change from what you decided in Steve’s original scenario?”
No.
“Deferring to the preferences of those affected is almost always a good thing, but that deference is a means, not an end. It’s a means toward making people happy, and in most cases I embrace it because I think it’s a very effective means. But in this case, I know for sure that it’s ineffective; I know for sure that everyone who’s capable of feeling happiness will be happier if I don’t defer to their preferences.”
I see. So you know that both couples are happily married, and neither would love to get rid of their partner if only it wouldn’t ruin them both socially and financially.
Ron:
“I see. So you know that both couples are happily married, and neither would love to get rid of their partner if only it wouldn’t ruin them both socially and financially.”
Yes, I was tacitly assuming that I know this.
Steve:
“Yes, I was tacitly assuming that I know this.”
I think this moves the scenario well towards spherical cow territory, as this kind of certainty can be very difficult to validate in real life, even for a therapist.
Ron –
I think your scenario is a good simplification of what concerns us here.
I disagree that if you choose B it’s murder as opposed to assisted suicide. You have to choose the scheme under which 2 people will die in both cases. Everyone hopes to survive. 2 people will die, and they have equal chances in both cases. Just because you go against a surface preference does not mean you’ve gone from assisted suicide to murder.
The two living people will surely not feel that is the case. One thing that speaks strongly for choice B, and that I’m not sure has been pointed out yet, is this:
Surely the two survivors in scenario B, after learning the outcome, will think, “Wow, thank god he picked B, this is great!” They will not feel an injustice has occurred.
The two dead people – they’re not thinking at all, and as I pointed out last time, we are in fact violating people’s living preferences regarding their after-death wishes, but that is OK! We reserve the right to deny irrational preferences like these, just as we have the right to deny (or punish) someone whose preference it is to kill others; the A preference is a tax on the living, plain and simple.
So if, at the very worst, the outcome is either (and I think the case for B is much stronger than this) “welp, he followed our choice and this outcome sucks just like we knew it would” or “thank god he chose B instead!”, then the latter is clearly the better choice.
Mike H –
I believe your preference accounting is incorrect. I believe the correct analog would be:
If you have a preference that there exist a billion-dollar bank account rather than a billion $1 accounts, and so does everyone else, and we all get to vote on which there will be, then if everybody feels they’d rather take the gamble, your preference does, in fact, account for everyone else’s.
I think in this instance the accounting is correct, although I am skeptical of its generality.
Steve said:
“But in this case, I know for sure that it’s ineffective; I know for sure that everyone who’s capable of feeling happiness will be happier if I don’t defer to their preferences. ”
How can you know? Unless you know what makes people happy — if you do, please share! We would all be better off.
I don’t think the argument is that perhaps the couples are unhappily married – I thought the original framing was that they were clearly happily married. But rather: how do you know that all four would not feel some moral outrage, and extraordinary amounts of survivor guilt, if they were the couple that you chose to live? And that this outrage, even if not sufficient to make them kill you, wouldn’t be enough to leave them no happier than the two people left alive from scenario A?
It also seems that the question is supposing that you are somehow not part of society, but simply some being with no other attachments to the world (you don’t want to consider how the possibility of being prosecuted could play into your decision). I would argue in that case that you cannot distinguish between moral and immoral – at least not in the way I would view morality. Not only must you not be worried about the consequences of your decision in any legal or physical terms, but you must also not be worried about your reputation — what others will think of your choice. If that’s the case, then how could your choice possibly matter — you have no morals (see Westley Allan Dodd’s summation: “born without feelings . . . . Maybe it’s my birth defect.”). Alternatively, I believe there are some Star Trek episodes that come close to arguing about this.
I am fairly sure I wouldn’t want to be stuck on a desert island with the B choosers — at least not without the only gun.
Harold, your utility calculations are incorrect. The utility can’t just “disappear”.
You assign “if they die, to have their spouse survive them” a utility of 2 in one scenario, and of 0 in the other. Naturally, you get different answers.
It would be better to more finely distinguish the things these people are assigning utility to.
A : survives, and spouse survives.
B : survives, but spouse does not.
C : dies, but spouse survives.
D : dies thinking the spouse will survive.
E : dies, and spouse also dies.
F : dies thinking the spouse will die also.
Steve and others would like to set C=E=0. This doesn’t mean D or F are also 0.
In scenario A, for each individual (unless we lie to them) the expected utility is B/2+C/2+D/2. The total utility is 2B+2C+2D.
In scenario B, for each individual (unless we lie to them) the expected utility is A/2+E/2+F/2, and the total is 2A+2E+2F.
Assuming we can’t lie to them, choose A if B+C+D>A+E+F, that is (letting C=E=0) if B+D>A+F. This holds whether you consider their wishes before or after the event.
Or is someone going to argue that we should set D=F=0 also?? Wouldn’t your argument imply that all current preferences of all living people also be deemed to be zero, since all die eventually? Sounds very nihilistic to me….
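For concreteness, here is a Python sketch of this six-outcome accounting (the particular utility numbers are my own assumptions; C and E are set to zero, per the view being disputed):

    # Outcome utilities (illustrative assumptions only):
    U = {
        "A": 4.0,   # survives, and spouse survives
        "B": 3.0,   # survives, but spouse does not
        "C": 0.0,   # dies, but spouse survives
        "D": 1.0,   # dies thinking the spouse will survive (pre-death comfort)
        "E": 0.0,   # dies, and spouse also dies
        "F": -1.0,  # dies thinking the spouse will die too (pre-death torment)
    }

    # Option A (one of each couple dies), per person, assuming no lying:
    # half survive a dead spouse (B); half die knowing the spouse lives (C + D).
    ev_A = 0.5 * U["B"] + 0.5 * (U["C"] + U["D"])

    # Option B (one whole couple dies): half keep their marriage (A);
    # half die knowing the spouse dies too (E + F).
    ev_B = 0.5 * U["A"] + 0.5 * (U["E"] + U["F"])

    print(ev_A, ev_B)                         # 2.0 1.5
    # With C = E = 0 the rule reduces to: choose A iff B + D > A + F.
    print(U["B"] + U["D"] > U["A"] + U["F"])  # True: these weights favor A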
Mike H. Yes, actual utility for the individual for D = F = 0.
I think I am more or less right. The utility did not disappear – it was never there. It was expected utility which fails to materialise because the dead can receive no benefit.
They did not wish to pass on something that was a benefit to the survivor, only to themselves. The remembering imposes neither cost nor benefit to the rememberer – or none that we can value. It would be different if each person said they wanted to remember – it would then be the living partner who received the benefit.
I think we can only attach utility to actual outcomes. What the person is thinking does not much matter; it only fine-tunes the utility by a small amount. The utility of life is huge; there may be utility in spending a few minutes believing something, but it is tiny in comparison.
We can have only 4 outcomes for an individual and their spouse: LL, LD, DL, DD (Live/Die).
For the individual, DL and DD have the same actual utility, because they are dead. However, the individuals anticipate a utility from DL in being remembered.
People feel happier in life if they think their wishes will be respected after death. Therefore in real life we can never isolate these factors quite so cleanly. On this basis, current preferences of all living people about what happens to them after death should also be deemed zero, except that they derive utility from the belief during their life. It therefore becomes necessary to respect these wishes so the living obtain this benefit.
I am not sure that we can really separate out these factors enough for them to be the defining point of our morality.
Mike H. Yes, I see I have assigned a utility then changed it. This cannot be right. I think the error is in my phrasing, not the central point.
I’ll ask something I asked last time: by the reasoning which says that the preferences of the dead do not matter, should a case where one person is killed painlessly and a case where one person is killed by slow and painful torture be equally preferable?
Is it morally okay to kill someone who doesn’t have any friends and family who might mourn him? After all, once you kill him, his preference that he would rather be alive has no value.
Whilst the person is alive he would prefer not to be tortured. The torture option is obviously wrong.
To kill someone deprives them of life – obviously wrong. The options that SL proposed had the same number of deaths in each case, so this did not factor.
This line of thinking does lead us to some uncomfortable places. We assume life with one leg has less utility than life with two. Therefore we think spending resources on saving a leg is OK. So far so good. But this means that a one-legged man has less utility than a two legged one. We should expend less effort to save the unidexter.
“Whilst the person is alive he would prefer not to be tortured.”
In the original problem, the people who are alive express preferences as well.
“To kill someone deprives them of life – obviously wrong.”
Why would you say that depriving someone of life is obviously wrong? If the preferences of dead people can be ignored as soon as they die, then depriving someone of life can’t be wrong–once they are dead, their former preference for being alive is of no consequence.
@Harold “Yes, actual utility for the individual for D = F = 0”
D and F are the utility the doomed individual derives from an expectation, while they are alive. Either it’s zero because they are doomed, or nonzero even though they are doomed.
However, I see from your next comment that you already realise this.
E.g., “I know for sure that everyone who’s capable of feeling happiness will be happier if I don’t defer to their preferences”
The fallacy here is of choosing a particular point in time, and saying “we shall define utility to be their happiness NOW,” and ignoring the very real torment (the nonzero F) suffered by the doomed couple.
The doomed individuals are perfectly capable of feeling. After the fact, you might wash your hands and say “what’s done is done, we can’t help them now” – but that doesn’t make D=F=0, nor mean you made the right choice by ignoring their feelings.
After all, if it did, I could choose to measure their happiness in the year 2100, and declare that all choices are equally good, since they’re all dead by then. Or just shoot them all on the spot and declare “now nobody’s grieving”. This is the same fallacy at work, with a more obviously pernicious conclusion.
If we work on a basis of maximising utility, then to kill someone deprives them of the utility of the rest of their life. This is obviously a very large negative.
To torture someone causes them a significant negative utility during the torture. After they are dead, they no longer suffer from the torture, but they still suffer from the negative utility of not having life. To torture someone to death causes more negative utility than killing them painlessly. However, I feel that the loss due to death is much greater than the relatively short loss during the torture.
To kill someone and tell them you are not going to respect their wishes after they die causes a negative utility in a similar way to torturing them, whilst they are alive. Therefore if you can reassure them that you will respect their wishes, it will increase utility. However, the gain will be small compared to the loss of losing a life.
I am not picking a moment in time, I am integrating over time.
So in our example, a small utility is gained whilst the person is alive if they think their wishes will be respected. This utility is the same regardless of whether their wishes actually are respected after death.
We have a small gain in comfort against a huge loss through dying.
A similar situation would arise if each person was a member of a particular cult that believed that only a nominated individual could prepare a body for the afterlife. Each has only nominated their spouse so far, and has no time to change this. Each person would be very keen to be survived by a spouse, so would choose option A. I know that they will not actually obtain any such benefit in the afterlife – in this case because I know the founder of the cult is a crook.
Do we need to respect the wishes, even though we know the benefit they think they will get is spurious?
Think of it as a plot of happiness against time. Utility is the area under the curve.
Let’s look at the lines of the killed. The line progresses, up and down a bit, for years. Meets wife, marries – the line goes up a bit. The individual learns of the “event” – the line drops sharply. The person is re-assured their spouse will survive them – line goes up a bit for a few seconds. Death. Line drops to zero, for ever. Or perhaps the person is not re-assured that his spouse will survive him – line drops a little bit for a few seconds, then death, and drops to zero for ever.
The total utility will be almost unchanged by the re-assurance. However, you might as well increase it if you can, even for a small gain.
If you could avoid killing them at all, then of course there would be a huge gain.
Now look at the survivors’ lines. All is the same until the death bit. Those that were re-assured their spouse would survive them gain that little bit of lift for a few seconds. Then Not Death! The line jumps up. In scenario A the person finds out their beloved spouse is dead – the line drops a bit, and stays dropped for years.
In scenario B the person finds their spouse alive – line rises a bit, and stays high for years.
Scenario B has a much larger total area. If we can lie, then scenario B with lying has a tiny bit more area than scenario B without lying – so why not take this option.
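Here is a toy Python version of this area-under-the-curve picture (every magnitude is my own illustrative assumption):

    # Lifetime utility as the area under a happiness-vs-time curve.
    MINUTE = 1.0 / (365 * 24 * 60)  # one minute, measured in years

    def survivor_area(years=30.0, baseline=5.0, spouse_dies=True, reassured=True):
        """Area under a survivor's curve from the event onward: a sliver of
        pre-event reassurance, then years at a level that drops if the
        spouse is dead."""
        lift = (0.5 * MINUTE) if reassured else 0.0
        level = baseline - (1.0 if spouse_dies else 0.0)
        return lift + level * years

    def victim_area(reassured=True):
        """Area for someone killed: the curve drops to zero at death, so
        only the final minute of (possibly false) reassurance remains."""
        return (0.5 * MINUTE) if reassured else 0.0

    # Scenario A: two survivors with dead spouses.
    # Scenario B: two survivors with living spouses.
    print(2 * survivor_area(spouse_dies=True))     # ~240
    print(2 * survivor_area(spouse_dies=False))    # ~300
    print(victim_area(True) - victim_area(False))  # the whole gain from the lie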
@Harold:
“I am not picking a moment in time, I am integrating over time. ”
“So in our example, a small utility is gained whilst the person is alive if they think their wishes will be respected. This utility is the same regardless of whether their wishes actually are respected after death.”
The trouble with this argument is that the idea that something someone doesn’t know about doesn’t count towards maximizing utility is itself a disputed claim. You’re just pointing out that one premise that people don’t agree with (what happens after you die doesn’t count) is a subcase of another (what happens that you don’t know about, doesn’t count).
Suppose you’re married, you’re having an affair, and your spouse doesn’t know about it. By your reasoning not only is nothing wrong with this, but in fact, for anyone to tell your spouse would be wrong (since it would reduce utility for all people involved.) And for you to stop having an affair would be actively harmful (negative utility to yourself and your paramour, no change in utility for your spouse).
(And given the way people nitpick examples, make any necessary assumptions to fit with the intent of the scenario–asking “what if the spouse is into polygamy and likes when you have an affair” is stupid.)
Oh, here’s another scenario: I can kidnap someone, and by a very painful procedure, insert a device in their head which causes them to hallucinate that all their desires are satisfied, to the point where they don’t know that what they are experiencing is not real. Ignoring questions about their friends and relatives, should I do this? By your reasoning, the fact that they are hallucinating events of very high utility should count for as much utility as actually experiencing those events, and the brief loss from being kidnapped and operated on won’t outweigh it.
As I said before, this accounting for utility takes us to some uncomfortable places. I do not think it should be the entire basis for our morality. What I am not sure of is whether that is because we cannot practically assign the utilities, so that we must fail in our attempts to implement it, or whether there should be some other underlying principle(s). In practical terms there may not be much difference.
In the real world, we must consider third-party effects, and this makes it difficult to assign utility. For example, a person aware of the affair may suffer through that knowledge. If you answer B, it does suggest that an affair that remains forever secret is a good thing – unless the person suffers guilt. This is not the same as saying an affair is a good thing, because there is a high probability that it will not remain secret.
Your example of the implant is also very interesting. Again, if we can rule out third-party effects, should we put somebody into the matrix if it would make them happier? If you answered B, presumably the answer here is yes. However, it is difficult to rule out third-party effects. The individual will not be able to perform useful work, so will presumably be a drain on everyone else.
If we imagine a case where the victim could not perform any useful work anyway, perhaps they are “locked in” – have no voluntary movement. In this situation, I think it likely that many would say that the hallucination would be the best option.
‘To me, it seems crystal clear that we should ignore the preferences of the dead. ‘
But if we do that then no one would have an incentive to leave a positive sum estate or create a trust for a long term charitable purpose.
Furthermore, a lot of Constitutional theory and things like stare decisis fall down once you say ‘what the dead wanted or what they meant or what they intended should be ignored’. There’s no point having laws and a Constitution, or Families, or Churches, if what dead people wanted, intended and meant, are considered irrelevant.
In any case, these ‘nightmare scenarios’ don’t even tell us much about what our own unconscious affiliations are w.r.t the issue of dead people’s preferences because small changes in the information set, the introduction of irrelevant alternatives and so on could cause us to reverse our decision and then reverse it again and so on.
Steve wrote:
Our commenter Martin raised the interesting possibility that we have an externality problem here: Each individual is thinking of his/her own happiness and not necessarily that of the other potential survivors. But I’m not sure this is right (though I’m also not sure it’s wrong). If four of us all have the same preferences, then accounting for my own preferences automatically accounts for everyone else’s.
Sorry if this already came up in the comments – I can’t scroll through them all – but I think this is wrong, Steve. For sure, I think it’s wrong if we generalize your approach, but maybe you didn’t intend to generalize it, and meant it to apply just to the case of 4 people.
Anyway, in just about any problem involving externalities, you can assume people have identical preferences, and even if you assume that those (identical) preferences involve altruism–even high degrees of altruism–then you still get inefficiency, right?
I’m not going to bother thinking it through, but my hunch is that (a) you are wrong to deploy this argument against Martin but that (b) even if you are right, you got unbelievably lucky.
Bob Murphy:
(I also meant to say this to an earlier commenter who I misled in the same way; my apologies for not getting around to that, and for being too rushed to check right now on who it was):
When I say “identical preferences” I mean literally identical preferences — e.g. if Alice’s first choice is for Alice to rule the universe, and if Alice and Bob have identical preferences in this sense, then Bob’s first choice is also for Alice to rule the universe. In other contexts, identical preferences means that if Alice would rather have an apple than a banana, then Bob would rather have an apple than a banana. In the current context, I mean it to say that if Alice would rather have an apple than a banana, then Bob would also prefer for Alice to have an apple than for Alice to have a banana.
Among the various choices on the table, everyone’s preferences here are identical in the strong sense in which I am using the phrase (as I should have clarified). Of course, regarding some options that are *not* on the table (e.g. “Kill only Alice” and “Kill only Bob”), they might not have identical preferences in this sense.
Given this strong sense of identical preferences among all options on the table, I don’t see how there can be an externality problem. Make anyone a dictator, and s/he will choose exactly what everyone else wants.
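If it helps, here is a minimal Python sketch of that strong sense (the encoding and names are my own hypothetical illustration): when everyone ranks whole outcomes identically, any dictator reproduces the unanimous choice.

    # Options on the table, and one shared ranking over whole outcomes
    # (best first). In the strong sense, the ranking is literally the
    # same list for every person.
    on_table = ["A: one member of each couple dies",
                "B: one whole couple dies"]
    shared_ranking = ["A: one member of each couple dies",
                      "B: one whole couple dies"]
    rankings = {p: shared_ranking for p in ["Alice", "Bob", "Carol", "Dan"]}

    def dictator_choice(person):
        """Let one person dictate: their top-ranked on-table option."""
        return next(o for o in rankings[person] if o in on_table)

    choices = {p: dictator_choice(p) for p in rankings}
    print(set(choices.values()))  # a single element: nothing to internalize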
Oh OK yeah I had assumed you meant “identical preferences” in a looser sense.
To what extent do we value the concerns of the dead – or those who we anticipate will be dead shortly?
Potentially analogous question: What strategy should the state employ when choosing which property to condemn for a public purpose (e.g., highway, electric transmission line, gas pipeline, prison, etc.)? Some people’s property will be condemned entirely; by analogy, they will be “killed.” Other people’s property will not be condemned in the legal sense, but will be adversely affected by proximity to the public purpose; by analogy, these people will “survive.”
Should the state focus solely on minimizing the harm done to those who will be killed? Should the state focus solely on minimizing the harm to those who will survive? Should the state balance these concerns?
And how should the state weigh the competing concerns of survivors? Should the state identify some fixed amount of interest to accord to each survivor (e.g., give equal weight to the concerns of everyone living within X distance of the new structure, and ignore the concerns raised by people living beyond that distance)? Or should the state acknowledge everyone’s claims, but weight them differently (e.g., weigh people’s concerns as a function of their proximity to the new structure)?
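A toy Python sketch of these two weighting schemes (the functional forms, the cutoff, and the numbers are all my assumptions):

    def cutoff_weight(distance_m, cutoff_m=500.0):
        """Equal weight inside a fixed radius; zero weight beyond it."""
        return 1.0 if distance_m <= cutoff_m else 0.0

    def proximity_weight(distance_m, scale_m=500.0):
        """Weight that decays smoothly with distance from the structure."""
        return 1.0 / (1.0 + distance_m / scale_m)

    for d in (100, 499, 501, 2000):
        print(d, cutoff_weight(d), round(proximity_weight(d), 3))
    # The cutoff rule treats 499 m and 501 m completely differently;
    # the proximity rule treats them almost identically.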
It’s curious to ponder the shifting outcomes of various scenarios. If you advance a strong claim regarding the problems of living in proximity to the new structure, you may get more compensation – or even get the structure moved away from you. Alternatively, the state may find that the optimal way to deal with your concerns is to condemn your property outright, thereby eliminating your concerns entirely.