The death this week of Nobel laureate (and relativity denier!) Maurice Allais reminds me that I’ve been meaning to blog about Allais’s famous challenge to the way economists think about rational decision making.
I’m going to ask you two questions about your preferences. In neither case is there a right or a wrong answer. A perfectly rational person could answer either question either way. But I do want you to think about your answers, and to write them down before you read any further.
Question 1: Which would you rather have:
A. A million dollars for certain
B. A lottery ticket that gives you an 89% chance to win a million dollars, a 10% chance to win five million dollars, and a 1% chance to win nothing.
Try taking this seriously. What would you actually do if you faced this choice? Don’t bother trying to figure out the “right” answer, because there is no right answer. Some perfectly rational people choose A, and other perfectly rational people choose B.
Okay, ready for the next question?
Question 2: Which would you rather have:
A. A lottery ticket that gives you an 11% chance at a million dollars (and an 89% chance of nothing)
B. A lottery ticket that gives you a 10% chance at five million dollars (and a 90% chance of nothing)
Once again, this is a matter of preference. There is no right or wrong answer. But decide what your answer is and write it down before you continue.
Okay, ready? As I said, there is no such thing as an irrational answer to either of these questions. But arguably, there is such a thing as an irrational pattern of answers. If you answered “A” to both questions, that’s fine. If you answered “B” to both questions, that’s also fine. But if you answered “A” to the first and “B” to the second, as many people do, then we have a problem here.
To see why, I want you to answer one more question. Imagine I’ve got an urn with 89 red balls, 10 black balls and 1 white ball. I’m going to write dollar amounts on these balls, then let you choose one at random and award you the corresponding prize. I’ve already written some dollar amounts on the red balls but I haven’t gotten around to the blacks and whites yet.
Question 3: Which would you prefer that I write?
A. One million dollars on each of the 10 black balls and one million dollars on the white ball.
B. Five million dollars on each of the 10 black balls and zero on the white ball.
Try writing down your answer to that one.
Now: I claim that if you are a rational person in the sense that economists traditionally understand the word, your answer to Question 3 must be the same as your answer to Question 1 — because Question 3 is the same as Question 1, provided I’ve written “one million dollars” on each of the 89 red balls.
And I also claim that if you are that rational person, your answer to Question 3 must be the same as your answer to Question 2 — because Question 3 is the same as Question 2, provided I’ve written “zero” on each of the 89 red balls.
So if you answered “A” to all three questions, congratulations — economists pronounce you rational! Likewise if you answered “B” to all three questions. But if your answers included both an “A” and a “B”, economists see a problem here — though as Allais pointed out, a lot of people give mixed answers, so you’ve got plenty of irrational company. Whether the problem is really with you or with the economists is a question I’ll come back to in a few days, once you’ve had time to ponder this.
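The inconsistency claim here can be checked numerically. Below is a sketch (mine, not from the post) that assigns arbitrary utility values to the three prizes and searches for any assignment under which expected-utility maximization picks A in Question 1 but B in Question 2; the function name and sampling range are my own choices.

```python
# Sketch (not from the post): brute-force check that no assignment of
# utilities v0, v1, v5 to the prizes $0, $1M, $5M can make answer A the
# expected-utility choice for Question 1 AND answer B for Question 2.
import random

def prefers_A_then_B(v0, v1, v5):
    """True if these utilities imply A for Question 1 and B for Question 2."""
    # Question 1: A = $1M for sure; B = 89% $1M, 10% $5M, 1% $0.
    q1_a = v1
    q1_b = 0.89 * v1 + 0.10 * v5 + 0.01 * v0
    # Question 2: A = 11% $1M, 89% $0; B = 10% $5M, 90% $0.
    q2_a = 0.11 * v1 + 0.89 * v0
    q2_b = 0.10 * v5 + 0.90 * v0
    return q1_a > q1_b and q2_b > q2_a

random.seed(0)
hits = sum(prefers_A_then_B(*(random.uniform(-1e6, 1e6) for _ in range(3)))
           for _ in range(100_000))
print(hits)  # 0 -- no sampled utility assignment produces the A-then-B pattern
```

The search comes up empty because choosing A in Question 1 requires .11·v1 > .10·v5 + .01·v0, while choosing B in Question 2 requires the exact opposite.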
Monkeys have the same types of biases:
http://www.ted.com/talks/laurie_santos.html
It seems I am ‘irrational’, but here’s how I work it.
Using the urn and 100 balls.
Question 1)
Option A) 100 balls marked $1M: avg of $1M per ball, 100% success rate
Option B) 89 balls marked $1M, 10 balls marked $5M, 1 ball marked $0: avg of $1.39M per ball, 99% success rate (i.e. not ending up with zero)
So, an extra 39% more money in exchange for a 1% chance of losing.
Question 2)
Option A) 11 balls with $1M, avg per ball, $110,000, 11% success rate
Option B) 10 balls with $5M, avg per ball, $500,000, 10% success rate.
So, an extra ~350% return in exchange for a 1% increase in the chance of losing.
There’s definitely some psychology going on here. In game one, the pay-offs aren’t that different, but going from no chance to lose to a small chance to lose is a big step, so I’ll go with A. I imagine in game two, I’m likely to lose, so I’ll go for the significantly bigger pot of gold for a negligible reduction in winning odds, B.
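The per-ball averages in this comment check out; a quick verification using the comment's own numbers:

```python
# Quick check of the per-ball averages quoted in the comment above.
q1_b_avg = (89 * 1_000_000 + 10 * 5_000_000 + 1 * 0) / 100
q2_a_avg = (11 * 1_000_000 + 89 * 0) / 100
q2_b_avg = (10 * 5_000_000 + 90 * 0) / 100
print(q1_b_avg)  # 1390000.0 -> $1.39M per ball, as stated
print(q2_a_avg)  # 110000.0
print(q2_b_avg)  # 500000.0
```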
“Rational” is such a loaded word and means different things to different people. So when I teach the independence axiom to students, I make it a point not to use it. Sure, economists have a precise definition for it, but not everybody shares that definition. As you say, many people would make the choice A-B and I wouldn’t want to imply that they are crazy, even if I find independence to be normatively compelling.
I think it is pretty clear that humans are not entirely rational creatures, in the sense the economists mean. That is not to say that people do not respond to incentives, or are entirely unpredictable. It is partly because people cannot take on board all the data, such as in the above example, so we use shortcuts. This is a perfectly sensible strategy, but it leads to individual decisions that are not “rational”, i.e. not optimal for the stated goals. Once we have made a decision, we are usually pretty keen to stick by it, so we will then ignore evidence that contradicts our view. This makes future decisions even less rational. You can sometimes define them as rational by assigning values to the emotional benefits derived, but I think that introduces unnecessary complications, and verges on tautology.
Say someone answered A for Q1 and B for Q2. This could be an “irrational” over-emphasis on the word “certain” in A and “nothing” in B for Q1. It makes A appeal more emotionally. In Q2 we have a pretty large chance of nothing in both, and we don’t distinguish much between 89 and 90, so we can focus on the difference between 1 and 5 million, a big difference. This gives B the greatest emotional appeal.
I presume the economists’ view would be that the person either wishes to maximise the statistically expected payout, or minimise the chance of getting nothing. By choosing A on Q1 and B on Q2, the person is being inconsistent, and therefore irrational.
You could say the person would feel so bad if they got nothing in scenario 1, knowing they could have definitely got $1M with the other choice, that this makes the decision rational. I say tautology.
Perhaps economists mean something different by the word “rational”: that is, people act to attempt to satisfy their ends. It says nothing about whether those ends are appropriate, or whether the means will work.
Found this @ econguru.com:
“A key assumption for economic analysis is that individuals, be it a person, a family or a firm, tend to make choices and select alternatives rationally, that they believe in their best interest. By rational, economists mean simply that people try to make the best choice they can, given the available information and resource. Uncertainty exists and people do not know what will turn out to be the most self-benefiting, so they simply select the alternatives that they expect to yield them the most satisfaction and happiness or the ones with the highest possibility to achieve it. In general, rational self-interest means that given a certain condition, individuals try to minimize the expected cost for a benefit or maximize the expected benefit with a cost.”
It says nothing about whether decisions are rational in the layman’s sense of the word.
In my opinion, this isn’t an economics question, but rather a question of numeracy. Anyone with an understanding of basic probability concepts (I won’t even say theory) will make consistent choices in the above examples.
If someone who doesn’t hold this knowledge makes inconsistent choices, it is unacceptable to call them “irrational.” They are making perfectly rational choices, however they lack sufficient numeracy to always know what their “best” choice is, according to their individual preferences.
There is a big push in economics these days to say that people “aren’t rational,” but in such a world there is no use for economics at all. If people don’t behave rationally, then they don’t behave predictably, and all economic analysis is futile.
On top of that, a belief in fundamental human irrationality is misanthropic.
Ryan: An interesting point.
For those who choose option A for question one and B for question 2, maybe risk appetite is a function of wealth. Question 1 can be rephrased as “You have $1 million. You can do nothing, or you can gamble for a 10% chance to win another $4 million and a 1% chance to lose it all.”
This kind of reminds me of the “Win for Life” lottery commercials with the guy who won walking around in a suit of armor to protect himself…
I’m not an economist, so I tend to think ‘rational’ means consistent in the sense that your answer will optimize some cost function.
I understand that if the cost function is the expected value of any function of your winnings, then you will pick the same option (A or B) for both questions. Maximizing or minimizing any cost function assigned to the outcomes will lead to option A or B for both questions.
But what if cost functions aren’t your thing? What if, instead, you don’t like uncertainty? Is that a rational thing to optimize? Suppose you want to create situations for which the uncertainty associated with the outcome is minimized, and you define uncertainty through something like Shannon’s entropy. For Question 1, option A has less uncertainty than option B. For Question 2, option B has less uncertainty than option A. Why is this irrational?
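The entropy comparison described here can be made concrete; this is a sketch of that calculation (function name mine), using Shannon entropy in bits:

```python
# Sketch of the entropy argument above: Shannon entropy of each lottery's
# outcome distribution (in bits). Zero-probability outcomes contribute nothing.
from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

q1 = {"A": [1.0], "B": [0.89, 0.10, 0.01]}           # Question 1 lotteries
q2 = {"A": [0.11, 0.89], "B": [0.10, 0.90]}          # Question 2 lotteries

print(entropy(q1["A"]) < entropy(q1["B"]))  # True: Q1's A is less uncertain
print(entropy(q2["B"]) < entropy(q2["A"]))  # True: Q2's B is less uncertain
```

An uncertainty-minimizer really does pick A for Question 1 and B for Question 2, which is exactly the "irrational" pattern.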
Do I get to play this game once or many times? If I play once, my answers are different. If I play multiple times, my answers are the same.
The previous comments cover most of the rational explanations for my giving an inconsistent answer. I will add one more: “the lottery concept.”
If the choice is between winning one dollar and winning $100 million, many/most people will pick the $100 million (even if that choice has a lower probability-adjusted value) because that outcome will materially change one’s life. This illustrates that expected value is not the only criterion for decision-making.
In 1999, Alan Greenspan rationalized the internet bubble as a real-world illustration of this phenomenon.
There is another point here lurking in the wings, and it matters a lot in situations where you make decisions repeatedly (play the game more than once). You can learn, and adapt. Either that or those who do will squeeze you out.
This is important because anomalies like this — and my gut reactions were A and B — are used to argue that the rational actor basis of market theory is wrong. And that is not so. All it means is that you have to learn, and if you don’t you’ll get outcompeted. In other words, markets force us to be more rational. (Intuitive judgements in real games like backgammon can be quite wrong too; that doesn’t mean people cannot learn to play better backgammon.)
What Thomas Purzycki said. I went with A for #1 and B for #2 using very simple logic. A certain million would easily resolve all my financial difficulties (not terribly difficult ones, mind you) and provide a comfortable boost to my family’s well-being for at least a decade. The additional marginal benefit of possibly having another four million is simply not worth the 1% chance of having nothing.
For #2, on the other hand, it’s a small chance of winning a million versus a very slightly smaller chance of winning five. Might as well go for the five in that case.
Introspecting, I think it’s the chance of getting nothing that weighs on my mind. Going from a 0% chance to a 1% chance is a big jump. Going from a 90% chance to an 89% chance is next to nothing.
Ryan,
It’s not so black and white. The consequence of “people aren’t rational” isn’t “all economic analysis is futile” because the statement doesn’t actually mean “all people are completely irrational all the time”. What does seem to be true is that “rationality” in the economic sense is one of multiple components that drive how people act.
Economists do tend to act as if that weren’t the case (even many who say they understand otherwise), which does make a lot of economic analysis much less useful than it purports to be, and the fix wouldn’t necessarily make it more useful – it would only make it purport to be about as useful as it actually is. But that wouldn’t make it completely futile, either.
“Monkeys have the same types of biases:
http://www.ted.com/talks/laurie_santos.html”
there’s a very good reason for that.
I agree with Thomas Purzycki. I chose the answers that would provide the greatest expected payout. The decision to do this (and therefore my appetite for risk), however, was largely based on my current level of wealth and expectations of future wealth. If I were dirt poor, I would definitely choose the answer with more certainty of getting a million dollar payout. Once you get to the “millionaire” level from a low level of wealth, each additional million provides diminishing marginal utility. This is weighed vs. the prospect of remaining poor.
Cos – I refuse to engage in a discussion about whether human beings are rational “all of the time,” “some of the time,” or “never.”
Such determinations are value judgements. You are deciding for yourself the extent to which a person behaves “rationally.” In other words, you are taking your own concept of rationality and applying it universally to all other human beings. And you say my comment was black-and-white?
In reality, people always act rationally unless they have cognition problems (read: psychologically diagnosable mental deficiency). They might have different values than you, or more or less knowledge about a particular market segment, but they always act under rational self-interest.
To assume otherwise is to assume you know better. I’m here to tell you: You don’t. (Hey, it’s okay – I don’t either!)
I agree with Thomas Bayes. I was attempting to minimize my uncertainty. For question 1 A, I am guaranteed to win $1 million, so I picked that. For question 2 B, I was willing to trade off a little certainty for a bigger pay-off. And if all 4 choices in questions 1 and 2 were combined into a single question my preferences would be: 1A > 2B > 1B > 2A.
Sol —
Let’s quantify your preferences.
First, let’s assign a value to receiving no money. Because you would rather have money, let’s put a negative value on this:
V0 = -10.
Next, let’s assign a value to receiving one million dollars. This is a very good thing that resolves your financial difficulties, so let’s put a large positive value on this:
V1 = 1000.
Finally, let’s assign a value to receiving five million dollars. Because this has little marginal value to you over having one million dollars, let’s set its value as:
V5 = 1001.
Now, we can determine the expected values associated with your answers to Question 1.
For Answer A, the probabilities that you receive 0, 1 or 5 million dollars are:
p0 = 0; p1 = 1; and p5 = 0,
and the expected value for this answer is:
EA = V0*p0 + V1*p1 + V5*p5 = 1000.
For Answer B, the probabilities that you receive 0, 1 or 5 million dollars are:
q0 = 0.01; q1 = 0.89; and q5 = 0.1,
and the expected value for this answer is:
EB = V0*q0 + V1*q1 + V5*q5 = 990.
Evidently, then, you should take Answer A.
You could change the relative values, and, if you did, you would pick Answer A when EA is larger than EB. The difference between these two expected values is:
EA – EB = V0*(p0-q0) + V1*(p1-q1) + V5*(p5-q5),
so you would select Answer A anytime this difference is positive. The problem statements sets the probabilities (p0, p1, p5) and (q0, q1, q5), and you set the values (V0, V1, V5).
Now look at Question 2. The analysis we used for Question 1 would still hold, and, assuming you don’t change your relative values for the outcomes, the Answer we pick would only depend on the differences p0-q0, p1-q1, and p5-q5.
For Question 2:
p0-q0 = 0.89 – 0.90 = -0.01
p1-q1 = 0.11 – 0 = 0.11
p5-q5 = 0 – 0.1 = -0.1
But these differences are exactly the same as for Question 1:
p0-q0 = 0 – 0.01 = -0.01
p1-q1 = 1 – 0.89 = 0.11
p5-q5 = 0 – 0.1 = -0.1
So, regardless of the marginal value you put on having 5 million dollars over 1 million dollars, you should select the same answer (A or B) for both questions, PROVIDED that you are optimizing the expected value associated with your answer. This is true regardless of the relative values you associate with each outcome.
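Thomas Bayes's argument is easy to verify numerically; here is a sketch using the same illustrative values (V0 = -10, V1 = 1000, V5 = 1001; the helper names are mine):

```python
# Check of the expected-value argument above, using the comment's
# illustrative values V0 = -10, V1 = 1000, V5 = 1001.
V = {"0": -10, "1M": 1000, "5M": 1001}

def expected_value(p0, p1, p5):
    return V["0"] * p0 + V["1M"] * p1 + V["5M"] * p5

# Question 1: A = certain $1M; B = 1% $0, 89% $1M, 10% $5M.
print(expected_value(0, 1, 0))                    # 1000 (EA)
print(round(expected_value(0.01, 0.89, 0.10), 6)) # 990.0 (EB)

# The probability differences (pA - pB) are identical for both questions,
# so the sign of EA - EB never depends on which question is being asked.
q1_diff = (0 - 0.01, 1 - 0.89, 0 - 0.10)
q2_diff = (0.89 - 0.90, 0.11 - 0, 0 - 0.10)
print(all(abs(a - b) < 1e-9 for a, b in zip(q1_diff, q2_diff)))  # True
```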
@Thomas Bayes
‘Rational’ means that given a set of outcomes that are dependent on your actions and given that you have a (well-ordered) preference of the outcome you will chose the action that maximizes the payoff (based on your preferences).
The uncertainty isn’t really the issue. Suppose I rewrote the first question so the options are:
A. Ten dollars for certain
B. A lottery ticket that gives you an 89% chance to win a million dollars, a 10% chance to win five million dollars, and a 1% chance to win nothing.
If you still chose A, that would be very unexpected and rather silly. People calculate the risk in the payoff of the action (if you ever drive a car, you do this every day). Minimizing cost while ignoring benefit doesn’t make any sense. You’re right, though, that some people are risk-averse and can make irrational decisions, but I’m not an economist either, so I’ll let one take the lead on that discussion.
Apples and oranges. Question 1 is qualitatively different from question 2.

In question 1, you have the option of an assured million dollars (via answer A). Thus, answer B is really asking, “Would you bet a million dollars with an 89% chance of no change, a 10% chance of quintupling your money, and a 1% chance of losing it all?”

Question 2 says you have a choice of two gambles, each with a low probability of success. Are you willing to reduce the probability of winning by a slight amount to substantially increase the potential payoff?

So, question 1 is “Are you willing to gamble?”, while question 2 is “You must gamble – do you go for the better expected return?”

If economists fail to see these questions as being substantially different, I must conclude that their idea of what constitutes “rational” is flawed.
Followup thoughts:

If you’re looking to test rationality, change the game in question 1. Same rules, but specify that the person will be committed to the same answer for five rounds of the game. I bet a substantial percentage of people who would give answer A for the original game would change to answer B. Five rounds gives the laws of chance enough applicability to counter a single bad-luck round.

Five rounds of question 2 would probably not make much difference in the answer choices.
Thomas Bayes: brilliant. This puts numbers on what I intuitively thought: that the “emotional” value of going from 0% to 1% causes you to put “too much” weight on it, and act in an “irrational” way. However, as Ryan said, the individuals concerned are acting rationally, because they are doing what they believe to be in their best interests. This is because it is quite hard to figure out the relative “value” of the options in Q2.
I believe studies have shown that people tend to overestimate the frequency of unusual events, and this leads to this type of “error”. As Ryan said, this could be thought of as acting rationally, because the behaviour is based on a genuinely believed error of perception. I think this gives economics a bit of a dilemma.
We talked earlier about efficiency. We had various scenarios, including town A trying to decide whether to build a safety rail. The efficiency was decided by the amount that people chose to pay to reduce risk. We have evidence here that people are not good at estimating extreme events. Let’s say that we know for certain (somehow) that the benefit of a 1-in-a-million risk reduction is $100, and that this represents the “true” value. Let’s say then that the risk reduction from building the safety rail is actually 1 in 10 million. The “true” value is therefore $10. However, because of people’s erroneous estimations of unlikely events, the value is perceived to be $5. If the cost per head of the rail is $7.50, is it efficient?
@Thomas Bayes
You are assuming that a person’s preferences cannot change. Just as it is fair to assume that different people assign different values to outcomes, I think it would also be fair to assume that the same person could assign different values to different outcomes depending on his or her situation. In question 1, your starting position is $1 million richer than in question 2.
(some) Economists remind me a great deal of the Behaviorists in psychology, who were just finally starting to fall out of favor when I started studying the subject. The Behaviorists also had very neat mathematical models for predicting how people would behave, how positive and negative reinforcements would operate, how people would always seek to minimize pain and maximize pleasure, and so on.
And every time behavioral predictions were put to the test in human experiments you would find all sorts of deviation from the predicted model behaviors. Some of the deviation was even quantifiable, but it required abandoning core precepts and predictions of the Behaviorists’ theories. I expect that current models of rationality in economics will go the same route.
Ryan: “In my opinion, this isn’t an economics question, but rather a question of numeracy. Anyone with an understanding of basic probability concepts (I won’t even say theory) will make consistent choices in the above examples.”
Really? I’ve known of Allais’s paradox for decades, had the ability to do the math for longer, and would choose A then B. Of course, I never talk about rationality when analyzing the data from my economic experiments. And I suspect you’re taking a different focus.
To be fair, I did the math. But, I have the advantage of knowing I would be devastated if I selected 1B over 1A and earned zero and would only be disappointed getting zero in 2B or 2A. I’ve known for a while I violated independence in some cases.
Rationality is in the eye of the beholder, isn’t it. Would we have much need for competition if it weren’t?
B/B/B
Tristan–I think you are confusing rationality with risk-attitude (neutral/averse/seeking). The rationality of this puzzle is more a measure of whether you exhibit the same risk attitude in three different situations. Or, is the criterion by which you decide your actions the same in each case?
I am risk-neutral because status quo is well within my comfort level. So I choose the option with the highest “expected-value” in each case.
My wife is risk-averse from growing up in rural Oklahoma so she chooses the one with the surest chance of a payout (any payout) in each case. Even taking a certain million dollars over a 99:1 chance at 5 million.
Both of these are rational because our criteria are exercised in a way that yields the highest value to each of us (substituting the value of certainty for $$ in my wife’s case).
It’s that million dollar sure thing v. 99% chance of $5m that ruins it though. Most people go for the 99%. To assume this makes them irrational assumes a constant risk aversion that just does not exist in nature. Even behavioral economists have thrown this out as a useful concept.
Is it really correct to call Allais a “relativity denier”. He certainly disputed Einstein’s priority, as have others, but wouldn’t that be kind of pointless (irrational?) if he denied relativity’s validity? Why dispute priority for a wrong idea? I seem to recall that Allais did some work in physics that supposedly contradicted some aspects of relativity, but I don’t remember what it was.
I see many comments justifying the A-B choice, but not any addressing the issues brought up by Question 3. What I would like to know is how many people answered A-B, and then switched one of their answers after reading Question 3 and Steven’s explanation. Did anyone find that reasoning convincing? If not, why not?
Ron–
“In question 1, you have the option of an assured million dollars (via answer A). Thus, answer B is really asking, ‘Would you bet a million dollars with …?’”
You are suggesting that a person with risk-seeking behavior cannot act rationally?
Right…Play me a game of coin toss?
A> You put up $100. Heads I keep it, Tails you get $200 back.
B> You put up $50. Heads I keep it, Tails you get $100 back
C> You put up $1. Heads I keep it, Tails you get $2 back
Answer yes to all of them or none of them and you display a constant risk aversion.
D> You put up $100. Heads I keep it, Tails you get $110 back
E> You put up $10,000. Heads I keep it, Tails you get $11,000 back
A person who holds a risk-seeking attitude could rationally choose yes to A-E.
It is not a question of first do you want to gamble. For a risk-neutral person a 50-50 chance at $200 has the same “expected value” as a $100 bill. Since they have the same value choosing one or the other is not a gamble.
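The coin-toss menu above is easy to put in expected-value terms; by this reckoning A through C are fair bets for a risk-neutral player, while D and E are worse than fair (a quick sketch, variable names mine):

```python
# The coin-toss bets above: on heads you lose your stake, on tails you
# receive the payout back (so the net gain on tails is payout - stake).
bets = {
    "A": (100, 200), "B": (50, 100), "C": (1, 2),
    "D": (100, 110), "E": (10_000, 11_000),
}
evs = {}
for name, (stake, payout) in bets.items():
    evs[name] = 0.5 * (payout - stake) + 0.5 * (-stake)  # expected net gain
    print(name, evs[name])
# A, B, C come out to 0.0 (fair bets); D is -45.0 and E is -4500.0.
```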
Without getting into the problem posed above, it might be instructive to read Gary Becker’s article (I’ve forgotten the title.) demonstrating that the basic tenets of economic theory do not require the assumption of individuals behaving in a “rational” manner. Even if habit or random behavior characterizes individual behavior, the basic elements of economic theory, for instance the downward sloping demand curve or rising supply curve, are still valid.
@33 Benkyou Burito
> You are suggesting that a person with risk-seeking behavior cannot act rationally?

No, I’m not suggesting that. I’m suggesting that someone who is seeking to avoid risking $1 million, despite promises of potentially lavish rewards, cannot realistically be accused of acting irrationally.

As for your A, B, C coin toss, you run into the fallacy of equating the payoff to the pure dollar amounts. You thus assume it’s perfectly linear for all dollar values. That’s not necessarily true. In practice, answering yes to some of them and no to others can also display a constant risk aversion, IF the dollar amounts are translated to utils.

You may see things like this translated to utils, which will differ from person to person, because the value of a particular gain or loss is different from person to person.

“One of the most common uses of a utility function, especially in economics, is the utility of money. The utility function for money is a nonlinear function that is bounded and asymmetric about the origin.” (from the Wikipedia entry on Utility)

The utility value of risking $1 to win $2 is not necessarily the same as risking $1 million to win $2 million, even if the odds are the same.
I answered A-B-A. Question 3 doesn’t really help because knowing what’s written on the balls makes a difference to my desired result! I answer A to question 3 because in the absence of knowledge of what’s written on the balls, zero is my Schelling point – I assume it’s similar to question 1.
When I was on a blackjack team, my team used strategy index numbers that were CE – “certainty equivalent” – adjusted. What that means is that we did NOT play in order to merely maximize EV – “Expected Value” – for every hand. Rather, we tried to maximize EV *while* minimizing variance. High-variance plays make you more likely to lose your whole bankroll, which is such a bad thing that we were willing to give up some amount of expected value to reduce the chance of that happening. Same thing here.
In short, my utility function for making a bet cares about not just the mean expected return but also the general shape of the probability distribution.
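The certainty-equivalent idea sketched here amounts to scoring each lottery by both its mean and its spread; a rough illustration for the four lotteries in the post (helper names mine; payouts in millions):

```python
# Sketch of the mean-vs-spread view described above: each lottery's
# expected payout and standard deviation, in millions of dollars.
from math import sqrt

def mean_sd(outcomes):
    """outcomes: list of (probability, payout) pairs."""
    mean = sum(p * x for p, x in outcomes)
    var = sum(p * (x - mean) ** 2 for p, x in outcomes)
    return mean, sqrt(var)

lotteries = {
    "1A": [(1.0, 1)],
    "1B": [(0.89, 1), (0.10, 5), (0.01, 0)],
    "2A": [(0.11, 1), (0.89, 0)],
    "2B": [(0.10, 5), (0.90, 0)],
}
for name, outcomes in lotteries.items():
    mean, sd = mean_sd(outcomes)
    print(f"{name}: mean={mean:.2f} sd={sd:.2f}")
```

By this measure, 1B buys its extra $0.39M of mean by jumping from zero spread to about 1.21, while 2B buys the same extra mean without the qualitative jump from certainty to risk, consistent with the A-then-B intuition.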
I think Ryan has a point when he mentions numerical limitations. I correctly calculated the expected return for each option but I still chose inconsistently. I didn’t know why I made this decision but I had the intuition that it had something to do with my declining marginal demand for money and with the details of my risk-aversion curve.
Thomas Bayes has shown me that I was wrong about the declining marginal demand for money. Apparently, I should have chosen consistently no matter how my demand for money changes with my wealth. Thomas made the effort to explain the matter and I benefitted from it.
After reading Thomas’s comment I went to Wikipedia to see if I could put numbers to my degree of risk aversion under different payoffs and wealth levels. Trying to make sense of the measures of risk aversion, whether they are relevant to the present problem and, if so, how I could apply them would cost me time and effort. I am not willing to go through the effort.
Steve asked us to examine the options and write down our choices. My choices were inconsistent and I was labeled irrational. But there are no real financial stakes here. The options are purely hypothetical. I would not gain or lose any money by writing down a mathematically consistent pattern of choices. Why would I try hard to overcome my numerical limitations?
If real, substantial money were at stake I would make a greater effort or seek advice. But again, it is very difficult for me to estimate in advance the optimal effort or money I will need to invest to arrive cost-effectively at the optimal decision. I will probably invest too much effort or too little. Does this mean I am irrational?
@Ron
Are you *really* trying to say that you would never choose a ‘gamble’ over a sure thing? What if the choice was between a certain $10 or a 90% chance at $10 million? What if the choice was between a certain million or a 99.999% chance at $10 million? Just because you might be more risk-averse doesn’t mean probability isn’t in the equation at all.
I answered B, B, B.
I think of being rational as using a predictable rationale for making decisions. For simple questions like the three in this post, the rationale for making decisions should be easy to write down in advance of seeing the details of a particular question.
One way to do this is to assign numerical values to the three outcomes: V0, V1, and V5. (These can be any numbers — positive or negative — but you should decide on these values before seeing the possible answers, otherwise you will be letting the question decide how valuable the three outcomes are for you. That will not be viewed as rational.) After you have assigned values, you can compute the expected value for each answer and pick the one for which it is largest. Because the differences between the probabilities for the two answers are the same for both questions, you will always pick answer A or answer B, regardless of the question. (I posted details on this earlier.)
Any objection to this answer must be rooted in a belief that either (a) it is okay to set the values for the different outcomes after you see the two answers; or (b) you should make your decision based on something other than the expected value associated with the answers.
If you believe (a), then you are not assigning the value of the outcomes in a rational way. $0, $1M, or $5M should not have more or less value to you because of the wording of a question. (If you think it is really, really, really bad to come away with $0, and you believe the marginal value of $5M over $1M is small, then set the values in advance to something like V0=-1,000,000,000,000, V1=1 and V5=1.01. You’ll pick answer A for both cases, though.)
If you believe you should make decisions based on something other than the expected value associated with the answers, then I’d be interested in learning the rationale you suggest for making decisions.
The common reasons for selecting A for Q1 and B for Q2 are something like this . . .
“For Q1, I will maximize my probability of obtaining at least $1M instead of taking the risk associated with the chance for $5M. For Q2, I will minimize my probability of obtaining at least $1M and, instead, take the risk associated with the chance for $5M.”
I realize I put this in my words, but the spirit is consistent with most of the explanations. What rationale would cause me to do this? How could I describe this in a way that would guide me in making decisions in other situations?
Several people will claim that a probability equal to 1 is special, but I’d like to learn more about why. For anyone who explains that, please use this question as an example:
Question 4:
What value of X, if any, will make you select Answer B over Answer A?
Answer A: $1M
Answer B: X% chance for $0; (100-X)% chance for $5M
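For what it is worth, under expected-utility reasoning (one way, though not the only way, to formalize the question), Question 4 has a clean answer. This is my illustration, not part of the original comment: taking u($0) = 0, Answer B is preferred exactly when (1 − X/100)·u($5M) > u($1M).

```python
# Hypothetical expected-utility answer to Question 4: with u($0) = 0,
# B beats A exactly when (1 - X/100) * u5 > u1, so the largest
# tolerable X is 100 * (1 - u1/u5).
def crossover_x(u1, u5):
    """Largest failure chance X (in percent) at which B is still preferred."""
    return 100 * (1 - u1 / u5)

print(crossover_x(1, 5))     # risk-neutral in dollars -> 80.0
print(crossover_x(1, 1.25))  # $5M barely better than $1M -> about 20
```

Nothing in this calculation makes a probability of 1 special: the threshold moves continuously with the utilities.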
@38 Super-Fly
>Are you *really* trying to say that you would never choose a
>’gamble’ over a sure thing?
Quite the opposite. I’m not even vaguely trying to say I’d
never choose to gamble over a sure thing. In fact, on question
2, I’d choose option B, myself. What I’m saying is that the
payoffs, expressed in utils for a given person, could make a
1:A 2:B perfectly rational.
Given the non-linearity, let’s try question one in what an
economist looking only at dollar choices would consider the same
thing, but most people would see differently:
Question 1a: Which would you rather have:
A. $10 for certain
B. A lottery ticket that gives you an 89% chance to win $10,
a 10% chance to win $50, and a 1% chance to win nothing.
How many who chose “A” for question 1 would choose “B” for 1a?
Ron:
What I’m saying is that the
payoffs, expressed in utils for a given person, could make a
1:A 2:B perfectly rational.
And this is in fact precisely wrong.
Let x be the number of utils associated with 5 million dollars
Let y be the number of utils associated with 1 million dollars
Let z be the number of utils associated with 0 million dollars.
To choose A over B in question 1, it must be the case that y > .89y + .10x + .01z.
To choose B over A in question 2, it must be the case that .11y + .89z < .10x + .90z.
I will leave it as an exercise for you to prove that these inequalities cannot be simultaneously satisfied. (Hint: both reduce to comparisons of .11y with .10x + .01z, in opposite directions.) In other words, no assignment of utils can justify choosing A over B in question 1 and then B over A in question 2.
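The exercise can also be checked mechanically. A quick numerical sketch (mine, not Landsburg's):

```python
import random

# Each inequality simplifies to a condition on .11*y versus
# .10*x + .01*z, with opposite directions, so no utils (x, y, z)
# can satisfy both.  Spot-check with random assignments:
def prefers_a_in_q1(x, y, z):
    return y > 0.89 * y + 0.10 * x + 0.01 * z

def prefers_b_in_q2(x, y, z):
    return 0.11 * y + 0.89 * z < 0.10 * x + 0.90 * z

random.seed(0)
for _ in range(100_000):
    x, y, z = (random.uniform(-1e6, 1e6) for _ in range(3))
    assert not (prefers_a_in_q1(x, y, z) and prefers_b_in_q2(x, y, z))
```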
@41 Steve Landsburg
I don’t dispute your proof; given the three variables you describe,
the result is as you show. The problem is, you have oversimplified
the problem and are missing one variable.
Repeating my earlier note:
In question 1, you have the option of an assured million dollars
(via answer A). Thus, answer B is really asking, “Would you bet a
million dollars with an 89% chance of no change, 10% chance of
quintupling your money, and 1% chance of losing it all?
Thus,
Let w be the number of utils associated with losing 1 million
dollars that were yours. For most people not extremely rich,
the value of w is an absurdly high negative number.
OTOH, if you doubt the existence of w, then would the restatement
of the problem as 1a, earlier, satisfy your rationality test? And
what explanation would you make for the difference in answers
between problem 1 and problem 1a?
Ron: Your latest comment is in one sense completely wrong and in another sense exactly right. Unfortunately, I am pretty sure (but not certain) that you mean it in the first of these senses. All of which requires much further explanation, which will be forthcoming in a separate post before long.
Steve,
I hope this is the beginning of a series on your thoughts about behavioural economics, which you mentioned a couple of weeks ago.
@Ron,
I agree with you that the switch from one certain option in Q1 to no certain options in Q2 does change my assessment of each scenario, and in my mind makes the two questions incomparable.
Is there any room for ‘negative utility’ Steve, or is this, too, irrational in the article’s sense?
Ron:
Losing $1M is the same as having the outcome be $0. So associate a util for this equal to -1,000,000,000. This is your w. Make it even smaller if this is something you really, really, really want to avoid. Now, let’s assume that you consider $1M to be an infinite amount of money, so the util for $1M and $5M are set the same. Make this a util of 1. Then, Answer A will have an average util of 1, whereas Answer B will have an average util of -1,000,000,000*(.01) + 0.99, which is much smaller than 1, so you pick Answer A.
For Question 2, you keep the same util function because you are rational. The average utils are:
A: 0.89*(-1,000,000,000) + .11 = -890,000,000 + .11
B: 0.90*(-1,000,000,000) + .10 = -900,000,000 + .10
So, you will pick A again.
Prof. Landsburg’s point is that for any util allocation you select, you will always pick the same answer (A or B) for the two questions. The reason for this is that the differences between the probabilities for the two answers are the same for the two questions. That is, the probability for $0 in Answer A minus the probability for $0 in Answer B is the same for both questions. Ditto for $1M and for $5M.
Provided you decide based on average util, you have to select the same answer.
I’d like to learn if there is an alternative to using average utility that would result in different answers for the two questions. The answer must be yes for anything that is not a linear function of the probabilities, but I don’t know if nonlinear functions make sense for
making ‘rational’ decisions. Anyone have a comment on this?
@Thomas,
I agree with you, the 1% difference between 100% and 99% is qualitatively bigger than the 1% difference between 11% and 10%.
And quantitatively,
There’s a trade-off in Q1 and not in Q2, so maybe the equations should look more like this, as per your comment:
Q1)
y > .89y + .10x + .01z − y → 1.11y > .10x + .01z
Q2)
.11y < .10x + .01z
Which can reconcile the two answers.
@45 Thomas Bayes
>Losing $1M is the same as having the outcome be $0.
No, it’s not.
You hand X a million dollars. He’s holding it in his hot little
hands when I offer to sell him a lottery ticket with the given odds.
One of the possible results of buying that ticket is to end up with
-$1 million (i.e.: the ticket wins nothing). Passing up the
assured choice A is exactly equivalent to accepting it and then
using it to buy choice B.
Contrast that with offering X the choice between being handed one
of two lottery tickets with varying odds. The worst possible result
is zero, the amount he had before you handed him a ticket.
For Question 1, I’m not sure how to come to the conclusion that either answer can saddle a person with negative $1M, but I suppose the realization that some people will view it that way is the point of Professor Landsburg’s post. For both Question 1 and Question 2, I see only three options: $0, +$1M, and +$5M.
If we are going to think of this in terms of the “loss” of potential gain, then here is an alternative view: Rather than thinking that I will ‘lose’ $1M 1% of the time by selecting option B, I’ll also view option A as breaking even 89% of the time, losing $4M 10% of the time, and gaining $1M 1% of the time. Sure, I ‘lose’ $1M 1% of the time by selecting option B, but I ‘lose’ $4M 10% of the time by selecting option A.
If you can view yourself as potentially ‘losing’ $1M by selecting option B, then you have to view yourself as potentially ‘losing’ $4M by selecting option A.
Here is a new, but identical, game: First, I give you $5M.
Option A: I take $4M with probability 1.
Option B: I take $4M with probability .89; I take $5M with probability 0.01; I take nothing with probability 0.1.
Now Option A guarantees that you will lose $4M. Option B gives you a chance to lose nothing. If losing money that you have been given has a very bad utility, then why isn’t Option B the best one?
Something to think about . . .
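The claimed equivalence of Thomas Bayes's "first I give you $5M" game to Question 1 is easy to verify: just compute the distribution over final wealth for each option. A sketch (mine, amounts in $M):

```python
# Final-wealth distribution of the "first I give you $5M" game:
# start with 5, then an amount is taken back with some probability.
def final_wealth(start, takes):
    """takes: list of (probability, amount taken back), amounts in $M."""
    return {start - amount: p for p, amount in takes}

q1 = {"A": {1: 1.0},
      "B": {1: 0.89, 5: 0.10, 0: 0.01}}

new_game = {"A": final_wealth(5, [(1.0, 4)]),
            "B": final_wealth(5, [(0.89, 4), (0.01, 5), (0.10, 0)])}

# Option by option, the outcome distributions match Question 1.
assert new_game == q1
```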
Ron– You say, “I’m suggesting that someone who is
seeking to avoid risking $1 million, despite promises of potential
lavish rewards, can not realistically be accused of acting
irrationally”
Again, I really think you are mixing up rationality and risk-aversion.
I see how you do this and it’s a little sideways. In Q1 you reason that since A guarantees a $million then choosing B means you must risk that Million in order to take the 99% chance at $1-5million. Right?
>>”Thus, answer B is really asking, “Would you bet a
million dollars with an 89% chance of no change, 10% chance of
quintupling your money, and 1% chance of losing it all?”<<
Factoring out the $1m payout as a starting point then means that A is really asking if you would give up a 90% chance to quintuple your money with a 10% chance of losing $1 million.
Now let’s say (as the example does) that I happened upon, that very day, the million dollars I risk losing by choosing B. And let’s say that there is not a thing in the world that $1m can do to add joy to my life, because I am very comfortable or have very few demands. But there is something I prize that costs $3m (any number significantly greater than $1m). It would be completely irrational for me to “keep” the million dollars instead of taking the 90% chance for $5m.
My example is not nearly as contrived as it sounds. Remember Bill Gates and the water bottle? Money is practically worthless to Gates, but at some point the potential payout of a gamble is worth more to him than the $1m he found in the laundry machine, because at some point the potential payout will be large enough to add value to his life. The $1m alone is incapable of doing that no matter how certain he is to receive it.
Semantics causes some of the problems with this discussion: “rational” can be used in two ways. I think the layman uses it to mean “does what the person perceives to be the best thing,” while the economist requires a certain accurate reflection of the world.
How about someone who is afraid of flying, who says they want to get to their destination the safest way. They erroneously perceive this as driving. You point to data showing that driving is more dangerous than flying, but they still perceive flying to be more dangerous. They quite rationally drive, whereas you perceive their behaviour as irrational, because they have chosen the more dangerous option, while professing it to be the safest.
Choosing A then B is like the fly-o-phobe. You still see yourself as rational, because it seems to you that you have justified your choices, whereas it can be shown that you have not.
I think to be rational in the simplest economic sense, if you wish to travel most safely, you have to choose the safest way to travel. This example shows us that people often behave contrary to this, without realising it.
Going back to the examples, in each case simple economic theory says we should allow the bits of each choice that are the same to “cancel out.” So if we think of each choice in three parts: 89%, 10%, and 1%.
Q1 A is 89:1, 10:1, 1:1
B is 89:1, 10:5, 1:0
Q2 A is 89:0, 10:1, 1:1
B is 89:0, 10:5, 1:0
We “should” let the 89’s cancel out, since they are the same, so we should not care about them. This clearly leaves us with the same choice in both cases. Many people clearly do not behave in this way. Any theory that assumes they do will therefore lead you to incorrect conclusions. Whether we call this “rational” is a bit of a red herring.
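Harold's cancellation step can be made explicit with the 89/10/1 decomposition he lists. This is my sketch of his argument:

```python
# Each option is written in the decomposition above:
# probability block (89, 10, or 1 percent) -> payout in $M.
q1 = {"A": {89: 1, 10: 1, 1: 1}, "B": {89: 1, 10: 5, 1: 0}}
q2 = {"A": {89: 0, 10: 1, 1: 1}, "B": {89: 0, 10: 5, 1: 0}}

def residual(question):
    """Drop the 89% block, which is identical across A and B."""
    return {opt: {blk: pay for blk, pay in blocks.items() if blk != 89}
            for opt, blocks in question.items()}

# After cancellation, Q1 and Q2 pose exactly the same choice:
# A -> both remaining blocks pay $1M; B -> 10% pays $5M, 1% pays $0.
assert residual(q1) == residual(q2)
```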
@48 Thomas Bayes
>Now Option A guarantees that you will lose $4M.
I never had the $5M in the first place unless I could have said,
“Thank you” and walked away. Clearly the optimum strategy for your
new game is to say “Thank you” after you hand me the money, then run
away as fast as I can. If this is truly precluded (you have armed
guards to stop me), then the $5M isn’t mine, it’s merely a tease.
“You may already have won…”.
Ron: I think I see your point, that if you answer A to Q1, then you could have $1M. You could then be offered the gamble in 1B. However, this is not the question. By accepting the $1M you have rejected the chance to take the lottery ticket. All losses and gains must be compared to your position before being asked any question.
Following on from the above comment, people do not do this. Does this make sense? Economic theory at its simplest would say that if you take 1B, and lose, you are exactly as well off as you were before. In a strict economic sense this is right, but in a real sense you are worse off, because you are so regretful you did not take option A. This is the “w” Ron referred to earlier. Thomas Bayes used this in a calculation by assigning negative utils to receiving zero. However, because of people’s misunderstanding, they assign different values to w depending on the question. For Q1, w is huge; for Q2 it is small. Oh dear, we are right back where we started: people behaving irrationally because of a misunderstanding.
Harold–On this premise…”They quite rationally drive, whereas you perceive their behaviour as irrational, ”
This is a common use of the term rational… to mean to act in the most beneficial way. But the key element of rationality in behavioral economics is that a rational-actor will -always- behave in the manner they believe yields the most value (benefit per cost, or benefit per risk, or expected value).
Your fly-o-phobe would be perfectly rational so long as the amount of benefit required to get him on that plane remains roughly constant.
But if he refuses to fly on Tuesdays but can be convinced to fly on Wednesdays even though he believes the risk to be the same, he is not acting rationally.
If he refuses to fly to go to his beloved mothers funeral because it is too risky but will fly to Maine in order to buy lobsters that he is relatively indifferent to, he is not acting rationally.
Yet if you switch it around and have him willing to fly to the funeral but not to get lobsters, he could be acting rationally. It would be rational because he is applying the same decision making criteria in both cases. The funeral just happened to cross the threshold where the lobster did not.
Rationality is never about objective judgement. A person can be acting rationally even while robbing a liquor store high on crystal meth and using a baby as a (small) human shield.
At least part of the problem is with the economists. If I do the math, I can see that the two problems are calibrated so that the move from A to B causes about a $0.33 increase in expected value for every $1.00 of increase in standard deviation. Thus, given a stable risk tolerance, an individual should prefer either both A’s or both B’s. There are 2 problems with this:
First, many study participants don’t have the knowledge to do this type of analysis. Even if they did, are they given tools to do the calculation? If not, the study would simply be measuring the quality of estimates of the important values. If the participants’ estimation abilities are systematically biased, then it would not be surprising to find they make systematically “irrational” choices.
Second, the researchers’ analytical framework is bogus. The assumption underlying the test is that the only two things that matter in assessing the options are expected return and standard deviation of return. When returns are normally distributed, this is correct for people with “reasonable” utility functions. These returns are not normally distributed (even approximately). The researchers can get back to the result that only expected return and standard deviation matter if they assume participants have quadratic utility functions. Quadratic utility functions are not reasonable (they imply that at some level of wealth, utility begins to decrease with increasing wealth). Given reasonable utility functions, the higher order moments of the return distribution matter, and the examples aren’t calibrated so that the higher order moments change in similar ways from option A to B. Therefore, even if the participants had the knowledge and tools to analyze the options, it isn’t necessarily true that the only “rational” choices are AA and BB. I can probably find the citation for the paper that walked me through the math in this paragraph. I’m pretty sure it was by William Sharpe.
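AaronG's calibration figure, roughly $0.33 of expected value per $1.00 of standard deviation, is easy to reproduce. A quick check (mine, amounts in $M):

```python
import math

# Mean and standard deviation of each option, then the ratio of the
# A-to-B change in expected value to the change in std deviation.
def mean_sd(lottery):
    mean = sum(p * x for x, p in lottery.items())
    var = sum(p * (x - mean) ** 2 for x, p in lottery.items())
    return mean, math.sqrt(var)

q1 = {"A": {1: 1.0}, "B": {1: 0.89, 5: 0.10, 0: 0.01}}
q2 = {"A": {1: 0.11, 0: 0.89}, "B": {5: 0.10, 0: 0.90}}

for q in (q1, q2):
    (ma, sa), (mb, sb) = mean_sd(q["A"]), mean_sd(q["B"])
    print(round((mb - ma) / (sb - sa), 3))  # roughly 0.33 in both questions
```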
Benkyou Burito, thanks, all help on this appreciated. So rational behaviour in the economic sense does not require any connection to the “real world”. That makes sense. Perhaps it would be easier to use the term “consistent” instead of rational.
In the risk thing, is it consistent to have a linear benefit / risk reduction model? I.e., if 1 in a million is worth $100, then 1 in 10M is worth $10? It would seem to be so, or you would get a different answer for one 1-in-a-million project than for ten 1-in-10M projects.
In the examples, it seems the A and B choosers are being inconsistent.
AaronG:
The assumption underlying the test is that the only two things that matter in assessing the options are expected return and standard deviation of return.
There is absolutely no such assumption.
Did you notice that the phrases “expected return” and “standard deviation” did not appear anywhere in the argument?
Steve,
I see what you mean (a link in the post to the Allais Paradox would have been helpful). Looking at just the first and second moments would make it seem that choosing 1A and 2B is a “rational” possibility, while the higher moments push back in the other direction. It seems amazing that they always cancel out given well-behaved utility functions, but I’ll believe it in lieu of doing any more digging.
Forget the details of the equations. Here is the situation in a nutshell. If you pick A and B you are inconsistent and hence irrational. THIS IS WRONG. It is wrong as a matter of logic because the decisions come at different moments and rational people can change their minds. That’s pretty sophistical, and evades SL’s point so let’s ignore that and stipulate no mind changing. IT IS STILL WRONG because all you have shown is that I made an error. Only if I PERSIST in my choices after understanding that they are contradictory can you call me irrational. This is the key point of this whole discussion, because many try to use such little traps for the intuition to prove grand conclusions that do not follow.
Ron: I think you’re trying to include “current wealth” as a variable in a person’s utility function; the sure million is akin to having a higher initial wealth point, whereas a high probability of making much more than a million puts you at a lower starting wealth point. And hence you measure the two situations with two different utility functions.
This intuitively makes sense, and is likely a representation of how humans currently behave, but is probably still “irrational” because it violates some consistency principle or other. Which suggest this is not how humans should behave.
I’ll try to work out an example when I get some time…
Harold– You suggest, “Perhaps it would be easier to use the term “consistent” instead of rational.”
But someone who consistently (or inconsistently) makes choices that they believe to be sub-optimal is being rational even if they are consistent.
Alright. Pulling a lazy-man and pulling from wiki (which pulled from Milton Friedman) “Rational choice theory uses a specific and narrower definition of “rationality” simply to mean that an individual acts as if balancing costs against benefits to arrive at action that maximizes personal advantage.”
Where people stall is fixating on what can and cannot constitute a “maximum personal advantage”. But it doesn’t matter. Different people value things differently.
But unless you’re manic (ceteris paribus) you will not be risk-averse on Monday and risk-seeking on Tuesday. Yet when the scale of risk changes, that is exactly what happens.
The puzzle above assumes constant risk-attitude regardless of the scale.
Take it to the next extreme:
Q1> Choose (A) 100% chance of $500B or (B) 99% chance of $500B-$2.5T
Q2> Choose (A) 11% chance of $500B or (B) 10% chance of $2.5T
At this scale it seems I am decidedly risk-averse and will choose A/A. But substitute Q2 with the one from Steven’s quiz and I would answer A/B.
It is not actually a shift from risk-neutral to risk-averse because there is nothing I want that $500B will not get me. The marginal value of every dollar above that option is 0. At this scale the different dollar amounts are equally valued and the cost-benefit decision becomes “Choose 11% chance at 1 fliptillion dollars or 10% chance at 1 fliptillion dollars.” Being risk-neutral means choice A.
This is fun. Ask an ascetic Buddhist monk, whose maximum advantage would be to lose everything and maybe even starve to death.
sorry addendum:
But someone who consistently (or inconsistently) makes choices that they believe to be sub-optimal is being rational even if they are consistent.
Should read “…is being irrational…”
Benkyou Burito. I should know better than to try to substitute one word for another. However, whilst one can be consistent and irrational, I think it is impossible to be rational and inconsistent. So we can use consistency as a test for irrationality. If you are inconsistent, then you are irrational.
I am not clear on what you mean when you say “substitute Q2 with the one from Steve’s quiz and I would answer A/B.” Do you mean with $1M and $5M you answer A/B, but with $500B and $2.5T you answer A/A? But Thomas Bayes’s calculation set the utility for $1M and $5M the same, and still came up with the same answer. The reason for the “irrationality” is not the reducing utility of larger amounts of money. I think it is with the estimate of risk.
All deliberate human action is rational.
What this experiment exposes is that “expected value” is not a good model of decision making in probabilistic scenarios, particularly when the mean is wildly different from its constituent observations.
Noah Yetter:
What this experiment exposes is that “expected value” is not a good model of decision making
How so?
“Expected value” is a bad model for any decision that includes an
unaffordable loss. Buying insurance uniformly gives you a worse
expected value than not buying it because of the added cost of
overhead and company profit. Despite this, it’s still prudent to be
covered by insurance in cases where it spreads the cost of a
low-probability but catastrophic loss.
Similarly, betting everything you own, including your house, on a
single flip of a fair coin, with a pot worth 210% of your worth, is
probably a bad idea. Sure, the expected value is 105% of what
you’re risking, but do you really want to take the risk?
Harold–
I’m not well versed on Thomas Bayes, but “maximum advantage” is not necessarily the same as maximum utility. Consider the Zen Buddhist who receives the most value from an object when it breaks, rots, or never existed, who is made “richer” by being robbed.
I am anti-social and get no value from giving money away. I have modest needs, but even if I did not, $500B would provide for me until I die everything I could ever hope to want or need. If you offered me an additional $2T, what reason would I have to take it? At zero risk, I might take it just for the experience of having a trillion-plus dollars (an experience I am indifferent to). But at even the slightest amount of risk it becomes irrational for me to risk the $500B for a chance at $2.5T.
Now as for rational inconsistency, we can do that too. A person playing slot machines at a casino with someone else’s money, someone whose feelings they care little about, would likely be (or become) risk-seeking. Neither the $$-cost nor the $$-benefit of their gambling belongs to them. For this person the cost is the time and energy to pull that handle down, which is consistent from one machine to the next. And the benefit is the enjoyment of watching the wheels spin and free booze, which is also consistent from one machine to the next. That person could rationally wander the floor playing slots at random without any consideration of the odds of winning, which vary highly from one machine to the next. He might play the machine that (by reading the face of the machine) can be determined to have a 99% payout and then rationally choose to go play the one with a 45% payout, only to decide (rationally) that the money-wheel (universally the worst game in the house) would be the best place for him.
The danger is always in judging someone else’s rationality by your own values. Ascribing values is a matter for philosophers and theologians.
In this context rationality is not measuring value but the distance between a person’s values and their actions. If that distance changes dramatically from one action to the next, they are not behaving rationally. Values can change from one action to the next as long as the actions based on those values change accordingly. The gambler could learn, halfway through his bankroll, that he is playing with mafia money.
Ron:
“Expected value” is a bad model for any decision that includes an
unaffordable loss.
Of course, and of course everyone knows this. But of course this has nothing to do with the case at hand. Which is why I wondered why Noah Yetter mistook it to be relevant.
Sorry I forgot to throw this last part in…
Harold said, “The reason for the “irrationality” is not the reducing utility of larger amounts of money. I think it is with the estimate of risk.”
Two points.
First> Rationality is unconcerned with utility. Utility is only an issue if the person whose actions you are studying values utility.
Second> Risk is never estimated. It is determined logically from measurements that may be estimated. There is a subtle difference even if the end result is the same. Because risk is not estimated the risk of an action must change in relation to changes in the measurements and/or estimations of the benefit, cost, or statistical probability of success of the action. You cannot move from one roulette wheel to one with fewer green spaces and estimate that your risk is about the same.
Risk relevancy:
Since question one is the equivalent of “I will hand you $1M. Will
you risk the entire sum on the following bet…”, the risk is the
loss of $1M.
This is why I proposed question 1a, which will get a different
answer from most people. It boils down to the equivalent at a lower
dollar value: “I will hand you $10. Will you risk the entire sum
on the following bet…”.
A simple way of stating the principle that suggests mixed choices are irrational is: When deciding between two options A and B, your choice should be independent of whether you are told that, in fact, with some probability, your choice will be rendered irrelevant and some other thing C will happen. (In the example above, C is a $1M payout in case 1, and a $0 payout in case 2).
Another, even simpler way, is to say that when making choices in life whose relevance is contingent on some other factor, the choices should be independent of that contingency.
This principle certainly seems to be true, but I am curious how it could be shown theoretically.
Jonathan Campbell:
This principle certainly seems to be true, but I am curious how it could be shown theoretically.
You’re absolutely right that this is the key principle. Von Neumann and Morgenstern took it as an axiom. Alternatively, it follows from the assumption that people maximize expected utility (which is not at all the same thing as maximizing expected value!!).
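The connection to expected utility can be demonstrated directly: for any utility function u, the expected-utility gap between A and B is identical in the two questions, because the probability differences match outcome by outcome. A sketch (mine, not from the thread):

```python
import math

# EU(A) - EU(B) equals .11*u(1) - .10*u(5) - .01*u(0) in BOTH
# questions, so any expected-utility maximizer answers them the
# same way.  Check with a few different utility functions:
q1 = {"A": {1: 1.0}, "B": {1: 0.89, 5: 0.10, 0: 0.01}}
q2 = {"A": {1: 0.11, 0: 0.89}, "B": {5: 0.10, 0: 0.90}}

def eu_gap(question, u):
    def eu(lottery):
        return sum(p * u(x) for x, p in lottery.items())
    return eu(question["A"]) - eu(question["B"])

for u in (lambda x: x,                # risk-neutral
          lambda x: math.log(1 + x),  # risk-averse
          lambda x: x ** 2):          # risk-loving
    assert abs(eu_gap(q1, u) - eu_gap(q2, u)) < 1e-9
```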
Ron–
Again, that would also make Q1 the equivalent of “Here is a lotto ticket with a 1% chance of being worth nothing, a 10% chance of being worth $5m, and an 89% chance of being worth $1m. That is yours now. But I will buy it back from you for $1m.”
But either way it does not reflect the Q1 in Steven’s quiz. Because in the quiz you are not being given $1m then asked to part with it for a 99% chance of keeping or growing it. Or the other way around.
You are being given the choice, having been given nothing at all yet, between $1m and a 99% chance at $1m-$5m. The choice must be made before you receive anything, therefore the consequences of that choice cannot involve a loss, only a gain versus a possible gain. And it goes back to risk-attitude: in most cases the advantage a person would gain by being given $1m is so abstractly immense that the marginal value of the advantage gained by another $4m is not enough to risk it, even if the risk of loss is only 1%.
If I were carrying my son, I would not drop him (a fall of 4′) to catch someone else’s baby that had been dropped from a burning building. The value I place on my son is so high that even the teensy chance of him bonking his head and suffering permanent harm or death (teensy but not zero) yields a larger expected-harm to me than letting someone else’s kid dash against the sidewalk.
But refusing to drop my son is not absolute, nor is it an “unaffordable loss”. In this situation I would drop him that 4′ in an instant to catch my daughter. I value my children more or less equally (never let them know who the favorite is, only that there is one). So the expected harm to me of dropping my son 4′ is certainly less then that of allowing my daughter to fall 30′ to the cement. The value of the risk is the same in each case (the expected harm to my son), only the value of the reward has changed (saving the life of a stranger vs. the life of my daughter). Applying the same criteria leads to different choices.
So I would say that there is no such thing as an “unaffordable loss” in absolute terms, only losses with costs so abstractly high that the decision becomes moot. They are not exactly the same thing.
Benkyou Burito-
I agree with almost everything you wrote. However, let me
expand the part I don’t agree with.
Suppose you tell me that
5X + 10 = 40
I then tell you, “So
5X = 30,”
and you say, “No,
5X + 10 = 40.”
Let me attempt to demonstrate the equivalence in question 1.
Suppose I hand you $1M. I then offer you the lottery ticket,
if you hand back the million. You say no. At this moment,
you’re exactly where you’d be if you said Yes to answer A.
Suppose I hand you $1M. I then offer you the lottery ticket,
if you hand back the million. You hand me the million and take
the ticket. At this moment, you’re exactly where you’d be if
you said Yes to answer B.
There is no third alternative. You can’t avoid being
in one of the two states as long as you play by the same rules.
The absolute certainty in answer A allows the equivalent of
a commutative property. However, looking at it in this way
makes it very clear that you’re spending a million dollars on
that lottery ticket.
Ron– Your Q1A does not account for endowment-effect, the tendency to value something more after taking possession of it.
A great (and commonly cited) example is the Duke University basketball ticket experiment. Demand for basketball tickets at Duke is apparently much higher than supply. They could just raise the price to equilibrium, but that would put dedicated fans out in the cold while casual yet wealthy fans buy up the seats. They could have a lottery, but then lottery winners would scalp the tickets and the end result would be the same as raising the price. So someone came up with the idea to have a lottery that was more trouble than the profit from scalping the tickets would compensate for. A week before a game, anyone who wants to buy a ticket shows up and is given a punch-card. The hopeful fans set up a tent city on the lawn. At random intervals and frequencies a school official will step out and sound an alarm, at which the ticket buyers must line up within 5 minutes, walk through a turnstile and validate a single punch-card (their own or someone else’s, but only one) under observation. Anyone who misses a punch is cut from the line. At the end of the week the properly endorsed punch-cards are put into a drum and then drawn at random. If your card is drawn you may purchase up to 2 tickets at $100 each.
So, a couple of economists got a great idea. They pretended to be scalpers and tracked down students who had made it to the drawing. Students who lost the drawing haggled them down to an average of $170 per ticket. Students who had won the lottery required, on average, an offer of $2400 to part with theirs. Remember, the effort to obtain the ticket isn’t a factor, because the ticket buyers and the ticket sellers all had gone through the entire identical process. Yet the willingness to pay and the willingness to accept for the same ticket differed by a factor of 14X ( http://www.scribd.com/doc/20662377/The-Endowment-Effect )
Long story short: Q1 and Q1a are fundamentally different questions. In Q1, possession of neither payout has been taken at the point the choice has to be made between A and B. As such, an element of Knightian uncertainty attaches to both choices, which rules out any endowment effect, and a rational decision can be made based on the perceived value of either option.
In Q1a, that uncertainty is removed from only one of the options. The nature of option B (its value cannot be realized, and thus “possessed”, until after the decision) prevents the kind of endowment effect possible with option A, and/or maintains a degree of uncertainty, on top of its inherent risk, even after being delivered.
You cannot construct a Q1a in which option B is handed over at the outset and then risked by trading it for option A, because the reward of option B cannot be owned before the decision is made.
Aha.. Try it this way:
The value of $1m in your pocket is worth (perhaps unmeasurably) more than the expected value of a 100% chance of receiving $1m. But Steven’s Q1 is not intended to determine whether you would choose the expected value of a 99% chance at $1-5m over $1m in hand. It is intended to determine whether you would choose the expected value of a 99% chance at $1-5m over the expected value of a 100% chance at $1m.
According to several different forms of decision and game theory, these are decidedly not the same question. Empirical studies mostly support this. But who knows.
Benkyou Burito –
Okay, I’ll concede that the endowment effect tends to mess up my alternate example (good explanation, BTW). However, there’s still something else in operation in the original Q1. It’s cleverly designed to bring about an “illogical” result: the effect of the million-dollar sure thing still biases the answers. As I’ve noted previously, if both problems had all dollar figures reduced by a factor of 100,000, then very few people would fall into the “illogical” category. When only $10 is at stake, it’s easy to choose the alternative with the best expected value.
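For concreteness, the expected values Ron is appealing to can be checked directly, at both the original and the scaled-down stakes (a quick sketch; the encoding and variable names are mine):

```python
# Expected values for the two Allais questions, and the same
# questions with all dollar figures divided by 100,000.

def expected_value(lottery):
    """lottery: list of (probability, payout) pairs."""
    return sum(p * x for p, x in lottery)

q1 = {"A": [(1.00, 1_000_000)],
      "B": [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]}
q2 = {"A": [(0.11, 1_000_000), (0.89, 0)],
      "B": [(0.10, 5_000_000), (0.90, 0)]}

for name, q in [("Q1", q1), ("Q2", q2)]:
    for opt, lottery in q.items():
        ev = expected_value(lottery)
        scaled = expected_value([(p, x / 100_000) for p, x in lottery])
        print(f"{name}{opt}: EV = ${ev:,.0f}  (scaled down: ${scaled:,.2f})")
```

B has the higher expected value in both questions ($1.39m vs. $1m, and $500k vs. $110k), which is exactly why the 1A answer has to be explained by something other than expected value.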
If you accept the changed result with the dollar values reduced, then it seems incorrect to me to simply label the other “illogical” and walk away. Instead, economists should recognize the difference and come up with some modifying factor that reflects how the real world operates.
I’ll also note that the original pair of problems produces a perfectly logical result, even with a 1A, 2B answer, given the right axioms to follow:
1. Don’t gamble.
2. If you’re forced to gamble, go for the best expected value.
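Those two axioms can be turned into a mechanical decision rule; a minimal sketch (the lottery encoding and the tie-breaking are my own assumptions) confirms that it does produce the 1A, 2B pattern:

```python
def choose(options):
    """options: dict of name -> list of (probability, payout) pairs.
    Axiom 1: don't gamble -- if any option is a sure thing, take
    the certain option with the highest payout.
    Axiom 2: if forced to gamble, go for the best expected value."""
    certain = {name: lot[0][1] for name, lot in options.items()
               if len(lot) == 1 and lot[0][0] == 1.0}
    if certain:
        return max(certain, key=certain.get)
    ev = {name: sum(p * x for p, x in lot)
          for name, lot in options.items()}
    return max(ev, key=ev.get)

q1 = {"A": [(1.00, 1_000_000)],
      "B": [(0.89, 1_000_000), (0.10, 5_000_000), (0.01, 0)]}
q2 = {"A": [(0.11, 1_000_000), (0.89, 0)],
      "B": [(0.10, 5_000_000), (0.90, 0)]}

print(choose(q1), choose(q2))  # A B
```

Q1 triggers axiom 1 (option A is certain), while Q2 has no certain option and falls through to axiom 2.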
I did an informal poll at work among a group of fully qualified and part-qualified actuaries (so it’s worth bearing in mind that these people, even if they are not earning a lot right now, have quite high expected lifetime earnings and also have, or should have, a reasonable grasp of probability and statistics). The results are:
AA 3
AB 3
BB 2
BA 0
When probed for their reasoning:
BBs could easily explain their reasoning in terms of expected values.
AAs gave a sensible argument along the lines of: “5,000,000 is not much more to me right now than 1,000,000, so I should take the option which maximises my chance of winning at least 1,000,000, which is A in both cases.”
ABs found it harder to explain their reasoning. They tried to use the fact that 1,000,000 and 5,000,000 had approximately the same utility to them to explain their answer in scenario 1, but couldn’t then work out why they preferred a 10% chance of receiving that utility to an 11% chance in scenario 2. From the clues I did gather, the reason for their choices seems to be the regret they are scared of feeling. In scenario 1, if you choose option B there is a small chance that you will feel a massive amount of regret; that is, you will KNOW with hindsight that you made a choice which cost you 1,000,000. In scenario 2, even if you come away with nothing you can console yourself with the thought that you probably would have come away with nothing in any case. Given that, you might as well go for the big payout.
Ron– This is an area where mathematics, economics, psychology, sociology, and metaphysics all come together, and purists in each camp are Taliban-obstinate.
Most of what I have been doing lately is linguistics and semantics consulting for a Chinese company that is developing decision-support software. (Trivia: 95% of the Mandarin language can be computer-translated with native fluency. But then again, they only have a single word for prostitute.)
Anyway, most of the innovations they are bringing involve modelling unawareness as a function of uncertainty. You’ll recall Rumsfeld’s “unknown unknowns”. Imagine trying to calculate risk based on measurements of all the things that you don’t know. You can’t just make a list; you would only list the things that you know you don’t know. http://www.jstor.org/pss/2998545 is a serious but readable introduction to unawareness. You’ll need access to JSTOR; if you don’t have that, it’s still well worth the $10.
As for your axioms, they would only be rational if they can be expected to yield the maximum advantage, logical or not.
Erik– Rational choice theory gets fuzzy at the margins, and the work is all about modelling the extremes that do not empirically demonstrate linear risk aversion. The supposed rationality killer is always found at scales of risk too slight to matter or so severe that the stakes become abstract. Really, how much would the payout have to be for you to play Russian roulette? For some people there is a number.
http://www.jstor.org/pss/1911053 is another Econometrica article on JSTOR. This one is pretty thick reading, but it covers a lot about ambiguity aversion as it relates to risk and rational choice. Your experience talking to the ABs relates to regret aversion: in one model, people act so as to minimize possible regret. I don’t have any legitimate science handy to reference, but Wikipedia has something if you search for “minimax regret”.
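Minimax regret is easy to compute on the urn framing from the original post (89 red, 10 black, 1 white balls); a quick sketch, with my own encoding. Note that, at least under this common-urn framing, minimax regret actually favours B in both questions, so on its own it cannot produce the AB pattern:

```python
# Payout per ball colour (89 red, 10 black, 1 white), figures in $m,
# encoding the two questions from the original post.
q1 = {"A": {"red": 1, "black": 1, "white": 1},
      "B": {"red": 1, "black": 5, "white": 0}}
q2 = {"A": {"red": 0, "black": 1, "white": 1},
      "B": {"red": 0, "black": 5, "white": 0}}

def minimax_regret(options):
    """Pick the option whose worst-case regret is smallest."""
    states = ["red", "black", "white"]
    # Best achievable payout in each state, across all options.
    best = {s: max(opt[s] for opt in options.values()) for s in states}
    # Worst-case regret of each option: the most you could learn,
    # with hindsight, that you left on the table.
    worst = {name: max(best[s] - opt[s] for s in states)
             for name, opt in options.items()}
    return min(worst, key=worst.get)

print(minimax_regret(q1), minimax_regret(q2))  # B B
```

In Q1, option A’s worst regret is $4m (a black ball would have paid $5m under B) while B’s is only $1m (the white ball), so the rule picks B; the same arithmetic applies in Q2. The anticipated-regret story Erik describes for the ABs is different: it turns on whether you would ever *know* the counterfactual.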
Some of the approaches being developed to try to predict human behavior are really out there. I mean, if you read either of the articles I mention, you get about two pages in and think it’s just a farce, someone pulling your leg.
Benkyou Burito: when I referred to Thomas Bayes, I meant the one who posted above, not the original one. Perhaps he is communicating from beyond…