Why do some people sign up to have their brains frozen for possible future resurrection, while others don’t? You might think it’s because the first group has more faith in future technology, but Scott Alexander has survey data to suggest otherwise. Active members of the forum lesswrong.com, many of whom had pre-paid for brain freezing, thought there was about a 12% chance it would work. Among members of a control group with no interest in the subject, the estimate was about 15%.
In a long and characteristically thoughtful blog post, Alexander concludes that:
Making decisions is about more than just having certain beliefs. It’s also about how you act on them.
and
The control group said “15%? That’s less than 50%, which means cryonics probably won’t work, which means I shouldn’t sign up for it.” The frequent user group said “A 12% chance of eternal life for the cost of a freezer? Sounds like a good deal!”
Goofus (says Alexander) treats new ideas as false until somebody provides incontrovertible evidence that they’re true. Gallant does cost-benefit analysis and reasons under uncertainty.
So a few weeks ago, when we all thought that the chance of a global pandemic was, oh, about 10%, Goofus said “10%? That’s small. We don’t have to worry about it,” while Gallant would have done a cost-benefit analysis and found that putting some tough measures into place, like quarantine and social distancing, would be worthwhile if they had a 10 or 20 percent chance of averting catastrophe.
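To put Gallant’s arithmetic in symbols (the notation and numbers here are mine, purely for illustration): let $C$ be the cost of the tough measures, $p$ the chance they avert the catastrophe, and $L$ the loss if the catastrophe occurs. Then the measures are worth taking whenever

\[ C < p \cdot L. \]

With $p = 0.1$, even a very expensive intervention clears the bar when $L$ is catastrophic: if $L$ is \$5 trillion, any $C$ below \$500 billion passes.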
Alexander’s point transcends current events. He’s not blogging just about the pandemic; he’s using the pandemic to illustrate a broader point about the virtues of being a Gallant, not a Goofus. The whole thing is well worth reading (as is almost anything else from the same author). I have nothing to add to it.
But it did put me in mind of a related matter, namely the ways in which legal systems can make either Goofuses or Gallants of us all.
A great many years ago, I attended a fascinating talk by Shlomo Sternberg, a professor of mathematics at Harvard and a scholar of Jewish law. My memory of that talk is quite vivid, and I believe quite accurate. But I’m well aware of the fallibility of human memory in general, so I can’t absolutely vouch for everything that follows in this section. (In fact there are two reasons I can’t vouch — my memory might be inaccurate and/or Sternberg might, on some points, have misspoken.) I once wrote to Sternberg to ask if he had any sort of transcript or notes from that talk, or at least a strong memory of what he had said. He (quite understandably, since at least twenty years had passed) thought he had nothing, but promised to look in his files and get back to me if he found anything. I did not hear from him again.
So take what follows as an account of a legal system that might be Jewish law, or might be some purely hypothetical system that has some things in common with Jewish law. (Perhaps there’s a Talmudic scholar in the audience who can enlighten us on this.) Either way, we can learn something by thinking about it.
Sternberg’s fundamental thesis (again, as I remember it) was that Jewish law incentivizes people to minimize the probability of bad outcomes, whereas English and American law incentivizes people to minimize expected losses (or equivalently to maximize expected gains). The difference is that “expected losses” account not just for the probability of a bad outcome, but also for its severity.
I remember these examples:
- You have three pieces of meat, two kosher, one not. You lose track of which is which. Can you eat them? Answer, according to (my memory of Sternberg’s account of) the Talmud: Each individual piece of meat has a 2/3 chance of being kosher. So if you choose one of them and ask “Is this kosher?”, a “yes” answer gives you a 2/3 chance to be right and a “no” answer gives you only a 1/3 chance to be right. A 2/3 chance is better than a 1/3 chance, so you should say yes. Repeat three times and you’re allowed to eat all of the meat.
There is much that is troubling here, because that strategy actually gives you a 100% chance of eating a non-kosher piece of meat, so it matters whether you inquire about each piece separately or whether you inquire about all three as a group. I’m not sure what principle the Talmud invokes to settle that issue. But that’s not the point that concerns us here. The point here is that we’re instructed to focus strictly on probabilities, without regard to any measure of how bad it would be to be wrong in either direction.
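Here is a minimal sketch (my own illustration, not from Sternberg or the Talmud) that makes both facts concrete by enumerating the three possible arrangements:

```python
from itertools import permutations

# The non-kosher ("treif") piece can sit in any of the three positions.
arrangements = set(permutations(["kosher", "kosher", "treif"]))

# (1) Answering "kosher" about any single piece is right 2/3 of the time.
correct = sum(piece == "kosher" for arr in arrangements for piece in arr)
total = sum(len(arr) for arr in arrangements)
print(f"per-piece accuracy: {correct}/{total}")  # prints 6/9, i.e. 2/3

# (2) But eating everything you declared kosher (all three pieces)
# means eating the treif piece in every single arrangement.
print(all("treif" in arr for arr in arrangements))  # prints True
```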
- You’re traveling to town with a left pocket full of coins designated for charity and a right pocket full of coins designated for your personal expenses. (In certain circumstances, you’re required to designate these coins in advance, and cannot substitute a coin from one pocket for a coin from the other, even if they’re otherwise identical.) You fall off your horse, and the coins all spill out into one great heap.
If there were more coins in your left pocket to begin with, then each individual coin has a greater-than-fifty-percent chance to be a charity coin, so each individual coin must be given to charity. If there were more in your right pocket, you can spend all the coins on yourself.
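In symbols (notation mine): if the heap contains $n_c$ charity coins and $n_p$ personal coins, the rule turns entirely on whether

\[ P(\text{a given coin is a charity coin}) = \frac{n_c}{n_c + n_p} \]

exceeds one half, with no weight given to how bad it would be to misdirect a coin in either direction.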
- You take in an abandoned child. Should you raise him as a Jew? It depends on whether he was born as a Jew. Suppose you don’t have that information. Answer: If the majority of your neighbors are Jewish, you assume he’s Jewish. If not, not.
(A later commentary amends this prescription by directing your attention not to the majority of your neighbors but to a majority of those neighbors who are of such character that they would abandon a child.)
English and American law, by contrast, tends to direct you (or incentivize you) to think about expected values. What is the cost of raising a Jew as a non-Jew or a non-Jew as a Jew? What is the cost of eating non-kosher meat or of discarding kosher meat? Multiply these costs by the probabilities, and act accordingly.
I don’t recall Sternberg mentioning this in his talk, but his take on English and American law is right in line with a tradition of scholarship that starts with the work of Judge Richard Posner, formerly an academic and a pioneer in the application of economic reasoning to the analysis of law.
In 49 of the 50 United States, we have two bodies of law — the statutory law that is enacted by legislatures, and the common law that is created by judges when they issue rulings that set precedents which future judges are expected to abide by. (Do you know which state is the exception?) Posner has argued that while statutory law reflects a mishmosh of conflicting principles, the common law is remarkably consistent in applying the principle that people should be incentivized to care about expected values. That is, the common law wants you to be Gallant, not Goofus.
I’ll give just one example, though Posner has a million of them. You and a bunch of other merchants are on a ship at sea. You’ve all brought cargo of various weights and of various values. I’ve brought jewels. You’ve brought an antique piano. Et cetera. Now the ship starts to sink, and some cargo must be jettisoned to save the ship. I throw your piano overboard. Who should bear the cost of that?
According to the common law principle known as “general average”, we all share the losses in proportion to the value of our cargo. So if my jewels account for 20% of the value of all the ship’s cargo, I’m liable for 20% of the cost of your piano — regardless of whether it was tossed overboard by me, by you, or by someone else.
The interesting thing about this rule is that it incentivizes me to act precisely in the interests of the community as a whole. If I toss the piano, 20% of the benefits (that is, 20% of the value of the cargo saved) go to me. Under the rule of general average, 20% of the costs accrue to me. So I will toss the piano if and only if 20% of the benefits exceed 20% of the costs. But that happens when and only when 100% of the benefits exceed 100% of the costs, which is exactly when an efficiency-obsessed economist would want me to toss the piano.
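In symbols (notation mine): let $s$ be my share of the total cargo value, $B$ the value of the cargo saved by jettisoning, and $C$ the value of what gets jettisoned. Under general average I bear $sC$ of the cost and reap $sB$ of the benefit, so my private rule is

\[ \text{toss} \iff sB > sC \iff B > C, \]

which coincides with the social rule for every positive share $s$, whether 20% or 2%.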
So the law of general average, in a way that’s not obvious at first, incentivizes everyone to behave like a socially beneficent Gallant.
It’s a commonplace observation that some legal systems elicit better behavior than others, and the best legal systems are the ones that incentivize people to care about how their actions affect others.
The point here is related, but, I think, separate. It’s perfectly possible for Gallant and Goofus both to care equally deeply about their neighbors (or to act as if they do), and still make very different policy decisions — as, for example, in Scott Alexander’s pandemic example. Therefore it’s not enough for the law to incentivize acting-as-if-you-care; it also needs to incentivize thinking-like-Gallant — that is, doing cost-benefit analysis and reasoning under uncertainty, or in other words, accounting for expected values and not just probabilities. Some people are born Gallants. Others need a bit of a nudge.
“Do you know which state is the exception?”
My guess would be Louisiana, on the basis that as a former French possession it inherited a legal system based on the Napoleonic code.
Daniel Hill: Correct!
Steve, you write “Among active members of the forum lesswrong.com, many of whom had pre-paid for brain freezing, about 12% thought it would work. Among a control group with no interest in the subject, 15% thought it would work.”
What Scott Alexander actually said is subtly but importantly different: “Frequent users of the forum (many of whom had pre-paid for brain freezing) said they estimated there was a 12% chance the process would work and they’d get resurrected. A control group with no interest in cryonics estimated a 15% chance.”
Scott was referring to the probability of success, not the percentage of people who thought it would succeed. (Your subsequent excerpt gets it right.)
Slate Star Codex is one of my favorite sites!
Only downside (kind of like the Economist, where I let my printed subscription lapse because magazines were piling up in my house faster than I could read them…I have a digital subscription now!) is that he is capable of writing at great length, and although he is always worth reading I don’t always have the time!
Keith: You are very right. I will edit shortly. Thanks.
Update: Fixed now. Thanks again.
@Daniel Hill, #1:
Nice call. I was going to guess Hawaii, for a similar reason. I could see it having retained some of its pre-annexation legal code.
@Steve Landsburg:
The first lesswrong.com link is also wrong. It points to lesswrong.com as a path under the blog post instead of straight to lesswrong.com, thus leading to the dreaded 404.
I will go on record as having been a Goofus in this instance. Part of it is perhaps that I tend towards optimism, but part of it is that I have noticed a pattern with doomsayers: they are almost always wrong.
I remember as a kid hearing about how Earth was going to freeze over. We would need lasers to protect cities from encroaching glaciers. Pollution was getting so bad we would soon need to wear oxygen masks on the streets. The trees were all going to die from acid rain.
Later we were going to suffer anything from a severe depression to outright apocalypse from the year-2000 computer bug. Then 9/11 happened, and we have been frantically searching every bush, basement and bag for non-existent terrorist plots ever since. In the meantime, Earth is also warming and the oceans are rising and we need to do something right now, because next year (and this time we are really serious) will be too late.
That is just sticking to mainstream doom-saying.
So it’s hard for me to take the alarmists seriously. There is also another consideration, which is that if we listened to all the alarmists and followed their recommendations, such as preparing for the worst possible pandemic, ending all climate impacts, hoarding year-long supplies for everyone, engineering all buildings to withstand Richter-9 earthquakes, etc., we wouldn’t have time or resources to do much of anything else.
“but part of it is that I have noticed a pattern with doomsayers: they are almost always wrong.”
We have been warned for decades that a pandemic was likely and could be very serious. It turns out they were not wrong.
Another entirely predictable problem is antibiotic resistance. Yet little is done, because there is not much money in it.
We really do need to have some approach other than the market to address these problems, because the market is failing. At least, the current market is failing.
We did something about acid rain. We did something about the Y2K bug.
“if we listened to all the alarmists and followed their recommendations, such as preparing for the worst possible pandemic”
Not even the worst possible pandemic, just a pandemic. If we had proper policies in place, the consequences could have been reduced. Instead we carry on with our heads in the sand. The problem the doomsayers have is that if we actually act on their recommendations, the problem fails to manifest, so everyone says we over-reacted. Look, nothing happened; what was all the fuss about?
This does not mean that we should over-react to any possible problem. We do know which problems are likely to happen. Pandemics and antibiotic resistance are two of these. We have been found wanting on one; let’s not be found wanting on the other.
Henri, your comment suggests that you are engaging in the kind of all-or-nothing thinking exemplified in Steve’s examples. Indiscriminately dismissing all warnings of doom isn’t a good long-term strategy, even if many (or all) of them are quite unlikely. It is better to consider important relevant parameters (likelihood of doom scenario *and* cost if it happens, for starters) and act in such a way as to minimize your losses/maximize your gains over the long term. IOW, think more like Gallant than Goofus.
@Harold, #9:
> It turns out they were not wrong.
Some of them were.
> the market is failing
In the current crisis, there is widespread agreement that everything failed: governments, markets, NGOs, experts.
> We did something about acid rain. We did something about the Y2K bug.
True about acid rain, but the dire predictions from the 70s and 80s never materialized, despite us not doing everything the environmentalists wanted.
The panic efforts around Y2K were largely unnecessary or redundant.
> Not even the worst possible pandemic, just a pandemic
Exactly. This is the worst pandemic since the Spanish Flu 100 years ago. If we were to elevate the state of pandemic preparedness to the extent alarmists want, it would be overkill at least 99% of the time. I remain doubtful that it would survive a sober cost-benefit analysis.
@arch1, #10:
“Indiscriminately dismissing all warnings of doom isn’t a good long-term strategy”
I do understand, and I’m not proposing we do that. I’m just expressing sympathy with the voices that cautioned against caution early in this crisis. For instance, on Feb. 3, I probably would have agreed with the Washington Post’s headline, “Why we should be wary of an aggressive government response to coronavirus.” Even now, I don’t think the points in the article are entirely wrong.
An even more famous example of the Posnerian or expected value approach to law is the so-called “Hand formula” developed by Judge Learned Hand in order to determine whether a particular act or omission is negligent or not. The “formula” has three variables: B, P, and L, where L is the loss or severity of an accident, P is the probability of the accident occurring, and B is the “burden” or cost of avoiding the accident. According to Judge Hand (as explained by Posner), a person or firm is negligent only when B is less than P times L.
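As a worked illustration (the numbers are mine, not Hand’s): suppose a precaution costing $B = \$500$ would avert an accident that occurs with probability $P = 0.01$ and causes a loss of $L = \$100{,}000$. Then

\[ P \cdot L = 0.01 \times \$100{,}000 = \$1{,}000 > \$500 = B, \]

so failing to take the precaution counts as negligent.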
#13. Damn shame he wasn’t called Thumb.
#11. “The market is failing” I was thinking about antibiotic resistance in particular with that comment, but I acknowledge that was not what I said.
We can see that one coming but are not doing much about it. There is little in the pipeline. We could hope for a miracle, but really the world should be addressing this in some way.
The market as we currently have it is not working for this. One argument is that the market would work if it were left totally alone; however, that is not going to happen in the short term. We would have to scrap patents, for a start. Dealing with the situation as it is, if companies are not going to develop new antibiotics, maybe governments need to. We have seen the results of a new viral pandemic, and we can anticipate similar problems if we face a sudden pandemic of resistant bacterial infections. The effort is under-funded compared to the size of the potential problem and the probability of it happening. In Hand’s formula, I think B is way less than P times L. A UK charity is aiming to raise £5 million by 2021. This is really nowhere near enough.
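For a rough sense of scale (these numbers are pure guesses, for illustration only): if there were even a 1% annual chance of a resistant-bacteria crisis costing £100 billion, then

\[ P \cdot L = 0.01 \times \pounds 100\text{ billion} = \pounds 1\text{ billion per year}, \]

which dwarfs a £5 million fundraising target, though of course the conclusion is only as good as the guessed inputs.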
There are examples where English and American law make the same probability mistake as Jewish law.
In civil cases, factual decisions are based on the “preponderance of the evidence”, meaning more than 50% likely. But a judge can make a decision based on a 51% probability, and then subsequent court actions will treat that as 100% certain.
Consider your meat example. Suppose the meat became spoiled, and you were entitled to be compensated for the kosher meat. If you filed 3 separate lawsuits, you could be paid in full for all 3 pieces, if the judges ruled independently.
If the judges ruled sequentially, and relied on earlier decisions, then you would win the first lawsuit. The second judge would treat that first meat as 100% sure to be kosher, so there would be a 50% chance the second piece was, and the case could go either way. Regardless, you would win 2 out of the 3 cases.
While that turns out to be a fair outcome, similar examples might not be. What if each piece of meat had a 60% chance of being kosher? Or a 70% chance? Each lawsuit wins all or nothing, so the outcome will not properly reflect the probabilities.
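A minimal sketch (the numbers and function names are mine, purely illustrative) of how all-or-nothing awards diverge from probability-weighted ones:

```python
# Compare all-or-nothing awards under the preponderance standard with
# awards proportional to probability. p is the chance a given piece is
# kosher; value is the compensation owed per kosher piece.

def preponderance_award(p, value, pieces=3):
    # Each independent lawsuit pays in full iff p > 0.5, else nothing.
    return pieces * value if p > 0.5 else 0.0

def probability_weighted_award(p, value, pieces=3):
    # Award scaled to the probability that each piece is kosher.
    return pieces * p * value

for p in (0.4, 0.6, 0.7):
    print(p, preponderance_award(p, 100), probability_weighted_award(p, 100))
# p=0.4: 0 vs 120; p=0.6: 300 vs 180; p=0.7: 300 vs 210.
```

At 60% or 70%, three independent wins pay out as if all three pieces were certainly kosher.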
I wondered what the result would be if the probability were exactly 50%. I think the standard needs to be “more likely than not,” so a claim would fail if the probability were exactly 50%. In this case, is it the proposal that the meat IS kosher that would fail, so that you would lose case 2 but then win case 3?