[ This is post #7 in the series, “Finding reality in a post truth world.” ]
Daniel Kahneman and Amos Tversky worked for decades on this problem, and Kahneman was ultimately awarded the Nobel Prize for it. Tversky, an equal contributor, had unfortunately died before the prize was awarded. They analyzed everyday actions and tried to reconcile them with the prevailing notion in economics that people are, in general, rational actors.
If people are truly rational actors, the expected value of a proposition can be estimated with a simple formula: expected value = nominal value x probability. For example, suppose someone offers you $10 if a single coin flip comes up heads. But to take the bet, you have to pay them $4.
Should you do that? Since the probability of getting heads is 50%, the expected value is 1/2 of $10, or $5. You only have to pay $4, so from a rational point of view the offer carries an expected “profit” of $1. If you could take that offer a million times, a purely rational actor would pocket about a million dollars with little effort.
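The arithmetic can be checked with a quick simulation (a sketch in Python; the $10 payout, $4 cost, and fair coin come straight from the example above):

```python
import random

def play_once(payout=10, cost=4):
    """Pay `cost`, flip a fair coin, and collect `payout` on heads."""
    return (payout if random.random() < 0.5 else 0) - cost

random.seed(42)  # fixed seed so the run is repeatable
trials = 1_000_000
profit = sum(play_once() for _ in range(trials))
print(f"average profit per play: ${profit / trials:.2f}")
```

Over a million simulated plays, the average profit per play settles very near the $1 expected value.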
Now, suppose the first five flips come up tails. On the 6th flip, you are offered $100 if it comes up heads, but you’ll have to pay $60 to play. You can’t help thinking that since the first five flips came up tails, the run surely can’t last (assuming you’re using a fair coin). You even know about “regression to the mean” – the tendency of things to average out, no matter what people do. In this case, you assume a heads flip is due. So you take the bet. When it comes up tails again, you not only lose your $60, but you feel you’ve been cheated.
This is the Automatic System 1 (AS1) at work. Had you engaged the Deliberative System 2 (DS2) prior to taking the bet, you would have realized a couple of things. First, coin flips are independent. Second, although regression to the mean is indeed a real phenomenon, you can’t predict when the regression will happen. In other words, if you flip the coin a million times, you will get around 500,000 heads and roughly as many tails. But you can’t predict the order. You could have 10 tails, followed by 1 head, followed by 5 tails, followed by 15 heads in a row.
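A small simulation makes the independence point concrete (a sketch of my own, not anything from the book): if flips really were “due” to correct themselves, heads should come up more than half the time right after a run of five tails. It doesn’t.

```python
import random

random.seed(0)  # fixed seed for repeatability
flips = [random.random() < 0.5 for _ in range(500_000)]  # True = heads

# Collect the flip that immediately follows every run of five tails.
after_streak = [flips[i] for i in range(5, len(flips))
                if not any(flips[i - 5:i])]  # previous five flips all tails

print(f"runs of five tails found: {len(after_streak)}")
print(f"share of heads on the next flip: {sum(after_streak) / len(after_streak):.3f}")
```

The share of heads after a five-tail streak comes out very close to 0.500: the coin has no memory.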
We have a preconceived notion of randomness that is heavily biased by our pattern-seeking brain. When we see 10 heads in a row, followed by 10 tails in a row, our brains say, “Aha! There’s a pattern there. This is NOT a random sequence!” This bias is so prevalent that game designers found that players didn’t like true randomness. To them, 10 heads in a row meant the game wasn’t random. So the designers actually had to make things non-random by creating sequences that merely appeared to be random.
Forensic detectives who analyze large datasets take advantage of our misperception of randomness. For example, if you’re an accountant who’s cooking the books and making a lot of false entries, you will naturally distribute the first digits fairly evenly: just as many entries will start with a 9 as with a 1. A forensic analyst familiar with Benford’s law will spot the fraud, because in genuine datasets the leading digits are far from uniform: about 30% of entries start with 1, while fewer than 5% start with 9.
The amazing thing about Benford’s law is that it applies to almost every large dataset there is, from purely human datasets (addresses, stock prices, etc.) to naturally occurring ones, such as the lengths of rivers. Astronomer Simon Newcomb discovered this law in 1881 when he looked at printed logarithm tables and noticed that the beginning pages, covering numbers starting with 1, were much more dog-eared than the others.
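Benford’s law says the leading digit d appears with frequency log10(1 + 1/d), which is where the 30%-for-1 figure comes from. A short sketch comparing the prediction against one dataset known to follow the law (the powers of 2, chosen here only as a convenient, self-generating stand-in for the kinds of data mentioned above):

```python
import math
from collections import Counter

# Benford's predicted frequency for leading digit d: log10(1 + 1/d)
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

# Leading digits of 2^1 .. 2^3000, a classic Benford-following sequence
n = 3000
observed = Counter(int(str(2 ** k)[0]) for k in range(1, n + 1))

for d in range(1, 10):
    print(f"digit {d}: predicted {benford[d]:.3f}, observed {observed[d] / n:.3f}")
```

The predicted and observed columns agree to a couple of decimal places, with digit 1 leading about 30% of the time and digit 9 under 5%.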
What does this have to do with risk and uncertainty? Well, if we don’t instinctively know what randomness is, it follows that we also don’t really understand probability. And without a clear notion of probability, the entire edifice of the “rational actor” begins to crumble.
This is the basis of the difference between utility theory and prospect theory.
Daniel Kahneman starts his discussion on prospect theory (Chapter 26) with an example of two scenarios:
- Get $900 for sure or 90% chance to get $1,000
- Lose $900 for sure or 90% chance to lose $1,000
From the utility theory (rational actor) point of view, these are non-choices. After all, the value of the second option in #1 is nominal value x probability: $1,000 x 90% = $900. The second option in #2 works out the same way, to a loss of $900. So “rational man” shrugs and is indifferent.
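The utility-theory arithmetic behind that indifference is one line of code (a sketch; the dollar amounts and probabilities are Kahneman’s, the helper name is my own):

```python
def expected_value(amount, prob):
    """Utility-theory value of a gamble: nominal value x probability."""
    return amount * prob

# Scenario 1: $900 for sure vs. a 90% chance of $1,000
print(expected_value(900, 1.0), expected_value(1000, 0.9))    # both $900
# Scenario 2: lose $900 for sure vs. a 90% chance of losing $1,000
print(expected_value(-900, 1.0), expected_value(-1000, 0.9))  # both -$900
```

Each pair is mathematically identical, which is exactly why the lopsided choices people actually make are so interesting.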
In practice, Kahneman points out, the vast majority of people are not indifferent. They choose the first option, $900 for sure, in #1, and the second option in #2.
My aim here is not to go into a full blown explanation of prospect theory. The main point, for the purposes of this discussion, is that people’s psychology, i.e. their Automatic System 1, is what accounts for the difference in responses. When faced with a gain, people will easily “lock in” the value. But when faced with a loss, they will hold onto what they have as long as possible. As Kahneman points out, “losses loom larger than gains. This asymmetry between the power of positive and negative expectations or experiences has an evolutionary history. Organisms that treat threats as more urgent than opportunities have a better chance to survive.”
The misperception of randomness and loss aversion are both aspects of AS1. They both have their roots in evolutionary fitness. The illusion that we have control over the environment, and that agency plays the dominant role in life, is an absolute necessity for survival. Yes, if we look at the entirety of our life from a universal point of view, chance is the main actor. It is pure luck that one person is born to privileged, white, wealthy parents and another as the child of refugees from Honduras, separated from her parents. However, if we “gave up” and lived life as if it were all luck, we would have no offspring. The idea that we not only have agency, but a lot more of it than we really do, is an important part of evolutionary fitness.
As we exercise that agency, we gain some sort of evolutionary “wealth.” On the savannah, this might have meant a good source of water and food; today it might mean a nice neighborhood. But loss aversion is still a powerful motivator. This means the creation of an environment where we minimize chance, or at least we think we do.
These two concepts go hand in hand in creating powerful political movements. Demagogues know how to use these AS1 anomalies to get people to reject low cost housing, to fear people from unfamiliar ethnicities or religions, and to pass laws declaring coffee as a cancer risk. And if anyone thinks this is a problem just for “right wingers,” they are sorely mistaken. Just go to a suburb dominated by liberal whites and see how many of them are ready and willing to accommodate low cost housing solutions.
One point I can’t emphasize enough: AS1 is a necessary part of our being, even today. The idea that we are purely rational actors who employ only DS2 is just a variant of the cognitive fallacy that the world is a product of our agency. AS1 is the seat of our emotions, indeed of love, and without it, even today, we wouldn’t survive.
The key for us as a species, however, is mastering the interplay between AS1 and DS2 so that we put AS1 cognition in its rightful place. The AS1 terror inflicted by abnormal wildfires might motivate us to get concerned about global warming. But the solution to global warming needs to come from DS2. That solution, though, will never become a reality if it doesn’t take into account people’s reactions to new policies based on AS1. In other words, effective public policy is rooted in the constant interplay of AS1 and DS2.
AS1 is powerful. It can easily overwhelm DS2 if we let it. It can make us think not wearing a mask in the middle of a pandemic is the right thing to do. In other words, AS1 is rooted in evolutionary fitness, but allowing it to dominate can often produce the opposite.
Given that, how do we create an environment where DS2 has a chance to grow and prosper? It’s that question I’ll turn to next, in a discussion of The Misinformation Age: How False Beliefs Spread, by Cailin O’Connor and James Owen Weatherall.