[ This is post #10 in the series, “Finding reality in a post truth world.” ]
This second part on cognitive fallacies will cover another four of the most important cognitive fallacies described in The Skeptics’ Guide, by Dr. Steven Novella.
I described this in the post about conspiracy theories, so I won’t go into a lot of detail here. Just to review, the Dunning-Kruger effect occurs when people start delving into a subject and, having learned a bit of the vocabulary, proclaim themselves experts. In fact, because of confirmation bias and motivated reasoning, they are especially prone to error, precisely because they are blissfully unaware of their uninformed state.
The main reason to consider it again is to incorporate it into the discussion of information networks. Many of us have people in our information networks who are somewhat knowledgeable about a subject, but a long way from being an expert. It’s easy to make the mistake of assigning them much more credence than they warrant. This is especially true if the person is advocating a position not held by the consensus of experts in that field.
Also, we each have to take steps to make sure we don’t fall into the Dunning-Kruger trap. Remember, cognitive fallacies are like a virus; they can easily spread and overwhelm your Deliberative System 2. My personal trick is to find a peer reviewed study written by an expert in the field. Skip the conclusions section and go into the meat of the study. Can you follow all the math? Could you peer review the article yourself? If not, congratulations. You realize you’re still an amateur, and there’s no need to pass yourself off as an expert.
In the post on risk and uncertainty, I discussed the propensity of human brains to discount randomness and assign agency to everything possible. Cognitive fallacies based on misunderstanding the prevalence of coincidence are part of that genre.
Dr. Novella uses the common example of how we estimate the probability that two people share a birthday in a room of 23 people. Typically, people will guess somewhere between 1 in 20 and 1 in 30. The real probability is about 1 in 2. Fill that room with 75 people, and there’s a 99.9% chance two people will match.
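For readers who want to check those numbers, the match probability can be computed exactly in a few lines of Python. This is a quick sketch of the standard calculation, ignoring leap years and assuming all 365 birthdays are equally likely:

```python
def birthday_match_probability(n: int) -> float:
    """Probability that at least two of n people share a birthday
    (ignoring leap years, assuming 365 equally likely birthdays)."""
    p_no_match = 1.0
    for i in range(n):
        # Each new person must miss all the birthdays already taken.
        p_no_match *= (365 - i) / 365
    return 1 - p_no_match

print(round(birthday_match_probability(23), 3))   # about 0.507
print(round(birthday_match_probability(75), 4))   # about 0.9997
```

The trick is that the calculation counts every possible *pair* of people, and 23 people form 253 pairs, which is why our intuition (which imagines only matches with ourselves) underestimates so badly.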
Leaving the statistical proof of this aside, the salient point here is that events like this are often the springboard for logical errors aimed at rationalizing the coincidence. Some may turn to numerology; others to astrology; and others to assigning “luck” to a piece of clothing or some artifact involved with the coincidence. Once that happens, confirmation bias does the rest of the job.
On top of that, dramatic events reinforce our memory, while the routine things in our lives fade away quickly. If we’re walking the dog at 7am on Friday the 13th, and a car barely misses hitting us, we’re likely to avoid walking the dog again at that hour, and especially on Friday the 13th. We’ve forgotten that we walked the dog dozens of times before that on that day and hour, with no adverse effects.
Sometimes strange events really aren’t coincidences, but before we assign special meaning to them, we should examine our memories to see whether we have drifted into a fallacy.
The important thing here is not to learn how to calculate p-values yourself. Save that for your statistics course. But you should know what a p-value is, along with the most common ways it’s abused.
As Dr. Novella points out, the p-value is not a predictor. It’s simply an indication of whether a proposition is worth investigating. For example, let’s say you want to test the idea that mask wearing is effective in reducing the spread of COVID. You would design a study. The p-value would be based on the null hypothesis, i.e., that mask wearing and reduction in COVID outbreaks were not correlated. If the p-value came out to be 0.05, that would mean that if mask wearing and COVID outbreaks really were uncorrelated, there would be only a 5% chance of seeing a correlation at least as strong as the one observed. Note that this is not the same as a 95% chance that the correlation is real; that’s the most common way p-values get misread.
Note that the p-value says nothing about the design of the study, the warrant of the proposition (whether it even makes sense), or that the p-value will be replicated in a subsequent study. It simply tells us, “This proposition is worth looking into.”
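As a concrete illustration (my example, not one from the book), consider testing whether a coin is biased. The p-value answers exactly one question: if the coin were fair, how likely is a result at least this lopsided?

```python
from math import comb

def two_sided_p_value(heads: int, flips: int) -> float:
    """Probability, under the fair-coin null hypothesis, of a result
    at least as far from 50/50 as the one observed."""
    observed = abs(heads - flips / 2)
    return sum(comb(flips, k) for k in range(flips + 1)
               if abs(k - flips / 2) >= observed) / 2 ** flips

# 60 heads in 100 flips: suggestive, and worth looking into,
# but it says nothing about *why* the coin might be biased.
print(two_sided_p_value(60, 100))   # about 0.057
```

Note what the function takes as input: only the data and the null hypothesis. Nothing about the quality of the experiment, or whether the hypothesis made sense in the first place, ever enters the calculation.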
Since p-values say nothing about warrant, you have to rely on experts to see if there is a basis for thinking a correlation is meaningful. For example, you might find a p-value of 0.05 in a study attempting to prove that the presence of storks is correlated with birth rates. But there is no warrant here, no reason to believe this correlation has any practical meaning in the real world.
The second thing to be aware of with p-values is known as “p-hacking.” This occurs when the people doing a study “change the rules” in the midst of their study so that the p-value satisfies their beliefs. Scientifically valid studies lay out all the conditions at the beginning and leave them untouched. Wired had a good article on this subject and how it has been used by marketing campaigns to lure people into thinking something’s newsworthy when it really isn’t.
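A toy simulation shows why this works so well for the p-hacker. Under a true null hypothesis, p-values are uniformly distributed between 0 and 1, so a hypothetical “researcher” who quietly runs 20 subgroup analyses and reports only the best one will find “significance” most of the time:

```python
import random

random.seed(0)

trials = 100_000
hits = 0
for _ in range(trials):
    # Under a true null hypothesis, each test's p-value is effectively
    # a uniform random draw. Run 20 tests, keep only the smallest.
    best_p = min(random.random() for _ in range(20))
    if best_p < 0.05:
        hits += 1

# Roughly 64% of these "studies" find something, though nothing is there.
print(f"chance of a 'significant' result from pure noise: {hits / trials:.0%}")
```

The exact figure is 1 − 0.95²⁰ ≈ 64%, which is why pre-registering a single hypothesis, before the data come in, matters so much.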
You don’t need to know all about p-values yourself, but you should endeavor to have someone in your information network who does. That’s one of the reasons why if you’re an amateur in a certain area, you rely on the consensus of scientific opinion in that field. Outliers often “prove” their results through p-hacking.
Before condemning the placebo effect as nonsense, we should recognize that it does indeed have a biological basis.
The trouble is, it’s closely linked to expectation. The patient complaining about pain really has to believe the treatment will bring relief. Even though the relief might only be the result of the placebo effect, it appears to the patient that whatever the doctor (or chiropractor) did “worked.” If that same patient downed a sugar pill and knew it was just that, the pill would no longer work.
The placebo effect primarily works for pain and/or anxiety. It is biologically incapable of curing cancer, healing broken bones, preventing tooth decay, etc. And it only works as long as the patient believes it works.
Placebos have long been used in drug trials. A manufacturer must prove to the FDA that its medicine or remedy performs better than a placebo. The gold standard of proof is the double-blind study, where subjects are assigned to groups at random and neither subjects nor administrators know who received what until the study has been completed.
The misperception of coincidence, described above, probably enhances the placebo effect. Another contributing factor is regression to the mean – outliers in any area over time go back to the average level, regardless of intervention. That’s why, for example, most back pain heals itself without any treatment other than over the counter pain relievers.
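Regression to the mean is easy to demonstrate with a toy simulation (hypothetical numbers, chosen only to show the mechanism): give each simulated patient a stable baseline pain level plus random day-to-day variation, select only those having an unusually bad day, and then measure them again later with no intervention at all.

```python
import random

random.seed(1)

def pain_score(baseline: float) -> float:
    # One day's pain: a stable baseline plus random day-to-day variation.
    return baseline + random.gauss(0, 2)

baselines = [random.gauss(5, 1) for _ in range(10_000)]

# Select "patients" on an unusually bad day (8+ on a 10-point scale)...
flareups = [(b, s) for b in baselines if (s := pain_score(b)) >= 8]
day1_avg = sum(s for _, s in flareups) / len(flareups)

# ...then measure the same people again later, with no treatment at all.
day2_avg = sum(pain_score(b) for b, _ in flareups) / len(flareups)

print(f"bad-day average: {day1_avg:.1f}, untreated follow-up: {day2_avg:.1f}")
```

The follow-up scores drop substantially with no treatment whatsoever, because people seek help at their worst, and their worst is partly bad luck. Any remedy taken at that moment gets credit for the improvement that was coming anyway.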
Charlatans in the medical field are quick to take advantage of the placebo effect, magnifying its role in health and advocating for its ability to cure all sorts of conditions that don’t respond at all to bursts of dopamine and/or endorphins. And because millions of people haven’t learned to use their Deliberative System 2 to rein in the misdirections of Automatic System 1, they fall for it.
Are placebo effects dangerous? Is it dangerous to believe in them? The evidence would suggest no, provided they are limited to their very narrow scope. If a chiropractor’s “adjustment” helps your back pain, no problem. But I remember very well when, many years ago, I went to a chiropractor with severe neck pain that was radiating down my arm. He was all set to “crack the neck,” but at the last minute I told him to stop. I was admitted to the hospital that evening, and the neurologist informed me that I had two ruptured disks, and that had I gone through with the neck cracking, I very likely would have severely damaged my spine.
There are even veterinarians who advocate alternative therapies based on placebo effects. Since dogs and cats don’t have the same brains as humans, one wonders how this would work. There is some basis for believing that an animal can experience a placebo effect, but the evidence suggests it’s the human contact involved in the treatment that elicits it. Thus an acupuncture needle is no more effective than a human hand.
The next, and last, post in this series will be a list of actions you can do to answer the question posed at the beginning: how do we find reality in a post truth world?