What would it take to prove you wrong?

As submitted to The Daily Maverick

Metaphysical claims involving things like the Law of Attraction, astrology or homeopathy all share at least one feature: It’s very easy to find evidence for them. There exists a broad set of claims for which this holds true, and they are often collected under the summary term of pseudoscience. Pseudoscientific claims make predictions or offer explanations just as scientific claims do. Where they differ is in failing to offer a robust set of underlying laws, or even hypotheses, which can be empirically shown to justify those predictions or explanations.

Pseudoscience is instead most often taken seriously, and thought to correspond to reality, for far more mundane reasons – most notably our gullibility and will to believe. Consider astrology, where Bertram Forer’s 1948 experiment tells us all we need to know about how easy it is to tap into these human frailties. Forer administered a personality test to his students, but rather than giving them individual assessments, he copied a few descriptive sentences from a newspaper astrology column and gave all the students the exact same character analysis.

When the students were asked to evaluate the accuracy of the descriptions on a scale of 0 (least accurate) to 5 (most accurate), the average result was 4.26. This result is remarkably dependable – in hundreds of repetitions since Forer ran the experiment, the average score has remained at around 4.2. When I ran the experiment in a summer school philosophy course at the University of Cape Town in 2006, the result was 4.6 for a class of around 40 students. I haven’t run it since, on the (false and self-preserving) principle that the truth you don’t know is less likely to harm you.

What the Forer effect shows is that we tend to accept generalised descriptions of this sort thanks to cognitive biases such as confirmation bias. We take notice of, and overvalue, confirming instances of apparently plausible hypotheses while discounting or ignoring evidence that runs contrary to what we’re invested in believing. The same principle applies to homeopathy: Seeing as we can’t clone ourselves and take the homeopathic remedy in one instance of infection, and an antibiotic (or nothing at all) in another, it is easy to believe that the homeopathic remedy has healed us. We have no way of knowing whether the alternative – the antibiotic, or no intervention at all – would have resulted in a quicker and more reliable recovery.

This tendency to look for verification of our beliefs is perhaps intuitively sensible, and it also has quite a heavyweight pedigree in the philosophy of science. In the 1920s and 1930s, the famous Vienna Circle developed the position known as “logical positivism”, according to which statements or questions have meaning only if we have some way of determining whether they are true. This position is captured in the “verifiability criterion of meaning”.

As the examples above show, it’s easy to find evidence that something is true. But finding confirming evidence doesn’t tell us whether we were testing the correct hypothesis. The evidence could instead be confirming some other hypothesis that we haven’t even considered – for example, the hypothesis that we are superstitious, and prone to believing all sorts of nonsense.

The Vienna Circle had the noble goal of offering a clear principle to distinguish science from pseudoscience. They wanted to make it clear that metaphysical and theological statements were not cognitively meaningful. But as Karl Popper pointed out in 1934, the criterion of verifiability was both too strong and too easy to satisfy. Too strong, because some statements in science are useful yet cannot (yet) be verified, like the Ancient Greek notion of atoms; and too easy, because while it’s relatively simple to find evidence confirming a hypothesis, it’s far more telling when a hypothesis is falsified, whereupon we can rule it out as being true.

This is why the modern scientific method is based on attempts to falsify, rather than attempts to verify. To put it crudely, we take our various hypotheses, develop tests which would demonstrate those hypotheses to be false, and then see which survive those tests. The ones that do are considered to be best justified and, in ordinary language, true – at least until replaced by a superior hypothesis. “Truth” is therefore arrived at by eliminating falsehoods, and our conclusions are always provisional, in the sense that they haven’t been falsified yet. So we give up certainty in exchange for increased confidence that our beliefs are the best justified ones currently available to us.

A related lesson derived from Popper’s criticism is that when claims are unfalsifiable – in other words, where no possible evidence could ever disprove them – then we have no reason to believe that they are true descriptions of the world. This does not mean that the claims are in fact false, but simply that we would never be able to know if they were. As Judge William Overton said in his ruling that “creation science” should not be taught in Arkansas public schools: “While anybody is free to approach a scientific inquiry in any fashion they choose, they cannot properly describe the methodology as scientific, if they start with the conclusion and refuse to change it regardless of the evidence developed during the course of the investigation”.

What this means for us in non-scientific contexts is that any claims we make should also take this sort of empirical risk. There needs to be something at stake – some sort of wager we place on our beliefs in fact being true. A claim that is held to be true, where no possible evidence could show it to be false, is a short distance away from mere delusion.

The notion of falsifiability is what separates the claim that HIV causes AIDS from the claim that I have a soul. If people developed AIDS in the absence of HIV, the causal claim would thereby be falsified. But what possible test could show that I have no soul? More importantly, perhaps, falsifiability is what shows us that claims involving the movement of planets affecting my personality are indistinguishable (in terms of likelihood of truth) from any claim that I in fact come from one of those planets. They can both be verified, but neither of them can be falsified – yet many believe the first to be plausible, and the second not.

We should therefore ask ourselves what it would take for us to give up a metaphysical belief. If the answer is “nothing”, then we should ask ourselves more probing questions – for instance, how we can differentiate our belief from the belief that one gender or race is superior to another.

And by extension, we need to ask ourselves the difficult question of whether we are still entitled to criticise the unfalsifiable beliefs held by anyone else. In other words, can you criticise the neo-Nazi, or the racist shopkeeper, when your belief in a supernatural deity has just as much evidence confirming it as his belief in the superiority of his race does?

By contrast to beliefs that are in principle falsifiable, unfalsifiable beliefs are not subject to this demand for consistency. They allow us to mix and match, and to believe whatever is most convenient. We can have it both ways: When the available evidence is not useful or convenient, we can claim that our belief stands outside of physical reality – the object of our belief is unknowable. And when the evidence is useful, we can say “There’s the proof!” But all the while, no evidence could possibly cause us to give up that belief, because it’s been set up as unfalsifiable.

The point is that faith (of any sort) immunises us against facts. As Jerry Coyne puts it, “without a way of knowing that you’re wrong, you can never know that you’re right”. If we are not prepared to accept that anything could show that some belief of ours was false, then asserting that particular belief (God exists, I have a soul, crystals can heal) actually states nothing at all. You may be right, but nobody else has any reason to think so, or to take that belief seriously. Because for you, quite simply, that belief is true by definition – and for others to engage with it is a waste of their time.

By Jacques Rousseau

Jacques Rousseau teaches critical thinking and ethics at the University of Cape Town, South Africa, and is the founder and director of the Free Society Institute, a non-profit organisation promoting secular humanism and scientific reasoning.