Most pieces about the “post-fact” or “post-truth” world express concern regarding the possibility that it’s now commonplace – or even somehow acceptable – to make stuff up instead of offering arguments and evidence for your claims.
My contribution to the discussion was to point out that truth has never mattered as much as we might prefer. But the fact that people don’t care to (or find it difficult to) escape their filter bubbles doesn’t need to entail giving up on facts and the truth entirely.
Regular readers will know that I’ve recently been wondering whether to continue hosting comments here on Synapses, as well as about their value in a more general sense.
I’m not shutting comments down, but will move to moderating them, meaning that it might take up to 24 hours for any comment to appear, and that some comments will not appear at all, if I deem them abusive or idiotic. The decision was precipitated by two coincidences, featuring two friends who started overlapping conversations on Facebook, both of which I engaged with.
The debate on Nathan Geffen’s wall about trolls on GroundUp, and how to deal with them, raised the point that without full-time moderation, comment sections can easily become toxic.
Also, I’ve been led to believe that there’s a potential for legal liability for things posted on one’s own site by commenters, while no such liability exists on Twitter or Facebook (for what other people say, I mean).
Then, Eusebius McKaiser asked for a view on Nick Cowen’s IOL piece arguing that we can’t have productive debate in online spaces, and much of what I say below is a response to that piece (in short, I think we can, but that it takes more work than many of us care to do. In my case, I get few enough comments that the necessary moderation is possible).
Before I get to responding to that IOL piece, just a note on how things will work here with regard to comment and debate. Individual posts will have a moderated comment section, but please also feel free to do one of three things instead, if you prefer:
If you’re on Facebook, there is a page for Synapses. Every entry appears there, and you can comment as much as you like, unmoderated. The same is true for Google+.
Lastly, there’s Twitter, which isn’t ideal for debate, but certainly gives you the opportunity to call me names (if that’s your thing), or to make more friendly noises.
On to the IOL piece, which you don’t need to have read in order to follow what comes next. To quote myself:
it seems to my mind at least plausible that we’re living through an era in which ideas themselves are not that welcome. Where, as Neal Gabler recently put it in a column John Maytham was kind enough to alert me to, the “public intellectual in the general media [has been replaced] by the pundit who substitutes outrageousness for thoughtfulness”.
Despite the demise of postmodernism in academic circles, it still lives and breathes in the popular viewpoint that everybody’s opinion is equally worthy of consideration, and that individuals are under no special obligation to set aside their opinions in favour of what the evidence points to.
The Internet, its potential anonymity, and the sheer volume of both opinions and outrage don’t encourage thoughtful reflection and engagement. I find that the overall quality of discourse and openness to correction is poor on the Internet, and as a result, I tend to only read comment sections to confirm that they are places where people seem unafraid to express their racism, sexism and (other forms of) stupidity.
There are pockets where people do engage earnestly and sincerely, and where there is a chance of shifting people’s perspectives. Eusebius’s Facebook wall is itself one small example of that. It’s true that people don’t often say “you’ve changed my mind”, but it’s something that can be intuited from how the tone and content of a conversation shifts.
Second, I’m not sure that the situation is significantly better in meatspace. There, just as on the Internet, people are stubborn, prone to confirmation bias and the backfire effect, etc. It’s partly the fact that there are more participants – with those participants not being carefully selected – in the online space that creates the impression that it’s more chaotic there. In other words, if we were to have an open house in meatspace to discuss something contentious, we might more often have the same impression of shouting past each other.
By contrast, if you do online what you do in meatspace – i.e. carefully select your interlocutors – you’d have the same “civilized” conversations (at least in a relative sense). The problem is that a) you don’t always get to select who talks to you online, and b) all the non-verbal cues, such as smiles and body language, aren’t available to us online.
Complicating all of this is my sense that conversations in both spaces are less civilized than they used to be, because everyone is now an expert in everything. The idea of democracy has been illegitimately expanded into epistemic territory, where the average person has been persuaded that their views are as legitimate as anyone else’s, and where criticism of their views is treated as an attack on them as a person, rather than as a simple contestation about the facts or the interpretation of them.
We’ve become too personally invested in our beliefs, to put it simply.
Wittgenstein said “Whereof one cannot speak, thereof one must be silent”, and that quote seems as good a place as any to kick off a post on appeals to authority, the death of expertise, and the boundaries of disciplines. As I argued in a 2012 column, agnosticism is often the most reasonable position on any issue that you’re not an expert in (with “agnosticism” here meaning the absence of conviction, not necessarily the absence of an opinion).
Otto von Bismarck observed that politics is “the art of the possible”, but the statement holds true in many more domains than that. It’s only trivially true to say that anything is constrained by what is possible and what is not – yet that sort of retort is usually as far as the conversation might go (on social media in particular).
It’s more likely that Germany’s first Chancellor was trying to say that there’s frequently a mismatch between our ideals and what can reasonably be achieved. Not, in other words, that things are literally impossible – more that we need to bear the trade-offs in mind when making judgements as to whether people are doing a good job or not.
Cognitive biases like the Dunning-Kruger effect describe how we overestimate our own expertise or competence, leading us to ascribe malice in situations where the explanation for someone’s screw-up is most probably simple incompetence, or simply that the job in question was actually pretty difficult, meaning that expecting perfection was always unreasonable. (As some of you will know, this paragraph describes a gentler version of Hanlon’s Razor – “Never attribute to malice that which is adequately explained by stupidity”.)
So, instead of paying attention to the arguments and their merits when it comes to something like blood deferrals for gay men, we claim prejudice. Or, when someone dies after taking the advice of a homeopath too seriously, some of us might be too quick to call the victim stupid or overly gullible, instead of focusing on those who knowingly (because some quacks are of course victims themselves) exploit others for financial or other gain.
The point is that some problems are difficult to solve, and certainly more difficult than they appear to be from a distance, or from the perspective of 20/20 hindsight. So, when you accuse your local or national government of racism, or being anti-poor, or some other sort of malice, it’s always worth pausing to think about the problem from their point of view, as best as you are able to. They might be doing the best they can, under the circumstances.
In case you aren’t aware of two recent resources for helping us to think these things through more carefully, I’d like to draw a recent comment in the science journal Nature to your attention, as well as a response to it that was carried in the science section of The Guardian.
In late November this year, Nature offered policy-makers 20 tips for interpreting scientific claims, and even those of you who aren’t policy-makers should spend some time reading and thinking about these. (Don’t sell yourselves short by assuming the label of policy-maker doesn’t apply to you: on one level of policy, you’d want to include, for example, parenting. And what you choose to feed your children, or the medicines you give them, would usually be informed – or so one would hope – by scientific claims of whatever veracity.)
The Nature piece talks about sample size, statistical significance, cherry-picking of evidence, and 17 other important issues, many of which you’d hope some scientists would themselves take on board – not only those scientists who might play fast and loose with some of the issues raised, but also simply in terms of how they communicate their findings to the public. If you’re asked to provide content for a newspaper, magazine or other media, the article highlights some common areas of confusion, and therefore helps you to know where you could perhaps be clearer.
In short, making policy is difficult, and doing good science can be difficult too, because among the things we can be short of are time, money, attention, the public’s patience, and so forth. In the majority of cases, both policy-makers and scientists might be doing the best they can, under those constraints. So before we tell them that they are wrong, we should try to ensure we at least know what they are trying to do, and whether they are going about it in the most reasonable way possible, given the circumstances.
They don’t get the luxury of ignoring what is possible and what is not when doing the science, or making the policy. When criticising them, we shouldn’t grant ourselves that luxury either.
Respect is due to people, rather than to ideas. While it may be politically incorrect to say so, there is no contradiction between saying that someone has a misguided, uninformed or laughable point of view, and at the same time recognising that person’s worth or dignity in general. But our sensitivity to being challenged, and to having the intrinsic merit of our ideas questioned, often leads us to conflate these two different sorts of respect.
Respecting a person is partly a matter of not causing them unnecessary trauma through ridicule or contempt. It also requires not prejudging their arguments or points of view, but rather judging those arguments on their merits. But if it is established that those arguments lack merit (when compared with competing arguments on the same topic), there is nothing wrong with pointing this out. It is perhaps even a duty to point it out, assuming that we care about having probably true, rather than probably false, beliefs about the world.
Julian Barnes’ book “Nothing To Be Frightened Of” opens with the sentence “I don’t believe in God, but I miss Him”. This echoes a question asked by Daniel Dennett in “Breaking The Spell” – that of whether we care more about being able to believe that our beliefs are true, or about those beliefs actually being true.
We might have rational doubts about all sorts of beliefs, yet still want them to be true. Or find value in living our lives under the assumption that they are true. It would be impossible – or at least exceedingly difficult – to live your life feeling that your job was meaningless, that you were not loved, or that you had no free will and no actual soul, despite the fact that one or more of those statements may be true. We seem to seek out (and perhaps even need) some transcendence or metaphysics in our lives.
But those desires and/or needs do not make their objects true or real. We need to bear in mind the possibility that certain beliefs serve a social or psychological function only, and that “belief in belief” may take us as far as we can go. In other words, that no value is added by insisting on the actual truth of some of our beliefs. In particular, we need to contemplate the possibility that treating some beliefs as literally true could be harmful, rather than neutral.
Much of what I’ve been interested in over the last decade or so has revolved around epistemology, and in particular virtue epistemology – in other words, questions around what it is that we should believe, and how we should form our beliefs. These are normative questions, and raise a whole bunch of issues relating to the extent to which we are in fact able to be rational epistemic agents; what such agents would look like; and whether we would want to be disposed in this way at all.