
Affleck, Maher, Harris and Islamophobia

In 2011, I wrote a column defending Sam Harris against critics of his perceived “Islamophobia” (no scare-quotes from here on, but please assume that I consider the term problematic, for reasons including those I outline below).

I no longer agree with all that I had to say then. At the time, I thought that Islam received more criticism from Harris than other religions did because he regarded it as the most dangerous in the range of religious beliefs. In other words, I was convinced that he had a pragmatic, rather than prejudiced, reason for focusing on it. As I said at the time:

Harris, and atheists in general, do have a problem with Islam, just as they have a problem with Christianity. If Zoroastrianism was still popular, we’d have a problem with that too. But this generalised antipathy stems from the fact that religion encourages people to believe things on the basis of poor or nonexistent evidence. If we think it a good thing that people tend to believe what is true and disbelieve what is false, believing things in this way would be a harmful trait that merits discouragement.

This discussion never really goes away, but it’s foregrounded at present thanks to the barbarism of ISIL, and – on a more prosaic level – a recent CNN interview with Reza Aslan, and then the Bill Maher segment featuring Ben Affleck and Sam Harris.

I’m not going to focus on those interviews in their specifics, but I encourage you to watch them if you care about the context. There are also numerous commentaries and critiques you could read – this one by Avicenna Last (on the Maher/Affleck/Harris segment) probably comes closest to capturing my response to Harris, and also includes a useful transcript of the show.

The purpose of this post is rather to make two points that are of general concern in this debate. First, on Islamophobia: Islam is of course not a “race”. However, there are other ways of being bigoted than simply being racist. And when you respond to a charge of prejudice by (simply) asserting “I have nothing against Muslims, it’s their religion I hate”, you might forget that this can serve as an evasive gambit.

The religion is held by people – and held with great commitment and sincerity – so criticism of it might be difficult to separate from criticisms of them. Scott Atran is worth reading on the sociology and psychology of belief, and how wilfully obtuse the language of “I respect people, but not their ideas” can sound to people who hold the ideas you happen to disrespect.

Second, I do think that Harris (and others) fail to consistently make the point that it’s primarily the extremists they think problematic. Their language (and sometimes tone, which I think important) can create the impression that their criticisms apply generically to Islam – especially, I’d suspect, among people of that faith.

The point that Affleck was trying to convey is that there is a tendency for critics of Islam to read or sound like fundamentalists themselves, in part because they assume that an audience is as capable of separating the context from the logic of argument as they are. Our discussions take place in a political context, and persuasion depends in part on recognising that.

It is relevant, as Affleck points out, that more than a billion Muslims are only similar to ISIL in the sense that they all pray five times a day. They’re not similar in the sense that they will kill for this right, and I’m also not persuaded by Harris’s claim in The End of Faith that moderates provide some sort of “cover” or “legitimacy” for extremists.

They all believe in the same god, sure, but from within a radically different value system – one which allows for beheading infidels and opponents, and the other not. The fact that these two sorts of Muslim are nominally on the same spectrum of belief doesn’t mean they should be conflated with each other.

Harris and other critics of Islam forget – or speak as if they have forgotten – that believers can have an interpretation of a holy text, rather than a set of dogmas related to it. Instead, critics take the most reactionary views and treat them as representative of the whole, or more broadly as the most authentic form of Islamic faith (with thanks to Kenan Malik for this insight).

What this move allows for is the invalidation of the beliefs and ways of living that are more typical or representative. If a Muslim were to say “well, I’m not offended by Danish cartoons”, you can retort with “but you’re not a typical (or even a ‘real’) Muslim, because you’re not being a literalist when it comes to interpreting your holy texts”.

But if the typical Muslim isn’t a literalist, why use that as the standard by which to criticise others? Isn’t it rather unusual to judge people by the standards of the most pure, or best, exponents of any skill, virtue or way of living? (“Son, I grant that you’re able to kick a ball, but you can’t be a real footballer until you’re as good as Cristiano Ronaldo.”)

How about if the anti-fundamentalists – like Harris – are themselves giving some cover or legitimacy to the extremists, by making them seem more representative or relevant than they are?

Or, how about we make an effort to keep those moderates on our side, by not speaking in ways that make it appear we see all Muslims as differing only in degree, not in kind – because when you say they are of the same kind, you’re telling your neighbour that, once you dispense with the tact, she’s really just like the beheaders.

Anti-fundamentalism can play into stereotypes, too – and maybe, in doing so, it can give some power to the extremists. Because if you cast them as martyrs, moderates will be surrounded by examples of their religious identities being questioned and attacked.

Would you think that makes them more, or less, likely to join the secular battle against fundamentalism?


Brief thoughts on the Global Atheist Convention (#atheistcon)

Earlier today I completed the seemingly endless sequence of flights that brought me back from the Global Atheist Convention, held over this past weekend in Melbourne. A distinct feeling of being sleep-deprived and/or jet-lagged, as well as the fact that I still need to write up and reflect on the notes I took during various presentations, means that all I’d like to offer now are some very general impressions. First, Melbourne was quite charming – well worth a visit if you get the chance.

But while a large amount of time was spent sitting in the sun eating and drinking with fellow heathens, the conference was what we were all there for. On the whole, it was a great event – certainly the best such gathering that I’ve been to, partly because of the great line-up of speakers, and also because of the organisation. There were no parallel sessions, so you could attend everything, and timekeeping was meticulous (well, except that some people in post-talk Q&A should have been given far less time, or no time at all in some cases – like the guy who asked why evolution hasn’t resulted in a brain that’s larger than the universe).

Among the 4000 in attendance, you’ll no doubt find dispute as to what the highlights and disappointments were. One weaker element, I thought, was the proportion of time set aside for comedy, a fair amount of which was geared towards ridiculing religion. Now I don’t mind that per se, but at a conference billed as a “celebration of reason”, it didn’t always seem appropriate to pluck the low-hanging fruit of religious absurdity. In fact, one of the highlights was the contribution made by the one theologian (Marion Maddox) who participated in the official events – she came across as far more strongly supportive of secularism in education than the atheists on that panel. Another problem with the comedy was that some of it – Jim Jefferies in particular – didn’t quite manage the balancing act between provocation and offence in his jokes involving gender.

Peter Singer was also disappointing. He gave what amounted to an overview of Steven Pinker’s new book on the decline of violence in modern civilisations. It seemed a lucid and comprehensive overview (I haven’t read the book yet), but it filled a slot in the programme that could perhaps have been better utilised by Singer presenting some of his own ideas. But the presentations that were good were very good, starting with a Dennett talk on free will on Thursday night (at the University of Melbourne, and not part of the official conference programme). Dennett did a great job of summarising the key elements of his compatibilist position, and it was something of a pity that there wasn’t more discussion of this, especially in light of the new Harris book, and Harris’s explicit disagreement with Dennett on this topic.

In the main programme, I thought Dennett, Krauss and Harris gave the standout presentations, along with the “3 horsemen” panel, now with Ayaan Hirsi Ali added as a horsewoman (she was always meant to be part of the conversation that resulted in the 4 horsemen DVD, but had to withdraw at the last minute). Hopefully this will all appear on YouTube at some point, and I’ll certainly say more about these talks later on, once I’ve had a chance to read and think about my notes. Harris was probably the most provocative, and you can get a sense of why I say that by reading Martin Pribble’s post about his talk. It was a thoughtful reflection on the inevitability of death and what that means for how we should live, but the thing that made a bunch of us feel rather weirded out was the session of mindfulness meditation that he decided to get these 4000 atheists (and therefore probably skeptics too) to participate in.

And yes, these sorts of things easily give rise to accusations that we were indulging in some sort of “religious” gathering ourselves. This is also something I hope to write about later, including a confession that I was rather disappointed by the way some of our number responded to the Christian (on Saturday) and Muslim (on Sunday) protesters who gathered. Both groups of protesters said things they shouldn’t have – especially the Muslims, with their “burn in hell” chants directed at Ali – but there were times when I thought the atheists crossed over from justified retort into juvenile insult.

Lastly, I’d say there were many who felt deeply moved by the Hitchens tribute that was screened, and then also by the memories of Hitchens recounted by Dawkins and Krauss (especially the latter, as Krauss was a close friend of Hitch). The video of the tribute is below, and is well worth watching. In summary, I’m very glad to have been there, and to have met many great people – see you all again in two years.


“New atheists”, stridency and fundamentalism

As submitted to the Daily Maverick.

During the Easter period we had the usual opportunity to read and hear plenty of content on religion and atheism, including ongoing debate around “New atheists” and their alleged stridency or militancy. But regardless of how particular individuals in this debate might choose to engage, we shouldn’t forget that it’s not automatically strident or militant to assert a point of view, no matter how much any participant might disagree with the view being expressed. More importantly, we shouldn’t forget that tone has absolutely nothing to do with the truth or falsity of what is being said.

Yes, atheists can be dogmatic. Anyone can be dogmatic, but while Catholics (for example) have little choice but to consider the Pope at least broadly representative of their worldview, atheists have no obligation to fall into line behind a Dawkins or anybody else. One key advantage of an evidence-based worldview is that you can be persuaded by good arguments, and not persuaded by weak ones, regardless of who makes those arguments.

This isn’t to say that some atheists aren’t fundamentalist, nor that some aren’t uncritical disciples of some bestselling celebrity atheist. Both sides of these culture wars make the mistake of over-generalising, and both make the mistake of being unwilling to pick and choose between points of view based on the quality of the arguments for them.

As I’ve argued previously, there are better and worse ways to encourage reflection on these issues – one way that certainly seems unhelpful to me is to caricature a point of view with labels such as “Islamophobic”, or to lump an incredibly disparate group of people together into a collective of “New atheists”. Some atheists are frequent offenders in this regard too, asserting that “Muslims” or “Christians” believe one thing or another.

We should all stop doing this, but it might sometimes be slightly more difficult than atheists like to think it is. If you start from the position that a naturalistic worldview (in other words, one that can’t accommodate gods or souls, at least, and often – certainly for me – not even things like free will) is our best guide to the truth, it’s easy to fool yourself into thinking you have a general epistemological advantage over others.

Atheists can be fundamentalists, not only in their atheism, but also on other emotive topics like climate change or fracking. They might also be fundamentalist in their blanket rejection of any possible good coming out of religion, which can lead them to be hostile and demeaning towards people who don’t share their views.

But fundamentalist atheists typically only cause offence and irritation, while fundamentalist religious folk have been known to cause significantly worse outcomes – although these are becoming increasingly rare, at least outside of theocracies. (Lest someone feel inclined to yell out “Hitler” here, let the man speak for himself: “My feeling as a Christian points me to my Lord and Saviour as a fighter.”)

The Kims of North Korea are themselves gods, so their misdeeds clearly can’t count as evidence of evil atheism. Stalin was a fanatical Marxist, possibly a psychopath, and while he was certainly strongly opposed to religion, his atheism (if that is what it was) hardly seems the most plausible explanation for his atrocities. One could problematise any such example, and just as atheists shouldn’t cite the Reverend Fred Phelps as representative of Christians, Christians shouldn’t think of these murderous dictators as representative of atheism. Fanaticism, not mere belief, kills – and the only question of importance is whether one type of belief (broadly metaphysical) is more likely to lead to fanaticism than the other (broadly naturalistic).

Of those readers who are Christian, few – hopefully none – will read the Bible as a literally true handbook on science, history or morality. Instead, it’s a sounding board for debate against the backdrop of a commitment to a certain sort of life, exemplified in the figure of the Biblical Jesus. That this is a better route to peace, economic equality and so forth than a fundamentalist reading of any religious text goes without saying, and critics of religion who don’t recognise this are certainly not playing fair.

But that this route is better doesn’t mean it’s the best route, and this is the point that is often emphasised by more sympathetic critics of religion. If we were to imagine starting afresh, disregarding the centuries of privilege that religious viewpoints have enjoyed, we’d arrive at a different understanding of religion.

When faced with centuries-old texts that include a bunch of weird injunctions, bad science and so forth, but which also contain passages that are inspirational, we’d be far less inclined to take them seriously today if they were not so embedded in our cultures. They might well continue to serve a powerful role in our lives, but they wouldn’t lead to wars, or to children dying while having demons cast out of them.

There are of course also more recent books that can serve the purpose of inspiration or guidance without including false or outdated claims, or passages capable of interpretations that allow for misery. And while it’s true that many, perhaps even most, religious believers don’t reach for those interpretations, others do find them plausible – and it’s this ongoing possibility that is at issue for many atheists, particularly of the non-fundamentalist sort.

The believers of the type highlighted by the recent Dawkins survey are of little concern to me, because they aren’t the sort to bomb abortion clinics or fly planes into buildings. But those who are inclined to do such things could count the moderate believers as being among their number (even while recognising their relative lack of commitment), and that larger number is the one generally cited in censuses or when a politician says that we are a “Christian” country.

As I often remark to my religious friends, if they were more active in denouncing Errol Naidoo, Fred Phelps, or Boko Haram’s Abubakar Shekau (not equivalently evil people, of course), many atheists would be left with little to do – at least in the supposed “name” of atheism. The majority of religious believers share many of the goals that non-believers do, and I do think it an obstacle to these shared goals that stereotype and caricature are so prevalent in the language of both the faithful and the faithless.

Leaving aside these regular misrepresentations of religious believers, it nevertheless remains true that atheists have things to legitimately be angry about – and also that it’s sometimes difficult to express these concerns without appearing to be dogmatic and hostile. While concerns around winning a public-relations battle shouldn’t lead us to forget those things that motivate the anger, persuasion remains impossible unless people are willing to communicate.

I don’t believe that encouraging communication needs to (or should) entail things like Alain de Botton’s “Atheism 2.0”, but it at least needs to involve dealing with real people and their sincere beliefs instead of preconceived versions of these, designed for ridicule. But those sincere beliefs can be criticised, and doing so isn’t necessarily shrill, strident or militant. Labelling them as such can be a way of simply ignoring them, just as labelling a religious person as a superstitious fool can be a way of ruling them out of (a conception of) rational discourse.

We should all care about eliminating unfounded or dangerous beliefs, whether ours or our opponents’. At root, this is a key premise of naturalistic or atheist positions, and it’s indeed a pity that many who hold those positions sometimes appear as dogmatic as those they criticise. But how ideas are expressed only makes a difference to how they are received – not to their truth. All of us could sometimes do with a reminder of this, whether we celebrated Easter or just a few days off work.


A science of morality #2

This entry is part 5 of 5 in the series Moral debate

Originally published in The Daily Maverick

Various metaphysical questions have enjoyed the attention of philosophers, whether amateur or professional, ever since we became able to articulate complex thought. From questions regarding the point of our existence to wondering about the nature and existence of a soul, we have spent much time pondering questions that frequently seem insoluble, and for which it remains unclear what – if anything – should count as evidence for or against any particular conclusion.

It’s certainly possible, even probable, that much of this has been wasted time, at least in the sense that it’s unlikely to result in answers that can assist us in dealing with practical problems that are (at least in principle) soluble. And we do have practical problems to address. The concept of moral luck highlights the fact that some of us are simply born on the back foot, and that no matter how hard we work, or what our natural talents might be, we will always be less well-off than someone fortunate enough to be born into more favourable circumstances.

This is a moral issue. While libertarians are comfortable with the idea of desert, whereby your choice to be slothful could rightly correlate with a lack of wealth and opportunity, it’s less easy to say that you get what you deserve if accidents of geography have resulted in your being born in a township with no access to quality schooling. And this is also a moral issue generated by entirely practical considerations, including the proper allocation of state resources, and government policy with regard to class and race.

But when we think about morality, we often fall into a trap of subjectivity. Part of what makes H. sapiens as interesting as it can be is our ability to engage in self-reflection, and to indulge ourselves through complex narratives that reinforce our specialness. The very activity of thinking about the metaphysical questions gestured at above is a privileged one, in that it’s a luxury that those who are worried about where the next meal might come from indulge in less frequently than the average reader of The Daily Maverick. It is, however, also an activity that is distinctly and definitively human, as we would most likely continue engaging in it even if there were no answers to be had.

As a starting point to resolving non-subjective moral dilemmas, we could usefully remind ourselves that there are a number of clear correlates between human flourishing on the one hand, and economic and social policy on the other. We should also remind ourselves that subjective welfare – my perceived happiness, and what I believe needs attention in terms of my welfare – is absolutely unreliable as a guide to what we should do, whether in a moral sense or any other. Perceptions of personal welfare are massively state-dependent, in that what I report today might be entirely different to what I report tomorrow, simply because of the cognitive biases that we are all victims of.

This means that morality should be informed by objective measures of welfare – if not completely, then at least substantially. On the macro-level of societal good, this means that where we can know that the provision of sanitation, water and electricity to a certain level results in a clear aggregate increase in health, it becomes a moral imperative to provide those goods. Where we can know that gender or racial equality – whether in terms of voting rights, access to education or any other measure – results in social good in some measurable form, the provision of these rights also becomes a moral imperative.

Morality is therefore at least in part an issue of sound policy. Not only because we would see increases in economic and intellectual productivity if more South Africans had access to the relevant markets, but also because it seems plausible that many of our moral problems might be minimised through redress of these macro-level problems. It is no accident that violent crime, rape or spousal abuse simply doesn’t happen as often in places like Sweden. People are simply not incentivised to take what is not theirs in jurisdictions such as these, where basic needs are met. They have less reason to, and they also have more reason to work towards the common good.

This is because the relationship between self-interest and maximising the common good becomes clear in situations where you regularly see evidence of your welfare being secured by collective action, rather than by occasionally winning what you perceive as a zero-sum game played against a hostile other, whether that other is the state or society.

To some extent, objective considerations such as these can also inform morality on a personal and subjective level. We can extrapolate various well-justified moral norms or rules applicable in personal environments from our high-level conclusions regarding what is good for a society. If corruption, deceit and violence are negatively correlated with flourishing on a societal level, it’s certainly likely that the same relationship exists on a personal level, whether the tenderpreneur experiences it in this way or not. While free-riders can never be eliminated, they are no obstacle to our reaching agreement on calling certain actions “good” or “bad”, because we know those actions to be either conducive or not to objectively desirable states of the world.

Of course, some might object to any claim that there are objectively desirable states of the world. I struggle to make sense of this objection, in that it seems obvious that the vast majority of us prefer certain common goods, such as health, financial security and our preferred level of social engagement. If someone were to make the claim that the most desirable state of the world instead involves privation and violence, I see no reason not to simply exclude them from the conversation, in the same way that we can justifiably ignore the opinions of young-earth creationists when we talk about cosmology.

But if we are to take such objections seriously, it seems clear that they lead to an impasse of one sort or another. We could say something like “okay, I can’t prove that I’m right and you’re wrong about what is good, but I can know that I don’t want to be part of a society in which your view is well-represented”. In other words, even though our social contract might be entirely pragmatic, it will tend to exclude or discount these views. It further seems that, even by your own standards, your prospects of a good life will be compromised by your minority status, making your view somewhat self-defeating.

Or, one might object that morality is about something else entirely, and that these measures of objective welfare are not the issue at all. If this is the case, the task is then yours to explain what morality is for, if it is for anything at all. Certainly, moral debate could simply be one of the sorts of noises that humans make, and only that – it could perhaps just be one of the social and intellectual habits that we have developed for our own entertainment, or to buttress our narratives of self-identity, much like our desperation to believe in free will or souls despite there being no evidence for the existence of either.

But again, even if this is true, it remains rational for us to desire to live better lives as opposed to worse ones, and to seek out ways to make this the case. It also seems clear that most of us agree that measures such as our health and financial security are good proxies for knowing when lives are better or worse. And if there is any data about what makes a good life more rather than less likely, it makes sense to say that moral theory has to take that data into account, and that aggregating this data into “rules” is what morality is for.

The position sketched above is not relativistic, in that moral principles are derived from objectively measurable data. It is also not an absolutist position, because the preferences of individuals and societies are not necessarily immutable, and if different sorts of lives become desirable in the future, we should be ready to accommodate observations to this effect. Instead, what I’ve outlined is a position that is naturalistic, and for which we already have well-developed tools to separate sense from nonsense. If moral claims are not subject to the only successful tools we’ve ever developed for evaluating truth-claims – the tools of science – then there is truly nothing we can know in morality, and there seems little reason to discuss it any further.


A science of morality #1

This entry is part 4 of 5 in the series Moral debate

Originally published in The Daily Maverick

The previous instalments of this series on morality have argued that we are handicapped in our ability to engage in moral debate. This handicap exists because of our overconfidence and complacency with regard to our existing moral beliefs, as well as through the lack of guidance offered by the dominant moral theories. But a negative proof – showing what might be wrong with existing beliefs – is often an easier task than a positive argument for some viable alternative. The positive argument is the focus of the final two parts of this series.

These concluding instalments can perhaps be summarised as the claim that moral knowledge is just like any other knowledge, and should therefore be understood and debated using the same tools and resources we deploy in trying to understand other areas of epistemological contestation. The most successful tools and resources we’ve found so far are those of the scientific method, and I will thus be arguing that what we need is a “science of morality”.

The idea of a science of morality has recently enjoyed increased public attention thanks to Sam Harris, and the recent publication of his book “The Moral Landscape”. Many columns and reviews – including some from prominent moral philosophers – have been quick to dismiss Harris as philosophically ignorant, mostly on the basis that he fails to take the concerns of Hume seriously. Hume, the critics say, told us that one cannot derive an “ought” from an “is” – in other words that empirical observations about what is the case cannot tell us how things ought to be.

But instead of being willing to contemplate the possibility that Hume was wrong, or that Hume has been misunderstood, these refutations of Harris’s arguments usually amount to the simple assertion that Hume’s Guillotine (as the argument is known) shows Harris to be wrong. It’s useful to remind ourselves that simple appeals to authority are a logical fallacy – what matters is not who said something, but whether what they say stands up to logical scrutiny. This is what Hume says, in a passage from “A Treatise of Human Nature” (1740):

In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.

Read that last sentence again: Hume says that the derivation of “ought” from “is” needs to be explained, and that a reason should be given. He does not say that such explanations are impossible, or that no relevant reasons for such derivations exist. So here we have a clear example of how appeals to authority can appear convincing, even to many who regard themselves as being well acquainted with the relevant literature. Now, of course I’m simplifying – an opinion piece does not allow for excursions into subsequent work by Moore and others in which this is/ought (or fact/value) distinction is further explored and defended.

But Harris is not the first to think that this distinction is at best misleading, or even false. Those who think that empirical facts can tell us nothing about morality could spend some time reading the work of Railton, Jackson, Boyd, Binmore, Churchland and others who have presented strong cases for the possibility that facts about the world can indeed tell us something about morality. As I’ve previously argued, the idea that morality involves absolute principles has enjoyed the privilege of being grounded in dogmatic faith – whether religious or secular – and that faith doesn’t necessarily correspond to actual justification.

So if we are to entertain the notion that values can be derived from facts, how should we proceed in doing so? Applying the scientific method does not have to equal scientism. For some, it does, and this is indeed unfortunate. The more modest and useful perspective is to recognise what it is that we value about science, and why we find it so useful. We value it and find it useful because it provides us with the best possible answers to questions that potentially have answers, and allows us to make the sorts of predictions about the future that are most likely to be borne out by subsequent observations.

It does not offer us guarantees, and it never has. It’s important here to reflect on the difference between a lay understanding of science as offering absolute certainty, and the actual products of scientific inquiry, which are always qualified by reference to statistical tools like margins of error and confidence levels. These things are usually not reported in the mainstream press, but are universally present in any respectable scientific publication.
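To make “margin of error” concrete, consider a textbook illustration (my own, with hypothetical numbers – not anything from the studies discussed here): an approximate 95% confidence interval for a proportion \(\hat{p}\) estimated from a sample of size \(n\) is

\[
\hat{p} \;\pm\; 1.96\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}
\]

So a hypothetical study in which 20% of 400 subjects develop some condition would report \(0.20 \pm 1.96\sqrt{0.20 \times 0.80 / 400} \approx 0.20 \pm 0.04\) – a margin of error of about four percentage points at the 95% confidence level. The interval, not the bare 20%, is the actual product of the inquiry.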

To take an extreme example: It’s virtually certain that my habit of smoking cigarettes will lead to my suffering some unpleasant health consequences in the future. But when we say things like “smoking causes cancer”, that shorthand statement stands in for something far more complicated. A more accurate utterance would be something like “thanks to a vast body of empirical data, the most plausible hypothesis is that smoking has a positive causal relation to cancer, and we can confidently predict that Jacques is likely to develop cancer thanks to this behaviour”.

Many of our hypotheses and predictions do not allow for as much confidence as the example of smoking does. But as soon as there is any evidence – any evidence at all – the possibility exists for us to make better and worse predictions about the consequences of our actions. And we do have some evidence related to the sorts of things that allow for increases or decreases in the welfare of sentient creatures.

Following the advice offered by John Watson’s best-selling childcare book in 1928 – that you should not kiss your child more than once per year – will almost certainly have a negative effect on the welfare of that child, other things being equal. So, this fact about what conduces to your child’s welfare allows us to infer the moral principle that it is wrong to neglect your child’s emotional needs.

If you agree that there are some aspects of welfare that can be measured – and if you agree that morality has something to do with welfare – then it seems plain that facts about the world can tell us something about how we should live in that world and how we should treat not only each other, but also other sentient creatures. We have some data, and any amount of data allows for us to make better and worse predictions regarding the consequences of adopting one moral principle versus another.

Of course we don’t have certainty. But we don’t have it anywhere else, and it is unclear how this is a flaw for moral knowledge, yet not for any other kind of knowledge. This double-standard has no justification, and seems little more than an excuse to do less thinking.


Moral absolutism: deontology and religious morality

This entry is part 3 of 5 in the series Moral debate

Originally published in The Daily Maverick.

It is difficult to have any misconceptions about Immanuel Kant’s position on certain moral issues. His thoughts on whether it is permissible to lie are perhaps the most striking example of his moral absolutism. If the title of his essay “On a Supposed Right to Tell Lies from Benevolent Motives” doesn’t make the absolute impermissibility of lying clear in itself, the notorious case of the enquiring murderer certainly will.

But first, how did Kant get to absolute moral principles – ones which are not relativistic, and that apply to us all, no matter what our preferences or circumstances might be? In a very simplified form, the argument goes like this: while moral theories like utilitarianism speak of happiness as the goal of morality, Kant instead focussed on what we need to do to be worthy of happiness at all. In terms of morality, this involved doing what was right, regardless of whether it made one happy or not.

How do we know what is right? Compare what Kant referred to as hypothetical imperatives and the stronger categorical imperative. Hypothetical imperatives derive their force from relevant desires – for example, my desire to not get wet results in the imperative to carry an umbrella. However, if I lacked that particular desire, the imperative would have no force. By contrast, categorical imperatives derive their force from reason and are binding on us all, because they are products of a principle which every rational person must accept (at least, according to Kant).

That principle is the Categorical Imperative, which (in one formulation) says: “Act only according to that maxim by which you can at the same time will that it should become a universal law”. This summarises the procedure for deciding whether something is morally permissible. We should ask ourselves what rule (maxim) we would be following by performing the action in question, then ask whether we could will (in other words, make it so, rather than simply imagine) that everybody followed that rule.

In other words, what we are being prompted to consider is whether the rule in question could be universalised. If it can be, then the rule may be followed, and the act is permissible. If I chose to borrow money from you that I had no intention of repaying, I could ask whether the rule “it is permissible to borrow money without intending to repay it” could be universalised. For Kant, the answer is obviously “no”, in that this rule is self-defeating – it would effectively eliminate commerce.
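For those who like things stated compactly, the test can be sketched in quasi-formal terms (my own gloss, not Kant’s notation): a maxim \(M\) is permissible just in case willing its universalisation involves no contradiction,

\[
\text{Permissible}(M) \iff \neg\,\text{Contradictory}\big(\text{Will}(\forall a :\ a \text{ acts on } M)\big)
\]

On this reading, the borrowing maxim fails because universalising it destroys the very institution – lending on trust – that the maxim presupposes, so its universalisation cannot be coherently willed.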

So reason itself is meant to lead us to categorical imperatives involving repaying debts, and also not lying, in that a rule permitting lying would, according to Kant, make communication impossible, or at least futile (given that much communication is premised on the sharing of useful and true information). An early challenge to this prohibition on lying came from Benjamin Constant, who pointed out that according to Kant, one is morally obliged to tell a known murderer the location of his intended victim, if he were to request that information.

A lie intended to divert the murderer is impermissible, because it’s possible that unbeknownst to you, the intended victim has in fact changed location, and is now in the place you identify in your lie. This thought experiment can easily be modified to make an answer such as “no comment” equally revelatory – thereby leaving us responsible for the death of the victim in both cases. Or not, at least according to Kant, who said that so long as you tell the truth, it’s the murderer who has done wrong, not you. You’ve done what’s morally obliged by reason, consistency, and the categorical imperative – you have acted from a “good” will.

Some readers may want to bite this particular bullet, and agree with Kant that the only way we can avoid moral rules becoming the victims of subjective preference or other forces is to treat them as absolute. I’d however be willing to place a wager on most of you thinking that there’s something absurd about not allowing us to lie to an enquiring murderer, simply because doing so would introduce cost-benefit analysis into our choice. Of course we’re not all equally good at such analyses, and of course some situations don’t lend themselves to these calculations as well as others.

Regardless, it’s reasonable to suspect that most people would agree it’s morally permissible to lie when doing so appears guaranteed to save an innocent person from murder. And if you do agree, then you admit that at least some moral principles are not absolute, but are instead ideals to be followed as closely as possible. Or, you might say that it’s unreasonable for Kant to formulate the maxim or rule in such absolute terms: what’s wrong with a maxim like “don’t lie, except when you can save an innocent life by doing so”?

For Kant, what’s wrong is of course that this sort of maxim appeals to consequences, and thus offers us no absolute – rather than context-specific – guidance. It opens the door to a potentially infinite number of revisions and subtle qualifiers, and leaves us in exactly the moral mess that he thought he was clarifying with his deontological (duty-based) ethics. But an inflexible set of rules that doesn’t appear to account for what can be known (like when you know that a lie can save a life), or that doesn’t allow for any ranking of rules (if all rules are absolute, what would one do when a rule to tell the truth conflicts with a rule to save lives?), is an unsatisfactory attempt to clean up that mess.

This is of course why most ethical frameworks – the ones that seem to summarise our behaviour, rather than the ones discussed by moral philosophers – allow for some sort of hierarchy of principles, or exceptions to rules such as the “white lie”. We know that life is more messy than the ideal existence imagined by a very anal-retentive Prussian, whose daytime walks were so regular that (according to legend, at least) locals used to set their clocks by him as he passed by their houses.

While all but a handful of existing Kantians are probably behind a desk at a philosophy department somewhere, we still have a large number of deontologists in our midst. This is because the other historically popular route to absolute moral rules is through religion, and the Abrahamic religions in particular. I’ve perhaps said enough in previous columns about the absurdity of basing contemporary moral reasoning on the superstitions and ignorance of our ancestors, so will make only a few simple points here.

First, as Plato argued via Euthyphro’s dilemma (around 399 BCE), propositions can’t be right simply because they are commanded by God, because this could make anything potentially “right”, if God had that particular whim. It would strike most believers as implausible that God commands them to cut off their ears – and rightly so, seeing as the immediacy of the harm should make any non-immediate (and untestable) rewards for doing so less attractive. Believers might of course respond that God would not command this. But this is precisely the point: we expect some conformity between the commands of God and what we can understand as rational or reasonable, and this shows that instead of what God commands being right (by virtue of the command), God instead commands what is independently right.

Second, the Golden Rule – which strikes us as an absolute principle, and one which is common to many religious outlooks – is also far from absolute. It’s also far from being derived via religion, given that reciprocal altruism can be observed in many non-human animals. The idea of “doing as you would be done by” is however of little use if we were to apply it in all situations, without any recognition of context. As Simon Blackburn points out, imagine the criminal asking the judge “how would you like to be sentenced?” In other words, there are cases in which we think applying the Golden Rule to be inappropriate, because it’s been superseded by some other moral judgement – again showing that we do rank these things in practice, rather than treat them as absolute.

Both of these versions of a deontological morality fail, often because they provide easy answers to questions that should be understood as difficult. It is of course often politically desirable to give people answers that correspond to an imagined black and white world. But that is not the world we live in anymore, now that more of us have educations, and now that we find no compelling reasons to subjugate ourselves to edicts and forces that are not intelligible.

They fail also by offering moral confusion where there should be none. Consider policy decisions like whether to allow euthanasia or gay marriage. Both are cases in which one has to work very hard (with little reward) to understand how they can ever be considered immoral practices in themselves. For surely morality has to have something to do with making life better or worse? If I prefer for my life to end, not allowing euthanasia makes my life worse than it could otherwise be, and it seems immoral to deny me that right. And even if you want to object that having myself euthanased could make life worse for some, by (for example) causing my wife distress, consider instead the case in which I’m a widower with no other family. Again, the context changes the analysis – or at least it should.

Most importantly, defining morality as necessarily absolute and objective is an illegitimate way to privilege religious morality, even as it continues to become less and less useful to people living in a modern world. Not only less useful, but also potentially harmful, in that our moral sensibilities atrophy further every time we think a dilemma should not be debated – or cannot be debated – because of some absolute principle we’ve never allowed ourselves to question.

And just as religious morality should be considered part of our intellectual history, rather than inform our actions in the present, so too for much of what secularists are inclined to call “Enlightenment reason”. While the Enlightenment project did the species an enormous favour at the time, 18th-century knowledge of cognition should not be considered a permanent guide to how to live. Reason, for Kant, was absolute and transcendent, but in reality it is probably no such thing.

There is no unified locus of rationality in the brain, and no disembodied faculty of reason – our will is tied to established habits, and also seems to follow after our emotional moral judgements, rather than generate them. As Jonathan Haidt suggests, the emotional dog wags the rational tail – we form emotive reactions, and then construct post-hoc rationalisations to justify them. John Dewey’s characterisation of the self as an “interpenetration of habits” should remind us that some of the very concepts of morality (like “person”) are metaphorical – they have no necessary and sufficient conditions.

For moral theory to make sense in light of what we now know about how we reason, it must take the work of people like Haidt, de Waal, Harris and Binmore into account. Of course the answers are going to be messy, at least for now. But so are we, and so is the world we live in. If you don’t want to help with cleaning it up, so be it – but standing on the sidelines reciting dogma (religious or otherwise) is simply getting in the way of important work. And that’s surely not morally good, on any definition.