
Moral absolutism: deontology and religious morality

This entry is part 3 of 5 in the series Moral debate

Originally published in The Daily Maverick.

It is difficult to have any misconceptions about Immanuel Kant’s position on certain moral issues. His thoughts on whether it is permissible to lie are perhaps the most striking example of his moral absolutism. If the title of his essay “On a Supposed Right to Tell Lies from Benevolent Motives” doesn’t make the absolute impermissibility of lying clear in itself, the notorious case of the enquiring murderer certainly will.

But first, how did Kant get to absolute moral principles – ones which are not relativistic, and that apply to us all, no matter what our preferences or circumstances might be? In a very simplified form, the argument goes like this: While moral theories like utilitarianism speak of happiness as the goal of morality, Kant instead focussed on what we need to do to be worthy of happiness at all. In terms of morality, this involved doing what was right, regardless of whether it made one happy or not.

How do we know what is right? Compare what Kant referred to as hypothetical imperatives and the stronger categorical imperative. Hypothetical imperatives derive their force from relevant desires – for example, my desire to not get wet results in the imperative to carry an umbrella. However, if I lacked that particular desire, the imperative would have no force. By contrast, categorical imperatives derive their force from reason and are binding on us all, because they are products of a principle which every rational person must accept (at least, according to Kant).

That principle is the Categorical Imperative, which (in one formulation) says: “Act only according to that maxim by which you can at the same time will that it should become a universal law”. This summarises the procedure for deciding whether something is morally permissible. We should ask ourselves what rule (maxim) we would be following by performing the action in question, then ask whether we could will (in other words, make it so, rather than simply imagine) that everybody followed that rule.

In other words, what we are being prompted to consider is whether the rule in question could be universalised. If it can be, then the rule may be followed, and the act is permissible. If I chose to borrow money from you that I had no intention of repaying, I could ask whether the rule “it is permissible to borrow money without intending to repay it” could be universalised. For Kant, the answer is obviously “no”, in that this rule is self-defeating – it would effectively eliminate commerce.

So reason itself is meant to lead us to categorical imperatives involving repaying debts, and also not lying, in that a rule permitting lying would, according to Kant, make communication impossible, or at least futile (given that much communication is premised on the sharing of useful and true information). An early challenge to this prohibition on lying came from Benjamin Constant, who pointed out that according to Kant, one is morally obliged to tell a known murderer the location of his intended victim, if he were to request that information.

A lie intended to divert the murderer is impermissible, because it’s possible that, unbeknownst to you, the intended victim has in fact changed location, and is now in the place you identify in your lie. This thought experiment can easily be modified to make an answer such as “no comment” equally revelatory – thereby leaving us responsible for the death of the victim in both cases. Or not, at least according to Kant, who said that so long as you tell the truth, it’s the murderer who has done wrong, not you. You’ve done what’s morally obliged by reason, consistency, and the categorical imperative – you have acted from a “good” will.

Some readers may want to bite this particular bullet, and agree with Kant that the only way we can avoid moral rules becoming the victims of subjective preference or other forces is to treat them as absolute. I’d however be willing to place a wager on most of you thinking that there’s something absurd about not allowing us to lie to an enquiring murderer, simply because doing so would introduce cost-benefit analysis into our choice. Of course we’re not all equally good at such analyses, and of course some situations don’t lend themselves to these calculations as well as others.

Regardless, it’s reasonable to suspect that most people would agree it’s morally permissible to lie when doing so appears guaranteed to save an innocent person from murder. And if you do agree, then you admit that at least some moral principles are not absolute, but are instead ideals to be followed as closely as possible. Or, you might say that it’s unreasonable for Kant to formulate the maxim or rule in such absolute terms: what’s wrong with a maxim like “don’t lie, except when you can save an innocent life by doing so”?

For Kant, what’s wrong is of course that this sort of maxim appeals to consequences, and thus offers us no absolute – rather than context-specific – guidance. It opens the door to a potentially infinite number of revisions and subtle qualifiers, and leaves us in exactly the moral mess that he thought he was clarifying with his deontological (duty-based) ethics. But an inflexible set of rules that doesn’t appear to account for what can be known (like when you know that a lie can save a life), or that doesn’t allow for any ranking of rules (if all rules are absolute, what would one do when a rule to tell the truth conflicts with a rule to save lives?), is an unsatisfactory attempt to clean up that mess.

This is of course why most ethical frameworks – the ones that seem to summarise our behaviour, rather than the ones discussed by moral philosophers – allow for some sort of hierarchy of principles, or exceptions to rules such as the “white lie”. We know that life is more messy than the ideal existence imagined by a very anal-retentive Prussian, whose daytime walks were so regular that (according to legend, at least) locals used to set their clocks by him as he passed by their houses.

While all but a handful of existing Kantians are probably behind a desk at a philosophy department somewhere, we still have a large number of deontologists in our midst. This is because the other historically popular route to absolute moral rules is through religion, and the Abrahamic religions in particular. I’ve perhaps said enough about the absurdity of basing contemporary moral reasoning on the superstitions and lack of knowledge of our ancestors in previous columns, so will make only a few simple points here.

First, as Plato argued in the Euthyphro (set around 399 BCE), propositions can’t be right simply because they are commanded by God, because this could make anything potentially “right”, if God had that particular whim. It would strike most believers as implausible that God commands them to cut off their ears – and rightly so, seeing as the immediacy of the harm should make any non-immediate (and untestable) rewards for doing so less attractive. Believers might of course respond that God would not command this. But this is precisely the point: We expect some conformity between the commands of God and what we can understand as rational or reasonable, and this shows that, rather than something being right simply because God commands it, God instead commands what is independently right.

Second, the Golden Rule – which strikes us as an absolute principle, and one which is common to many religious outlooks – is also far from absolute. It’s also far from being derived via religion, given that reciprocal altruism can be observed in many non-human animals. The idea of “doing as you would be done by” is however of little use if we were to apply it in all situations, without any recognition of context. As Simon Blackburn points out, imagine the criminal asking the judge “how would you like to be sentenced?” In other words, there are cases in which we think applying the Golden Rule to be inappropriate, because it’s been superseded by some other moral judgement – again showing that we do rank these things in practice, rather than treat them as absolute.

Both of these versions of a deontological morality fail, often because they provide easy answers to questions that should be understood as difficult. It is of course often politically desirable to give people answers that correspond to an imagined black and white world. But that is not the world we live in anymore, now that more of us have educations, and now that we find no compelling reasons to subjugate ourselves to edicts and forces that are not intelligible.

They fail also by offering moral confusion where there should be none. Consider policy decisions like whether to allow euthanasia or gay marriage. Both are cases in which one has to work very hard (with little reward) to understand how they can ever be considered immoral practices in themselves. For surely morality has to have something to do with making life better or worse? If I prefer for my life to end, not allowing euthanasia makes my life worse than it could otherwise be, and it seems immoral to deny me that right. And even if you want to object that having myself euthanased could make life worse for some, by (for example) causing my wife distress, consider instead the case in which I’m a widower with no other family. Again, the context changes the analysis – or at least it should.

Most importantly, defining morality as necessarily absolute and objective is an illegitimate way to privilege religious morality, even as it continues to become less and less useful to people living in a modern world. Not only less useful, but also potentially harmful, in that our moral sensibilities atrophy further every time we think a dilemma should not be debated – or cannot be debated – because of some absolute principle we’ve never allowed ourselves to question.

And just as religious morality should be considered part of our intellectual history, rather than inform our actions in the present, so too for much of what secularists are inclined to call “Enlightenment reason”. While the Enlightenment project did the species an enormous favour at the time, our knowledge of cognition in the 18th century should not be considered a permanent guide to how to live. Reason, for Kant, was absolute and transcendent, but in reality it is probably no such thing.

There is no unified locus of rationality in the brain, and no disembodied faculty of reason – our will is tied to established habits, and also seems to follow our emotional moral judgements, rather than generate them. As Jonathan Haidt suggests, the emotional dog wags the rational tail – we form emotive reactions, and then construct post-hoc rationalisations to justify them. John Dewey’s characterisation of the self as an “interpenetration of habits” should remind us that some of the very concepts of morality (like “person”) are metaphorical – they have no necessary and sufficient conditions.

For moral theory to make sense in light of what we now know about how we reason, it must take the work of people like Haidt, de Waal, Harris and Binmore into account. Of course the answers are going to be messy, at least for now. But so are we, and so is the world we live in. If you don’t want to help with cleaning it up, so be it – but standing on the sidelines reciting dogma (religious or otherwise) is simply getting in the way of important work. And that’s surely not morally good, on any definition.


The possibility of moral debate

This entry is part 1 of 5 in the series Moral debate

Originally published in The Daily Maverick

One of the great misfortunes of our age is perhaps that we are all special. Or at least, that we are all considered to be special by others, and that we tend to believe them. By special, I mean important, significant, or worth taking seriously as individuals with distinct interests, rights, characteristics and so forth.

This sanctity of the individual is of course a different matter to the status of ideas or arguments, which can be worth taking seriously (or not) on their own merits, independently of the character or reputation of the person expressing those ideas. The problem, however, is that the alleged sanctity of the individual tends to reinforce the status of her ideas, making us more reluctant than we should be to criticise her strongly-held beliefs – or, of course, our own.

As recently as the 1970s, when I started becoming conscious, things were not this way. Of course, your parents, other family and friends may have thought you were special. Since then, however, rhetoric around notions such as human rights – as well as the scourge of identity politics – has resulted in individuals having far less humility than they perhaps should, especially when it comes to their feelings of entitlement to be taken seriously.

One key manifestation of this is the confidence we exhibit in our own moral judgements. We might have a strong conviction that the races and genders are equal, or that homophobia is an unacceptable form of prejudice. Or, we might have the opposite conviction. But either way, we believe whatever we do emphatically, even dogmatically, while at the same time somehow respecting that others have the right to believe the opposite.

Something is obviously amiss with this state of affairs. If we are convinced that we are right, we should be equally convinced that others are wrong. And if we are talking about a principle or idea that is believed to affect human welfare, one would think that we’d also feel an obligation to persuade others that they are wrong, and that they should instead adopt our point of view. Unfortunately, the strength of our convictions is not always backed up by equally strong justification, and we thus find ourselves unable to do the work of persuading others to change their minds.

Think back to the last time you were party to an argument around a moral issue. In the majority of cases, we can confidently predict that the bulk of the exchange consisted in the parties involved simply stating their positions, where those positions usually fit quite neatly into one of the established and socially legitimised frames. So, I might say I’m a libertarian, explain what I mean by that, and show how that position leads me to a certain conclusion on the topic at hand.

You might say in response that some measure of paternalism is merited, seeing as we have such a poor track-record of making rational choices regarding our welfare. And then you might explain how your position justifies some particular limitation of freedom, such as making me wear a seatbelt while driving. But these exchanges are typically characterised by only this superficial level of intellectual exchange – they rarely challenge us to question the frameworks themselves.

This is perhaps because we trust that our interlocutor has arrived at their theoretical commitments via hours of reading and deliberation. But is this ever true, except for those of us secluded in ivory towers of one form or another? Is it not instead the case that we’re oftentimes simply making it up, or at most relying on some formative exposure to one point of view or another, which we haven’t bothered to interrogate since it played the role of shaping our worldviews?

The background problem here is that most of us rely on what could be called a “folk theory” of moral law. Just like folk psychology, where (for example) our common-sense intuitions around the pains and pleasures we feel are radically over- and misinterpreted to result in a completely misleading view of the self and its relation to the external world, we seem to believe that we have some innate ability to discern right from wrong. What we forget in the act of making these judgements is that much of this is learned behaviour, and that our lessons may have been provided by incompetent teachers.

Of course, not all of morality consists in learned behaviours. Some evidence of reciprocal altruism has been found in eight-month-old infants as well as in other primates, suggesting that at least some of our moral instincts may develop largely independently of the social mechanisms we happen to be exposed to. Notions such as fairness and justice appear to be well understood in the absence of language, and have even been observed in the behaviour of domestic dogs.

Much of our more complex moral framework does however emerge from a process of learning, whether that learning is through social osmosis or something like studying moral philosophy. And for many of us, that learning commits us to one of two positions: moral absolutism or moral relativism. Unfortunately, neither of these positions is well-suited to moral debate, or to changing the minds of others.

Absolutism in a moral sense does not mean that you need to be certain of the correct answer to any particular moral dilemma, nor that all moral dilemmas have a certain answer. It does however mean that certain actions are absolutely right or wrong. The absolutist would typically justify their judgements as to which actions can be known to be right or wrong through appeal to deontological frameworks, such as that of Immanuel Kant, or through religious moral codes. The immediate reason why absolutism could handicap moral debate is because the foundational principles (a commitment to reason for Kant, a belief in a particular deity for religion) are difficult to reach agreement on.

Relativism, on the other hand, makes the claim that because we can’t agree on any objective basis for moral judgements, there cannot be any objective truths in morality, and that we should therefore reconcile ourselves to the fact that terms such as “right” and “wrong” are relative to culture. At its extremes, this sort of reasoning can also be used to justify egoism, whereby the meaning of right and wrong is determined solely by the agent herself.

Relativism offers a clear handicap to moral debate, in that the idea of debate presupposes that some sort of resolution is possible. If all moral dilemmas are resolved by simply fact-checking what a particular culture (or person) happens to believe, we’d have little reason to engage with those dilemmas, as the conversations involved would be short and uninteresting.

Our educations into one of these unhelpful frameworks for debate – as well as our convictions with regard to the privileged status of our own judgements – tend to handicap our ability to reach principled agreement in moral debates. What are our alternatives, and what is the full case for rejecting absolutism and/or relativism? If you believe these to be important questions, come back next week for a continuation of this attempt to sketch some possible answers.