Categories
Daily Maverick Morality Religion

The possibility of moral debate

This entry is part 1 of 5 in the series Moral debate

Originally published in The Daily Maverick

One of the great misfortunes of our age is perhaps that we are all special. Or at least, that we are all considered to be special by others, and that we tend to believe them. By special, I mean important, significant, or worth taking seriously as individuals with distinct interests, rights, characteristics and so forth.

This sanctity of the individual is of course a different matter to the status of ideas or arguments, which can be worth taking seriously (or not) on their own merits, independently of the character or reputation of the person expressing those ideas. The problem, however, is that the alleged sanctity of the individual tends to reinforce the status of her ideas, making us more reluctant than we should be to criticise her strongly-held beliefs – or, of course, our own.

As recently as the 1970s, when I started becoming conscious, things were not this way. Of course, your parents, other family and friends may have thought you were special. Since then, however, rhetoric around notions such as human rights – as well as the scourge of identity politics – has resulted in individuals having far less humility than they perhaps should, especially when it comes to their feelings of entitlement to be taken seriously.

One key manifestation of this is the confidence we exhibit in our own moral judgements. We might have a strong conviction that the races and genders are equal, or that homophobia is an unacceptable form of prejudice. Or, we might have the opposite conviction. But either way, we believe whatever we do emphatically, even dogmatically, while at the same time somehow respecting that others have the right to believe the opposite.

Something is obviously amiss with this state of affairs. If we are convinced that we are right, we should be equally convinced that others are wrong. And if we are talking about a principle or idea that is believed to affect human welfare, one would think that we’d also feel an obligation to persuade others that they are wrong, and that they should instead adopt our point of view. Unfortunately, the strength of our convictions is not always backed up by equally strong justification, and we thus find ourselves unable to do the work of persuading others to change their minds.

Think back to the last time you were party to an argument around a moral issue. In the majority of cases, we can confidently predict that the bulk of the exchange consisted in the parties involved simply stating their positions, where those positions usually fit quite neatly into one of the established and socially legitimised frames. So, I might say I’m a libertarian, explain what I mean by that, and show how that position leads me to a certain conclusion on the topic at hand.

You might say in response that some measure of paternalism is merited, seeing as we have such a poor track-record of making rational choices regarding our welfare. And then you might explain how your position justifies some particular limitation of freedom, such as making me wear a seatbelt while driving. But these exchanges are typically characterised by only this superficial level of intellectual exchange – they rarely challenge us to question the frameworks themselves.

This is perhaps because we trust that our interlocutor has arrived at their theoretical commitments via hours of reading and deliberation. But is this ever true, except for those of us secluded in ivory towers of one form or another? Is it not instead the case that we’re oftentimes simply making it up, or at most relying on some formative exposure to one point of view or another, which we haven’t bothered to interrogate since it played the role of shaping our worldviews?

The background problem here is that most of us rely on what could be called a “folk theory” of moral law. Just like folk psychology, where (for example) our common-sense intuitions around the pains and pleasures we feel are radically over- and misinterpreted to result in a completely misleading view of the self and its relation to the external world, we seem to believe that we have some innate ability to discern right from wrong. What we forget in the act of making these judgements is that much of this is learned behaviour, and that our lessons may have been provided by incompetent teachers.

Of course, not all of morality consists in learned behaviours. Some evidence of reciprocal altruism has been found in eight-month-old infants as well as in other primates, suggesting that at least some of our moral instincts may develop largely independently of the social mechanisms we happen to be exposed to. Notions such as fairness and justice appear to be well understood in the absence of language, and have even been observed in the behaviour of domestic dogs.

Much of our more complex moral framework does however emerge from a process of learning, whether that learning is through social osmosis or something like studying moral philosophy. And for many of us, that learning commits us to one of two positions: moral absolutism or moral relativism. Unfortunately, neither of these positions is well-suited to moral debate, or to changing the minds of others.

Absolutism in a moral sense does not mean that you need to be certain of the correct answer to any particular moral dilemma, nor that all moral dilemmas have a certain answer. It does however mean that certain actions are absolutely right or wrong. The absolutist would typically justify their judgements as to which actions can be known to be right or wrong through appeal to deontological frameworks, such as that of Immanuel Kant, or through religious moral codes. The immediate reason why absolutism could handicap moral debate is because the foundational principles (a commitment to reason for Kant, a belief in a particular deity for religion) are difficult to reach agreement on.

Relativism, on the other hand, makes the claim that because we can’t agree on any objective basis for moral judgements, there cannot be any objective truths in morality, and that we should therefore reconcile ourselves to the fact that terms such as “right” and “wrong” are relative to culture. At its extremes, this sort of reasoning can also be used to justify egoism, whereby the meanings of right and wrong are determined solely by the agent herself.

Relativism offers a clear handicap to moral debate, in that the idea of debate presupposes that some sort of resolution is possible. If all moral dilemmas are resolved by simply fact-checking what a particular culture (or person) happens to believe, we’d have little reason to engage with those dilemmas, as the conversations involved would be short and uninteresting.

Our educations into one of these unhelpful frameworks for debate – as well as our convictions with regard to the privileged status of our own judgements – tend to handicap our ability to reach principled agreement in moral debates. What are our alternatives, and what is the full case for rejecting absolutism and/or relativism? If you believe these to be important questions, come back next week for a continuation of this attempt to sketch some possible answers.

Moral debate and the problem of relativism

This entry is part 2 of 5 in the series Moral debate

Originally published in The Daily Maverick

It is not only because of the privileged status we accord to our ideas that we are reluctant to unsettle them, or that others are wary of challenging them. In some areas of knowledge – or potential knowledge – some of us think that no truths can in fact be known, and that we therefore need to find other ways of resolving disputes. Or sometimes, the claim is that we should not even bother trying to resolve disputes, because they are in principle not resolvable.

One area where this can be observed is in the debate between naturalism, broadly defined as the view that everything can potentially be explained by reference to empirically verifiable data, and supernaturalism, where objects like deities play a significant role in explaining our lives and our physical surrounds. Another is aesthetics, where some claim that beauty only exists in the eye of the beholder. And of course there is morality, where according to a certain school of thought, there are no objective grounds on which to judge one moral viewpoint as superior to another.

Moral absolutism: deontology and religious morality

This entry is part 3 of 5 in the series Moral debate

Originally published in The Daily Maverick.

It is difficult to have any misconceptions about Immanuel Kant’s position on certain moral issues. His thoughts on whether it is permissible to lie are perhaps the most striking example of his moral absolutism. If the title of his essay “On a Supposed Right to Tell Lies from Benevolent Motives” doesn’t make the absolute impermissibility of lying clear in itself, the notorious case of the enquiring murderer certainly will.

But first, how did Kant get to absolute moral principles – ones which are not relativistic, and that apply to us all, no matter what our preferences or circumstances might be? In a very simplified form, the argument goes like this: While moral theories like utilitarianism speak of happiness as the goal of morality, Kant instead focussed on what we need to do to be worthy of happiness at all. In terms of morality, this involved doing what was right, regardless of whether it made one happy or not.

How do we know what is right? Compare what Kant referred to as hypothetical imperatives and the stronger categorical imperative. Hypothetical imperatives derive their force from relevant desires – for example, my desire to not get wet results in the imperative to carry an umbrella. However, if I lacked that particular desire, the imperative would have no force. By contrast, categorical imperatives derive their force from reason and are binding on us all, because they are products of a principle which every rational person must accept (at least, according to Kant).

That principle is the Categorical Imperative, which (in one formulation) says: “Act only according to that maxim by which you can at the same time will that it should become a universal law”. This summarises the procedure for deciding whether something is morally permissible. We should ask ourselves what rule (maxim) we would be following by performing the action in question, then ask whether we could will (in other words, make it so, rather than simply imagine) that everybody followed that rule.

In other words, what we are being prompted to consider is whether the rule in question could be universalised. If it can be, then the rule may be followed, and the act is permissible. If I chose to borrow money from you that I had no intention of repaying, I could ask whether the rule “it is permissible to borrow money without intending to repay it” could be universalised. For Kant, the answer is obviously “no”, in that this rule is self-defeating – it would effectively eliminate commerce.

So reason itself is meant to lead us to categorical imperatives involving repaying debts, and also not lying, in that a rule permitting lying would, according to Kant, make communication impossible, or at least futile (given that much communication is premised on the sharing of useful and true information). An early challenge to this prohibition on lying came from Benjamin Constant, who pointed out that according to Kant, one is morally obliged to tell a known murderer the location of his intended victim, if he were to request that information.

A lie intended to divert the murderer is impermissible, because it’s possible that unbeknownst to you, the intended victim has in fact changed location, and is now in the place you identify in your lie. This thought experiment can easily be modified to make an answer such as “no comment” equally revelatory – thereby leaving us responsible for the death of the victim in both cases. Or not, at least according to Kant, who said that so long as you tell the truth, it’s the murderer who has done wrong, not you. You’ve done what’s morally obliged by reason, consistency, and the categorical imperative – you have acted from a “good” will.

Some readers may want to bite this particular bullet, and agree with Kant that the only way we can avoid moral rules becoming the victims of subjective preference or other forces is to treat them as absolute. I’d however be willing to place a wager on most of you thinking that there’s something absurd about not allowing us to lie to an enquiring murderer, simply because doing so would introduce cost-benefit analysis into our choice. Of course we’re not all equally good at such analyses, and of course some situations don’t lend themselves to these calculations as well as others.

Regardless, it’s reasonable to suspect that most people would agree it’s morally permissible to lie when doing so appears guaranteed to save an innocent person from murder. And if you do agree, then you admit that at least some moral principles are not absolute, but are instead ideals to be followed as closely as possible. Or, you might say that it’s unreasonable for Kant to formulate the maxim or rule in such absolute terms: what’s wrong with a maxim like “don’t lie, except when you can save an innocent life by doing so”?

For Kant, what’s wrong is of course that this sort of maxim appeals to consequences, and thus offers us no absolute – rather than context-specific – guidance. It opens the door to a potentially infinite number of revisions and subtle qualifiers, and leaves us in exactly the moral mess that he thought he was clarifying with his deontological (duty-based) ethics. But an inflexible set of rules that doesn’t appear to account for what can be known (like when you know that a lie can save a life), or that doesn’t allow for any ranking of rules (if all rules are absolute, what would one do when a rule to tell the truth conflicts with a rule to save lives?) is an unsatisfactory attempt to clean up that mess.

This is of course why most ethical frameworks – the ones that seem to summarise our behaviour, rather than the ones discussed by moral philosophers – allow for some sort of hierarchy of principles, or exceptions to rules such as the “white lie”. We know that life is more messy than the ideal existence imagined by a very anal-retentive Prussian, whose daytime walks were so regular that (according to legend, at least) locals used to set their clocks by him as he passed by their houses.

While all but a handful of existing Kantians are probably behind a desk at a philosophy department somewhere, we still have a large number of deontologists in our midst. This is because the other historically popular route to absolute moral rules is through religion, and the Abrahamic religions in particular. I’ve perhaps said enough about the absurdity of basing contemporary moral reasoning on the superstitions and lack of knowledge of our ancestors in previous columns, so will make only a few simple points here.

First, as Plato argued in the Euthyphro (c. 399 BCE), actions can’t be right simply because they are commanded by God, because this could make anything potentially “right”, if God had that particular whim. It would strike most believers as implausible that God commands them to cut off their ears – and rightly so, seeing as the immediacy of the harm should make any non-immediate (and untestable) rewards for doing so less attractive. Believers might of course respond that God would not command this. But this is precisely the point: We expect some conformity between the commands of God and what we can understand as rational or reasonable, and this shows that, instead of things being right by virtue of God’s commanding them, God instead commands what is independently right.

Second, the Golden Rule – which strikes us as an absolute principle, and one which is common to many religious outlooks – is also far from absolute. It’s also far from being derived via religion, given that reciprocal altruism can be observed in many non-human animals. The idea of “doing as you would be done by” would however be of little use if we applied it in all situations, without any recognition of context. As Simon Blackburn points out, imagine the criminal asking the judge “how would you like to be sentenced?” In other words, there are cases in which we think applying the Golden Rule to be inappropriate, because it’s been superseded by some other moral judgement – again showing that we do rank these things in practice, rather than treat them as absolute.

Both of these versions of a deontological morality fail, often because they provide easy answers to questions that should be understood as difficult. It is of course often politically desirable to give people answers that correspond to an imagined black and white world. But that is not the world we live in anymore, now that more of us have educations, and now that we find no compelling reasons to subjugate ourselves to edicts and forces that are not intelligible.

They fail also by offering moral confusion where there should be none. Consider policy decisions like whether to allow euthanasia or gay marriage. Both are cases in which one has to work very hard (with little reward) to understand how they can ever be considered immoral practices in themselves. For surely morality has to have something to do with making life better or worse? If I prefer for my life to end, not allowing euthanasia makes my life worse than it could otherwise be, and it seems immoral to deny me that right. And even if you want to object that having myself euthanased could make life worse for some, by (for example) causing my wife distress, consider instead the case in which I’m a widower with no other family. Again, the context changes the analysis – or at least it should.

Most importantly, defining morality as necessarily absolute and objective is an illegitimate way to privilege religious morality, even as it continues to become less and less useful to people living in a modern world. Not only less useful, but also potentially harmful, in that our moral sensibilities atrophy further every time we think a dilemma should not be debated – or cannot be debated – because of some absolute principle we’ve never allowed ourselves to question.

And just as religious morality should be considered part of our intellectual history, rather than inform our actions in the present, so too for much of what secularists are inclined to call “Enlightenment reason”. While the Enlightenment project did the species an enormous favour at the time, our knowledge of cognition in the 18th century should not be considered a permanent guide to how to live. Reason, for Kant, was absolute and transcendent, but in reality it is probably no such thing.

There is no unified locus of rationality in the brain, and no disembodied faculty of reason – our will is tied to established habits, and also seems to follow after our emotional moral judgements, rather than generate them. As Jonathan Haidt suggests, the emotional dog wags the rational tail – we form emotive reactions, and then construct post-hoc rationalisations to justify them. John Dewey’s characterisation of the self as an “interpenetration of habits” should remind us that some of the very concepts of morality (like “person”) are metaphorical – they have no necessary and sufficient conditions.

For moral theory to make sense in light of what we now know about how we reason, it must take the work of people like Haidt, de Waal, Harris and Binmore into account. Of course the answers are going to be messy, at least for now. But so are we, and so is the world we live in. If you don’t want to help with cleaning it up, so be it – but standing on the sidelines reciting dogma (religious or otherwise) is simply getting in the way of important work. And that’s surely not morally good, on any definition.

A science of morality #1

This entry is part 4 of 5 in the series Moral debate

Originally published in The Daily Maverick

The previous instalments of this series on morality have argued that we are handicapped in our ability to engage in moral debate. This handicap exists because of our overconfidence and complacency with regard to our existing moral beliefs, as well as through the lack of guidance offered by the dominant moral theories. But a negative proof – showing what might be wrong with existing beliefs – is often an easier task than a positive argument for some viable alternative. The positive argument is the focus of the final two parts of this series.

A summary of these concluding instalments is perhaps the claim that moral knowledge is just like any other knowledge, and should therefore be understood and debated using the same tools and resources we deploy in trying to understand other areas of epistemological contestation. The most successful tools and resources we’ve found so far are those of the scientific method, and I will thus be arguing that what we need is a “science of morality”.

The idea of a science of morality has recently enjoyed increased public attention thanks to Sam Harris, and the recent publication of his book “The Moral Landscape”. Many columns and reviews – including some from prominent moral philosophers – have been quick to dismiss Harris as philosophically ignorant, mostly on the basis that he fails to take the concerns of Hume seriously. Hume, the critics say, told us that one cannot derive an “ought” from an “is” – in other words that empirical observations about what is the case cannot tell us how things ought to be.

But instead of being willing to contemplate the possibility that Hume was wrong, or that Hume can be misunderstood, these refutations of Harris’s arguments usually amount to the simple assertion that Hume’s Guillotine (as the argument is known) shows that he is wrong. It’s useful to remind ourselves that simple appeals to authority are a logical fallacy – it doesn’t matter who said what, but rather that what they say stands up to logical scrutiny. This is what Hume says, in a passage from “A Treatise of Human Nature” (1740):

In every system of morality, which I have hitherto met with, I have always remark’d, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surpriz’d to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ’tis necessary that it shou’d be observ’d and explain’d; and at the same time that a reason should be given; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.

Read that last sentence again: Hume says that the derivation of “ought” from “is” needs to be explained, and that a reason should be given. He does not say that such explanations are impossible, or that no relevant reasons for such derivations exist. So here we have a clear example of how appeals to authority can appear convincing, even to many who regard themselves as being well acquainted with the relevant literature. Now, of course I’m simplifying – an opinion piece does not allow for excursions into subsequent work by Moore and others in which this is/ought (or fact/value) distinction is further explored and defended.

But Harris is not the first to think that this distinction is at best misleading, or even false. Those who think that empirical facts can tell us nothing about morality could spend some time reading the work of Railton, Jackson, Boyd, Binmore, Churchland and others who have presented strong cases for the possibility that facts about the world can indeed tell us something about morality. As I’ve previously argued, the idea that morality involves absolute principles has enjoyed the privilege of being grounded in dogmatic faith – whether religious or secular – and that faith doesn’t necessarily correspond to actual justification.

So if we are to entertain the notion that values can be derived from facts, how should we proceed in doing so? Applying the scientific method does not have to equal scientism. For some, it does, and this is indeed unfortunate. The more modest and useful perspective is to recognise what it is that we value about science, and why we find it so useful. We value it and find it useful because it provides us with the best possible answers to questions that potentially have answers, and allows us to make the sorts of predictions about the future that are most likely to be borne out by subsequent observations.

It does not offer us guarantees, and it never has. It’s important here to reflect on the difference between a lay understanding of science as offering absolute certainty, versus the actual products of scientific inquiry, which are always qualified by reference to statistical tools like margins of error and confidence levels. These things are usually not reported in the mainstream press, but are universally present in any respectable scientific publication.

To take an extreme example: It’s virtually certain that my habit of smoking cigarettes will lead to my suffering some unpleasant health consequences in the future. But when we say things like “smoking causes cancer”, that shorthand statement stands in for something far more complicated. A more accurate utterance would be something like “thanks to a vast body of empirical data, the most plausible hypothesis is that smoking has a positive causal relation to cancer, and we can confidently predict that Jacques is likely to develop cancer thanks to this behaviour”.

Many of our hypotheses and predictions do not allow for as much confidence as the example of smoking does. But as soon as there is any evidence – any evidence at all – the possibility exists for us to make better and worse predictions about the consequences of our actions. And we do have some evidence related to the sorts of things that allow for increases or decreases in the welfare of sentient creatures.

Following the advice offered by John Watson’s best-selling childcare book in 1928 – that you should not kiss your child more than once per year – will almost certainly have a negative effect on the welfare of that child, other things being equal. So, this fact about what conduces to your child’s welfare allows us to infer the moral principle that it is wrong to neglect your child’s emotional needs.

If you agree that there are some aspects of welfare that can be measured – and if you agree that morality has something to do with welfare – then it seems plain that facts about the world can tell us something about how we should live in that world and how we should treat not only each other, but also other sentient creatures. We have some data, and any amount of data allows for us to make better and worse predictions regarding the consequences of adopting one moral principle versus another.

Of course we don’t have certainty. But we don’t have it anywhere else, and it is unclear how this is a flaw for moral knowledge, yet not for any other kind of knowledge. This double-standard has no justification, and seems little more than an excuse to do less thinking.

A science of morality #2

This entry is part 5 of 5 in the series Moral debate

Originally published in The Daily Maverick

Various metaphysical questions have enjoyed the attention of philosophers, whether amateur or professional, ever since we became able to articulate complex thought. From questions regarding the point of our existence to wondering about the nature and existence of a soul, we have spent much time pondering these and other questions that frequently seem insoluble, and for which it remains unclear what – if anything – should be taken as counting as evidence for or against any particular conclusion.

It’s certainly possible, even probable, that much of this has been wasted time, at least in the sense of its likelihood of resulting in answers that can assist us in dealing with practical problems that are (at least in principle) soluble. And we do have practical problems to address. The concept of moral luck highlights the fact that some of us are simply born on the back-foot, and that no matter how hard we work, or what our natural talents might be, we will always be less well-off than someone born into more favourable circumstances.

This is a moral issue. While libertarians are comfortable with the idea of desert, whereby your life choice to be slothful could rightly correlate with a lack of wealth and opportunity, it’s less easy to say that you get what you deserve if accidents of geography have resulted in your being born in a township with no access to quality schooling. And this is a moral issue generated by entirely practical considerations: the proper allocation of state resources, and government policy with regard to class and race.

But when we think about morality, we often fall into a trap of subjectivity. Part of what makes H. sapiens as interesting as it can be is our ability to engage in self-reflection, and to indulge ourselves through complex narratives that reinforce our specialness. The very activity of thinking about the metaphysical questions gestured at above is a privileged activity, in that it’s a luxury that those who are worried about where the next meal might come from would indulge in less frequently than the average reader of The Daily Maverick. It is however also an activity that is distinctly and definitively human, as we would most likely continue engaging in it even if there were no answers to be had.

As a starting point to resolving non-subjective moral dilemmas, we could usefully remind ourselves that there are a number of clear correlates between human flourishing on the one hand, and economic and social policy on the other. We should also remind ourselves that subjective welfare – my perceived happiness, and what I believe needs attention in terms of my welfare – is absolutely unreliable as a guide to what we should do, whether in a moral sense or any other. Perceptions of personal welfare are massively state-dependent, in that what I report today might be entirely different to what I report tomorrow, simply because of the cognitive biases that we are all victims of.

This means that morality should be informed by objective measures of welfare – if not completely, then at least substantially. On the macro-level of societal good, this means that where we can know that the provision of sanitation, water and electricity to a certain level results in a clear aggregate increase in health, it becomes a moral imperative to provide those goods. Where we can know that gender or racial equality, whether in terms of voting rights, access to education or any other measure, results in social good in some measurable form, the provision of these rights also becomes a moral imperative.

Morality is therefore at least in part an issue of sound policy. Not only because we would see increases in economic and intellectual productivity if more South Africans had access to the relevant markets, but also because it seems plausible that many of our moral problems might be minimised through redress of these macro-level problems. It is no accident that violent crime, rape or spousal abuse simply doesn’t happen as often in places like Sweden. People are simply not incentivised to take what is not theirs in jurisdictions such as these, where basic needs are met. They have less reason to, and they also have more reason to work towards the common good.

This is because the relationship between self-interest and maximising the common good becomes clear in situations where you regularly see evidence of your welfare being advanced by collective action, rather than by occasionally winning what you perceive as a zero-sum game against a hostile other, whether that other is the state or society.

To some extent, objective considerations such as these can also inform morality on a personal and subjective level. We can extrapolate various well-justified moral norms or rules applicable in personal environments from what we can know from our high-level conclusions regarding what is good for a society. If corruption, deceit and violence are negatively correlated with flourishing on a societal level, it's highly likely that the same relationship exists on a personal level, whether the tenderpreneur experiences it in this way or not. While free-riders can never be eliminated, they are no obstacle to our reaching agreement on calling certain actions "good" or "bad", because we know them to be either conducive or not to objectively desirable states of the world.

Of course, some might object to any claim that there are objectively desirable states of the world. I struggle to make sense of this objection, in that it seems obvious that the vast majority of us prefer certain common goods, such as health, financial security and our preferred level of social engagement. If someone were to make the claim that the most desirable state of the world instead involves privation and violence, I see no reason not to simply exclude them from the conversation, in the same way that we can justifiably ignore the opinions of young-earth creationists when we talk about cosmology.

But if we are to take such objections seriously, it seems clear that they lead to an impasse of one sort or another. We could say something like: "okay, I can't prove that you are right and I am wrong about what is good, but I do know that I don't want to be part of a society in which your view is well-represented". In other words, even though our social contract might be entirely pragmatic, it will tend to exclude or discount these views. It further seems that, even by your own standards, your prospects of a good life will be compromised by your minority status, making your view somewhat self-defeating.

Or, one might object that morality is about something else entirely, and that these measures of objective welfare are not the issue at all. If this is the case, the task is then yours to explain what morality is for, if it is for anything at all. Certainly, moral debate could simply be one of the sorts of noises that humans make, and only that – it could perhaps just be one of the social and intellectual habits that we have developed for our own entertainment, or to buttress our narratives of self-identity, much like our desperation to believe in free will or souls despite there being no evidence for the existence of either.

But again, even if this is true, it remains rational for us to desire to live better lives as opposed to worse ones, and to seek out ways to make this the case. It also seems clear that most of us agree that measures such as our health and financial security are good proxies for knowing when lives are better or worse. And if there is any data about what makes a good life more rather than less likely, it makes sense to say that moral theory has to take that data into account, and that aggregating this data into “rules” is what morality is for.

The position sketched above is not relativistic, in that moral principles are derived from objectively measurable data. It is also not an absolutist position, because the preferences of individuals and societies are not necessarily immutable, and if different sorts of lives become desirable in the future, we should be ready to accommodate observations to this effect. Instead, what I’ve outlined is a position that is naturalistic, and for which we already have well-developed tools to separate sense from nonsense. If moral claims are not subject to the only successful tools we’ve ever developed for evaluating truth-claims – the tools of science – then there is truly nothing we can know in morality, and there seems little reason to discuss it any further.