Much of what I’ve been interested in over the last decade or so has revolved around epistemology, and in particular virtue epistemology: questions about what we should believe, and how we should form our beliefs. These are normative questions, and they raise a whole bunch of issues relating to the extent to which we are in fact able to be rational epistemic agents; what such agents would look like; and whether we would want to be disposed in this way at all.
For example, it’s legitimate to ask whether our beliefs aim at truth at all, or whether they in fact aim at something else. What would that “something else” be? Coherence? Preservation of a conception of self? And what about our moral beliefs: are they empirical? Do (should) we assess them in the same way as other judgements?
After years of teaching non-philosophically minded Commerce students, I suspect that the average person may not care much about normative reasons for believing. This is where experimental philosophy (or X-Phi, as it’s been dubbed) may be useful: it could give us tools for determining to what extent normative reasons for believing influence what we actually end up believing, or what we claim to believe (another question entirely).
If it turns out that we care very little for normative reasons, what are the theoretical implications for society? What role do non-normative reasons for believing (whatever they turn out to be) play in decision-making, and are the resulting decisions still “rational”? My initial suspicion is that much of our discourse is aimed at making ourselves appear reasonable, without ever really being concerned with being rational: we’re mainly playing a political game, where the stakes are conceptions of personal identity.
In this game, I simply need to say enough to give you the impression that I have reasons for believing what I happen to believe. Such reasons as I have may be reactive, formed after the fact of having come to the belief in question, and they serve to justify the belief to myself and to allow me to think of myself as a rational person. I then use these reasons in conversation with you, reinforcing (to both of us) this conception of myself as a rational agent. But we allow each other so much flexibility here, and permit so many inconsistencies, that our claimed justifications for these beliefs may rarely relate to any objectively coherent or plausible reason for believing. And we rarely seem to care deeply about this.
The literature around the concept of the “risk society” may be relevant here: we have somehow developed the need to believe, and are uncomfortable presenting our intuitions as unfounded. We seem to think that we, or at least our beliefs, are less significant if we can’t express them in rational-type discourse. But if you can give me a subjective narrative that “justifies” your belief, and I play the typical social game of granting these justifications more weight than they, well, justify, then we’re simply co-operating in reinforcing the same fantasy for each other: that we are rational agents, who believe things responsibly, and are responsible for what we believe.
The “ordinary” approach that non-philosophers use seems to work out much of the time. But in some cases, absolutely nutty beliefs (god, crystals, angels, The Secret) slip through. Are these acceptable costs of a system that, in general, works to our advantage?