On Moral Theory and Truth with Richard Carrier – Part I

You argued recently in ‘Open Letter to Academic Philosophy’ that the three most prominent theories in moral philosophy (consequentialist, deontological, and virtue ethics) are all actually the same theory. Can you give me a brief overview of the argument, and also tell me a little about the process that led you to that conclusion?

Derek Parfit has argued much the same thing (https://en.m.wikipedia.org/wiki/On_What_Matters). We share a common insight: that the truth in moral theory is at the nexus of all moral theories. They are all looking at a common truth, but from different angles. And one way to realize this is to discover that they all reduce to each other when you carry each through with a valid logical application of its own premises, together with the basic principle of total evidence: that “no evidence is to be excluded.”

Parfit takes a different approach to the same conclusion. I got there the other way: once you realize that deontologists are concealing consequentialism in their reasoning, that consequentialism leads to the same general conclusions as deontology, and that all moral decision making is, as a matter of psychological fact, mediated by habituated virtues, all moral theories reduce to the same one theory. Which happens to be Philippa Foot’s original proposal: that all morality is a system of hypothetical imperatives. Even Kantian ethics, which was supposed to be the exception, turns out to be one too. All these moral theories are simply each looking at the truth from a different, and incomplete, perspective.

The problem I illustrate in my article is that Kant ignores consequences in his reasoning even where he logically cannot do so and still arrive at a true proposition. Just as the utilitarians ignore the consequences Kant was calling attention to. When we include all the consequences, we end up with a unification of Kant and Mill. Virtue ethics folds in when you add the neuroscientific facts of human decision making, with habituated virtues mediating access to the relevant consequences, both internally (to the agent) and externally (to what gets realized in the world, which will in turn affect the agent, either through reciprocal effects or through the agent’s greater satisfaction with the result, or both).

And this can’t be avoided with semantics. Some will want to narrowly define what consequences count as consequences in consequentialism. But then all you are doing is ignoring a set of consequences. Which just happen to be the consequences the deontologists have been pointing out, which are typically those related to rule consequentialism (the effect of universalizing a behavior on a social system) and those related to agent satisfaction (which is what Kant said made his moral system true: the agent’s desire to be a better or more consistent person, and the agent’s greater access to personal happiness that results). The end result of combining both sets of consequences is simply a consequentialism that accounts for all consequences.
Hence it’s consequentialism all the way down. And since it all ties to the agent’s desires, it’s all just a system of hypothetical imperatives.
Kant reduced his categorical imperative to the pursuit of a single internal consequence, and thus to a hypothetical imperative: the consequence to oneself of accepting a certain behavior. Once you factor that in, it’s just egoist consequentialism. And if you remove it, you remove all claim to its imperatives being true, and Kantianism then becomes self-evidently false (as in, there is no true sense in which you “ought” to adopt it). But that consequence to oneself requires that the systemic consequences also obtain: that encouraging more people to behave that way will make the system better. And that is just rule utilitarianism. Which is another well-known form of consequentialism.

My process toward discovering this came from my years of philosophical study for my 2005 book Sense and Goodness without God. Subsequent study has only confirmed it many times over. Key was discovering Foot’s analysis. Which unlocked everything. It’s a tragedy that hers is not taught alongside the other three great systems of approach (consequentialist, deontological, and virtue ethics). Because it’s even more important than those. Everyone who knows what they are talking about knows hypothetical imperatives are empirical propositions capable of being true or false. A proposition like “you ought to sterilize your instruments” is an empirically discoverable, objectively true imperative for a surgeon intending to save a patient’s life. The condition (the desires of the surgeon) is a material fact of the world, and the consequence (the behavior necessary to satisfy that desire, instrument sterilization in this instance) is a material fact of the world. Moral facts are simply a special subset of these kinds of imperatives. And you can construct that set using the tools developed by deontologists, consequentialists, and virtue ethicists.
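To make the logical form explicit, here is one way to sketch the truth conditions of such a hypothetical imperative (the notation is only illustrative; it is not Foot’s own):

\[
\mathrm{Ought}(A, \varphi \mid G)\ \text{is true} \iff \mathrm{Desires}(A, G) \wedge \mathrm{Required}(\varphi, G)
\]

Here A is the agent (the surgeon), G the goal actually desired (saving the patient), and \(\varphi\) the behavior in question (sterilizing the instruments); \(\mathrm{Required}(\varphi, G)\) says that \(\varphi\) is in fact necessary to achieve G. Both conjuncts are empirical facts about the agent and the world, which is why the imperative as a whole is determinately true or false.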

All philosophers should be working on that project. In conjunction with scientists studying the neurology of moral reasoning, psychologists studying the phenomenology of moral thought and learning, and sociologists studying the properties and behavior of social systems.

Excellent. There are two points you made that I want to dig a little deeper into:

i) In his book ‘The Things We Mean’, Stephen Schiffer starts section 6.5: “If one person fully believes a moral proposition, then it is always possible in principle for there to be another person who doesn’t believe the moral proposition but is nevertheless, as regards epistemic justification and knowledge, on par with the believer… [which leads to the view] rationally irresoluble dispute about moral propositions is always in principle possible.” Here I take Schiffer to be speaking of one type of hypothetical imperative not capable of being determinately true or false. How does your theory overcome this problem?

ii) It seems plausible that hypothetical judgements such as “You ought to sterilise your equipment if you intend to save a patient’s life” or “If you want to avoid getting wet, you ought to carry an umbrella” can have determinate truth-conditions. However, how do you bridge the gap between that type of hypothetical and the seemingly different hypothetical moral judgement, which in the surgeon situation might be: Why ought the surgeon save a life at all?

i) I’d have to see an example of what Schiffer means. He can’t claim such propositions exist if he can never present one. So it’s unclear what kind of occasion he has in mind. For example, you and I can each be fully warranted in believing different things about whether the earth is round or flat, and still one of us must be wrong. You could believe the earth is flat, and that could be a correct inference from all the data so far available to you, and yet the earth is round. I have to assume that’s not what Schiffer means. Yet since all moral facts are claims to fact, there cannot be any case involving moral propositions that is not analogous to this one. If we disagree, one of us is always wrong.

What he could perhaps mean is that there could be, for example, different species (humans and some alien race), for each of whom different moral facts obtain. But they would then agree about that when fully and correctly informed; e.g., we would agree they ought to behave as they ought and we ought to behave as we ought. The same follows if there are different kinds of humans for whom this is the case. I discuss in my work the possibility that sociopaths are such people. It turns out they aren’t really, but in the popular imagination they are: different moral facts hold for them than for the rest of us, and this in fact entails that our own moral commitments change with respect to them as well; e.g., if they really are irredeemably amoral monsters, we should kill them. But again, when fully informed, we would all agree that’s the case. Even the sociopath (in that fictional sense) would have to agree, in that case, that we are morally right to kill them. If they didn’t believe that, they would simply be factually wrong. And it just so happens that their knowing this justifies the conclusion that they ought to behave better. Which negates the popular conception of the sociopath. It turns out the same moral facts do apply to them after all.

But assume there really is some sort of sociopath for whom that wasn’t the case, and thus for whom different moral facts apply, and for us toward them different moral facts apply. That’s just another variety of situational ethics. No moral system can be coherent (and thus true) that ignores how what one ought to do changes with their circumstances. I have used the example of rescuing someone who is drowning: whether you ought to jump in and save them will depend on whether you know how to swim, and, even if you can, whether you have the strength to effect a rescue in the sea conditions present. Moral facts are facts about physical and social systems, so if you change the configuration of the system, you change what moral facts hold for that system. So, change the sea conditions, or your skill or strength, and you change the system you are present in, and that changes the moral facts. Just as it can change straightforward physical facts about that system.

So where in any of this can two people ever be properly justified in believing different things? Justified as in not just being warranted, but actually believing what is true? I do not see any more possibility of that in moral facts than there is in a dispute over the shape of the earth. Even when different moral facts obtain for different people, because their systemic context is different, there is still only one true fact of the matter (what that person, in that circumstance, ought to do), and everyone who has justified true beliefs will agree on what it is.

My formal, peer-reviewed work on this is the best to consult for answering questions like these. They are often covered in the endnotes or directly in the text of “Moral Facts Naturally Exist” in The End of Christianity (John Loftus, ed., 2011). I discuss aliens and the species-dependency of moral facts near the end of the chapter (and its relevance for AI). The swimming and context-dependency point appears in the text a little earlier on. Also, in a whole section, I discuss the possibility of objective relativism (that there may be types of humans for each of whom different moral facts are objectively true) and why that probably is not the case for humans (we have too many similarities, internally and externally, for there not to be some common set of moral facts).

And notes 28, 33, 34, and 35 pertain in different ways as well, depending on what Schiffer intended to mean. Notes 28 and 35 address the circumstance of being inescapably ignorant of key facts, where moral facts can be true in that circumstance that are not true in the fully informed condition. Notes 33 and 34 address the circumstances of encountering conflicting moral imperatives (one always overrides the other, or else there is no truth of the matter as to which must be preferred over the other) and unachievable imperatives (which as such can never be true). The latter is the more general set, which also contains the particular case of being in a state of key ignorance due to the impossibility of access to the requisite knowledge, which just creates another example of an unachievable imperative.

So my conclusion is that the situation Schiffer describes is not possible, even in principle.

ii) The question reduces to why someone should want one thing rather than another. And in structure, it answers itself. Why should you want to sterilize your instruments? Because you want to save the patient. Likewise, why should you want to save the patient? Because of some other desire, just as in the first step of reasoning. Aristotle noted this thousands of years ago: when you analyze why you ought to do something, it derives from a cascade of desires; you ask of each desire, “Why do I want that rather than something else?”, and then you ask the same question of the answer, “Why do I want that rather than something else?”, and so on, until you land on whatever it is that you desire for no other reason than itself.

Aristotle concluded we always land on the same thing: all desires stem from the same single fundamental desire, for eudaemonia, “the blessed state,” which people today often translate as “happiness” though that doesn’t carry quite the same valence. He meant something more like a sense of contentment. And he was speaking long before psychology became a proper science. Now we can say something more sophisticated, that what people really want above all things is to feel satisfied with themselves and their lives and choices, with who they are and who they have become as a person; satisfied with life, in the sense of being able to answer the question, “What’s the point of even doing all this? What’s the point of even being alive?”

This was Kant’s belief as well, since this proper sense of self-satisfaction is the fundamental desire on which he rested the truth of his entire system of moral imperatives (thus reducing all his categorical imperatives to hypothetical imperatives after all).

Once you realize this, and put it in conjunction with your circumstances (internal and external, as I discussed earlier), all other desires become either what you factually ought to desire or what you factually ought not to desire. Surgeons ought to want to save their patients because otherwise there is no point in their being surgeons or even living. And the only way to escape that realization (to escape your conscience, upon which contentment and satisfaction with yourself and your life and life choices depend) is to escape the facts. You have to retreat into a delusion or avoid learning the truth; either way, you must possess a number of false beliefs about yourself and the world. And no true moral facts can follow from correctable false beliefs. Willful ignorance is not blameless. And malevolence is not deserving of the same treatment from your peers as benevolence. Hence we imprison malevolent doctors. Or even kill them (as in the case of Nazi medical criminals, for example).

A Mengele possessed of factually true beliefs, for example, would have to agree everyone ought to kill him. And loathe him. And it is not genuinely possible to be content with yourself, your life, and your choices, when you know that they fully deserve not merely your death, but the loathing of you by everyone else possessed of true beliefs.

What happens instead, when one pursues true beliefs, is the realization of the importance of compassion and honesty and reasonableness to any self-contentment built on true beliefs. All moral-fact-making desires then follow. As in, follow by logical entailment and as a matter of factual truth.

Whether someone actually recognizes this is a wholly different question. Just as someone can be faced with all the evidence that logically entails the earth is round and yet stubbornly and irrationally persist in believing it is flat, so too can they reject belief in true moral facts even when presented with every demonstration imaginable. A fact being true is not a magical potion that causes people to believe it. But facts remain true even when people refuse to believe them.

(I should add that note 36 in my peer-reviewed chapter in The End of Christianity goes on to cover the question of whether it can ever be true that we ought to cultivate or maintain irrational and uninformed beliefs. It’s not difficult to show why the answer is no.)

Schiffer’s thesis is cognitivist, but he argues that moral truths are indeterminate, a conclusion that follows from his master argument:

(1) If there are determinately true moral propositions, then there are moral principles which are knowable a priori.
(2) There are no such moral principles.
(3) ∴ There are no determinately true moral propositions.
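In symbols (the abbreviations are mine, not Schiffer’s), with D for “there are determinately true moral propositions” and K for “there are moral principles knowable a priori,” this is a simple modus tollens:

\[
(1)\ D \rightarrow K \qquad (2)\ \neg K \qquad \therefore\ (3)\ \neg D
\]

So rejecting (2) is enough to block the conclusion.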

I think you will attempt to reject (2), so I will offer the two reasons he gives in the book for accepting it:

The first starts: “if one person knows p a priori, then so will any other person in the relevantly same epistemic situation (with respect to p)”. Which means the latter, if she does not, is epistemically “lacking” in some respect regarding p. But it seems intuitively possible that, in equal situations relevant to a moral proposition p, the first and the second person could still disagree. “You take yourself to know a priori that a woman has a right to abort a 3-week old foetus if having a child would interfere with her career. But there are people” with access to all the same facts as you, of equal intelligence, reasoning ability and so on, who do not believe this to be the case. “Take someone who implicitly accepts Derek Parfit’s Self-interest Theory and has as her single ultimate moral principle that one” ought to pursue what makes their life go “as well as possible”. This person might be a “sincere moral egotist” who takes her only moral obligation to be the satisfying of her own ends. She may agree killing is morally wrong, but would have to concede that she could not assert, without qualification, that killing is morally wrong a priori. “Yet if anyone has a priori knowledge of any moral principle”, one can hardly think of a better example than the wrongness of killing another.

The second argues that “if one has a priori knowledge of a moral principle, then there must be a correct explanation of how one has that knowledge, of what it is by virtue of which one’s mental state constitutes a priori knowledge of the principle”. This doesn’t stretch over all principles, but argues that (in your case, I think) a “less ultimate principle”, being derived from the more ultimate one with empirical aid, is precluded from being known a priori. “Substantive moral principles link moral concepts with non-normative concepts”, but the former “float free” in that it is “never the case that a person who possesses both concepts can’t believe one applies unless she believes the other.”

You covered some of this in your last answer, but now that I’ve unpacked the argument a little, do you hold the same objections? How does your theory cope with the a priori challenge?

I’ll mention this again in a blog post I’m writing about the nonnatural moral realism of Russ Shafer-Landau. But in short, moral facts, to be true (and thus to be actual facts), depend on two things: a judgment of the consequences of a behavior, and the locating of a relevant source of judgment.

On the first point, e.g., we cannot answer “Is killing bad?” if we have no a posteriori data on what killing does. For instance, what if humans always automatically rise from the dead? What if in fact they not only always resurrect, but resurrect healthier and more fit? In such a world, killing someone might actually be good, such as to cure them of a debilitating disease or mental illness. You cannot know a priori which world you are in, ours or that one. So no moral truth can be known a priori. On just this point alone.

But the second point has the same effect, too. It is not enough to just point to a source of judgment and say that’s the source of moral truth. That’s fatally arbitrary. It is therefore in no relevant sense “true.” That just gets you to a million competing systems of opinions that no one has any valid reason to obey. And if you have no valid reason to obey an imperative, that imperative is literally false—there is literally no sense in which you “ought” to do what an imperative prescribes, if in fact you have no sufficient reason to do what it prescribes.

But when we locate a valid reason to obey an imperative, it will correspond to a paramount desire in the agent (for whom the imperative is supposed to be true), something the agent wants more than anything else (or would so want, when fully informed and reasoning coherently). If the agent does not want that most, then there can be no true sense in which they ought to pursue some other end instead, and therefore all imperatives that would have them pursuing some other end are literally false for that agent.

But we cannot know a priori what any moral agent’s paramount desire is. Or what any of their desires are. If, for instance, we lived in a world where no one had any reason to live—nor would ever have such a reason even when we fully informed them and got them to reason everything out perfectly rationally—then no one would exist who would judge death to be bad, and in that world there would be no sense in which killing was wrong. And we cannot know a priori which world we are in: that one, or some other—like, say, the actual world we are in, where rational informed agents agree death is bad and why it is bad.

True moral facts therefore can never be known a priori.

Moral facts are facts about agents in social systems. And as such, they depend on what is factually true about those agents and those systems. Which facts are always only accessible a posteriori.
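Putting the two points together schematically (again, the notation is only illustrative, refining the sketch given earlier):

\[
\mathrm{Ought}(A, \varphi)\ \text{is true} \iff \mathrm{Paramount}(A, D) \wedge \mathrm{BestSatisfies}(\varphi, D, C)
\]

where \(\mathrm{Paramount}(A, D)\) says that D is what the agent A most wants when fully informed and reasoning coherently, and \(\mathrm{BestSatisfies}(\varphi, D, C)\) says that the behavior \(\varphi\) in fact best satisfies D in the agent’s actual circumstances C. Both conjuncts are empirical facts about agents and systems, so neither can be known a priori, and therefore neither can the imperative.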
