This is the final post in a series on morality and marginal existence. We started with “Morality and Marginal Existence,” and followed with “Unreal Persons.”
I’ve written a lot more about morality here than I ever expected to, and I’m hoping this will be the end of it. It somehow feels immoral to have this much to say about morality (though, I must admit, it is a lot of fun).
In “Morality and Marginal Existence,” I said:
Morality is, after all (I think), a decision-support system, not a branch of metaphysics. So I am not a moral realist—I don’t think there is any underlying moral truth that we should try to discover. But I also diverge from the pragmatist utilitarians that avoid the repugnant conclusion by saying, “Yeah, if you take it to its extreme [meaning its logical conclusions; the ‘truths’ that follow from its axioms], it’s repugnant, but, listen… utilitarianism works.” But a formal system is only as good as the conclusions that follow from its axioms…
The question of “moral truth” is hotly contested. A survey conducted in 2009 found that around half of philosophers accept or at least lean towards moral realism.
I would like to compare the question of moral truth to the question of mathematical truth: is math true? I mean the system of mathematics as a whole—is it true? Is it possible to have knowledge of objective mathematical truth? Is it true that 2 + 2 = 4? Is it true that the sum of the sequence of integers is equal to -1/12? Did we invent math, or discover it? Does it even make sense to ask these questions?
A lot of people think of math as something like the inarguable, fundamental structure of reality. But this isn’t right. Math is, at least in part, something we created. It is a formal system.
From Wikipedia: “A formal system is an abstract structure used for inferring theorems from axioms according to a set of rules.” In other words, a formal system is a system that emerges as a consequence of a set of things that are taken for granted. The things that are taken for granted are called axioms. Axioms are not proven or demonstrated to be true—they are assumed to be true, and they provide a base layer from which propositions can be evaluated to be true or false. The axioms you choose for a formal system completely determine the rest of the system, and you can never circle back around and prove that the axioms are true.
This is the way everything works. We can’t know anything for sure. We can only construct good formal systems. Axioms must be chosen. They are accepted or rejected not based on evidence per se, but on intuition, the effectiveness of the formal system which emerges, and the subjective elegance of the system. We have the math that we have now in part because the mathematicians of old were a lot more sensitive than we are now, and they devoted their lives to developing a mathematics that was concise and beautiful.
The entirety of mathematics is basically the result of a set of axioms chosen to support a formal system that matches our intuitions and basic understanding of mathematical phenomena, e.g., counting quantities, some quantities being bigger than others, adding quantities together. But it is impossible to have knowledge of objective mathematical truth, because axioms can never be demonstrated to be objectively true—you can only have truth with respect to a formal system, not with respect to reality.
I think it’s also important to realize that the axioms themselves aren’t always intuitive. Sometimes they are difficult to comprehend, or seem arbitrary, but they make the rest of the system more intuitive, more useful, and more elegant.
So, we sort of observed math, we sort of discovered math, and we sort of invented math.
When we ask whether it is true that 2 + 2 = 4, we say “yes” because it is true in the agreed upon formal system which defines concepts like 2, +, and = (I mean, that’s not actually why we say “yes” in real life, but from a rigorous epistemological standpoint, that’s the only way we can really say it’s true: true within the formal system). And when we ask whether it is true that 1 + 2 + 3 + … = -1/12, we say “uh, well” and sigh, because there are certain unusual situations in the agreed upon formal system in which this can be made to appear to follow from the axioms. And when we ask whether mathematics in general is “true,” we have to clarify what we’re asking about exactly, but I think it is fair to say that it is neither true nor false. It is useful, it is incredible, it aligns with reality in important ways, but it has no truth value. You can’t say that mathematics as a whole is objectively true, because the axioms are assumed with no objective basis. You can’t say that mathematics is false, or unknowable, either. It is what it is. It is a formal system that we created. It is a structure that we designed to capture real phenomena, and that has many rooms and corridors which are yet to be explored.
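(For the curious, the usual route to that strange value runs through the Riemann zeta function rather than ordinary summation. Define the zeta function where the series actually converges:

```latex
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad \operatorname{Re}(s) > 1
```

The series itself diverges at $s = -1$, but the analytic continuation of $\zeta$ is defined there, and $\zeta(-1) = -\tfrac{1}{12}$. Writing $1 + 2 + 3 + \cdots = -\tfrac{1}{12}$ is a loose shorthand for that continuation—a perfect example of a result that is “true” only relative to which formal machinery you agree to use.)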
So: the question of moral truth. Is there such a thing as objective and knowable moral truth? Or is morality something of which you can have no proper knowledge?
If we think about morality as a formal system, the question evaporates. We cannot discover a morality that’s objectively true, but we can design an intuitive, useful formal system.
Utilitarianism is a little different from other moral philosophies (including consequentialism, which is a broader term that includes utilitarianism) in that it is a “mathematization” of morality. It’s an attempt to quantify morality, yes, but it’s also very transparently a formal system. Take “morality is about maximization of well being” as your axiom, and you have a robust morality proposition-evaluating system. Simply add up all the well being, and you can arrange all possible moral propositions on the number line.
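To make the “transparently a formal system” point concrete, here is a minimal sketch of the utilitarian evaluation procedure. All the names and well-being numbers are invented for illustration; the only thing doing real work is the single axiom, encoded as a sum:

```python
# A toy sketch of utilitarianism as a formal system: every outcome
# reduces to a single number, so any two outcomes are comparable.
# All persons and well-being values here are invented for illustration.

def total_well_being(outcome):
    """The utilitarian axiom as a function: morality = sum of well-being."""
    return sum(outcome.values())

# Two hypothetical outcomes, each mapping persons to well-being scores.
outcome_a = {"alice": 7, "bob": 6}                          # total: 13
outcome_b = {"w": 4, "x": 4, "y": 4, "z": 4}                # total: 16

# Because every outcome maps onto the number line, comparison is trivial.
better = max([outcome_a, outcome_b], key=total_well_being)
print(better is outcome_b)  # True
```

Note that the larger population with lower per-person well-being “wins” here, which is exactly the mechanism behind the repugnant conclusion: once everything lives on one number line, adding enough marginal lives can outweigh anything.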
But other positions in moral philosophy are formal systems, too. Consider virtue ethics: the axioms are the virtues. A proposition (that is, a question about the morality of something) is evaluated based on its agreement with virtues. The main difference between virtue ethics and utilitarianism, from a structural meta-ethical point of view, is that the axioms of utilitarianism produce a more math-like formal system, in which arithmetic properties can be observed (or supposed), so it is theoretically easier to compare arbitrary propositions. Whether those evaluations are any good is another question.
The problems with utilitarianism stem from the way it makes moral evaluations seem like they should always be easy. Things like the repugnant conclusion demonstrate that morality shouldn’t be mapped onto the number line.
But, although utilitarianism has issues, it might be the most useful moral formal system we have for real life decisions, like how to allocate philanthropy. If you’re giving 10% of your income to charity, you have to decide exactly what charities to give to, and there has to be a better way to do that than donating to the charity that makes you feel the most warm fuzzies, or the charity that you’re most familiar with. Utilitarian thinking helps you get over those bad heuristics. However, if you’re committed to utilitarianism, you get into another bad position where charities working to secure the long-term future of humanity always look way more important than charities working to improve the conditions of people alive right now, even if the effectiveness of those long-term charities is extremely doubtful; the expected value is still better, because the number of potential future people (marginal existences) is so large.
By tweaking the axioms of utilitarianism a little bit, we can smooth out some of these quirks. This is the way I was thinking when I came up with the concept of marginal existence. My proposition in “Morality and Marginal Existence” is to change the axiom “morality is about maximization of well being” to “morality is about transitions in well being of conditionally existent persons.” This puts you in a (logically coherent) position to prioritize the well being of persons currently alive and persons hypothetically possible in a more sensible mix…
Anyway, to me, it is obvious that there is no such thing as moral truth. There’s no morality baked into reality, just as there is no math baked into reality. In math and in morality, there are certain things that we treat as self-evident: 2 + 2 = 4, and killing people is wrong. There are other things that are weird and counter-intuitive, or even repugnant. The sum of the sequence of integers can appear to be -1/12, and a large world filled with miserable people can appear to be better than a small world of happy people. These weird conclusions are important not because they give us insight by themselves, but because they tell us something about the formal system itself, even if it is only something wrong with the axioms we have chosen. Thinking about morality as a formal system can make discussions more productive—instead of arguing about what’s true, we can argue about what axioms produce the most elegant formal system.