I have a strong intuition that there’s something off about anti-natalist arguments. In fact, I have the same intuition about pro-natalist arguments.
Is it good or evil to have kids? There’s something weird about evaluating the morality of bringing a new person into the world. It’s an evaluation of the morality associated with a marginal existence. We can’t know what the life would be like if realized, but more importantly, even if we could know what the life would be like, the person in question has no consciousness with which to suffer their circumstances—whether those circumstances are nonexistence or a hypothetical miserable existence. They cannot be glad that they’re avoiding a miserable life, and they do not long for existence. Should we assign value to their well-being or suffering if it doesn’t exist? Should we assign value to the lack of well-being or suffering, or is a lack of well-being only significant insofar as it displaces well-being?
Consider a horrible society in which every single life is barely worth living: it is miserable but not quite so miserable that it would be better to be dead. The pro-natalist looks at this and says, “A person would not kill himself, therefore there is positive utility associated with the existence of that person, therefore there is positive utility associated with bringing people into this world.” The assumption that the pro-natalist makes is that the proposition of killing a person and the proposition of creating a person are equivalent in magnitude and point in opposite directions. The argument is: a person’s death is -1, therefore a person’s birth is +1. It’s the “therefore” that I’m not sure about, because the person subject to death exists, and the person subject to birth does not. I think this asymmetry should be enough to call into question whether the +1 value assigned to birth follows from the -1 value assigned to death.
Consider also the pro-natalist position that the best thing we can do as a society is simply procreate as much as possible to easily manufacture as much positive utility as possible. This is ridiculous to me on its face, and apparently to many others, because this snippet from the transcript of Sam Bankman-Fried’s appearance on Tyler Cowen’s podcast keeps popping up in Twitter threads about utilitarianism:
(To be fair to Sam Bankman-Fried: this is a snippet taken out of context, although the context does not really change the meaning of the snippet. He does go on to say that life extension would be “great to have”, and he says a lot of other reasonable and interesting things in the conversation.)
The repugnant conclusion of utilitarianism is the scenario where a large enough society of people whose lives are barely worth living is considered better than a small society of people with rich, happy lives because the small amount of utility in each miserable life can add up to be a greater amount of total utility than whatever is offered by the relatively small number of happy people. The rhetorical point here is that our intuitions tell us that a society filled with lives barely worth living must be worse than a society filled with happiness and meaning. It is very often put forth as an argument against utilitarianism, but utilitarians don’t see it as completely defeating the philosophy. Some utilitarians accept the repugnant conclusion as the truth, others do not accept it but choose not to worry about it.
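To make the sum concrete, here is a minimal sketch of the total-utility arithmetic behind the repugnant conclusion (the populations and per-life utility numbers are illustrative assumptions of mine, not anyone’s canonical figures):

```python
# Total utilitarianism compares societies by summed utility.
# Assumed illustrative values: a rich, happy life = 100 utils,
# a life barely worth living = 1 util.

def total_utility(population: int, utility_per_life: float) -> float:
    """Total utility of a society where everyone has the same utility."""
    return population * utility_per_life

happy_society = total_utility(1_000, 100)      # 1,000 rich lives -> 100,000 utils
miserable_society = total_utility(200_000, 1)  # 200,000 barely-worth-living lives -> 200,000 utils

# Under total utilitarianism, the huge barely-worth-living society comes out ahead:
assert miserable_society > happy_society
```

The point is only that, for any fixed happy society, some sufficiently large population of marginal lives overtakes it in the sum.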
What the anti- and pro-natalist positions have in common with the repugnant conclusion is that they all deal with marginal existences: hypothetical incremental persons. The proposition of having children deals with marginal existences—the future children are marginal. And the repugnant conclusion uses marginal existences as its vehicle toward repugnancy—it depends on your imagination adding all these hypothetical people to the scenario. Derek Parfit called this “mere addition,” and as such the repugnant conclusion is also called the “Mere Addition Paradox,” but I don’t think that term works so well for the prospect of having children.
The oddness of anti-natalism, pro-natalism, and utilitarianism results from value placed on marginal existences. This is where my intuition diverges from these philosophies—I’m not sure there should be value associated with marginal existence.
Let’s consider a system in which it is stipulated that the value of a marginal existence is zero, no matter the quality of the hypothetical existence. (You could call this the “no mere addition axiom,” if you wanted to.)
I would call this zero-value marginal existence consequentialism. I would call it this because it makes a cool acronym: Z-VMEC. I would call it consequentialism rather than utilitarianism (despite the fact that I am still talking about moral evaluations based on the maximization of theoretical quantities of well-being, or whatever) partly to avoid the suggestion of the exact problems with utilitarianism (now synonymous with the name) which I am attempting to solve, but also to emphasize a slight difference in the basic currency of the system (elaborated below), which I think is better described as “consequences” than “utility.” (However, I’m going to keep using the word “utility” to refer to quantities of moral goodness.)
Here is an outline of the proposed system. I won’t attempt to formalize it because I’m not a philosopher. Also because I’m not a philosopher, I haven’t done enough reading of literature to know if this has been proposed before [it has been pointed out that this is more or less the same as the person-affecting view] or if there is some serious logical problem that I have not recognized. If this is the case, I apologize and I encourage you to let me know. But even if that is the case, I think this can add to the discourse around utilitarianism and repugnant conclusions regardless. Anyway, outline:
Improving the well-being of an existing person is good (positive utility).
The life of an existing person ending is bad (negative utility).
The marginal existence of a person is neutral (zero utility)—this obtains regardless of the quality of life of the marginal person.
To be clear, “marginal existence” refers to the proposition of a hypothetical, incremental person: mere addition. It only goes in one direction—the other direction is called death. I think the recognition that we’re talking about hypothetical people already gets at the shakiness of any intrinsic value associated with marginal existence. The principle of this system is the idea that merely adding a life brings no utility; utility comes only from improving the well-being of persons conditioned to exist.
So this system does not conform to arithmetic the way utilitarians want morality to (which is another reason that I would not like to describe this system as utilitarianism—the arithmetization of morality seems to be a requirement for most utilitarians). There is only a partial ordering:
Good life > bad life*,
but it is not true that good life > bad life > nonexistence.
It is true, however, that good life > bad life > death.
(Note that the way I’m using the word “true” from here on out is to mean that the proposition is either an axiom of Z-VMEC or it follows from those axioms; “true” from the perspective of Z-VMEC.)
Z-VMEC holds the question of whether or not to have kids as morally neutral. I think this matches intuition—most people feel that having kids isn’t an inherently good or bad thing. So the position of anti- or pro-natalism is inconsequential. It also frames Bankman-Fried’s position as utterly reprehensible—nothing but negative utility—and that also matches our intuitions. And of course, it does not lead to the original repugnant conclusion of (total) utilitarianism. Z-VMEC does not assign any value to the mere addition of miserable lives, so the large miserable society is just way, way worse than the happy society. It also does not suffer from the repugnant conclusion of average utilitarianism, which is the scenario where (again, paraphrasing) the utility of a moderately sized miserable society is vastly improved by introducing one very rich, very happy person. Z-VMEC does not assign any value to the mere addition of this rich, happy person, either, so that mere addition represents no change in utility.
Another way of getting at the same idea of Z-VMEC is asserting that the proposition of bringing a person into the world is not a proposition subject to moral evaluation, because there is no existing person to be the subject of well-being or suffering, which gets you out of the anti- and pro-natalism arguments, but does not get you out of the repugnant conclusions of utilitarianism. This is a more agreeable position, but of course it’s more agreeable because it’s weaker.
*Or, to be sure, it would be more accurate to say “bad life —> good life > 0” (that is, the transition from a bad life to a better life has positive utility) rather than “good life > bad life.” This is because if we are only considering the hypothetical existence of two independent hypothetical people—A who has a good life and B who has a miserable life—it is not true that A > B, because A and B are both marginal. The fact that I am requiring it to be understood as a transition is equivalent (as I see it) to conditioning on the existence of that person. So to revise the first point in the list above, good life > bad life, conditional on the existence of that life. I think this satisfies intuition: we value the betterment of lives, but we don’t necessarily value certain lives over others. We don’t say that a rich person’s life has more value than a poor person’s; such a comparison is amoral.
So what propositions are subject to moral evaluation? We want to be able to say things like, “the transition from poverty to wealth is a moral good.” In my opinion, the only propositions that we should bother evaluating morally are decisions or actions, and we should evaluate them on the basis of the transitions in states of affairs that they bring about (consequences).
If it seems a bit silly to you (as it does to me) to evaluate or compare static scenarios on a moral basis (as is done to arrive at the repugnant conclusion), that’s because morality is more relevant for evaluating decisions and actions. But where does that leave the repugnant conclusions from the perspective of Z-VMEC? Well, let’s get a little more precise about the concept of marginal existence. Strictly speaking, we could understand all of the individuals in any hypothetical situation (including those of the repugnant conclusion) as marginal—they are nonexistent, hypothetical people, after all—in which case all static hypothetical scenarios come with a utility value of zero. Only hypotheticals in which there is a transition in the state of affairs would come with a non-zero utility. But I think this is a bit unsatisfying. We want to be able to say something about the scenarios in the repugnant conclusion.

To allow this, remember that a transition in state of affairs is more or less equivalent to conditioning on existence. So we can say that, conditional on the existence of N people, a society in which those N people are happy is preferable to a society in which those N people are miserable. Unconditional on the existence of M people (that, added to N, make up a large enough society of miserable people to create more utilitarian value than that of the N happy people)—and it must be unconditional, because they are included in one scenario but not the other—there is zero utility associated with those M people, whether they are happy or miserable. In other words, we can only meaningfully compare scenarios with equal numbers of people (conditioning on their existence) to get a non-equal utility value.

This matches my intuition; when I hear the descriptions of the scenarios in the repugnant conclusion, I think something along the lines of, “Well, those are just different scenarios. I’m not sure they can be directly compared.” Z-VMEC satisfies that intuition.
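The comparison rule just described can be sketched in a few lines. This is my own toy formalization, not anything from the literature, and the names and numbers are assumptions:

```python
# Z-VMEC sketch: scenarios can only be compared over the people
# conditioned to exist in BOTH scenarios; everyone else is marginal
# and contributes zero utility.

def zvmec_compare(scenario_a: dict, scenario_b: dict) -> float:
    """Utility difference A - B over the shared (conditioned-on) people,
    ignoring marginal people who appear in only one scenario."""
    shared = scenario_a.keys() & scenario_b.keys()
    return sum(scenario_a[p] - scenario_b[p] for p in shared)

# N = 3 happy people vs. the same 3 people miserable plus M = 5 extra
# barely-worth-living people (the "mere addition"):
happy = {"p1": 100, "p2": 100, "p3": 100}
repugnant = {"p1": 1, "p2": 1, "p3": 1,
             "m1": 1, "m2": 1, "m3": 1, "m4": 1, "m5": 1}

# Conditional on p1..p3 existing, the happy scenario is better by 297 utils;
# the 5 merely added people never enter the comparison.
assert zvmec_compare(happy, repugnant) == 297
```

Notice that no matter how large M gets, the comparison is unchanged—which is exactly how the mere-addition route to repugnancy is blocked.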
Morality is, after all (I think), a decision-support system, not a branch of metaphysics. So I am not a moral realist—I don’t think there is any underlying moral truth that we should try to discover. But I also diverge from the pragmatist utilitarians that avoid the repugnant conclusion by saying, “Yeah, if you take it to its extreme [meaning its logical conclusions; the ‘truths’ that follow from its axioms], it’s repugnant, but, listen… utilitarianism works.” But a formal system is only as good as the conclusions that follow from its axioms; I would think that utilitarians, so interested in the mathematization of morality, would recognize that idea as it is recognized in mathematics. The conclusions are all we have. Mathematics was not developed by coming up with axioms that sounded good and then accepting whatever system followed—it was developed backwards, by observing reality and then trying to come up with the minimal set of axioms from which a system that describes reality just… falls out.
Besides the pragmatist utilitarians, there’s also the faithful utilitarians who choose to accept the repugnant conclusion. These are the people like Sam Bankman-Fried and Eliezer Yudkowsky who apparently establish their positions based on the belief in some mystical moral truth that we just have to figure out—and utilitarianism is the way to do that. When they arrive at the repugnant conclusion they say, “Well that’s odd,” but then they say, “but it must be the truth because it’s the conclusion of utilitarian logic. I guess I better commit to that position.” Which is just odd. It comes from having faith in the existence of a moral truth, or at least faith in the irrefutability of the axioms of utilitarianism.
A frequent problem in conversations about utilitarianism is that people sneak in the condition of existence. They’re not being malicious or anything—it’s hard not to sneak it in. When you think about these scenarios of hypothetical happy people and hypothetical miserable people, the shortcut you use to compare the scenarios is to imagine that you would rather be one of the happy people than one of the miserable people. But you just conditioned on the existence of the person by using yourself as a proxy—you exist, so you introduced the condition that the person in question exists.
Over at Astral Codex Ten, in his review of Will MacAskill’s What We Owe The Future, Scott Alexander describes a feeling that you get in these conversations—the feeling that you’re getting mugged.
The use of the term mugging comes from “Counterfactual Mugging,” which comes from “Pascal’s Mugging” (which comes from Pascal’s Wager), which uses huge utility values associated with tiny probabilities to get considerable expected utility values (a bit more on that below), but personally I just like the imagery of being in a discussion with a philosopher when he suddenly pulls a gun on you.
Scott says the feeling he gets from the section from the book on potential people is the feeling of being mugged. I think that the mugging—the trick that the philosopher employs—is quietly introducing the condition of existence. Scott has a great outline of the repugnant conclusion (and other similar arguments and thought experiments), and says:
This argument, popularly called the Repugnant Conclusion, seems to involve a sleight-of-hand: the philosopher convinces you to add some extra people, pointing out that it won’t make the existing people any worse. Then once the people exist, he says “Ha! Now that these people exist, you’re morally obligated to redistribute utility to help them.”
“Now that these people exist,” (meaning: conditional on their existence), it is a moral good to improve their well-being. The problem—the trick—is that you can’t condition on their existence if you want to compare to a scenario where they do not exist. The condition doesn’t apply to the other scenario, therefore you can’t draw conclusions dependent on that condition, which is what the repugnant conclusion is. (To be clear, even though I am making it sound as though I have identified a logical error in the repugnant conclusion, that’s not what’s going on. I am only approaching it with a slightly different framing that dissolves the conclusion.)
Scott goes on to discuss the repugnant conclusion and similar arguments, trying to come up with some sort of solution, and he gets very close to proposing zero-value marginal existence (or something functionally identical).
Just don’t create new people! I agree it’s slightly awkward to have to say creating new happy people isn’t morally praiseworthy, but it’s only a minor deviation from my intuitions, and accepting any of these muggings is much worse.
If I had to play the philosophy game, I would assert that it’s always bad to create new people whose lives are below zero, and neutral to slightly bad to create new people whose lives are positive but below average. This sort of implies that very poor people shouldn’t have kids, but I’m happy to shrug this off by saying it’s a very minor sin and the joy that the child brings the parents more than compensates for the harm against abstract utility. This series of commitments feels basically right to me and I think it prevents muggings.
I would love to hear from Scott why he didn’t take it all the way to zero-value. He acknowledges the intuition that creating new happy people is actively good, simply because it’s the reverse of creating new miserable people which seems obviously actively bad, and maybe he just wants to hold on to that intuition? Or maybe it’s just because
I’m not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something.
Which is totally reasonable, and I fully expect to have my eyes pecked out by angry seagulls in the near future. Speaking of angry seagulls, the comments section on the ACX post is full of great discussion around this whole mugging idea and utilitarianism in general, but also lots of intellectual people confident that this entire discussion is an enormous waste of time. I’ll link a few of my favorites (this sounds sarcastic but I really do appreciate these comments, there is real and valuable discussion always happening at ACX):
1 (“There's a lot of silly questions like this.”)
2 (“None of this is real, it’s just words.”)
3 (“Personally, I think people trying to plan for thousands of years in the future need some epistemic humility.”)
4 (“‘Arguments? You can prove anything with arguments.’”)
I’m not trying to make the argument that we owe nothing to the future. Z-VMEC is compatible with longtermism, but maybe only a weaker form of longtermism. And maybe that’s a good thing, because the longtermist argument has this sort of irrefutability to it, this feeling of being mugged, this feeling that they’re using a gun in a game of rock-paper-scissors.
Z-VMEC does not say that we cannot assign value to any hypothetical future humans. We must only condition on their existence for them to have value. The question becomes: how many people should we condition on existing? There will certainly be another generation after the one that currently exists, so it is a good idea to condition on their existence. There will certainly be another generation after that, and another after that. It seems pretty safe to assume that there will be quite a few more generations to come, and therefore it is pretty useful to condition on their existence, and therefore there is still a strong case to make for longtermism. We have a moral responsibility to consider how we can improve the lives of future generations, because we know they will exist. But the further out into the future you go, the less confident we are that the hypothetical people will exist, and therefore the less useful it seems to be to condition on their existence. This satisfies my intuition that the longtermism argument is a little too strong. Most people intuitively ask: should we really be concerned about people who will exist a billion years in the future? Not because those people have less value than the ones that currently exist, but because they are simply less likely to exist (and we are less likely to be able to intelligently exert influence on them). So Z-VMEC defeats the utilitarian version of using a gun in rock-paper-scissors, which is invoking the hypothetical existence of all 100 nonillion potential people.
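One way to sketch this weaker longtermism is to weight improvements to a future generation by our confidence that the generation will exist at all. The decay rate and utility numbers below are purely illustrative assumptions, not a claim about actual existence probabilities:

```python
# Z-VMEC-flavored longtermism sketch: we condition on a future generation
# existing only with some probability, which decays with distance in time.

def longterm_weight(generations_ahead: int, decay: float = 0.9) -> float:
    """Assumed confidence that a generation that far out will exist."""
    return decay ** generations_ahead

def expected_moral_weight(improvement: float, generations_ahead: int) -> float:
    """Value of improving that generation's lives, discounted by the
    probability that we can condition on their existence."""
    return improvement * longterm_weight(generations_ahead)

# Improving lives two generations out keeps most of its weight...
near = expected_moral_weight(100, 2)    # ~81
# ...but a generation a thousand steps away is effectively out of reach.
far = expected_moral_weight(100, 1000)  # vanishingly small
assert near > far
```

The shape of the decay is doing all the work here, and nothing in Z-VMEC pins it down; the point is only that the discount comes from uncertainty about existence, not from valuing future people less.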
When it comes to existential risk, or X-risk, which is an important aspect of longtermism, Z-VMEC does not let us down. Again, we arrive at a weaker form of longtermism—X-risks are serious because they have the potential to wipe out all existing human life, or seriously alter the state of affairs in a bad way, but the gun never comes into the rock-paper-scissors game because you don’t have to consider the value of the 100 nonillion people that could have theoretically followed a preventable cataclysm. This leaves us much more receptive to the value of “traditional” effective altruism—that is, donation to current or near-term issues that affect a lot of people. We can improve the state of affairs for existing people, and that is a moral good, and it is not a waste of money in comparison to preventing X-risks. We all feel intuitively that there is real moral good that comes with helping existing people today, and we shouldn’t be expected to forsake all of the people on earth for the hypothetical future people. Z-VMEC allows you to come up with a current/near-term/long-term utility distribution depending on how many generations you think we should condition to exist.
Counterargument: Z-VMEC would say that the utility associated with the existence of an extremely happy, transcendental person and the existence of a person whose life is barely worth living are equal.
Response: Yes, but simply saying they are equal is not taking it all the way, which might be why it seems wrong. They are not only equal—they are both equal to zero. Both propositions are utility neutral. Like I said before, we don’t value the life of a rich person more than we value the life of a poor person. We don’t value the life of a person predisposed to happiness more than we value the life of a person with clinical depression. I think your intuition that there should be a preference for the first scenario comes from the fact that the actual moral thing to do is improve existing lives (or lives that we condition to exist); if we condition on the existence of this person, there is utility associated with improving his or her life from the poor state to the rich state. That’s where our intuitions come from, because in real life we never really have to think about the marginal existence of hypothetical people, knowing exactly what their lives will be like; what we have to do is make decisions that affect people.
Counterargument: There is utility associated with marginal existence, because, well, I exist! And I prefer that to not existing.
Response: This is where your intuitions fail you. It is difficult to evaluate the scenario of your existence compared to the scenario of nonexistence, because you exist, so any comparison you attempt to make is implicitly conditioned on your existence. In other words, the only comparison you are able to make based on your experience is a preference for continuing to live rather than dying, which Z-VMEC also has a preference for. You must acknowledge that if you did not exist, you would have no experience; you would not feel any pleasure, you would not suffer, you would not have any well-being, you would not long for existence. For it to be fair to say “I currently exist, and I prefer that to not existing, therefore there is positive utility associated with existence,” I would argue that you would also have to be able to say “I currently do not exist, and I would prefer to exist, therefore there is positive utility associated with existence” (at least to achieve the arithmetic morality that utilitarians are looking for: this is simply saying that for existence - nonexistence > 0 to be true, nonexistence - existence < 0 must also be true, simply by way of the principles of arithmetic), but you can’t say that, because you don’t exist.
Counterargument: It sounds like you don’t value human life at all.
Response: No, Z-VMEC doesn’t value hypothetical human life at all. In fact, Z-VMEC values (non-hypothetical) human life much more than utilitarianism does, because under utilitarianism, the value of existing human life is very small compared to the value of all potential hypothetical humans. And this might be the fundamental problem with utilitarianism: it places little value on existing human life. This is plainly demonstrated by Sam Bankman-Fried’s comments above.
Counterargument: From here, you cannot make an argument for the continued existence of humanity.
Response: Yes, Z-VMEC cannot make an argument for the continued existence of humanity, which may be the point where you want to stop taking it seriously.
I would be remiss not to mention that a form of this idea already exists, albeit in an entirely different context and usage of the term “mere addition,” and that that context is statistical, and that I am something of a statistician myself, and that that might appear to bias my understanding of the issue here. I can only promise you that I only found out about this other “no mere addition axiom” after I had already come up with my own axiom, and so it most likely did not influence my thinking, and I will not attempt to discuss the problems of applying statistical problem solving principles to morality, but yes, I did steal the name.
This is where we coincide perfectly with the other usage of mere addition from note 1.