Effective Altruism (the institution) has been rapidly crashing forward since its inception as a handful of thought experiments, but effective altruism (the idea) has remained more or less the same. There’s nothing really surprising about this. The institution is a movement; it’s something that is in motion. Your choice is either to try to keep up with that movement or to settle down with a set of principles that make sense to you.
The current version of the EA movement is very interested in longtermism, and especially AI. This is not unrelated to the rise of SBF and FTX, since the FTX Future Fund (FTXFF) was explicitly focused on funding longtermist causes, and AI was at the top of its list. But this represents a movement away from the original principles of EA, and maybe after all the FTX dust settles, effective altruism as an idea will be a lot more palatable and useful.
Effective altruism began as a couple of thought experiments, like Peter Singer’s shallow pond. It was an idea and practice that could be summarized in a couple of points:
We should commit to donating a portion of our incomes.
We should commit to being effective with our donations.
It followed from the recognition that our money could go a long way toward improving the well-being of others, especially in developing countries, but that instead most of us spend that money on frivolous things. It cultivated the understanding that there is a huge range of effectiveness across charitable donations, that most charities don’t even bother to measure outcomes, and that much altruism is mostly feel-good vanity and PR. It proposed that all of us who are relatively well-off (which includes everyone likely to read this) could probably stand to donate at least 10% of our income.
I’m using the impersonal singular pronoun “it” to refer to the nascent idea of effective altruism and the handful of thought experiments and philosophers that developed it. Since then, effective altruism has become a proper noun: Effective Altruism, or EA. Not merely an idea, but an institution. “It” consists of organizations like GiveWell, Giving What We Can, 80,000 Hours, and many others, and billions of dollars in funding. There are communities like the LessWrong forum, the effectivealtruism.org forum, and of course Twitter, where people who identify as Effective Altruists get together and socialize and date each other, too. There is this shift towards the newer idea of longtermism and its unstoppable utilitarian logic. And until recently, there was Sam Bankman-Fried, the multi-billionaire founder of FTX who promised to give all his money away, and FTXFF, the longtermist charity arm of FTX that funded a lot of AI-risk projects.
FTX imploded, a lot of funding vanished, and it’s like an entire arm of the EA institution has shriveled and died. There are people doing work that really helps society who are in trouble now. Lots of those EAs on Twitter are worried about the institution’s “reputation” and “PR” and “social capital.” But it seems pretty clear that EA is not going anywhere. The institution isn’t going anywhere, if only because of the fervor of its members; but the idea especially isn’t going anywhere, because it’s an effective one that has already infected too many people for it to die out.
There’s a lot of cynicism floating around out there right now, partly because the EA institution put a bunch of stock in SBF and he turned out to be a fraudster.
But the other part is a more basic underlying cynicism toward EA that is only now airing because of the depressurization caused by FTX. A lot of people apparently just don’t buy the whole EA thing because it seems like mere virtue signaling. I do wonder: is it possible to be merely virtue signaling while also putting real money up? Because that’s the thing about EA—EAs really are sincere. As one post put it: “EAs really believe. Like, really believe.” This might be both the best and worst thing about the movement. Most people don’t like it when people really believe in something. Cynicism dominates, especially on Twitter. There is this urge to distance oneself from the EA movement: notice Tyler Cowen (“I am not an ‘EA person’”) and Sam Harris (“I’m not an ‘Effective Altruist,’ capital E, capital A”). But you shouldn’t let Twitter distort your perceptions of reality. People and groups aren’t really like the most visible examples that you see on Twitter. As far as I’m concerned, if you’re donating money and you’re intentionally choosing a cause based on some measure of effectiveness, you’re an EA.
And there are a lot of EAs talking about the “PR of EA.” But I think the biggest PR mistake the EA community made was trying to have any sort of PR at all.
But others still are talking about a “return to 2015 EA” (2015 being the year Peter Singer published The Most Good You Can Do and Will MacAskill published Doing Good Better; discussion along these lines has also appeared on the EA forum: 1, 2), which I think describes classic effective altruism: consisting mostly of the Giving What We Can Pledge and an effort to evaluate the effectiveness of charity. It was EA without the Silicon Valley veneer (it was started by a Brit and an Aussie, after all) and the obsession with AI, before the focus on longtermism. It’s not absent longtermism, but it’s more about trying to be effective at all rather than trying to do what’s most effective, and it’s about intelligent discussion about being effective and encouraging critical thinking on altruism.
The frontier EAs are the ones visible on Twitter, because they’re the people who talk about EA. Classic EAs don’t talk about EA all that much, because being a classic effective altruist doesn’t give you a whole lot to talk about—the deal with small-e small-a effective altruism is that it’s pretty boring: part of what we’re trying to defeat with effective altruism is the desire to donate to things that make you feel good rather than things that do good. The “do good” things tend to be less exciting than the “feel good” things, so we’re less likely to be excited to discuss them—but that’s exactly the point.
If EA is something that’s exciting to you, I think you might be doing it wrong.
To be clear, I’m not trying to argue that we shouldn’t be funding AI alignment or other longtermist / x-risk causes at all. (Personally, my automatic donations are split 45-45-10 between the GiveWell Top Charities Fund, the Long-Term Future Fund, and the Centre for Effective Altruism, respectively.) Instead, I’m arguing that it’s perfectly rational to mix current and near-term concerns with long-term concerns, that our discussion should reflect that, and that we should return to an emphasis on critical thinking about the effectiveness of causes.
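Just to make that split concrete, here’s a minimal sketch of the arithmetic (the 45-45-10 split and the 10% pledge are the ones from above; the income figure is a made-up placeholder):

```python
# A minimal sketch of the arithmetic behind a split pledge.
# The 45-45-10 split is the one from the text; the income figure
# below is a hypothetical placeholder, just to make it concrete.

PLEDGE_FRACTION = 0.10  # the classic 10% pledge

SPLIT = {
    "GiveWell Top Charities Fund": 0.45,
    "Long-Term Future Fund": 0.45,
    "Centre for Effective Altruism": 0.10,
}

def annual_donations(income: float) -> dict:
    """Dollars per year to each fund under a 10% pledge."""
    budget = income * PLEDGE_FRACTION
    return {fund: budget * share for fund, share in SPLIT.items()}

for fund, amount in annual_donations(80_000).items():  # hypothetical $80k income
    print(f"{fund}: ${amount:,.0f}/yr")
```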
EA is a valuable idea and a strong community that was successful before FTX, so it will carry on. But capital-L Longtermism, characterized maybe by the prioritization of the aggregate potential of humanity over the current reality of humanity, might be dead. And that might be for the best. People aren’t receptive to the too-strong argument behind longtermism or the suggestion that we should divert funds away from those who are currently suffering, and many, such as Tyler Cowen, are skeptical about the effectiveness of longtermist causes.
Is AI risk actually more of a “feel good” thing than a “do good” thing? People love talking about AI (this is why I have difficulty following places like LessWrong and the EA forums; they have been almost exclusively about AI, vs. the EA-ish Substacks I follow, which hit a bunch of interesting topics), but despite the volume of AI discourse, I’ve noticed a disturbingly un-EA-like lack of investigation into the vulnerability of AI to money (which is to say, our ability to mitigate AI risk by funding AI alignment research). A cause cannot be considered “effective” unless there is evidence that money actually makes a difference in that cause. Longtermists might argue that while we can’t demonstrate that AI funding is making any difference, it’s still a good thing for EAs specifically to fund, because almost no funding outside of EA goes to mitigating AI risk. But then I would want to see research demonstrating this point. For a cause to be considered “effective,” it must also be underfunded, and I haven’t seen anyone asking whether AI alignment research is underfunded, either. In fact, not only have I not seen anything demonstrating the vulnerability of AI risk to money or its underfunded status, I haven’t seen anything even asking these questions. (I would love to provide evidence for this, but it’s hard to provide evidence for a lack of something, so the best I can do is ask for counter-evidence and promise humility. Please, if you know of some such research, send it my way—I would love to read it.)
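If I had to write that screen down explicitly, it would look something like the toy sketch below. Every field and value in it is a hypothetical placeholder, not any real evaluator’s data; the point is just that an unmeasured answer should fail the screen:

```python
# A toy sketch of the screening logic described above: a cause only
# counts as "effective" if marginal dollars demonstrably make a
# difference AND the cause is underfunded. All fields and example
# values are hypothetical placeholders, not any real evaluator's data.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Cause:
    name: str
    evidence_money_helps: Optional[bool]  # None = nobody has checked
    funding_gap_usd: Optional[float]      # None = nobody has measured it

def passes_screen(cause: Cause) -> bool:
    """Both criteria must hold; unknowns fail, since the burden of
    evidence is on the cause."""
    if cause.evidence_money_helps is not True:
        return False
    return cause.funding_gap_usd is not None and cause.funding_gap_usd > 0

bednets = Cause("malaria bednets", evidence_money_helps=True, funding_gap_usd=50e6)
alignment = Cause("AI alignment", evidence_money_helps=None, funding_gap_usd=None)

print(passes_screen(bednets))    # True (placeholder numbers)
print(passes_screen(alignment))  # False, until someone does the research
```

The design choice worth noticing is that None fails the screen: “nobody has asked” is not the same as “yes.”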
A return to classic effective altruism means a sincere effort to ask how much of a difference our philanthropy is really making, and I’m just not seeing that with AI (yet).
I have written previously about a utilitarian-ish ethical stance that permits effectiveness calculations but avoids the “gun in the rock-paper-scissors game” of longtermism. But since classic EA is more about trying to be effective at all rather than trying to do what’s most effective, it’s not really about utility maximization, which is where the too-strong argument for longtermism and other funny-smelling ideas sneak in. Utility calculations can be safely used as a heuristic for evaluating effectiveness without justifying the repugnant conclusion or speculative spending on issues where we have no idea how effective we’re being.
In the same vein, it’s not important that we identify the single most effective cause and all fund that one cause. I think this is a message that EA (the institution) could benefit from endorsing, because it makes the practice more agreeable and effective. You shouldn’t be alienated just because you’re not funding AI alignment or not buying bed nets; and there needs to be diversity in causes, because diversity forces healthy discussion and investigation into what’s effective. Besides, there are multiple causes worth funding, and if we all funded the same cause, diminishing marginal returns would mean it was no longer the “most effective” cause. I liked a recent post that has discussion along these lines, as well as some sober thinking about the EA movement and criticisms in general. David Roberts wrote an excellent Twitter thread about the FTX disaster and EA. He points out that, as humans, we are extremely good at bullshitting ourselves, and that utilitarianism works better as a heuristic than as a “quasi-mathematical rule.” He goes on to suggest epistemic humility.
I think, as individuals, we should all try to be epistemically modest EAs. As an institution, EA should work at overcoming our tendency to bullshit ourselves. As David points out, that’s something that can only happen socially, with critical and diverse discourse, and it’s something the EA institution does very well; but to me it has recently felt like longtermism and an obsession with AI have taken over the discussion and negatively affected both outsiders’ perception of EA and internal discourse. I am optimistic that the collapse of FTX will lead to a return to classic effective altruism, with a more attractive proposition and more varied, interesting, and productive discussion.
Besides the homogeneity of discussion, another part of the problem with the current EA community, as well as with SBF and “earning to give” in general, is that it has led to the misunderstanding that you have to be rich to be an EA (or worse, that you’re automatically an EA if you’re rich). Part of the cynicism out there comes from a perception that EA is something that goes hand in hand with making yourself rich. There’s this sort of eye-rolling “how convenient” attitude. (If we’re talking about PR, I think the EA community made a couple of mistakes: the fixation on a billionaire rather than on regular people, and the fixation on his plan to give it all away rather than on the money he was actually giving away.)
But there really is a genuinely convenient have-your-cake-and-eat-it-too message behind EA, and this is exactly what makes EA effective as an idea: perhaps the best thing you can do is try to make a decent amount of money and commit to giving 10% of it away to the most effective charities. You don’t have to be Bill Gates to be a good person, and you don’t have to be a monk to be a good person. This is the power of EA, the real revolution; this is the message that gets people on board. EA restructures incentives to actually improve the well-being of others. It ignites discourse on how to do good, and makes it easy to do it. But all of that is weak compared to the true promise of EA, which it actually delivers: that it gives meaning to the necessity that is work.