What effective altruism misses
Effective altruism (EA) centres on a dozen or so “cause areas”: areas in which you can have a big positive impact. Common cause areas include AI safety, global poverty, farm animal welfare, biorisk, nuclear security and EA field building. For the past couple of years, AI safety has been very hot within EA, with some of the EA golden boys, like Will MacAskill and Holden Karnofsky, making AI going well their top priority.
Say that EAs solve the problems associated with the above cause areas. Humanity has smoothly transitioned to a post-AGI world, eliminated global poverty, put an end to factory farming and reduced existential risk to zero. While this sounds like a rosy vision of the future, unless you live in extreme poverty, your life in this hypothetical world will be much the same as it is now. Those who regularly worry about all the world’s problems will find something else to worry about. And if you are dissatisfied with life – which, if you’re anything like the average American, isn’t entirely unlikely1 – you’ll likely remain dissatisfied.
EA, then, appears to miss something very fundamental about what it means to be human2. If EAs want to maximise positive impact, they should make wellbeing a core cause area.
What I mean
The cause area of wellbeing, as I see it, could be concerned with the following question: How can we build better societies, where the average citizen has a higher baseline level of wellbeing? Here, I use the term “wellbeing” to mean “mental health” or “welfare”.
Imagine a society organised around the principle of improving wellbeing in the population at large, rather than around some ideology. While I don’t know exactly what such a society would look like, current societies leave a lot to be desired.
Some ideas come to mind. Can we create societies where everyone feels a sense of community? Societies where people report high levels of life satisfaction well into their 80s? Societies where teenagers take care of one another?
Designing such societies is a long-term goal. In the near term, we must fix broken healthcare systems, e.g. by providing widespread access to therapy. In America, about 50% of adults with mental illness receive no treatment3.
Why EAs should care
EAs have good grounds to make wellbeing a core cause area. An SNT (scale, neglectedness, tractability) analysis reveals big potential for positive impact.
To begin with, the “wellbeing problem” is huge in scope. According to one estimate, mental health disorders accounted for about 16% of global DALYs4 in 2019. If everyone had a baseline level of mental health, we’d see many positive downstream effects. Admittedly, EAs have estimated the benefits of treating depression among EAs themselves, on the grounds that it makes them more effective in their altruistic endeavours – but the problem extends far beyond the movement. Moreover, as material living standards increase, mental health problems might become the biggest obstacle to humanity’s flourishing. Who cares if we can colonise other galaxies if one in six men reports being lonely?
The topic of mental health also seems neglected in EA circles. The classic cause areas, along with AI safety, dominate the EA discourse. To my knowledge, there are only two wellbeing-oriented EA organisations: Rethink Wellbeing, which offers therapy to altruists, and the Happier Lives Institute, which identifies cost-effective charities for improving wellbeing. Beyond these, I’d love to see EA think tanks focused on designing happier societies. That is a huge social engineering task, where an EA mindset of truth-seeking and scope-sensitivity would be useful.
Finally, improving wellbeing appears to be a relatively tractable problem. Sound decision-making in healthcare can have seismic effects, and now, with a growing body of research on wellbeing, policy-makers can make better-informed decisions. Effective policy advocacy might be hard, but it shouldn’t be any harder in healthcare than in AI5.
Towards a better future
If EAs want to maximise happiness, they should cut to the chase and make wellbeing a central cause area. Once those who need help receive it, we can think about how to design better societies. Don’t aim for utopia – for a start, make teenagers happy again, eliminate male loneliness, and restore the elderly’s sense of dignity. While everyone experiences – and probably should experience – some degree of existential angst, a society doesn’t need to have depressed incels, burned-out careerists and disoriented NEETs.
I agree with the sentiment, expressed by Will MacAskill in his article on EA in the age of AGI, that EA is becoming somewhat outdated. For EA to stay relevant, the movement needs to reprioritise its cause areas. However, rather than morphing into a subfield of AI safety, EAs should ask what would fundamentally create a better world.
One in five Americans say they’re dissatisfied with life. ↩︎
Maybe the best evidence for this claim is that EA only attracts certain kinds of people. ↩︎
In practice, therapy is restricted to those with strong negotiation skills, who can convince healthcare workers they need priority, and CEOs, who receive therapy in the form of CEO coaching. ↩︎
DALYs, disability-adjusted life years, roughly measure the overall negative effects of a disease. ↩︎
I have limited experience with policymaking though, so take the point about tractability with a grain of salt. ↩︎