On my relation to effective altruism

rationality, essay

I’ve spent more time engaging with the effective altruist (EA) community this year. Not just reading EA books and blog posts, but participating in seminars, attending conferences and going on EA retreats. For context, I’d viewed myself as “EA adjacent” ever since I came across The Life You Can Save back in high school. However, during my master’s degree, the prospect of graduating soon - of becoming an adult - made me reflect more carefully on EA.

First, our definitions, taken from the introduction to effective altruism:

Effective altruism is a project that aims to find the best ways to help others, and put them into practice.

It’s both a research field, which aims to identify the world’s most pressing problems and the best solutions to them, and a practical community that aims to use those findings to do good.

When I refer to the EA community, I mean the practical community. I’ll also use the abbreviation “EA” to refer to effective altruists.

Many people who have engaged with the EA community at some level find themselves questioning their relationship with EA. In fact, there are plenty of blog posts on the theme “EA identity crisis”1. Here’s my contribution to the genre.

So, do I consider myself part of the EA community?

Yes #

Technically, yes.

  • I think the core ideas of EA - prioritisation, impartiality, open truthseeking and collaboration - make sense. The article does an excellent job of explaining these terms, so I’ll refer to their explanations2. I’ve desperately tried red-teaming EA - I’d love for the drowning child argument to be less convincing - but I still think the core ideas hold up.
  • I care about finding the best ways to help others, and putting them into practice. While I don’t want to impose any moral standards on others, I feel a certain moral obligation to do good3. I also find it fulfilling to work on high-impact projects. It almost seems tautological that one should try to have a large positive impact on the world.
  • I’ve attended several EA community events. All of these events have been tremendously valuable, both on a professional and a personal level.
  • My interest in EA doesn’t seem to be “just a phase”. At this point, I’ve spent about five years reading and learning more about EA.

Maybe also non-technically? Here are some gut-feeling-level arguments:

  • I enjoy exchanging ideas on EA-related topics with people in the EA community - it’s almost like a hobby of mine. But I also think there’s significant value in doing collaborative sensemaking on topics like existential risk, AGI timelines and longtermism.
  • In general, I find many people in the EA community to be very thoughtful. Some pieces by Holden Karnofsky, Ajeya Cotra and Benjamin Todd have had a profound influence on my worldview. Similarly, some of my favourite non-fiction books are about EA. For example, I thought Doing Good Better and What We Owe The Future were exceptional reads.

No #

But of course, it’s complicated…

  • I mainly care about existential risk reduction, although this may very well change in the future. I’ve also devoted much more time and effort to AI safety than to any other cause area4. It’d be more accurate to say I’m into AI safety and existential risk reduction rather than EA, which is a much broader term.

  • I don’t seem to fit the public perception of an EA. In my experience, many people think all EAs work on farm animal welfare or global poverty reduction. Sure, I’m vegetarian (Peter Singer’s fault), but I still haven’t donated to GiveWell, nor signed the 10% pledge. This point is mostly about me not living up to my moral standards, though.

  • In general, I try to avoid identity markers related to social movements. While I think neutrality is somewhat of an illusion, I want to hold my opinions lightly. My worry is that self-identifying as an EA might make me less open-minded. But perhaps this worry is somewhat ungrounded, at least when it comes to EA. People in the EA community tend to be very open to criticism5.

  • The EA community has its flaws, despite making a number of changes after FTX. Many of the concerns raised in this article are still valid. This excerpt summarises one of my main concerns well:

    The EA community is notoriously homogeneous, and the “average EA” is extremely easy to imagine: he is a white male in his twenties or thirties from an upper-middle class family in North America or Western Europe. He is ethically utilitarian and politically centrist; an atheist, but culturally protestant. He studied analytic philosophy, mathematics, computer science, or economics at an elite university in the US or UK. He is neurodivergent. He thinks space is really cool. He highly values intelligence, and believes that his own is significantly above average. He hung around LessWrong for a while as a teenager, and now wears EA-branded shirts and hoodies, drinks Huel, and consumes a narrow range of blogs, podcasts, and vegan ready-meals.

Resolution #

While I do have some reservations, my views are pretty consistent with the EA agenda. Also, I’ll (reluctantly) admit that overthinking the question of whether you’re part of a given community is very EA.


  1. This post on EA identities is a good starting point. There’s also Neel Nanda’s favourite blog post.↩︎

  2. I endorse a much broader form of the impartiality principle than the one outlined in the article. I also think strangers, animals and future people should be part of our circle of moral consideration.↩︎

  3. I assume this is because my mother, who comes from the Philippines, always told me to be grateful. I used to find it annoying as a child, but I’m glad she did.↩︎

  4. For a list of cause areas, see the 80,000 Hours list of the world’s most pressing problems.↩︎

  5. See the posts in the Criticism of effective altruism thread. I very much liked Ben Kuhn’s critique of EA.↩︎