Is AI safety the new climate change?
Ever since Eliezer Yudkowsky and Nick Bostrom first drew our attention to the risks of superintelligent AI, people have likened AI safety to climate change. By both, I mean the overall goals – developing safe AI and transitioning to a net-zero economy – in their technical and social dimensions alike.
Since the publication of Superintelligence (2014), AI safety has matured as a field. It’s no longer a fringe cause area within effective altruism or a topic confined to rationalist forums online. For better or worse, AI safety is increasingly becoming like climate change in a number of ways.
Perhaps most strikingly, both AI safety and climate change debates centre on a point of no return. AI safety researchers speak of a technological singularity, the point at which AIs begin to recursively self-improve and progress takes off; climate scientists speak of tipping points, temperature thresholds beyond which additional warming triggers irreversible changes.
Similarly, both AI and climate change have their so-called ‘doomers’, i.e. people who think we’re bound for some kind of apocalypse. In AI, the archetypal doomer is Eliezer Yudkowsky, who seems to think AGI will inevitably kill everyone (the title of his book, If Anyone Builds It, Everyone Dies, leaves little doubt). Likewise, many AI safety researchers think catastrophic outcomes are highly likely in the absence of regulation. For instance, here’s Max Tegmark in an interview from November 2025¹:
…if we go ahead and continue having nothing like the FDA for AI, so people can legally just launch superintelligence and worry about getting sued later… yeah, I would think it’s definitely over 90% that we lose control.
Climate change activists like Greta Thunberg often employ similar doom-laden language. Someone has to sound the alarm.
And in both domains, the public has heard the alarm, as it were. By now, most people agree that we should build safe AI and solve global warming². As of September 2025, 50% of American adults were more concerned than excited about the use of AI in everyday life; in 2021, the same figure was 37%³.
Finally, I suspect that both AI safety and climate change are primarily policy problems. By this, I mean that the bottleneck is political willpower rather than a lack of technical solutions. Says Buck Shlegeris of Redwood Research:
Five years ago I thought of misalignment risk from AIs as a really hard problem that you’d need some really galaxy-brained fundamental insights to resolve. Whereas now, to me the situation feels a lot more like we just really know a list of 40 things where, if you did them — none of which seem that hard — you’d probably be able to not have very much of your problem.
Moreover, Anthropic’s conflict with the Pentagon highlights our dependence on policy-makers: AI safety is becoming a political issue. Climate change seems to face the same situation: the technical fixes are largely available, but the policy is a mess.
In conclusion, AI safety is going mainstream – mainstream to the point of becoming, like climate change, a global coordination problem. And while there are a few differences⁴, the discourses on AI safety and climate change bear an uncanny resemblance: the doomers speak of doomsday, and the threat comes from big corporations prioritising commercial interests over social considerations.
AI safety has its roots in the no-nonsense culture of LessWrong, and I’m hoping we can draw on that culture to handle AI safety better than we’ve handled climate change⁵.
1. Most people in the AI safety community hold more moderate views; see e.g. this article. Jonas Vollmer puts the probability of AI killing us and creating a world run by AI systems at 20%, while Buck Shlegeris thinks an AI takeover is 40% likely.
2. Interestingly, the recognition of the AI safety problem has led to an analogue of greenwashing.
3. For an overview of public opinion on AI, see this 80k podcast episode.
4. See Sam Clarke’s discussion here. To me, the most striking difference is that the effects of transformative AI appear less well understood. What will happen to the labour market?
5. No, we’re not on track for our climate change goals.